Sample records for deterministic geologic processes

  1. Improving ground-penetrating radar data in sedimentary rocks using deterministic deconvolution

    USGS Publications Warehouse

    Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.; Byrnes, A.P.

    2003-01-01

    Resolution is key to confidently identifying unique geologic features using ground-penetrating radar (GPR) data. Source wavelet "ringing" (related to bandwidth) in a GPR section limits resolution because of wavelet interference, and can smear reflections in time and/or space. The resultant potential for misinterpretation limits the usefulness of GPR. Deconvolution offers the ability to compress the source wavelet and improve temporal resolution. Unlike statistical deconvolution, deterministic deconvolution is mathematically simple and stable while providing the highest possible resolution because it uses the source wavelet unique to the specific radar equipment. Source wavelets generated in, transmitted through and acquired from air allow successful application of deterministic approaches to wavelet suppression. We demonstrate the validity of using a source wavelet acquired in air as the operator for deterministic deconvolution in a field application using "400-MHz" antennas at a quarry site characterized by interbedded carbonates with shale partings. We collected GPR data on a bench adjacent to cleanly exposed quarry faces in which we placed conductive rods to provide conclusive groundtruth for this approach to deconvolution. The best deconvolution results, which are confirmed by the conductive rods for the 400-MHz antenna tests, were observed for wavelets acquired when the transmitter and receiver were separated by 0.3 m. Applying deterministic deconvolution to GPR data collected in sedimentary strata at our study site resulted in an improvement in resolution (50%) and improved spatial location (0.10-0.15 m) of geologic features compared to the same data processed without deterministic deconvolution. 
The effectiveness of deterministic deconvolution for increased resolution and spatial accuracy of specific geologic features is further demonstrated by comparing results of deconvolved data with nondeconvolved data acquired along a 30-m transect immediately adjacent to a fresh quarry face. The results at this site support using deterministic deconvolution, which incorporates the GPR instrument's unique source wavelet, as a standard part of routine GPR data processing. © 2003 Elsevier B.V. All rights reserved.
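
The core operation described above can be sketched in a few lines: with the source wavelet known, deconvolution reduces to a spectral division of each trace by the wavelet. This is a minimal illustration, not the authors' processing flow; the water-level regularization is an added stabilizing assumption, not part of the paper.

```python
import numpy as np

def deterministic_deconvolution(trace, wavelet, water_level=1e-4):
    """Compress a known source wavelet by regularized spectral division."""
    n = len(trace)
    W = np.fft.rfft(wavelet, n)
    D = np.fft.rfft(trace, n)
    power = (W * np.conj(W)).real
    # Water-level regularization keeps the division stable where the
    # wavelet spectrum is weak (a stabilizing assumption).
    denom = np.maximum(power, water_level * power.max())
    return np.fft.irfft(D * np.conj(W) / denom, n)

# Synthetic check: a spike convolved with a "ringing" wavelet should
# deconvolve back to an impulse at the spike position.
wavelet = np.array([0.0, 1.0, -0.8, 0.3, -0.1])
reflectivity = np.zeros(128)
reflectivity[40] = 1.0
trace = np.convolve(reflectivity, wavelet)[:128]
recovered = deterministic_deconvolution(trace, wavelet)
peak = int(np.argmax(np.abs(recovered)))   # the impulse returns at sample 40
```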

  2. Assessment of effectiveness of geologic isolation systems. Geologic-simulation model for a hypothetical site in the Columbia Plateau. Volume 2: results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foley, M.G.; Petrie, G.M.; Baldwin, A.J.

    1982-06-01

    This report contains the input data and computer results for the Geologic Simulation Model. This model is described in detail in the following report: Petrie, G.M., et al. 1981. Geologic Simulation Model for a Hypothetical Site in the Columbia Plateau, Pacific Northwest Laboratory, Richland, Washington. The Geologic Simulation Model is a quasi-deterministic process-response model which simulates, for a million years into the future, the development of the geologic and hydrologic systems of the ground-water basin containing the Pasco Basin. Effects of natural processes on the ground-water hydrologic system are modeled principally by rate equations. The combined effects and synergistic interactions of different processes are approximated by linear superposition of their effects during discrete time intervals in a stepwise-integration approach.
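
The "linear superposition of effects during discrete time intervals" amounts to stepwise integration of summed rate equations. A minimal sketch, with hypothetical processes and rates standing in for the model's real ones:

```python
def simulate(processes, state0, dt, n_steps):
    """Stepwise integration: superpose each process's rate over each interval."""
    state, t, history = state0, 0.0, [state0]
    for _ in range(n_steps):
        state += sum(f(state, t) for f in processes) * dt   # linear superposition
        t += dt
        history.append(state)
    return history

# Two hypothetical processes: constant tectonic uplift (mm/yr) and
# erosion proportional to the current relief (per yr).
uplift = lambda s, t: 0.5
erosion = lambda s, t: -0.001 * s
h = simulate([uplift, erosion], state0=0.0, dt=1000.0, n_steps=100)
# the superposed rates balance at 500 mm of relief
```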

  3. Application of deterministic deconvolution of ground-penetrating radar data in a study of carbonate strata

    USGS Publications Warehouse

    Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.

    2004-01-01

    We successfully applied deterministic deconvolution to real ground-penetrating radar (GPR) data by using the source wavelet that was generated in and transmitted through air as the operator. The GPR data were collected with 400-MHz antennas on a bench adjacent to a cleanly exposed quarry face. The quarry site is characterized by horizontally bedded carbonate strata with shale partings. In order to provide groundtruth for this deconvolution approach, 23 conductive rods were drilled into the quarry face at key locations. The steel rods provided critical information for: (1) correlation between reflections on GPR data and geologic features exposed in the quarry face, (2) GPR resolution limits, (3) accuracy of velocities calculated from common midpoint data and (4) identifying any multiples. Comparing the results of deconvolved data with non-deconvolved data demonstrates the effectiveness of deterministic deconvolution in low dielectric-loss media for increased accuracy of velocity models (improved at least 10-15% in our study after deterministic deconvolution), increased vertical and horizontal resolution of specific geologic features and more accurate representation of geologic features as confirmed from detailed study of the adjacent quarry wall. © 2004 Elsevier B.V. All rights reserved.

  4. Investigations on indoor Radon in Austria, part 2: Geological classes as categorical external drift for spatial modelling of the Radon potential.

    PubMed

    Bossew, Peter; Dubois, Grégoire; Tollefsen, Tore

    2008-01-01

    Geological classes are used to model the deterministic (drift or trend) component of the radon potential (Friedmann's RP) in Austria. It is shown that the RP can be grouped according to geological classes, but also according to individual geological units belonging to the same class. Geological classes can thus serve as predictors for the mean RP within the classes. Variability of the RP within classes or units is interpreted as the stochastic part of the regionalized variable RP; however, there does not seem to exist a smallest unit that would naturally divide the RP into a deterministic and a stochastic part. Rather, the division depends on the scale of the geological maps used, that is, on the smallest size of geological unit used for modelling the trend. In practice, there must be a sufficient number of data points (measurements), distributed as uniformly as possible within a unit, to allow reasonable determination of the trend component.
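
Using a categorical variable as external drift can be sketched very simply: the deterministic component is the per-class mean, and the residual is treated as the stochastic part. The classes and RP values below are hypothetical placeholders, not Austrian data.

```python
from collections import defaultdict

def class_drift(classes, values):
    """Per-class means (deterministic drift) and residuals (stochastic part)."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for c, v in zip(classes, values):
        sums[c] += v
        counts[c] += 1
    means = {c: sums[c] / counts[c] for c in sums}
    residuals = [v - means[c] for c, v in zip(classes, values)]
    return means, residuals

# Hypothetical radon-potential readings grouped by geological class
classes = ["granite", "limestone", "granite", "limestone"]
rp = [40.0, 10.0, 60.0, 20.0]
means, res = class_drift(classes, rp)
# means: {"granite": 50.0, "limestone": 15.0}; residuals sum to zero per class
```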

  5. Theory connecting nonlocal sediment transport, earth surface roughness, and the Sadler effect

    NASA Astrophysics Data System (ADS)

    Schumer, Rina; Taloni, Alessandro; Furbish, David Jon

    2017-03-01

    Earth surface evolution, like many natural phenomena typified by fluctuations on a wide range of scales and deterministic smoothing, results in a statistically rough surface. We present theory demonstrating that scaling exponents of topographic and stratigraphic statistics arise from long-time averaging of noisy surface evolution rather than specific landscape evolution processes. This is demonstrated through use of "elastic" Langevin equations that generically describe disturbance from a flat earth surface using a noise term that is smoothed deterministically via sediment transport. When smoothing due to transport is a local process, the geologic record self organizes such that a specific Sadler effect and topographic power spectral density (PSD) emerge. Variations in PSD slope reflect the presence or absence and character of nonlocality of sediment transport. The range of observed stratigraphic Sadler slopes captures the same smoothing feature combined with the presence of long-range spatial correlation in topographic disturbance.
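
The generic mechanism, noise roughening a surface while a deterministic transport term smooths it, can be illustrated with a discrete "elastic" Langevin equation. This is a schematic sketch with arbitrary parameters, not the authors' formulation; for purely local smoothing (a Laplacian transport term) the steady topographic PSD should fall off roughly as k^-2.

```python
import numpy as np

rng = np.random.default_rng(0)
N, nu, dt, steps = 256, 1.0, 0.1, 5000
h = np.zeros(N)
for _ in range(steps):
    # Deterministic local smoothing (discrete Laplacian = local transport)
    lap = np.roll(h, -1) - 2.0 * h + np.roll(h, 1)
    # White-noise disturbance of the surface
    h += nu * dt * lap + np.sqrt(dt) * rng.standard_normal(N)

psd = np.abs(np.fft.rfft(h)) ** 2
k = np.arange(1, N // 2 + 1)
band = slice(8, 64)   # equilibrated wavenumber band
slope = np.polyfit(np.log(k[band]), np.log(psd[1:][band]), 1)[0]
# slope comes out negative, near the local-transport value of -2
```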

  6. Seismic stability of the survey areas of potential sites for the deep geological repository of the spent nuclear fuel

    NASA Astrophysics Data System (ADS)

    Kaláb, Zdeněk; Šílený, Jan; Lednická, Markéta

    2017-07-01

    This paper deals with the seismic stability of the survey areas of potential sites for the deep geological repository of the spent nuclear fuel in the Czech Republic. The basic source of data for historical earthquakes up to 1990 was the seismic website [1-]. The most intense earthquake described in the historical period occurred on September 15, 1590 in the Niederroesterreich region (Austria); its reported intensity is Io = 8-9. The source of the contemporary seismic data for the period from 1991 to the end of 2014 was the website [11]. Based on the databases and literature review, it may be stated that in the period from 1900, no earthquake exceeding magnitude 5.1 originated in the territory of the Czech Republic. In order to evaluate seismicity and to assess the impact of seismic effects at the depths of a hypothetical deep geological repository for the next time period, the neo-deterministic method was selected as an extension of the probabilistic method. Each of the seven survey areas was assessed by neo-deterministic evaluation of the seismic wave field excited by selected individual events, determining the maximum loading. Results of the seismological database studies and of the neo-deterministic analysis of the Čihadlo locality are presented.

  7. Evaluation Seismicity west of block-lut for Deterministic Seismic Hazard Assessment of Shahdad, Iran

    NASA Astrophysics Data System (ADS)

    Ney, B.; Askari, M.

    2009-04-01

    Evaluation Seismicity west of block-lut for Deterministic Seismic Hazard Assessment of Shahdad, Iran. Behnoosh Neyestani, Mina Askari, Students of Science and Research University, Iran. Seismic hazard assessment has been performed for the city of Shahdad in this study, and four maps (Kerman, Bam, Nakhil Ab, Allah Abad) have been prepared to indicate the deterministic estimate of peak ground acceleration (PGA) in this area. Deterministic seismic hazard assessment has been performed for a region in eastern Iran (Shahdad) based on the available geological, seismological and geophysical information, and a seismic zoning map of the region has been constructed. For this assessment, a seismotectonic map of the study region within a radius of 100 km was first prepared using geological maps, the distribution of historical and instrumental earthquake data, and focal mechanism solutions; it is used as the base map for delineation of potential seismic sources. The minimum distance from each seismic source to the site (Shahdad) and the maximum magnitude for each source were then determined. According to the results, the peak ground acceleration at Shahdad, estimated using the Fukushima and Tanaka (1990) attenuation relationship, is 0.58 g, which is related to movement of the Nayband fault at a distance of 2.4 km from the site with a maximum magnitude of Ms = 7.5.
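
The deterministic step described above can be sketched as: evaluate a ground-motion attenuation relationship at each source's minimum site distance and maximum magnitude, then keep the largest PGA as the controlling value. The attenuation form and coefficients below are hypothetical placeholders, not the Fukushima and Tanaka (1990) relationship.

```python
import math

def pga_g(ms, r_km):
    """Hypothetical attenuation law: log10(PGA[g]) = 0.45*Ms - log10(r+10) - 2.58."""
    return 10.0 ** (0.45 * ms - math.log10(r_km + 10.0) - 2.58)

# (source name, maximum magnitude Ms, minimum source-to-site distance in km);
# the fault names and numbers are illustrative, not the study's catalogue.
sources = [("Fault A", 7.5, 2.4), ("Fault B", 6.0, 35.0)]
name, pga = max(((n, pga_g(m, r)) for n, m, r in sources), key=lambda t: t[1])
# the controlling source is the large nearby fault
```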

  8. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 2. Application to Owens Valley, California

    USGS Publications Warehouse

    Guymon, Gary L.; Yen, Chung-Cheng

    1990-01-01

    The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis to reduce the total number of uncertain variables to three variables: hydraulic conductivity, storage coefficient or specific yield, and source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water table temporal position in most of the basin are small, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.
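
The cascading of a deterministic model with a two-point probability model can be sketched as a Rosenblueth-type point estimate: run the deterministic model at mean ± one standard deviation of each uncertain variable and combine the 2^n outcomes. The head function below is a hypothetical stand-in for the two-layer groundwater model, and the parameter values are illustrative.

```python
from itertools import product
from statistics import mean, pstdev

def head(K, S, Q):
    """Hypothetical stand-in for the two-layer deterministic model."""
    return 100.0 - Q / (K * S)

# (mean, standard deviation) for the three retained uncertain variables:
# hydraulic conductivity K, storage coefficient S, source-sink term Q.
params = {"K": (2.0, 0.5), "S": (0.1, 0.02), "Q": (1.0, 0.2)}
corners = [
    head(*(mu + s * sigma for (mu, sigma), s in zip(params.values(), signs)))
    for signs in product((-1.0, 1.0), repeat=3)
]
h_mean, h_std = mean(corners), pstdev(corners)
cv = h_std / abs(h_mean)   # analogue of the spatial coefficient of variation
```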

  9. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 2. Application to Owens Valley, California

    NASA Astrophysics Data System (ADS)

    Guymon, Gary L.; Yen, Chung-Cheng

    1990-07-01

    The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis to reduce the total number of uncertain variables to three variables: hydraulic conductivity, storage coefficient or specific yield, and source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water table temporal position in most of the basin are small, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.

  10. P2S--Coupled simulation with the Precipitation-Runoff Modeling System (PRMS) and the Stream Temperature Network (SNTemp) Models

    USGS Publications Warehouse

    Markstrom, Steven L.

    2012-01-01

    A software program, called P2S, has been developed which couples the daily stream temperature simulation capabilities of the U.S. Geological Survey Stream Network Temperature model with the watershed hydrology simulation capabilities of the U.S. Geological Survey Precipitation-Runoff Modeling System. The Precipitation-Runoff Modeling System is a modular, deterministic, distributed-parameter, physical-process watershed model that simulates hydrologic response to various combinations of climate and land use. Stream Network Temperature was developed to help aquatic biologists and engineers predict the effects of changes that hydrology and energy have on water temperatures. P2S will allow scientists and watershed managers to evaluate the effects of historical climate and projected climate change, landscape evolution, and resource management scenarios on watershed hydrology and in-stream water temperature.

  11. Tracing of paleo-shear zones by self-potential data inversion: case studies from the KTB, Rittsteig, and Grossensees graphite-bearing fault planes

    NASA Astrophysics Data System (ADS)

    Mehanee, Salah A.

    2015-01-01

    This paper describes a new method for tracing paleo-shear zones of the continental crust by self-potential (SP) data inversion. The method falls within the deterministic inversion framework, and it is exclusively applicable for the interpretation of the SP anomalies measured along a profile over sheet-type structures such as conductive thin films of interconnected graphite precipitations formed on shear planes. The inverse method fits a residual SP anomaly by a single thin sheet and recovers the characteristic parameters (depth to the top h, extension in depth a, amplitude coefficient k, and amount and direction of dip θ) of the sheet. This method minimizes an objective functional in the space of the logarithmed and non-logarithmed model parameters (log(h), log(a), log(k), and θ) successively by the steepest descent (SD) and Gauss-Newton (GN) techniques in order to essentially maintain the stability and convergence of this inverse method. Prior to applying the method to real data, its accuracy, convergence, and stability are successfully verified on numerical examples with and without noise. The method is then applied to SP profiles from the German Continental Deep Drilling Program (Kontinentales Tiefbohrprogramm der Bundesrepublik Deutschland - KTB), Rittsteig, and Grossensees sites in Germany for tracing paleo-shear planes coated with graphitic deposits. The comparisons of geologic sections constructed in this paper (based on the proposed deterministic approach) against the existing published interpretations (obtained based on trial-and-error modeling) for the SP data of the KTB and Rittsteig sites have revealed that the deterministic approach suggests some new details that are of some geological significance. The findings of the proposed inverse scheme are supported by available drilling and other geophysical data. 
Furthermore, the real SP data of the Grossensees site have been interpreted (apparently for the first time ever) by the deterministic inverse scheme, from which interpretive geologic cross sections are suggested. The computational efficiency, analysis of the numerical examples investigated, and comparisons of the real data inverted here have demonstrated that the developed deterministic approach is advantageous over the existing interpretation methods, and it is suitable for meaningful interpretation of SP data acquired elsewhere over graphitic occurrences on fault planes.
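
The optimization strategy, a few steepest-descent (SD) iterations to approach the minimum followed by Gauss-Newton (GN) refinement, with positive parameters handled through their logarithms for stability, can be sketched on a toy one-parameter problem. The exponential forward model here is a hypothetical stand-in for the thin-sheet SP response, not the paper's formula.

```python
import math

def misfit_and_grad(log_k, xs, data):
    """Residuals, Jacobian, and gradient for model = exp(log_k) * exp(-x)."""
    k = math.exp(log_k)
    resid = [k * math.exp(-x) - d for x, d in zip(xs, data)]
    jac = [k * math.exp(-x) for x in xs]   # d(model)/d(log k) = model itself
    grad = sum(2.0 * r * j for r, j in zip(resid, jac))
    return grad, resid, jac

xs = [0.0, 0.5, 1.0, 1.5]
data = [3.0 * math.exp(-x) for x in xs]    # exact data with amplitude k = 3
log_k = 0.0                                # start at k = 1
for _ in range(5):                         # SD stage: fixed small step
    grad, _, _ = misfit_and_grad(log_k, xs, data)
    log_k -= 0.05 * grad
for _ in range(10):                        # GN stage: step = (J^T r) / (J^T J)
    _, resid, jac = misfit_and_grad(log_k, xs, data)
    log_k -= sum(j * r for j, r in zip(jac, resid)) / sum(j * j for j in jac)
k = math.exp(log_k)                        # converges to the true k = 3
```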

  12. Evansville Area Earthquake Hazards Mapping Project (EAEHMP) - Progress Report, 2008

    USGS Publications Warehouse

    Boyd, Oliver S.; Haase, Jennifer L.; Moore, David W.

    2009-01-01

    Maps of surficial geology, deterministic and probabilistic seismic hazard, and liquefaction potential index have been prepared by various members of the Evansville Area Earthquake Hazard Mapping Project for seven quadrangles in the Evansville, Indiana, and Henderson, Kentucky, metropolitan areas. The surficial geologic maps feature 23 types of surficial geologic deposits, artificial fill, and undifferentiated bedrock outcrop and include alluvial and lake deposits of the Ohio River valley. Probabilistic and deterministic seismic hazard and liquefaction hazard mapping is made possible by drawing on a wealth of information including surficial geologic maps, water well logs, and in-situ testing profiles using the cone penetration test, standard penetration test, down-hole shear wave velocity tests, and seismic refraction tests. These data were compiled and collected with contributions from the Indiana Geological Survey, Kentucky Geological Survey, Illinois State Geological Survey, United States Geological Survey, and Purdue University. Hazard map products are in progress and are expected to be completed by the end of 2009, with a public roll out in early 2010. Preliminary results suggest that there is a 2 percent probability that peak ground accelerations of about 0.3 g will be exceeded in much of the study area within 50 years, which is similar to the 2002 USGS National Seismic Hazard Maps for a firm rock site value. Accelerations as high as 0.4-0.5 g may be exceeded along the edge of the Ohio River basin. Most of the region outside of the river basin has a low liquefaction potential index (LPI), where the probability that LPI is greater than 5 (that is, there is a high potential for liquefaction) for a M7.7 New Madrid type event is only 20-30 percent. Within the river basin, most of the region has high LPI, where the probability that LPI is greater than 5 for a New Madrid type event is 80-100 percent.
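
The hazard-map convention "2 percent probability of exceedance within 50 years" maps to a mean return period under the usual Poisson (memoryless) assumption, P = 1 - exp(-t / T):

```python
import math

def return_period(p_exceed, t_years):
    """Mean return period T from the Poisson relation P = 1 - exp(-t / T)."""
    return -t_years / math.log(1.0 - p_exceed)

T = return_period(0.02, 50.0)   # about 2475 years for the "2% in 50 yr" level
```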

  13. URBAN STORMWATER INVESTIGATIONS BY THE U. S. GEOLOGICAL SURVEY.

    USGS Publications Warehouse

    Jennings, Marshall E.

    1985-01-01

    Urban stormwater hydrology studies in the U. S. Geological Survey are currently focused on compilation of national data bases containing flood-peak and short time-interval rainfall, discharge and water-quality information for urban watersheds. Current data bases, updated annually, are nationwide in scope. Supplementing the national data files are published reports of interpretative analyses, a map report and research products including improved instrumentation and deterministic modeling capabilities. New directions of Survey investigations include gaging programs for very small catchments and for stormwater detention facilities.

  14. Deterministic Approach for Estimating Critical Rainfall Threshold of Rainfall-induced Landslide in Taiwan

    NASA Astrophysics Data System (ADS)

    Chung, Ming-Chien; Tan, Chih-Hao; Chen, Mien-Min; Su, Tai-Wei

    2013-04-01

    Taiwan is an active mountain belt created by the oblique collision between the northern Luzon arc and the Asian continental margin. The inherent complexities of geological nature create numerous discontinuities through rock masses and relatively steep hillsides on the island. In recent years, the increase in the frequency and intensity of extreme natural events due to global warming or climate change has brought significant landslides. The causes of landslides in these slopes are attributed to a number of factors. As is well known, rainfall is one of the most significant triggering factors for landslide occurrence. In general, rainfall infiltration changes the suction and moisture of the soil, raises the unit weight of the soil, and reduces the shear strength of the soil in the landslide colluvium. The stability of a landslide is closely related to the groundwater pressure in response to rainfall infiltration, the geological and topographical conditions, and the physical and mechanical parameters. To assess the potential susceptibility to landslides, effective modeling of rainfall-induced landslides is essential. In this paper, a deterministic approach is adopted to estimate the critical rainfall threshold of the rainfall-induced landslide. The critical rainfall threshold is defined as the accumulated rainfall at which the safety factor of the slope equals 1.0. First, the deterministic approach establishes the hydrogeological conceptual model of the slope based on a series of in-situ investigations, including geological drilling, surface geological investigation, geophysical investigation, and borehole explorations. The material strength and hydraulic properties of the model were given by field and laboratory tests. Second, the hydraulic and mechanical parameters of the model are calibrated with long-term monitoring data. Furthermore, a two-dimensional numerical program, GeoStudio, was employed to perform the modelling. 
Finally, the critical rainfall threshold of the slope can be obtained by the coupled analysis of rainfall, infiltration, seepage, and slope stability. As an example, consider the slope located at 50k+650 on Tainan County Road No. 174, in the Zeng-Wun River watershed in southern Taiwan; it is an active landslide triggered by typhoon events. Coordinates for the case study site are 194925, 2567208 (TWD97). The site was selected on the basis of previous reports and geological surveys. According to the Central Weather Bureau, the annual precipitation is about 2,450 mm; the highest monthly value is in August with 630 mm, and the lowest is in November with 13 mm. The results show that the critical rainfall threshold of the study case is around 640 mm, meaning that an alarm should be issued when accumulated rainfall exceeds 640 mm. Our preliminary results appear to be useful for rainfall-induced landslide hazard assessments. The findings are also a good reference for establishing an early warning system for landslides and for developing strategies to prevent future losses.
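
The threshold definition above, the accumulated rainfall at which the factor of safety (FS) drops to 1.0, can be sketched as a root-finding problem. The FS curve below is a hypothetical monotone stand-in for the coupled seepage and slope-stability computation, not the site's calibrated model.

```python
def factor_of_safety(rain_mm):
    """Hypothetical FS curve: decays from 1.6 as pore pressure builds."""
    return 1.6 - 1.2 * rain_mm / (rain_mm + 400.0)

def critical_rainfall(fs, lo=0.0, hi=5000.0, tol=0.1):
    """Bisection for the accumulated rainfall at which FS = 1.0."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if fs(mid) > 1.0:
            lo = mid          # still stable: threshold lies above mid
        else:
            hi = mid          # unstable: threshold lies below mid
    return 0.5 * (lo + hi)

threshold = critical_rainfall(factor_of_safety)   # 400 mm for this toy curve
```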

  15. Fracture Networks from a deterministic physical model as 'forerunners' of Maze Caves

    NASA Astrophysics Data System (ADS)

    Ferer, M. V.; Smith, D. H.; Lace, M. J.

    2013-12-01

    'Fractures are the chief forerunners of caves because they transmit water much more rapidly than intergranular pores.' [1] Thus, cave networks can follow the fracture networks from which the Karst caves formed by a variety of processes. Traditional models of continental Karst define water flow through subsurface geologic formations, slowly dissolving the rock along the pathways (e.g. water saturated with respect to carbon dioxide flowing through fractured carbonate formations). We have developed a deterministic, physical model of fracturing in a model geologic layer of a given thickness, when that layer is strained in one direction and subsequently in a perpendicular direction. It was observed that the connected fracture networks from our model visually resemble maps of maze caves. Since these detailed cave maps offer critical tools in modeling cave development patterns and conduit flow in Karst systems, we were able to test the qualitative resemblance by using statistical analyses to compare our model networks in geologic layers of four different thicknesses with the corresponding statistical analyses of four different maze caves, formed in a variety of geologic settings. The statistical studies performed are: i) standard box-counting to determine if either the caves or the model networks are fractal. We found that both are fractal with a fractal dimension Df ≈ 1.75. ii) For each section inside a closed path, we determined the area and perimeter-length, enabling a study of the tortuosity of the networks. From the dependence of the section's area upon its perimeter-length, we have found a power-law behavior (for sufficiently large sections) characterized by a 'tortuosity' exponent. These exponents have similar values for both the model networks and the maze caves. The best agreement is between our thickest model layer and the maze-like part of Wind Cave in South Dakota, where the data from the model and the cave overlie each other. 
For the present networks from the physical model, we assumed that the geologic layer was of uniform thickness and that the strain in both directions was the same. The latter may not be the case for the Brazilian Toca de Boa Cave. These assumptions can easily be modified in our computer code to reflect different geologic histories. Even so, the quantitative agreement suggests that our model networks are statistically realistic both for the 'forerunners' of caves and for general fracture networks in geologic layers, which should assist the study of underground fluid flow in many applications for which fracture patterns and fluid flow are difficult to determine (e.g., hydrology, watershed management, oil recovery, carbon dioxide sequestration). Keywords - Fracture Networks, Karst, Caves, Structurally Variable Pathways, hydrogeological modeling. [1] Arthur N. Palmer, Cave Geology, Cave Books, Dayton, OH (2007).
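
The box-counting analysis in step (i) can be sketched as follows: count occupied boxes N(s) at several box sizes s and fit the slope of log N(s) against log(1/s). As a sanity check, a filled square should return a dimension of 2 (the cave and model networks above gave Df ≈ 1.75).

```python
import math

def box_count(points, size):
    """Number of size x size boxes occupied by at least one point."""
    return len({(x // size, y // size) for x, y in points})

points = [(x, y) for x in range(64) for y in range(64)]   # filled 64x64 square
sizes = [1, 2, 4, 8, 16]
pairs = [(math.log(1.0 / s), math.log(box_count(points, s))) for s in sizes]

# Least-squares slope of log N(s) versus log(1/s) = dimension estimate
n = len(pairs)
mx = sum(a for a, _ in pairs) / n
my = sum(b for _, b in pairs) / n
d_est = sum((a - mx) * (b - my) for a, b in pairs) / sum((a - mx) ** 2 for a, _ in pairs)
```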

  16. Spatial delineation, fluid-lithology characterization, and petrophysical modeling of deepwater Gulf of Mexico reservoirs through joint AVA deterministic and stochastic inversion of three-dimensional partially-stacked seismic amplitude data and well logs

    NASA Astrophysics Data System (ADS)

    Contreras, Arturo Javier

    This dissertation describes a novel Amplitude-versus-Angle (AVA) inversion methodology to quantitatively integrate pre-stack seismic data, well logs, geologic data, and geostatistical information. Deterministic and stochastic inversion algorithms are used to characterize flow units of deepwater reservoirs located in the central Gulf of Mexico. A detailed fluid/lithology sensitivity analysis was conducted to assess the nature of AVA effects in the study area. Standard AVA analysis indicates that the shale/sand interface represented by the top of the hydrocarbon-bearing turbidite deposits generates typical Class III AVA responses. Layer-dependent Biot-Gassmann analysis shows significant sensitivity of the P-wave velocity and density to fluid substitution, indicating that presence of light saturating fluids clearly affects the elastic response of sands. Accordingly, AVA deterministic and stochastic inversions, which combine the advantages of AVA analysis with those of inversion, have provided quantitative information about the lateral continuity of the turbidite reservoirs based on the interpretation of inverted acoustic properties and fluid-sensitive modulus attributes (P-Impedance, S-Impedance, density, and LambdaRho, in the case of deterministic inversion; and P-velocity, S-velocity, density, and lithotype (sand-shale) distributions, in the case of stochastic inversion). The quantitative use of rock/fluid information through AVA seismic data, coupled with the implementation of co-simulation via lithotype-dependent multidimensional joint probability distributions of acoustic/petrophysical properties, provides accurate 3D models of petrophysical properties such as porosity, permeability, and water saturation. Pre-stack stochastic inversion provides more realistic and higher-resolution results than those obtained from analogous deterministic techniques. Furthermore, 3D petrophysical models can be more accurately co-simulated from AVA stochastic inversion results. 
By combining AVA sensitivity analysis techniques with pre-stack stochastic inversion, geologic data, and awareness of inversion pitfalls, it is possible to substantially reduce the risk in exploration and development of conventional and non-conventional reservoirs. From the final integration of deterministic and stochastic inversion results with depositional models and analogous examples, the M-series reservoirs have been interpreted as stacked terminal turbidite lobes within an overall fan complex (the Miocene MCAVLU Submarine Fan System); this interpretation is consistent with previous core data interpretations and regional stratigraphic/depositional studies.
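
The Biot-Gassmann step in the fluid/lithology sensitivity analysis can be sketched with the standard Gassmann fluid-substitution equation: replace the pore fluid and recompute the saturated bulk modulus. The moduli below (in GPa) are hypothetical values for a soft sand, not the dissertation's calibrated inputs.

```python
def gassmann_ksat(k_dry, k_min, k_fl, phi):
    """Saturated bulk modulus from the Gassmann equation.

    k_dry: dry-frame modulus, k_min: mineral modulus,
    k_fl: fluid modulus, phi: porosity (all moduli in GPa).
    """
    b = (1.0 - k_dry / k_min) ** 2
    denom = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + b / denom

# Hypothetical soft sand, brine-filled versus gas-filled
k_brine = gassmann_ksat(k_dry=5.0, k_min=36.0, k_fl=2.8, phi=0.30)
k_gas = gassmann_ksat(k_dry=5.0, k_min=36.0, k_fl=0.1, phi=0.30)
# The light fluid lowers the saturated bulk modulus, hence the P-velocity,
# producing the AVA-visible fluid effect described above.
```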

  17. Geology and Hydrology Drive Benthic Fungal Community Structure in a Lowland River System

    NASA Astrophysics Data System (ADS)

    Mansour, I.; Heppell, C. M.; McKew, B.; Dumbrell, A.; Whitby, C. B.; Veresoglou, S.; Leung, G.; Binley, A. M.; Lansdown, K.; Trimmer, M.; Olde, L.; Rillig, M.

    2017-12-01

    Despite their essential roles in ecosystem functioning, exceptionally little is known about fungal communities and the ecological processes regulating their structure. This is particularly true for riverine ecosystems, where almost nothing about the diversity of their fungal communities is known. In this field study, benthic sediment samples and surface water samples were collected seasonally from lowland rivers (Hampshire Avon catchment, UK) underlain by three distinct parent geologies (clay, Greensand and Chalk), across a hydrological gradient of baseflow index ranging from 0.23 to 0.95. Fungal communities were assessed using high-throughput sequencing and community data were analyzed via ordination, variance partitioning and indicator species analysis. We found that distinct fungal communities inhabited the benthic sediments of the differing geologies. Clay sediments were dominated by the yeast Cryptococcus podzolicus, the hyphomycete Pseudeurotium hygrophilum, Mortierella, and unidentified fungi in the class Sordariomycetes - the latter two also common within Greensand sediments along with seasonal spikes in Rhizophydium littoreum, a parasite of green algae. An unidentified fungus from the phylum Ascomycota was numerically dominant at all chalk sites and across all seasons. Spatial variables explained only a negligible proportion of variance between communities, indicating that environmental and biotic processes drive the differences between the observed fungal communities rather than purely spatial mechanisms (e.g. stochastic processes). Season was a highly significant predictor of community structure (p=0.005) and baseflow index explained some of the variance within the fungal community data across seasons. 
This study demonstrates that deterministic rather than stochastic processes are important for structuring lotic fungal communities, and, for the first time, shows that underlying geology and associated differences in hydrology are drivers of fungal community structure. Since riverine ecosystems are often subject to high levels of natural and anthropogenic stressors, it is imperative to understand the mechanisms regulating riverine fungal communities before appropriate management options can be suggested.

  18. Human brain detects short-time nonlinear predictability in the temporal fine structure of deterministic chaotic sounds

    NASA Astrophysics Data System (ADS)

    Itoh, Kosuke; Nakada, Tsutomu

    2013-04-01

    Deterministic nonlinear dynamical processes are ubiquitous in nature. Chaotic sounds generated by such processes may appear irregular and random in waveform, but these sounds are mathematically distinguished from random stochastic sounds in that they contain deterministic short-time predictability in their temporal fine structures. We show that the human brain distinguishes deterministic chaotic sounds from spectrally matched stochastic sounds in neural processing and perception. Deterministic chaotic sounds, even without being attended to, elicited greater cerebral cortical responses than the surrogate control sounds from about 150 ms after sound onset. Listeners also clearly discriminated these sounds in perception. The results support the hypothesis that the human auditory system is sensitive to the subtle short-time predictability embedded in the temporal fine structure of sounds.
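The "spectrally matched stochastic sounds" used as controls in studies like this are typically built by phase randomization: keep the signal's power spectrum, destroy its temporal fine structure. A minimal sketch (the logistic map stands in for a chaotic sound source; the paper's actual stimuli and parameters are not specified here):

```python
import numpy as np

def logistic_map(n, r=3.9, x0=0.4):
    """Deterministic chaotic series from the logistic map."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    return x

def phase_randomized_surrogate(x, rng):
    """Stochastic surrogate with the same power spectrum as x."""
    n = len(x)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spec))
    phases[0] = 0.0            # keep the DC term real
    if n % 2 == 0:
        phases[-1] = 0.0       # keep the Nyquist term real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n)

rng = np.random.default_rng(0)
x = logistic_map(4096)
s = phase_randomized_surrogate(x, rng)

# Same power spectrum, different temporal fine structure
assert np.allclose(np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(s)), rtol=1e-6, atol=1e-6)
```

The surrogate is indistinguishable from the chaotic signal by its spectrum alone, so any perceptual or cortical difference must come from the short-time predictability in the fine structure.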

  19. Deterministic Seismic Hazard Assessment of Center-East IRAN (55.5-58.5˚ E, 29-31˚ N)

    NASA Astrophysics Data System (ADS)

    Askari, Mina; Neyestani, Behnoosh

    2009-04-01

    Mina Askari and Behnoosh Neyestani (Science and Research University, Iran). Deterministic seismic hazard assessment has been performed for Center-East Iran, covering Kerman and adjacent regions within a 100 km radius. A catalogue of earthquakes in the region, including both historical and instrumental events, was compiled. A total of 25 potential seismic source zones were delineated as area sources for seismic hazard assessment based on geological, seismological and geophysical information. The minimum distance from each seismic source to the site (Kerman) and the maximum magnitude of each source were then determined. Finally, using the Abrahamson and Litehiser (1989) attenuation relationship, the maximum acceleration at the site is estimated to be 0.38 g, associated with movement on a blind fault whose maximum magnitude is Ms = 5.5.
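The deterministic procedure described (maximum magnitude and minimum distance per source, fed through an attenuation relationship, with the controlling source giving the site acceleration) can be sketched as follows. The coefficients and source parameters below are made up for illustration; they are not the Abrahamson and Litehiser (1989) values:

```python
import math

# Hypothetical attenuation coefficients, for illustration only
A, B, C, D = -3.5, 0.9, 1.2, 10.0

def pga_g(magnitude, distance_km):
    """Generic attenuation form: ln(PGA) = A + B*M - C*ln(R + D)."""
    return math.exp(A + B * magnitude - C * math.log(distance_km + D))

# Each source: (maximum magnitude Ms, minimum distance to the site in km)
sources = [(5.5, 8.0), (6.8, 60.0), (7.2, 95.0)]   # made-up values

# Deterministic hazard: the controlling source sets the site PGA
pga_by_source = {s: pga_g(*s) for s in sources}
controlling = max(pga_by_source, key=pga_by_source.get)
site_pga = pga_by_source[controlling]
```

With these invented numbers the nearby moderate-magnitude source controls the hazard, mirroring the abstract's finding that a close blind fault of Ms = 5.5 governs the estimated acceleration.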

  20. Earthquake mechanism and seafloor deformation for tsunami generation

    USGS Publications Warehouse

    Geist, Eric L.; Oglesby, David D.; Beer, Michael; Kougioumtzoglou, Ioannis A.; Patelli, Edoardo; Siu-Kui Au, Ivan

    2014-01-01

    Tsunamis are generated in the ocean by rapidly displacing the entire water column over a significant area. The potential energy resulting from this disturbance is balanced with the kinetic energy of the waves during propagation. Only a handful of submarine geologic phenomena can generate tsunamis: large-magnitude earthquakes, large landslides, and volcanic processes. Asteroid and subaerial landslide impacts can generate tsunami waves from above the water. Earthquakes are by far the most common generator of tsunamis. Generally, earthquakes greater than magnitude (M) 6.5–7 can generate tsunamis if they occur beneath an ocean and if they result in predominantly vertical displacement. One of the greatest uncertainties in both deterministic and probabilistic hazard assessments of tsunamis is computing seafloor deformation for earthquakes of a given magnitude.

  1. A framework for assessing the uncertainty in wave energy delivery to targeted subsurface formations

    NASA Astrophysics Data System (ADS)

    Karve, Pranav M.; Kallivokas, Loukas F.; Manuel, Lance

    2016-02-01

    Stress wave stimulation of geological formations has potential applications in petroleum engineering, hydro-geology, and environmental engineering. The stimulation can be applied using wave sources whose spatio-temporal characteristics are designed to focus the emitted wave energy into the target region. Typically, the design process involves numerical simulations of the underlying wave physics, and assumes a perfect knowledge of the material properties and the overall geometry of the geostructure. In practice, however, precise knowledge of the properties of the geological formations is elusive, and quantification of the reliability of a deterministic approach is crucial for evaluating the technical and economic feasibility of the design. In this article, we discuss a methodology that could be used to quantify the uncertainty in the wave energy delivery. We formulate the wave propagation problem for a two-dimensional, layered, isotropic, elastic solid truncated using hybrid perfectly-matched-layers (PMLs), and containing a target elastic or poroelastic inclusion. We define a wave motion metric to quantify the amount of the delivered wave energy. We then treat the material properties of the layers as random variables, and perform a first-order uncertainty analysis of the formation to compute the probabilities of failure to achieve threshold values of the motion metric. We illustrate the uncertainty quantification procedure using synthetic data.
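The first-order uncertainty analysis can be illustrated with a first-order second-moment sketch: linearize the motion metric about the mean material properties and propagate the variances into a probability of failing to reach the threshold. The metric below is a made-up smooth function standing in for the PDE-based wave solve:

```python
import math

def motion_metric(props):
    """Toy stand-in for the wave-motion metric (a smooth function of two
    layer properties; the real metric comes from an elastodynamic solve)."""
    mu1, mu2 = props
    return 10.0 / (1.0 + 0.5 * mu1 + 0.2 * mu2)

mean = [2.0, 3.0]       # mean layer properties (arbitrary units)
std = [0.3, 0.4]        # their standard deviations
threshold = 3.0         # required value of the motion metric

# Linearize the metric at the mean (forward finite differences)
h = 1e-6
g0 = motion_metric(mean)
grads = []
for i in range(len(mean)):
    p = list(mean)
    p[i] += h
    grads.append((motion_metric(p) - g0) / h)

# First-order variance of the metric, reliability index, failure probability
var_g = sum((g * s) ** 2 for g, s in zip(grads, std))
beta = (g0 - threshold) / math.sqrt(var_g)
p_fail = 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))
```

The failure probability here is the chance that the delivered-energy metric falls below the threshold, given the assumed input uncertainty.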

  2. Fault Detection for Nonlinear Process With Deterministic Disturbances: A Just-In-Time Learning Based Data Driven Method.

    PubMed

    Yin, Shen; Gao, Huijun; Qiu, Jianbin; Kaynak, Okyay

    2017-11-01

    Data-driven fault detection plays an important role in industrial systems due to its applicability in case of unknown physical models. In fault detection, disturbances must be taken into account as an inherent characteristic of processes. Nevertheless, fault detection for nonlinear processes with deterministic disturbances still receives little attention, especially in the data-driven field. To solve this problem, a just-in-time learning-based data-driven (JITL-DD) fault detection method for nonlinear processes with deterministic disturbances is proposed in this paper. JITL-DD employs a JITL scheme for process description, with local model structures to cope with process dynamics and nonlinearity. The proposed method provides a data-driven fault detection solution for nonlinear processes with deterministic disturbances, and offers inherent online adaptation and high fault detection accuracy. Two nonlinear systems, i.e., a numerical example and a sewage treatment process benchmark, are employed to show the effectiveness of the proposed method.
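A minimal sketch of the JITL idea (not the authors' JITL-DD implementation): for each query point, fit a local affine model to its nearest historical neighbours and flag a fault when the prediction residual is large. The process model and the fault below are invented for illustration:

```python
import numpy as np

def jitl_residual(X_hist, y_hist, x_query, y_query, k=20):
    """Just-in-time learning: fit a local affine model on the k nearest
    historical samples and return the prediction residual at the query."""
    d = np.linalg.norm(X_hist - x_query, axis=1)
    idx = np.argsort(d)[:k]
    Xk = np.hstack([X_hist[idx], np.ones((k, 1))])
    coef, *_ = np.linalg.lstsq(Xk, y_hist[idx], rcond=None)
    y_pred = np.append(x_query, 1.0) @ coef
    return abs(y_query - y_pred)

# Invented nonlinear process: y = sin(u1) + u2^2 + small noise
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, (500, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.01 * rng.standard_normal(500)

x_new = np.array([0.2, -0.3])
y_normal = np.sin(0.2) + 0.09        # fault-free measurement
y_faulty = y_normal + 0.5            # additive sensor fault

r_normal = jitl_residual(X, y, x_new, y_normal)
r_faulty = jitl_residual(X, y, x_new, y_faulty)
# A threshold between the two residuals detects the fault
```

Because the local model is refit at every query, the detector adapts online to the operating point, which is the property the abstract emphasizes.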

  3. Deterministic chaotic dynamics of Raba River flow (Polish Carpathian Mountains)

    NASA Astrophysics Data System (ADS)

    Kędra, Mariola

    2014-02-01

    Is the underlying dynamics of river flow random or deterministic? If it is deterministic, is it chaotic? This issue is still controversial. Applying several independent methods, techniques and tools to daily river flow data gives consistent, reliable and clear-cut answers to these questions. The results indicate that the investigated discharge dynamics is not random but deterministic. Moreover, they fully confirm the nonlinear deterministic chaotic nature of the studied process. The research was conducted on daily discharge records from two selected gauging stations on a mountain river in southern Poland, the Raba River.
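One of the standard tools behind such a determinism test is one-step nonlinear prediction in a delay-embedded space: a deterministic series is locally predictable, while a shuffled version of it is not. A sketch using the logistic map as a stand-in for discharge data (the paper's actual methods and parameters are not reproduced here):

```python
import numpy as np

def embed(x, dim=3, tau=1):
    """Time-delay embedding of a scalar series."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def prediction_error(x, dim=3, tau=1):
    """Normalised one-step nearest-neighbour prediction error: values much
    smaller than 1 indicate short-time determinism."""
    E = embed(x, dim, tau)
    last = (dim - 1) * tau
    errs = []
    for i in range(len(E) - 1):
        d = np.linalg.norm(E - E[i], axis=1)
        d[max(0, i - 5) : i + 6] = np.inf     # exclude temporal neighbours
        j = int(np.argmin(d[:-1]))
        errs.append((x[j + last + 1] - x[i + last + 1]) ** 2)
    return float(np.mean(errs) / np.var(x))

# Chaotic series (logistic map) versus its random shuffle
rng = np.random.default_rng(2)
x = np.empty(1000)
x[0] = 0.37
for t in range(1, 1000):
    x[t] = 3.99 * x[t - 1] * (1.0 - x[t - 1])

err_chaos = prediction_error(x)
err_noise = prediction_error(rng.permutation(x))
```

The chaotic series yields a prediction error far below that of its shuffle, the kind of clear-cut contrast the abstract refers to.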

  4. Combining deterministic and stochastic velocity fields in the analysis of deep crustal seismic data

    NASA Astrophysics Data System (ADS)

    Larkin, Steven Paul

    Standard crustal seismic modeling obtains deterministic velocity models which ignore the effects of wavelength-scale heterogeneity, known to exist within the Earth's crust. Stochastic velocity models are a means to include wavelength-scale heterogeneity in the modeling. These models are defined by statistical parameters obtained from geologic maps of exposed crystalline rock, and are thus tied to actual geologic structures. Combining both deterministic and stochastic velocity models into a single model allows a realistic full wavefield (2-D) to be computed. By comparing these simulations to recorded seismic data, the effects of wavelength-scale heterogeneity can be investigated. Combined deterministic and stochastic velocity models are created for two datasets, the 1992 RISC seismic experiment in southeastern California and the 1986 PASSCAL seismic experiment in northern Nevada. The RISC experiment was located in the transition zone between the Salton Trough and the southern Basin and Range province. A high-velocity body previously identified beneath the Salton Trough is constrained to pinch out beneath the Chocolate Mountains to the northeast. The lateral extent of this body is evidence for the ephemeral nature of rifting loci as a continent is initially rifted. Stochastic modeling of wavelength-scale structures above this body indicates that little more than 5% mafic intrusion into a more felsic continental crust is responsible for the observed reflectivity. Modeling of the wide-angle RISC data indicates that coda waves following PmP are initially dominated by diffusion of energy out of the near-surface basin as the wavefield reverberates within this low-velocity layer. At later times, this coda consists of scattered body waves and P to S conversions. Surface waves do not play a significant role in this coda. 
Modeling of the PASSCAL dataset indicates that a high-gradient crust-mantle transition zone or a rough Moho interface is necessary to reduce precritical PmP energy. Possibly related, inconsistencies in published velocity models are rectified by hypothesizing the existence of large, elongate, high-velocity bodies at the base of the crust, oriented parallel to and of similar scale as the basins and ranges at the surface. Such structure would result in an anisotropic lower crust.
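A combined deterministic-plus-stochastic velocity model of the kind described can be sketched by adding band-limited random heterogeneity to a smooth background (here with a Gaussian autocorrelation; studies of this type often use von Karman statistics instead). All grid dimensions and statistical parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# Deterministic background: velocity increasing linearly with depth
nz, nx, d = 128, 128, 50.0                   # grid size and spacing (m)
depth = np.arange(nz) * d
v_det = np.tile((5000.0 + 0.5 * depth)[:, None], (1, nx))   # m/s

# Stochastic component: correlated heterogeneity from spectrally
# filtered white noise (Gaussian autocorrelation, correlation length a)
a, sigma_v = 400.0, 150.0                    # correlation length (m), std (m/s)
kz = 2.0 * np.pi * np.fft.fftfreq(nz, d)
kx = 2.0 * np.pi * np.fft.fftfreq(nx, d)
k2 = kz[:, None] ** 2 + kx[None, :] ** 2
amp = np.exp(-k2 * a ** 2 / 8.0)             # sqrt of a Gaussian spectrum
noise = np.fft.fft2(rng.standard_normal((nz, nx)))
field = np.real(np.fft.ifft2(noise * amp))
field *= sigma_v / field.std()               # impose the target std dev

v = v_det + field                            # combined velocity model
```

The deterministic part carries the long-wavelength structure constrained by the data; the stochastic part supplies the wavelength-scale scatterers responsible for coda and reflectivity.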

  5. Comparison of methods used to estimate conventional undiscovered petroleum resources: World examples

    USGS Publications Warehouse

    Ahlbrandt, T.S.; Klett, T.R.

    2005-01-01

    Various methods for assessing undiscovered oil, natural gas, and natural gas liquid resources were compared in support of the USGS World Petroleum Assessment 2000. Discovery process, linear fractal, parabolic fractal, engineering estimates, PETRIMES, Delphi, and the USGS 2000 methods were compared. Three comparisons of these methods were made in: (1) the Neuquen Basin province, Argentina (different assessors, same input data); (2) provinces in North Africa, Oman, and Yemen (same assessors, different methods); and (3) the Arabian Peninsula, Arabian (Persian) Gulf, and North Sea (different assessors, different methods). A fourth comparison (same assessors, same assessment methods but different geologic models), between results from structural and stratigraphic assessment units in the North Sea, used only the USGS 2000 method, and hence compared the type of assessment unit rather than the method. In comparing methods, differences arise from inherent differences in assumptions regarding: (1) the underlying distribution of the parent field population (all fields, discovered and undiscovered); (2) the population of fields being estimated, that is, the entire parent distribution or the undiscovered resource distribution; (3) inclusion or exclusion of large outlier fields; (4) inclusion or exclusion of field (reserve) growth; (5) deterministic or probabilistic models; (6) data requirements; and (7) scale and time frame of the assessment. Discovery process, Delphi subjective consensus, and the USGS 2000 method yield comparable results because similar procedures are employed. In mature areas such as the Neuquen Basin province in Argentina, the linear and parabolic fractal and engineering methods were conservative compared to the other five methods and relative to new reserve additions there since 1995. The PETRIMES method gave the most optimistic estimates in the Neuquen Basin. 
In less mature areas, the linear fractal method yielded larger estimates relative to other methods. A geologically based model, such as one using the total petroleum system approach, is preferred in that it combines the elements of petroleum source, reservoir, trap and seal with the tectono-stratigraphic history of basin evolution to assess petroleum resource potential. Care must be taken to demonstrate that homogeneous populations in terms of geology, geologic risk, exploration, and discovery processes are used in the assessment process. The USGS 2000 method (7th Approximation Model, EMC computational program) is robust; that is, it can be used in both mature and immature areas, and provides comparable results when using different geologic models (e.g. stratigraphic or structural) with differing numbers of subdivisions (assessment units) within the total petroleum system. © 2005 International Association for Mathematical Geology.

  6. Deterministic Computer-Controlled Polishing Process for High-Energy X-Ray Optics

    NASA Technical Reports Server (NTRS)

    Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian

    2010-01-01

    A deterministic computer-controlled polishing process for large X-ray mirror mandrels is presented. Using the tool's influence function and material removal rates extracted from polishing experiments, design considerations for polishing laps and optimized operating parameters are discussed.

  7. Stochastic assembly in a subtropical forest chronosequence: evidence from contrasting changes of species, phylogenetic and functional dissimilarity over succession.

    PubMed

    Mi, Xiangcheng; Swenson, Nathan G; Jia, Qi; Rao, Mide; Feng, Gang; Ren, Haibao; Bebber, Daniel P; Ma, Keping

    2016-09-07

    Deterministic and stochastic processes jointly determine the community dynamics of forest succession. However, it has been widely held in previous studies that deterministic processes dominate forest succession. Furthermore, inference of mechanisms for community assembly may be misleading if based on a single axis of diversity alone. In this study, we evaluated the relative roles of deterministic and stochastic processes along a disturbance gradient by integrating species, functional, and phylogenetic beta diversity in a subtropical forest chronosequence in Southeastern China. We found a general pattern of increasing species turnover, but little-to-no change in phylogenetic and functional turnover over succession at two spatial scales. Meanwhile, the phylogenetic and functional beta diversity were not significantly different from random expectation. This result suggested a dominance of stochastic assembly, contrary to the general expectation that deterministic processes dominate forest succession. On the other hand, we found significant interactions of environment and disturbance and limited evidence for significant deviations of phylogenetic or functional turnover from random expectations for different size classes. This result provided weak evidence of deterministic processes over succession. Stochastic assembly of forest succession suggests that post-disturbance restoration may be largely unpredictable and difficult to control in subtropical forests.

  8. Application of plurigaussian simulation to delineate the layout of alteration domains in Sungun copper deposit

    NASA Astrophysics Data System (ADS)

    Talebi, Hassan; Asghari, Omid; Emery, Xavier

    2013-12-01

    An accurate estimation of mineral grades in ore deposits with heterogeneous spatial variations requires defining geological domains that differentiate the types of mineralogy, alteration and lithology. Deterministic models define the layout of the domains based on the interpretation of the drill holes and do not take into account the uncertainty in areas with fewer data. Plurigaussian simulation (PGS) can be an alternative to generate multiple numerical models of the ore body, with the aim of assessing the uncertainty in the domain boundaries and improving the geological controls in the characterization of quantitative attributes. This study addresses the application of PGS to Sungun porphyry copper deposit (Iran), in order to simulate the layout of four hypogene alteration zones: potassic, phyllic, propylitic and argillic. The aim of this study is to construct numerical models in which the alteration structures reflect the evolution observed in the geology.
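The truncation idea at the core of plurigaussian simulation can be sketched in one dimension with a single Gaussian field (full PGS uses two or more correlated fields and a truncation rule fitted to the contact relationships between domains). The proportions and correlation length below are made up:

```python
import numpy as np

rng = np.random.default_rng(7)

# Correlated 1-D Gaussian field: moving-average smoothing of white noise
n, w = 2000, 41
z = np.convolve(rng.standard_normal(n + w - 1), np.ones(w) / np.sqrt(w), "valid")

# Truncate at thresholds chosen for target domain proportions 30/40/30 %
t1, t2 = np.quantile(z, [0.30, 0.70])
domains = np.digitize(z, [t1, t2])    # 0, 1, 2 -> three alteration domains

# Domains inherit the field's spatial continuity: few boundaries
transitions = int((np.diff(domains) != 0).sum())
```

Because the underlying field is spatially correlated, the simulated domains form coherent bodies rather than salt-and-pepper noise, and repeated simulations with different seeds quantify the uncertainty in the domain boundaries.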

  9. Soil pH mediates the balance between stochastic and deterministic assembly of bacteria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tripathi, Binu M.; Stegen, James C.; Kim, Mincheol

    Little is known about the factors affecting the relative influence of the stochastic and deterministic processes that govern the assembly of microbial communities in successional soils. Here, we conducted a meta-analysis of bacterial communities using six different successional soil data sets, scattered across different regions, with different pH conditions in early and late successional soils. We found that soil pH was the best predictor of bacterial community assembly and the relative importance of stochastic and deterministic processes along successional soils. Extreme acidic or alkaline pH conditions lead to assembly of phylogenetically more clustered bacterial communities through deterministic processes, whereas pH conditions close to neutral lead to phylogenetically less clustered bacterial communities with more stochasticity. We suggest that the influence of pH, rather than successional age, is the main driving force in producing trends in phylogenetic assembly of bacteria, and that pH also influences the relative balance of stochastic and deterministic processes along successional soils. Given that pH had a much stronger association with community assembly than did successional age, we evaluated whether the inferred influence of pH was maintained when studying globally-distributed samples collected without regard for successional age. This dataset confirmed the strong influence of pH, suggesting that the influence of soil pH on community assembly processes occurs globally. Extreme pH conditions likely exert more stringent limits on survival and fitness, imposing strong selective pressures through ecological and evolutionary time. Taken together, these findings suggest that the degree to which stochastic vs. deterministic processes shape soil bacterial community assembly is a consequence of soil pH rather than successional age.

  10. Stochastic Seismic Inversion and Migration for Offshore Site Investigation in the Northern Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Son, J.; Medina-Cetina, Z.

    2017-12-01

    We discuss the comparison between deterministic and stochastic optimization approaches to the nonlinear geophysical full-waveform inverse problem, based on seismic survey data from Mississippi Canyon in the Northern Gulf of Mexico. Since subsea engineering and offshore construction projects require reliable ground models from site investigations, the primary goal of this study is to reconstruct accurate subsurface information on the soil and rock profiles under the seafloor. The shallow sediment layers are naturally heterogeneous formations which may cause marine landslides or foundation failures of underwater infrastructure. We chose the quasi-Newton method and simulated annealing as the deterministic and stochastic optimization algorithms, respectively. Seismic forward modeling, based on a finite-difference method with absorbing boundary conditions, implements the iterative simulations in the inverse modeling. We briefly report on numerical experiments using synthetic data for an offshore ground model containing shallow artificial target profiles of geomaterials under the seafloor. We apply seismic migration processing and generate a Voronoi tessellation of the two-dimensional space domain to improve the computational efficiency of reconstructing the stratigraphic velocity model. We then report on the details of a field data implementation, which shows the complex geologic structures in the Northern Gulf of Mexico. Lastly, we compare the new inverted image of subsurface site profiles in the space domain with the previously processed seismic image in the time domain at the same location. Overall, stochastic optimization for seismic inversion with migration and Voronoi tessellation shows significant promise for improving the subsurface imaging of ground models and the computational efficiency required for full waveform inversion. 
We anticipate that improving the inversion of shallow layers from geophysical data will better support offshore site investigation.
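Simulated annealing for a toy velocity inversion can be sketched as follows; the three-layer travel-time forward model, bounds, and cooling schedule are invented for illustration and are far simpler than the finite-difference full-waveform modeling the abstract describes:

```python
import math
import random

random.seed(3)

# Toy 1-D forward model: vertical two-way travel time to each layer base
thickness = [100.0, 150.0, 200.0]              # m, assumed known
true_v = [1600.0, 1850.0, 2100.0]              # m/s, the unknown target

def travel_times(v):
    t, out = 0.0, []
    for h, vi in zip(thickness, v):
        t += 2.0 * h / vi
        out.append(t)
    return out

observed = travel_times(true_v)                # noise-free synthetic data

def misfit(v):
    return sum((a - b) ** 2 for a, b in zip(travel_times(v), observed))

# Simulated annealing with a bounded prior on the velocities
v = [1500.0, 1500.0, 1500.0]
cost, T = misfit(v), 1.0
for _ in range(20000):
    cand = [vi + random.gauss(0.0, 20.0) for vi in v]
    if all(1000.0 < c < 3000.0 for c in cand):
        c = misfit(cand)
        if c < cost or random.random() < math.exp((cost - c) / T):
            v, cost = cand, c
    T *= 0.9995                                # geometric cooling schedule
```

The occasional acceptance of worse models while the temperature is high is what lets the stochastic search escape the local minima that trap gradient-based (quasi-Newton) inversion.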

  11. Aboveground and belowground arthropods experience different relative influences of stochastic versus deterministic community assembly processes following disturbance

    PubMed Central

    Martinez, Alexander S.; Faist, Akasha M.

    2016-01-01

    Background Understanding patterns of biodiversity is a longstanding challenge in ecology. Similar to other biotic groups, arthropod community structure can be shaped by deterministic and stochastic processes, with limited understanding of what moderates the relative influence of these processes. Disturbances have been noted to alter the relative influence of deterministic and stochastic processes on community assembly in various study systems, implicating ecological disturbances as a potential moderator of these forces. Methods Using a disturbance gradient along a 5-year chronosequence of insect-induced tree mortality in a subalpine forest of the southern Rocky Mountains, Colorado, USA, we examined changes in community structure and relative influences of deterministic and stochastic processes in the assembly of aboveground (surface and litter-active species) and belowground (species active in organic and mineral soil layers) arthropod communities. Arthropods were sampled for all years of the chronosequence via pitfall traps (aboveground community) and modified Winkler funnels (belowground community) and sorted to morphospecies. The structure of both communities was assessed via comparisons of morphospecies abundance, diversity, and composition. Assembly processes were inferred from a mixture of linear models and matrix correlations testing for community associations with environmental properties, and from null-deviation models comparing observed vs. expected levels of species turnover (beta diversity) among samples. Results Tree mortality altered community structure in both aboveground and belowground arthropod communities, but null models suggested that aboveground communities experienced greater relative influences of deterministic processes, while the relative influence of stochastic processes increased for belowground communities. 
Additionally, Mantel tests and linear regression models revealed significant associations between the aboveground arthropod communities and vegetation and soil properties, but no significant association among belowground arthropod communities and environmental factors. Discussion Our results suggest context-dependent influences of stochastic and deterministic community assembly processes across different fractions of a spatially co-occurring ground-dwelling arthropod community following disturbance. This variation in assembly may be linked to contrasting ecological strategies and dispersal rates within above- and below-ground communities. Our findings add to a growing body of evidence indicating concurrent influences of stochastic and deterministic processes in community assembly, and highlight the need to consider potential variation across different fractions of biotic communities when testing community ecology theory and considering conservation strategies. PMID:27761333
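The null-deviation logic used in studies like this can be sketched for a pair of sites: compare the observed turnover with the distribution obtained by shuffling species occurrences while preserving each site's richness. The community matrices below are synthetic:

```python
import numpy as np

def jaccard(a, b):
    """Jaccard dissimilarity between two presence/absence vectors."""
    shared = np.sum((a > 0) & (b > 0))
    union = np.sum((a > 0) | (b > 0))
    return 1.0 - shared / union

def null_deviation(site1, site2, n_null=500, seed=0):
    """Standardised deviation of observed turnover from a null model that
    shuffles species occurrences while preserving each site's richness."""
    rng = np.random.default_rng(seed)
    obs = jaccard(site1, site2)
    nulls = np.empty(n_null)
    for k in range(n_null):
        s1, s2 = site1.copy(), site2.copy()
        rng.shuffle(s1)
        rng.shuffle(s2)
        nulls[k] = jaccard(s1, s2)
    return float((obs - nulls.mean()) / nulls.std())

# 20-species pool; a deterministic pair (identical composition) and a
# random pair (independent draws of 8 species each)
site_a = np.zeros(20, dtype=int)
site_a[:8] = 1
site_b = site_a.copy()
site_c = np.zeros(20, dtype=int)
site_c[np.random.default_rng(9).choice(20, 8, replace=False)] = 1

dev_identical = null_deviation(site_a, site_b)   # strongly negative
dev_random = null_deviation(site_a, site_c)      # typically much closer to zero
```

Deviations far from zero indicate deterministic assembly (environmental filtering or competition); deviations near zero are consistent with stochastic assembly, the pattern reported here for the belowground community.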

  12. The Handling of Hazard Data on a National Scale: A Case Study from the British Geological Survey

    NASA Astrophysics Data System (ADS)

    Royse, Katherine R.

    2011-11-01

    This paper reviews how hazard data and geological map data have been combined by the British Geological Survey (BGS) to produce a set of GIS-based national-scale hazard susceptibility maps for the UK. This work has been carried out over the last 9 years and as such reflects the combined outputs of a large number of researchers at BGS. The paper details the inception of these datasets from the development of the seamless digital geological map in 2001 through to the deterministic 2D hazard models produced today. These datasets currently include landslides, shrink-swell, soluble rocks, compressible and collapsible deposits, groundwater flooding, geological indicators of flooding, radon potential and potentially harmful elements in soil. These models have been created using a combination of expert knowledge (from both within BGS and from outside bodies such as the Health Protection Agency), national databases (which contain data collected over the past 175 years), multi-criteria analysis within geographical information systems and a flexible rule-based approach for each individual geohazard. By using GIS in this way, it has been possible to model the distribution and degree of geohazards across the whole of Britain.
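The multi-criteria, rule-based GIS approach can be sketched as a weighted overlay of raster layers followed by classification into susceptibility bands. The layers, weights, and thresholds below are illustrative, not the BGS rules:

```python
import numpy as np

# Hypothetical input layers on a common 3x3 raster (illustrative values)
clay_fraction = np.array([[0.10, 0.45, 0.60],
                          [0.30, 0.55, 0.20],
                          [0.05, 0.35, 0.70]])
depth_to_water_m = np.array([[12.0, 3.0, 1.5],
                             [8.0, 2.0, 15.0],
                             [20.0, 6.0, 0.5]])

# Weighted multi-criteria score: clay content weighted more heavily than
# shallow groundwater (weights and rules invented for illustration)
shallow_water = np.clip(1.0 - depth_to_water_m / 10.0, 0.0, 1.0)
score = 0.7 * clay_fraction + 0.3 * shallow_water

# Classify into susceptibility bands 0 (low), 1 (moderate), 2 (high)
bands = np.digitize(score, [0.25, 0.5])
```

In a national-scale workflow each hazard would have its own layers and expert-derived rules, but the pattern (overlay, weight, classify) is the same.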

  13. New Inversion and Interpretation of Public-Domain Electromagnetic Survey Data from Selected Areas in Alaska

    NASA Astrophysics Data System (ADS)

    Smith, B. D.; Kass, A.; Saltus, R. W.; Minsley, B. J.; Deszcz-Pan, M.; Bloss, B. R.; Burns, L. E.

    2013-12-01

    Public-domain airborne geophysical surveys (combined electromagnetics and magnetics), mostly collected for and released by the State of Alaska, Division of Geological and Geophysical Surveys (DGGS), are a unique and valuable resource for both geologic interpretation and geophysical methods development. A new joint effort by the US Geological Survey (USGS) and the DGGS aims to add value to these data through the application of novel advanced inversion methods and through innovative and intuitive display of data: maps, profiles, voxel-based models, and displays of estimated inversion quality and confidence. Our goal is to make these data even more valuable for interpretation of geologic frameworks, geotechnical studies, and cryosphere studies, by producing robust estimates of subsurface resistivity that can be used by non-geophysicists. The available datasets include 39 frequency-domain electromagnetic datasets collected since 1993, and continue to grow, with five more data releases pending in 2013. The majority of these datasets were flown for mineral resource purposes, with one survey designed for infrastructure analysis. In addition, several USGS datasets are included in this study. The USGS has recently developed new inversion methodologies for airborne EM data and has begun to apply these and other new techniques to the available datasets. These include a trans-dimensional Markov chain Monte Carlo technique, laterally constrained regularized inversions, and deterministic inversions which include calibration factors as a free parameter. Incorporation of the magnetic data as an additional constraining dataset has also improved the inversion results. Processing has been completed in several areas, including the Fortymile and Alaska Highway surveys, and continues in others, such as the Styx River and Nome surveys. 
Utilizing these new techniques, we provide models beyond the apparent resistivity maps supplied by the original contractors, allowing us to produce a variety of products, such as maps of resistivity as a function of depth or elevation, cross section maps, and 3D voxel models, which have been treated consistently both in terms of processing and error analysis throughout the state. These products facilitate a more fruitful exchange between geologists and geophysicists and a better understanding of uncertainty, and the process results in iterative development and improvement of geologic models, both on small and large scales.

  14. Probabilistic inversion of electrical resistivity data from bench-scale experiments: On model parameterization for CO2 sequestration monitoring

    NASA Astrophysics Data System (ADS)

    Breen, S. J.; Lochbuehler, T.; Detwiler, R. L.; Linde, N.

    2013-12-01

    Electrical resistivity tomography (ERT) is a well-established method for geophysical characterization and has shown potential for monitoring geologic CO2 sequestration, due to its sensitivity to electrical resistivity contrasts generated by liquid/gas saturation variability. In contrast to deterministic ERT inversion approaches, probabilistic inversion provides not only a single saturation model but a full posterior probability density function for each model parameter. Furthermore, the uncertainty inherent in the underlying petrophysics (e.g., Archie's Law) can be incorporated in a straightforward manner. In this study, the data are from bench-scale ERT experiments conducted during gas injection into a quasi-2D (1 cm thick), translucent, brine-saturated sand chamber with a packing that mimics a simple anticlinal geological reservoir. We estimate saturation fields by Markov chain Monte Carlo sampling with the MT-DREAM(ZS) algorithm and compare them quantitatively to independent saturation measurements from a light transmission technique, as well as results from deterministic inversions. Different model parameterizations are evaluated in terms of the recovered saturation fields and petrophysical parameters. The saturation field is parameterized (1) in Cartesian coordinates, (2) by means of its discrete cosine transform coefficients, and (3) by fixed saturation values and gradients in structural elements defined by a Gaussian bell of arbitrary shape and location. Synthetic tests reveal that a priori knowledge about the expected geologic structures (as in parameterization (3)) markedly improves the parameter estimates. The number of degrees of freedom thus strongly affects the inversion results. In an additional step, we explore the effects of assuming that the total volume of injected gas is known a priori and that no gas has migrated away from the monitored region.
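The probabilistic inversion machinery can be sketched for a single parameter: Metropolis sampling of water saturation under Archie's law, yielding a posterior distribution rather than one estimate. The petrophysical constants and noise level are invented, and the study used the MT-DREAM(ZS) sampler, not plain Metropolis:

```python
import math
import random

random.seed(4)

# Archie's law: resistivity = rho_w * phi^-m * Sw^-n (hypothetical constants)
RHO_W, PHI, M_EXP, N_EXP = 0.2, 0.35, 1.8, 2.0

def archie(sw):
    return RHO_W * PHI ** -M_EXP * sw ** -N_EXP

true_sw = 0.6
sigma = 0.5                                  # data noise std (ohm-m)
obs = archie(true_sw) + 0.1                  # one noisy "measurement"

def log_post(sw):
    """Gaussian likelihood with a uniform prior on (0.05, 1)."""
    if not 0.05 < sw < 1.0:
        return -math.inf
    return -0.5 * ((obs - archie(sw)) / sigma) ** 2

# Metropolis sampling of the water-saturation posterior
samples, sw, lp = [], 0.5, log_post(0.5)
for _ in range(20000):
    cand = sw + random.gauss(0.0, 0.05)
    lp_c = log_post(cand)
    if lp_c - lp > math.log(random.random()):
        sw, lp = cand, lp_c
    samples.append(sw)

post = samples[5000:]                        # discard burn-in
mean_sw = sum(post) / len(post)
```

The spread of `post` is the saturation uncertainty; in the full problem every cell (or transform coefficient) of the saturation field, plus the Archie parameters themselves, would be sampled jointly.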

  15. Quasi-Static Probabilistic Structural Analyses Process and Criteria

    NASA Technical Reports Server (NTRS)

    Goldberg, B.; Verderaime, V.

    1999-01-01

    Current deterministic structural methods are easily applied to substructures and components, and analysts have built great design insights and confidence in them over the years. However, deterministic methods cannot support systems risk analyses, and it was recently reported that deterministic treatment of statistical data is inconsistent with error propagation laws, which can result in unevenly conservative structural predictions. Assuming normal distributions and using statistical data formats throughout the prevailing deterministic stress processes leads to a safety factor in statistical format which, when integrated into the safety index, provides a relationship between the safety factor and first-order reliability. The safety factor embedded in the safety index expression allows a historically based risk to be determined and verified over a variety of quasi-static metallic substructures, consistent with traditional safety factor methods and NASA Std. 5001 criteria.
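The safety-factor/first-order-reliability relationship can be illustrated under the normality assumption: for resistance R and stress S, the safety index is beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2), and the first-order failure probability is Phi(-beta). The numbers below are hypothetical:

```python
import math

def safety_index(mu_r, sigma_r, mu_s, sigma_s):
    """beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2)."""
    return (mu_r - mu_s) / math.hypot(sigma_r, sigma_s)

def failure_probability(beta):
    """First-order reliability: P(R < S) = Phi(-beta)."""
    return 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))

# Hypothetical case: mean resistance 60, mean stress 40 (same units),
# i.e. a central safety factor of 60/40 = 1.5
beta = safety_index(60.0, 4.0, 40.0, 3.0)
p_f = failure_probability(beta)
```

The same central safety factor of 1.5 can correspond to very different failure probabilities depending on the scatter, which is the point of carrying the statistical format through the analysis.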

  16. On the generation of tangential ground motion by underground explosions in jointed rocks

    NASA Astrophysics Data System (ADS)

    Vorobiev, Oleg; Ezzedine, Souheil; Antoun, Tarabay; Glenn, Lewis

    2015-03-01

    This paper describes computational studies of tangential ground motions generated by spherical explosions in a heavily jointed granite formation. Various factors affecting the shear wave generation are considered, including joint spacing, orientation and frictional properties. Simulations are performed both in 2-D, for a single joint set, to elucidate the basic response mechanisms, and in 3-D, for multiple joint sets, to represent in situ conditions in a realistic geological setting. The joints are modelled explicitly using both contact elements and weakness planes in the material. Simulations are performed both deterministically and stochastically to quantify the effects of geological uncertainties on near-field ground motions. The mechanical properties of the rock and the joints, as well as the joint spacing and orientation, are taken from experimental test data and geophysical logs corresponding to the Climax Stock granitic outcrop, which is the geological setting of the Source Physics Experiment (SPE). Agreement between simulation results and near-field wave motion data from the SPE enables newfound understanding of the origin and extent of non-spherical motions associated with underground explosions in fractured geological media.
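The deterministic-versus-stochastic comparison can be sketched as Monte Carlo sampling over uncertain joint properties; the closed-form "motion proxy" below merely stands in for a full hydrocode run, and all distributions are illustrative:

```python
import math
import random
import statistics

random.seed(6)

def tangential_motion_proxy(spacing_m, friction, dip_deg):
    """Toy proxy (not a hydrocode): closer spacing, lower friction and
    steeper dip all increase the tangential-motion measure."""
    return math.sin(math.radians(dip_deg)) / (friction * spacing_m)

# Deterministic run with best-estimate joint parameters
det = tangential_motion_proxy(0.5, 0.6, 60.0)

# Stochastic runs sampling the geological uncertainty around them
runs = [
    tangential_motion_proxy(
        random.lognormvariate(math.log(0.5), 0.3),   # joint spacing (m)
        random.uniform(0.4, 0.8),                    # friction coefficient
        random.gauss(60.0, 10.0),                    # dip angle (degrees)
    )
    for _ in range(5000)
]
mean_run = statistics.mean(runs)
spread = statistics.pstdev(runs)
```

The spread of the stochastic ensemble around the deterministic run is the quantity of interest: it bounds how much of the observed non-spherical motion could be attributed to geological uncertainty rather than source physics.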

  17. Enterprise resource planning for hospitals.

    PubMed

    van Merode, Godefridus G; Groothuis, Siebren; Hasman, Arie

    2004-06-30

Integrated hospitals need a central planning and control system to plan patients' processes and the required capacity. Given the changes in healthcare, one can ask what type of information system can best support these healthcare delivery organizations. In this review we focus on the potential of enterprise resource planning (ERP) systems for healthcare delivery organizations. First, ERP systems are explained. An overview is then presented of the characteristics of the planning process in hospital environments. Problems with ERP that are due to the special characteristics of healthcare are presented. The situations in which ERP can or cannot be used are discussed. It is suggested that hospitals be divided into a part concerned only with deterministic processes and a part concerned with non-deterministic processes. ERP can be very useful for planning and controlling the deterministic processes.

  18. Solving difficult problems creatively: a role for energy optimised deterministic/stochastic hybrid computing

    PubMed Central

    Palmer, Tim N.; O’Shea, Michael

    2015-01-01

How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete. PMID:26528173

  19. Changing contributions of stochastic and deterministic processes in community assembly over a successional gradient.

    PubMed

    Måren, Inger Elisabeth; Kapfer, Jutta; Aarrestad, Per Arild; Grytnes, John-Arvid; Vandvik, Vigdis

    2018-01-01

Successional dynamics in plant community assembly may result from both deterministic and stochastic ecological processes. The relative importance of different ecological processes is expected to vary over the successional sequence, between different plant functional groups, and with the disturbance levels and land-use management regimes of the successional systems. We evaluate the relative importance of stochastic and deterministic processes in bryophyte and vascular plant community assembly after fire in grazed and ungrazed anthropogenic coastal heathlands in Northern Europe. A replicated series of post-fire successions (n = 12) was initiated under grazed and ungrazed conditions, and vegetation data were recorded in permanent plots over 13 years. We used redundancy analysis (RDA) to test for deterministic successional patterns in species composition repeated across the replicate successional series, and analyses of co-occurrence to evaluate to what extent species respond synchronously along the successional gradient. Change in species co-occurrences over succession indicates stochastic successional dynamics at the species level (i.e., species equivalence), whereas constancy in co-occurrence indicates deterministic dynamics (successional niche differentiation). The RDA shows strong and deterministic change in vascular plant community composition, especially early in succession. Co-occurrence analyses indicate stochastic species-level dynamics during the first two years, which then give way to more deterministic replacements. Grazed and ungrazed successions are similar, but early-stage stochasticity is higher in ungrazed areas. Bryophyte communities in ungrazed successions resemble vascular plant communities. In contrast, bryophytes in grazed successions showed consistently high stochasticity and low determinism in both community composition and species co-occurrence. 
In conclusion, stochastic and individualistic species responses early in succession give way to more niche-driven dynamics in later successional stages. Grazing reduces predictability in both successional trends and species-level dynamics, especially in plant functional groups that are not well adapted to disturbance. © 2017 The Authors. Ecology, published by Wiley Periodicals, Inc., on behalf of the Ecological Society of America.
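The co-occurrence analysis this record describes can be illustrated with a minimal null-model sketch. The data, the C-score metric, and the within-species shuffle below are illustrative stand-ins, not the authors' actual protocol:

```python
import numpy as np

def c_score(pa):
    """Mean number of checkerboard units over all species pairs of a
    presence/absence matrix (rows = species, columns = sites/plots)."""
    s, _ = pa.shape
    total, pairs = 0.0, 0
    for i in range(s):
        for j in range(i + 1, s):
            shared = np.sum(pa[i] & pa[j])
            ri, rj = pa[i].sum(), pa[j].sum()
            total += (ri - shared) * (rj - shared)
            pairs += 1
    return total / pairs

rng = np.random.default_rng(0)
obs = (rng.random((10, 12)) < 0.4).astype(int)   # hypothetical survey matrix

# Null model: shuffle occurrences within each species (fixed row totals,
# equiprobable sites) and recompute the C-score many times.
null = np.array([c_score(np.array([rng.permutation(row) for row in obs]))
                 for _ in range(200)])
p_value = np.mean(null >= c_score(obs))
print(round(c_score(obs), 3), round(p_value, 3))
```

A high observed C-score relative to the null distribution (small p-value) would indicate fewer co-occurrences than expected by chance, i.e. deterministic segregation rather than species equivalence.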

  20. Ordinal optimization and its application to complex deterministic problems

    NASA Astrophysics Data System (ADS)

    Yang, Mike Shang-Yu

    1998-10-01

We present in this thesis a new perspective on a general class of optimization problems characterized by large deterministic complexity. Many problems of real-world concern today lack analyzable structure and almost always involve a high level of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. The Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study of the turbine blade manufacturing process. The problem involves optimizing the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in computing cost.
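The ordinal comparison and goal softening ideas can be sketched in a few lines (all numbers hypothetical): rank designs with a cheap, very noisy surrogate, select the observed top set, and ask how many truly good designs it retains despite the noise.

```python
import numpy as np

rng = np.random.default_rng(1)
n_designs = 1000
true_perf = rng.random(n_designs)                   # unknown "true" cost (lower is better)
noisy = true_perf + rng.normal(0, 0.5, n_designs)   # crude, noisy surrogate evaluation

g, s = 50, 50                                       # good-enough set and selected set sizes
good_enough = set(np.argsort(true_perf)[:g])
selected = set(np.argsort(noisy)[:s])

# Goal softening: we only require that the selected set contains SOME truly
# good designs -- ordinal comparison achieves this with high probability even
# though the cardinal noise (sd 0.5) swamps the true performance range.
alignment = len(good_enough & selected)
print(alignment)
```

The point, as in the thesis, is that order is far more robust to evaluation noise than value: a nonzero alignment is nearly guaranteed even when individual evaluations are almost uninformative.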

  1. PRMS-IV, the precipitation-runoff modeling system, version 4

    USGS Publications Warehouse

    Markstrom, Steven L.; Regan, R. Steve; Hay, Lauren E.; Viger, Roland J.; Webb, Richard M.; Payn, Robert A.; LaFontaine, Jacob H.

    2015-01-01

Computer models that simulate the hydrologic cycle at a watershed scale facilitate assessment of the effects of variability in climate, biota, geology, and human activities on water availability and flow. This report describes an updated version of the Precipitation-Runoff Modeling System. The Precipitation-Runoff Modeling System is a deterministic, distributed-parameter, physical-process-based modeling system developed to evaluate the response of streamflow and general watershed hydrology to various combinations of climate and land use. Several new model components were developed, and all existing components were updated, to enhance performance and supportability. This report describes the history, application, concepts, organization, and mathematical formulation of the Precipitation-Runoff Modeling System and its model components. This updated version provides improvements in (1) system flexibility for integrated science, (2) verification of conservation of water during simulation, (3) methods for spatial distribution of climate boundary conditions, and (4) methods for simulation of soil-water flow and storage.

  2. Relative Roles of Deterministic and Stochastic Processes in Driving the Vertical Distribution of Bacterial Communities in a Permafrost Core from the Qinghai-Tibet Plateau, China.

    PubMed

    Hu, Weigang; Zhang, Qi; Tian, Tian; Li, Dingyao; Cheng, Gang; Mu, Jing; Wu, Qingbai; Niu, Fujun; Stegen, James C; An, Lizhe; Feng, Huyuan

    2015-01-01

Understanding the processes that influence the structure of biotic communities is one of the major ecological topics, and both stochastic and deterministic processes are expected to be at work simultaneously in most communities. Here, we investigated the vertical distribution patterns of bacterial communities in a 10-m-long soil core taken within permafrost of the Qinghai-Tibet Plateau. To get a better understanding of the forces that govern these patterns, we examined the diversity and structure of bacterial communities, and the change in community composition along the vertical distance (spatial turnover) from both taxonomic and phylogenetic perspectives. Measures of taxonomic and phylogenetic beta diversity revealed that bacterial community composition changed continuously along the soil core, and showed a vertical distance-decay relationship. Multiple stepwise regression analysis suggested that bacterial alpha diversity and phylogenetic structure were strongly correlated with soil conductivity and pH but weakly correlated with depth. There was evidence that deterministic and stochastic processes collectively drove the vertically structured pattern of the bacterial communities. Bacterial communities in five soil horizons (two originating from the active layer and three from permafrost) of the permafrost core were phylogenetically random, an indicator of stochastic processes. However, we found a stronger effect of deterministic processes related to soil pH, conductivity, and organic carbon content structuring the bacterial communities. We therefore conclude that the vertical distribution of bacterial communities was governed primarily by deterministic ecological selection, although stochastic processes were also at work. 
Furthermore, the strong impact of environmental conditions (for example, soil physicochemical parameters and seasonal freeze-thaw cycles) on these communities underlines the sensitivity of permafrost microorganisms to climate change and potentially subsequent permafrost thaw.
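The vertical distance-decay relationship reported in this record can be sketched with synthetic data. Everything below (Gaussian species response curves, niche widths, sampling depths) is hypothetical; it merely shows the kind of calculation behind the reported pattern:

```python
import numpy as np

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    return np.abs(x - y).sum() / (x + y).sum()

rng = np.random.default_rng(2)
depths = np.linspace(0, 10, 20)            # sample depths along the core (m)
optima = rng.uniform(0, 10, 30)            # each species' preferred depth
# Gaussian species response curves along the depth gradient (niche width 2 m)
abund = np.exp(-0.5 * ((depths[:, None] - optima[None, :]) / 2.0) ** 2) * 100

dist, diss = [], []
for i in range(len(depths)):
    for j in range(i + 1, len(depths)):
        dist.append(abs(depths[i] - depths[j]))
        diss.append(bray_curtis(abund[i], abund[j]))

# Distance decay: dissimilarity rises (similarity falls) with vertical
# separation, so the regression slope should be positive.
slope, intercept = np.polyfit(dist, diss, 1)
print(round(slope, 4))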

  3. Relative Roles of Deterministic and Stochastic Processes in Driving the Vertical Distribution of Bacterial Communities in a Permafrost Core from the Qinghai-Tibet Plateau, China

    PubMed Central

    Tian, Tian; Li, Dingyao; Cheng, Gang; Mu, Jing; Wu, Qingbai; Niu, Fujun; Stegen, James C.; An, Lizhe; Feng, Huyuan

    2015-01-01

Understanding the processes that influence the structure of biotic communities is one of the major ecological topics, and both stochastic and deterministic processes are expected to be at work simultaneously in most communities. Here, we investigated the vertical distribution patterns of bacterial communities in a 10-m-long soil core taken within permafrost of the Qinghai-Tibet Plateau. To get a better understanding of the forces that govern these patterns, we examined the diversity and structure of bacterial communities, and the change in community composition along the vertical distance (spatial turnover) from both taxonomic and phylogenetic perspectives. Measures of taxonomic and phylogenetic beta diversity revealed that bacterial community composition changed continuously along the soil core, and showed a vertical distance-decay relationship. Multiple stepwise regression analysis suggested that bacterial alpha diversity and phylogenetic structure were strongly correlated with soil conductivity and pH but weakly correlated with depth. There was evidence that deterministic and stochastic processes collectively drove the vertically structured pattern of the bacterial communities. Bacterial communities in five soil horizons (two originating from the active layer and three from permafrost) of the permafrost core were phylogenetically random, an indicator of stochastic processes. However, we found a stronger effect of deterministic processes related to soil pH, conductivity, and organic carbon content structuring the bacterial communities. We therefore conclude that the vertical distribution of bacterial communities was governed primarily by deterministic ecological selection, although stochastic processes were also at work. 
Furthermore, the strong impact of environmental conditions (for example, soil physicochemical parameters and seasonal freeze-thaw cycles) on these communities underlines the sensitivity of permafrost microorganisms to climate change and potentially subsequent permafrost thaw. PMID:26699734

  4. Simultaneous estimation of deterministic and fractal stochastic components in non-stationary time series

    NASA Astrophysics Data System (ADS)

    García, Constantino A.; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G.

    2018-07-01

In the past few decades, it has been recognized that 1/f fluctuations are ubiquitous in nature. The most widely used mathematical models to capture the long-term memory properties of 1/f fluctuations have been stochastic fractal models. However, physical systems do not usually consist of just stochastic fractal dynamics; they often also show some degree of deterministic behavior. The present paper proposes a model based on fractal stochastic and deterministic components that can provide a valuable basis for the study of complex systems with long-term correlations. The fractal stochastic component is assumed to be a fractional Brownian motion process and the deterministic component is assumed to be a band-limited signal. We also provide a method that, under the assumptions of this model, is able to characterize the fractal stochastic component and to provide an estimate of the deterministic components present in a given time series. The method is based on a Bayesian wavelet shrinkage procedure that exploits the self-similar properties of the fractal processes in the wavelet domain. The method has been validated on simulated signals and on real signals of economic and biological origin. Real examples illustrate how our model may be useful for exploring the deterministic-stochastic duality of complex systems, and for uncovering interesting patterns present in time series.
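A compact sketch of the signal class this model targets: synthesize 1/f^β noise by spectral shaping, superpose a band-limited deterministic oscillation, and recover β from the periodogram slope. This log-log regression is a deliberately simpler estimator than the Bayesian wavelet shrinkage of the paper, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta = 4096, 1.0   # series length and target spectral exponent

# Spectral synthesis of 1/f^beta noise: random phases, power-law amplitudes
freqs = np.fft.rfftfreq(n)
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:] ** (-beta / 2)
spectrum = amp * np.exp(1j * rng.uniform(0, 2 * np.pi, freqs.size))
stochastic = np.fft.irfft(spectrum, n)

# Superpose a band-limited deterministic component (one slow oscillation)
t = np.arange(n)
signal = stochastic + 0.1 * np.sin(2 * np.pi * t / 512)

# Recover beta from the log-log slope of the periodogram; the narrow-band
# deterministic term perturbs only a single frequency bin.
power = np.abs(np.fft.rfft(signal)) ** 2
slope, _ = np.polyfit(np.log(freqs[1:]), np.log(power[1:]), 1)
print(round(-slope, 2))   # should be close to beta
```

The wavelet-domain approach in the paper generalizes this idea: self-similar processes have log-variances that scale linearly across wavelet scales, which is what makes the stochastic and band-limited deterministic parts separable.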

  5. Disentangling mechanisms that mediate the balance between stochastic and deterministic processes in microbial succession.

    PubMed

    Dini-Andreote, Francisco; Stegen, James C; van Elsas, Jan Dirk; Salles, Joana Falcão

    2015-03-17

    Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages--which provide a larger spatiotemporal scale relative to within stage analyses--revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended--and experimentally testable--conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems.
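The ecological simulation model referenced here is not reproduced in the abstract; a minimal zero-sum community model (all parameters hypothetical) illustrates the underlying idea that tuning selection strength shifts assembly between stochastic drift and deterministic selection:

```python
import numpy as np

def assemble(selection, steps=2000, n=500, n_sp=10, seed=0):
    """Zero-sum birth-death community: each step, one random individual dies
    and is replaced by a birth drawn neutrally (selection=0 -> pure drift)
    or weighted by species fitness (selection>0 -> deterministic selection)."""
    rng = np.random.default_rng(seed)
    fitness = np.linspace(1.0, 2.0, n_sp)    # hypothetical fitness gradient
    comm = rng.integers(0, n_sp, n)          # individuals labelled by species
    for _ in range(steps):
        dead = rng.integers(n)
        counts = np.bincount(comm, minlength=n_sp).astype(float)
        w = counts * fitness ** selection
        comm[dead] = rng.choice(n_sp, p=w / w.sum())
    return np.bincount(comm, minlength=n_sp) / n

# Replicate "successions": under strong selection the fittest species comes to
# dominate in every replicate; under drift, outcomes wander idiosyncratically.
drift = np.array([assemble(0.0, seed=s) for s in range(5)])
select = np.array([assemble(8.0, seed=s) for s in range(5)])
print(round(drift[:, -1].mean(), 2), round(select[:, -1].mean(), 2))
```

Making `selection` depend on an environmental covariate (as sodium concentration does in the salt marsh chronosequence) would reproduce the paper's central pattern: a progressive shift from stochastic to deterministic assembly as the selective environment strengthens.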

  6. Disentangling mechanisms that mediate the balance between stochastic and deterministic processes in microbial succession

    PubMed Central

    Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan Dirk; Salles, Joana Falcão

    2015-01-01

    Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages—which provide a larger spatiotemporal scale relative to within stage analyses—revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended—and experimentally testable—conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems. PMID:25733885

  7. Habitat connectivity and in-stream vegetation control temporal variability of benthic invertebrate communities.

    PubMed

    Huttunen, K-L; Mykrä, H; Oksanen, J; Astorga, A; Paavola, R; Muotka, T

    2017-05-03

One of the key challenges to understanding patterns of β diversity is to disentangle deterministic patterns from stochastic ones. Stochastic processes may mask the influence of deterministic factors on community dynamics, hindering identification of the mechanisms causing variation in community composition. We studied temporal β diversity (among-year dissimilarity) of macroinvertebrate communities in near-pristine boreal streams across 14 years. To assess whether the observed β diversity deviates from that expected by chance, and to identify processes (deterministic vs. stochastic) through which different explanatory factors affect community variability, we used a null model approach. We observed that at the majority of sites temporal β diversity was low, indicating high community stability. When stochastic variation was unaccounted for, connectivity was the only variable explaining temporal β diversity, with weakly connected sites exhibiting higher community variability through time. After accounting for stochastic effects, connectivity lost importance, suggesting that it was related to temporal β diversity via random colonization processes. Instead, β diversity was best explained by in-stream vegetation, with community variability decreasing with increasing bryophyte cover. These results highlight the potential of stochastic factors to dampen the influence of deterministic processes, affecting our ability to understand and predict changes in biological communities through time.

  8. Probabilistic versus deterministic hazard assessment in liquefaction susceptible zones

    NASA Astrophysics Data System (ADS)

    Daminelli, Rosastella; Gerosa, Daniele; Marcellini, Alberto; Tento, Alberto

    2015-04-01

Probabilistic seismic hazard assessment (PSHA), usually adopted in the framework of seismic code drafting, is based on a Poissonian description of temporal occurrence, a negative exponential distribution of magnitude, and an attenuation relationship with log-normal distribution of PGA or response spectrum. The main positive aspect of this approach is that it is presently a standard for the majority of countries, but there are weak points, in particular regarding the physical description of the earthquake phenomenon. Factors that could significantly influence the expected motion at a site, such as site effects and source characteristics like strong-motion duration and directivity, are not taken into account by PSHA. Deterministic models can better evaluate the ground motion at a site from a physical point of view, but their prediction reliability depends on the degree of knowledge of the source, wave propagation and soil parameters. We compare these two approaches at selected sites affected by the May 2012 Emilia-Romagna and Lombardia earthquake, which caused widespread liquefaction phenomena, unusual for magnitudes less than 6. We focus on sites liquefiable because of their soil mechanical parameters and water table level. Our analysis shows that the choice between deterministic and probabilistic hazard analysis is strongly dependent on site conditions. The looser the soil and the higher the liquefaction potential, the more suitable is the deterministic approach. Source characteristics, in particular the duration of strong ground motion, have long been recognized as relevant to inducing liquefaction; unfortunately, a quantitative prediction of these parameters appears very unlikely, dramatically reducing the possibility of their adoption in hazard assessment. Last but not least, economic factors are relevant in the choice of the approach. 
The case history of the 2012 Emilia-Romagna and Lombardia earthquake, with an officially estimated cost of 6 billion euros, shows that the geological and geophysical investigations necessary for a reliable deterministic hazard evaluation are largely justified.
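The Poissonian occurrence model underlying PSHA reduces to a one-line hazard formula. The sketch below uses the conventional 475-year design level; the Gutenberg-Richter a and b values are illustrative placeholders, not fitted regional parameters:

```python
import math

def exceedance_probability(annual_rate, years):
    """Poisson occurrence: P(at least one event during the exposure time)."""
    return 1.0 - math.exp(-annual_rate * years)

# Classical design level: a 475-year return period gives ~10% in 50 years
print(round(exceedance_probability(1.0 / 475.0, 50), 3))   # → 0.1

def annual_rate_above(m, a=4.0, b=1.0):
    """Gutenberg-Richter recurrence: annual rate of magnitude >= m
    (a and b are illustrative, not fitted values)."""
    return 10.0 ** (a - b * m)

print(round(exceedance_probability(annual_rate_above(6.0), 50), 3))   # → 0.393
```

The deterministic alternative discussed in the record replaces these rate-based probabilities with scenario ground motions computed from an explicit source, path and site model.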

  9. On the generation of horizontal shear waves by underground explosions in jointed rocks

    DOE PAGES

    Vorobiev, Oleg; Ezzedine, Souheil; Antoun, Tarabay; ...

    2015-02-04

This paper describes computational studies of non-spherical ground motions generated by spherical explosions in a heavily jointed granite formation. Various factors affecting the shear wave generation are considered, including joint spacing, orientation, persistence and properties. Simulations are performed both in 2D for a single joint set, to elucidate the basic response mechanisms, and in 3D for multiple joint sets, to represent in situ conditions in a realistic geologic setting. The joints are modeled explicitly using both contact elements and weakness planes in the material. Simulations are performed both deterministically and stochastically to quantify the effects of geologic uncertainties on near field ground motions. The mechanical properties of the rock and the joints as well as the joint spacing and orientation are taken from experimental test data and geophysical logs corresponding to the Climax Stock granitic outcrop, which is the geologic setting of the Source Physics Experiment (SPE). Agreement between simulation results and near field wave motion data from SPE enables newfound understanding of the origin and extent of non-spherical motions associated with underground explosions in fractured geologic media.

  10. A comprehensive multi-scenario based approach for a reliable flood-hazard assessment: a case-study application

    NASA Astrophysics Data System (ADS)

    Lanni, Cristiano; Mazzorana, Bruno; Volcan, Claudio; Bertagnolli, Rudi

    2015-04-01

Flood hazard is generally assessed by assuming the return period of the rainfall as a proxy for the return period of the discharge and the related hydrograph. Frequently this deterministic view is extended to the straightforward application of hydrodynamic models as well. However, the climate (i.e. precipitation), catchment (i.e. geology, soil and antecedent soil-moisture condition) and anthropogenic (i.e. drainage system and its regulation) systems interact in a complex way, and the occurrence probability of a flood inundation event can differ significantly from the occurrence probability of the triggering event (i.e. rainfall). In order to reliably determine the spatial patterns of flood intensities and probabilities, the rigorous determination of flood event scenarios is beneficial because it provides a clear, rational method to recognize and unveil the inherent stochastic behavior of natural processes. Therefore, a multi-scenario approach to hazard assessment should be applied that considers the possible events taking place in the area potentially subject to flooding (i.e. floodplains). Here, we apply a multi-scenario approach to the assessment of the flood hazard around Lake Idro (Italy). We consider and estimate the probability of occurrence of several scenarios related to the initial (i.e. initial water level in the lake) and boundary (i.e. shape of the hydrograph, downslope drainage, spillway opening operations) conditions characterizing the lake. Finally, we discuss the advantages and issues of the presented methodological procedure compared to traditional (and essentially deterministic) approaches.
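At its core, the multi-scenario combination is an application of the law of total probability over the scenario tree. The scenario set and every number below are purely illustrative, not values from the Lake Idro study:

```python
# Total-probability combination of flood scenarios: each scenario couples an
# initial condition (lake level) and a boundary condition (spillway state)
# with a conditional probability of inundation given that scenario.
scenarios = [
    # (annual scenario probability, P(inundation | scenario)) -- illustrative
    (0.60, 0.001),   # low initial level, spillway operating
    (0.30, 0.010),   # high initial level, spillway operating
    (0.08, 0.100),   # high initial level, spillway partly blocked
    (0.02, 0.500),   # high initial level, spillway failure
]
assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9  # scenarios are exhaustive

p_flood = sum(p * c for p, c in scenarios)
print(p_flood)
```

Note how the rarest scenario (spillway failure) contributes nearly half of the total inundation probability: this is exactly why the record argues that the return period of the rainfall alone is a poor proxy for the return period of the flood.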

  11. Northern Hemisphere glaciation and the evolution of Plio-Pleistocene climate noise

    NASA Astrophysics Data System (ADS)

    Meyers, Stephen R.; Hinnov, Linda A.

    2010-08-01

Deterministic orbital controls on climate variability are commonly inferred to dominate across timescales of 10^4-10^6 years, although some studies have suggested that stochastic processes may be of equal or greater importance. Here we explicitly quantify changes in deterministic orbital processes (forcing and/or pacing) versus stochastic climate processes during the Plio-Pleistocene, via time-frequency analysis of two prominent foraminifera oxygen isotopic stacks. Our results indicate that development of the Northern Hemisphere ice sheet is paralleled by an overall amplification of both deterministic and stochastic climate energy, but their relative dominance is variable. The progression from a more stochastic early Pliocene to a strongly deterministic late Pleistocene is primarily accommodated during two transitory phases of Northern Hemisphere ice sheet growth. This long-term trend is punctuated by “stochastic events,” which we interpret as evidence for abrupt reorganization of the climate system at the initiation and termination of the mid-Pleistocene transition and at the onset of Northern Hemisphere glaciation. In addition to highlighting a complex interplay between deterministic and stochastic climate change during the Plio-Pleistocene, our results support an early onset for Northern Hemisphere glaciation (between 3.5 and 3.7 Ma) and reveal some new characteristics of the orbital signal response, such as the puzzling emergence of 100 ka and 400 ka cyclic climate variability during theoretical eccentricity nodes.

  12. Stochastic Processes in Physics: Deterministic Origins and Control

    NASA Astrophysics Data System (ADS)

    Demers, Jeffery

Stochastic processes are ubiquitous in the physical sciences and engineering. While often used to model imperfections and experimental uncertainties in the macroscopic world, stochastic processes can attain deeper physical significance when used to model the seemingly random and chaotic nature of the underlying microscopic world. Nowhere is this notion more prevalent than in the field of stochastic thermodynamics - a modern systematic framework used to describe mesoscale systems in strongly fluctuating thermal environments which has revolutionized our understanding of, for example, molecular motors, DNA replication, far-from-equilibrium systems, and the laws of macroscopic thermodynamics as they apply to the mesoscopic world. With progress, however, come further challenges and deeper questions, most notably in the thermodynamics of information processing and feedback control. Here it is becoming increasingly apparent that, due to divergences and subtleties of interpretation, the deterministic foundations of the stochastic processes themselves must be explored and understood. This thesis presents a survey of stochastic processes in physical systems, the deterministic origins of their emergence, and the subtleties associated with controlling them. First, we study time-dependent billiards in the quivering limit - a limit where a billiard system is indistinguishable from a stochastic system, and where the simplified stochastic system allows us to view issues associated with deterministic time-dependent billiards in a new light and address some long-standing problems. Then, we embark on an exploration of the deterministic microscopic Hamiltonian foundations of non-equilibrium thermodynamics, and we find that important results from mesoscopic stochastic thermodynamics have simple microscopic origins which would not be apparent without the benefit of both the micro and meso perspectives. 
Finally, we study the problem of stabilizing a stochastic Brownian particle with feedback control, and we find that in order to avoid paradoxes involving the first law of thermodynamics, we need a model for the fine details of the thermal driving noise. The underlying theme of this thesis is the argument that the deterministic microscopic perspective and stochastic mesoscopic perspective are both important and useful, and when used together, we can more deeply and satisfyingly understand the physics occurring over either scale.
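The feedback-control problem in the last paragraph can be sketched with an overdamped Langevin simulation (Euler-Maruyama). All parameters are in reduced units and purely illustrative; the controller here simply shifts the trap centre against the measured displacement, which stiffens the effective potential:

```python
import numpy as np

rng = np.random.default_rng(4)
dt, steps = 1e-3, 200_000
k, gamma, kT = 1.0, 1.0, 1.0      # trap stiffness, friction, thermal energy
gain = 0.5                         # feedback gain (hypothetical controller)

x, xs = 0.0, np.empty(steps)
noise = np.sqrt(2 * kT * dt / gamma) * rng.normal(size=steps)
for i in range(steps):
    centre = -gain * x             # feedback: move trap against displacement
    # Overdamped Langevin step: drift toward the (shifted) trap centre + noise
    x += -(k / gamma) * (x - centre) * dt + noise[i]
    xs[i] = x

# Effective stiffness is k*(1 + gain), so the positional variance should sit
# below the equipartition value kT/k = 1.0 of the uncontrolled trap.
var = xs[steps // 2:].var()
print(round(var, 2))               # theory: kT / (k * (1 + gain)) ≈ 0.67
```

As the thesis argues, such apparent "cooling" by feedback is where bookkeeping gets delicate: reconciling the extracted fluctuations with the first law requires a model for the fine structure of the thermal driving noise, which this coarse sketch deliberately omits.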

  13. Deterministic Factors Overwhelm Stochastic Environmental Fluctuations as Drivers of Jellyfish Outbreaks.

    PubMed

    Benedetti-Cecchi, Lisandro; Canepa, Antonio; Fuentes, Veronica; Tamburello, Laura; Purcell, Jennifer E; Piraino, Stefano; Roberts, Jason; Boero, Ferdinando; Halpin, Patrick

    2015-01-01

Jellyfish outbreaks are increasingly viewed as a deterministic response to escalating levels of environmental degradation and climate extremes. However, a comprehensive understanding of the influence of deterministic drivers and stochastic environmental variations favouring population renewal processes has remained elusive. This study quantifies the deterministic and stochastic components of environmental change that lead to outbreaks of the jellyfish Pelagia noctiluca in the Mediterranean Sea. Using data on jellyfish abundance collected at 241 sites along the Catalan coast from 2007 to 2010 we: (1) tested hypotheses about the influence of time-varying and spatial predictors of jellyfish outbreaks; (2) evaluated the relative importance of stochastic vs. deterministic forcing of outbreaks through the environmental bootstrap method; and (3) quantified return times of extreme events. Outbreaks were common in May and June and less likely in other summer months, which resulted in a negative relationship between outbreaks and SST. Cross- and along-shore advection by geostrophic flow were important concentrating forces for jellyfish, but most outbreaks occurred in the proximity of two canyons in the northern part of the study area. This result supported the recent hypothesis that canyons can funnel P. noctiluca blooms towards shore during upwelling. This can be a general, yet unappreciated, mechanism leading to outbreaks of holoplanktonic jellyfish species. The environmental bootstrap indicated that stochastic environmental fluctuations have negligible effects on return times of outbreaks. Our analysis emphasized the importance of deterministic processes leading to jellyfish outbreaks compared to the stochastic component of environmental variation. A better understanding of how environmental drivers affect demographic and population processes in jellyfish species will increase the ability to anticipate jellyfish outbreaks in the future.
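A much-simplified sketch of the environmental-bootstrap logic behind points (2) and (3): resample an environmental driver record to put an uncertainty band on the return time of outbreak conditions. The driver, threshold and i.i.d. resampling are all hypothetical simplifications (the published method preserves temporal structure):

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical standardized daily driver (e.g., an onshore-advection index);
# call a day with the driver above the threshold an "outbreak day".
driver = rng.normal(0.0, 1.0, 365 * 4)
threshold = 2.0

def return_time_days(series, thr):
    """Mean recurrence interval of threshold exceedances."""
    events = np.sum(series > thr)
    return np.inf if events == 0 else len(series) / events

# Bootstrap: resample the driver record with replacement to get an
# uncertainty band on the estimated return time.
boot = [return_time_days(rng.choice(driver, len(driver)), threshold)
        for _ in range(500)]
lo, hi = np.percentile(boot, [5, 95])
print(round(return_time_days(driver, threshold), 1), round(lo, 1), round(hi, 1))
```

If the bootstrap band stays narrow when the stochastic component of the environment is perturbed, as the study found, return times are controlled by the deterministic drivers rather than by environmental noise.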

  14. Distinguishing between stochasticity and determinism: Examples from cell cycle duration variability.

    PubMed

    Pearl Mizrahi, Sivan; Sandler, Oded; Lande-Diner, Laura; Balaban, Nathalie Q; Simon, Itamar

    2016-01-01

We describe a recent approach for distinguishing between stochastic and deterministic sources of variability, focusing on the mammalian cell cycle. Variability between cells is often attributed to stochastic noise, although it may be generated by deterministic components. Interestingly, lineage information can be used to distinguish between stochastic variability and determinism. Analysis of correlations in cell cycle duration within lineages of mammalian cells revealed its deterministic nature. Here, we discuss the sources of such variability and the possibility that the underlying deterministic process is driven by the circadian clock. Finally, we discuss the "kicked cell cycle" model and its implications for the study of the cell cycle in healthy and cancerous tissues. © 2015 WILEY Periodicals, Inc.
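As a toy illustration of how lineage information can expose hidden determinism, the following sketch (assumed parameters, not the authors' model) compares sister-cell duration correlations when a shared deterministic clock phase sets the duration versus when durations are pure independent noise:

```python
import math, random

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx)**2 for x in xs)
    vy = sum((y - my)**2 for y in ys)
    return cov / math.sqrt(vx * vy)

random.seed(1)
# "deterministic" model: both sisters inherit a shared clock phase at birth
sisters_det = []
for _ in range(2000):
    phase = random.uniform(0.0, 2.0 * math.pi)
    d1 = 20.0 + 3.0 * math.sin(phase) + random.gauss(0.0, 0.5)
    d2 = 20.0 + 3.0 * math.sin(phase) + random.gauss(0.0, 0.5)
    sisters_det.append((d1, d2))
# "stochastic" model: durations are independent noise with a similar spread
sisters_sto = [(random.gauss(20.0, 3.0), random.gauss(20.0, 3.0))
               for _ in range(2000)]

r_det = corr(*zip(*sisters_det))
r_sto = corr(*zip(*sisters_sto))
print(r_det, r_sto)
```

Both models produce similar cell-to-cell variance, but only the hidden-clock model yields strongly correlated sister durations, which is the kind of lineage signature the approach exploits.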

  15. Statistically qualified neuro-analytic failure detection method and system

    DOEpatents

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    2002-03-02

An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The SQNA model is developed in two stages: deterministic model adaptation, followed by stochastic modification of the adapted model. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation-error minimization technique. Stochastic modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
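The validation step above refers to sequential probability ratio tests; a minimal sketch of Wald's SPRT for detecting a shift in a Gaussian process mean (illustrative thresholds and data, not the patented system) is:

```python
import math, random

def sprt(samples, mu0, mu1, sigma, alpha=0.001, beta=0.001):
    """Wald's sequential probability ratio test for H0: mean = mu0 vs
    H1: mean = mu1, with Gaussian observations of known sigma."""
    lower = math.log(beta / (1.0 - alpha))   # accept H0 at or below this
    upper = math.log((1.0 - beta) / alpha)   # accept H1 at or above this
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # log-likelihood-ratio increment for a Gaussian mean shift
        llr += ((x - mu0)**2 - (x - mu1)**2) / (2.0 * sigma**2)
        if llr <= lower:
            return "H0", n
        if llr >= upper:
            return "H1", n
    return "undecided", len(samples)

random.seed(0)
healthy = [random.gauss(0.0, 1.0) for _ in range(500)]   # in-spec residuals
faulty  = [random.gauss(1.0, 1.0) for _ in range(500)]   # shifted mean
print(sprt(healthy, 0.0, 1.0, 1.0))
print(sprt(faulty, 0.0, 1.0, 1.0))
```

The test typically decides after only a handful of samples, which is what makes SPRT attractive for on-line monitoring.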

  16. Active temporal multiplexing of indistinguishable heralded single photons

    PubMed Central

    Xiong, C.; Zhang, X.; Liu, Z.; Collins, M. J.; Mahendra, A.; Helt, L. G.; Steel, M. J.; Choi, D. -Y.; Chae, C. J.; Leong, P. H. W.; Eggleton, B. J.

    2016-01-01

    It is a fundamental challenge in quantum optics to deterministically generate indistinguishable single photons through non-deterministic nonlinear optical processes, due to the intrinsic coupling of single- and multi-photon-generation probabilities in these processes. Actively multiplexing photons generated in many temporal modes can decouple these probabilities, but key issues are to minimize resource requirements to allow scalability, and to ensure indistinguishability of the generated photons. Here we demonstrate the multiplexing of photons from four temporal modes solely using fibre-integrated optics and off-the-shelf electronic components. We show a 100% enhancement to the single-photon output probability without introducing additional multi-photon noise. Photon indistinguishability is confirmed by a fourfold Hong–Ou–Mandel quantum interference with a 91±16% visibility after subtracting multi-photon noise due to high pump power. Our demonstration paves the way for scalable multiplexing of many non-deterministic photon sources to a single near-deterministic source, which will be of benefit to future quantum photonic technologies. PMID:26996317
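The decoupling of single- and multi-photon probabilities by multiplexing can be seen from the ideal heralding probability of N temporal modes. The sketch below assumes a lossless switch network and a hypothetical per-mode probability; real devices add switch loss:

```python
def multiplexed_prob(p_single, n_modes, switch_transmission=1.0):
    """Probability that a multiplexed source delivers a heralded photon:
    at least one of n_modes temporal modes fires, times the switch
    network transmission (1.0 = ideal lossless switching)."""
    return (1.0 - (1.0 - p_single)**n_modes) * switch_transmission

p = 0.1  # assumed per-mode heralding probability, kept low to suppress multi-pairs
print(multiplexed_prob(p, 1), multiplexed_prob(p, 4))
```

With four modes the output probability more than doubles relative to a single mode, without raising the per-mode pump power (and hence the multi-photon noise).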

  17. Robust Sensitivity Analysis for Multi-Attribute Deterministic Hierarchical Value Models

    DTIC Science & Technology

    2002-03-01

such as the weighted sum method, weighted product method, and the Analytic Hierarchy Process (AHP). This research focuses on only weighted sum...different groups. They can be termed deterministic, stochastic, or fuzzy multi-objective decision methods if they are classified according to the...weighted product model (WPM), and analytic hierarchy process (AHP). His method attempts to identify the most important criteria weight and the most

  18. Calibration of a distributed routing rainfall-runoff model at four urban sites near Miami, Florida

    USGS Publications Warehouse

    Doyle, W. Harry; Miller, Jeffrey E.

    1980-01-01

Urban stormwater data from four Miami, Fla., catchments were collected and compiled by the U.S. Geological Survey and were used for testing the applicability of deterministic modeling for characterizing stormwater flows from small land-use areas. A description of model calibration and verification is presented for: (1) a 40.8-acre single-family residential area, (2) a 58.3-acre highway area, (3) a 20.4-acre commercial area, and (4) a 14.7-acre multifamily residential area. Rainfall-runoff data for 80, 108, 114, and 52 storms at sites 1, 2, 3, and 4, respectively, were collected, analyzed, and stored on direct-access files. Rainfall and runoff data for these storms (at 1-minute time intervals) were used in flow-modeling simulation analyses. A distributed routing Geological Survey rainfall-runoff model was used to determine rainfall excess and route overland and channel flows at each site. Optimization of soil-moisture-accounting and infiltration parameters was performed during the calibration phases. The results of this study showed that, with qualifications, an acceptable verification of the Geological Survey model can be achieved. (Kosco-USGS)
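The routing component of such rainfall-runoff models can be illustrated with a linear-reservoir sketch. This is a generic stand-in with hypothetical inputs, not the distributed routing model calibrated in the study:

```python
import math

def linear_reservoir_route(rain_excess, k=0.5, dt=1.0):
    """Route a rainfall-excess series through a linear reservoir (S = k*Q),
    using the exact step update for constant input over dt:
    Q_{t+1} = Q_t * exp(-dt/k) + R_t * (1 - exp(-dt/k))."""
    decay = math.exp(-dt / k)
    q, out = 0.0, []
    for r in rain_excess:
        q = q * decay + r * (1.0 - decay)
        out.append(q)
    return out

hyetograph = [0, 2, 5, 3, 1, 0, 0, 0]   # hypothetical 1-minute rainfall excess
q = linear_reservoir_route(hyetograph)
print([round(v, 3) for v in q])         # attenuated, delayed outflow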

  19. Detecting determinism from point processes.

    PubMed

    Andrzejak, Ralph G; Mormann, Florian; Kreuz, Thomas

    2014-12-01

    The detection of a nonrandom structure from experimental data can be crucial for the classification, understanding, and interpretation of the generating process. We here introduce a rank-based nonlinear predictability score to detect determinism from point process data. Thanks to its modular nature, this approach can be adapted to whatever signature in the data one considers indicative of deterministic structure. After validating our approach using point process signals from deterministic and stochastic model dynamics, we show an application to neuronal spike trains recorded in the brain of an epilepsy patient. While we illustrate our approach in the context of temporal point processes, it can be readily applied to spatial point processes as well.
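A bare-bones version of prediction-based determinism detection, comparing nearest-neighbour prediction on a deterministic series against its shuffled surrogate, can be sketched as follows. This is an illustration of the underlying idea, not the rank-based score introduced in the paper:

```python
import random

def nn_prediction_error(x, m=2):
    """Mean one-step nearest-neighbour prediction error using delay
    vectors of length m (a bare-bones nonlinear predictability measure)."""
    vecs = [tuple(x[i:i + m]) for i in range(len(x) - m)]
    total = 0.0
    for i in range(len(vecs) - 1):
        # nearest neighbour of vecs[i] among the other delay vectors
        j = min((k for k in range(len(vecs) - 1) if k != i),
                key=lambda k: sum((a - b)**2 for a, b in zip(vecs[i], vecs[k])))
        total += abs(x[i + m] - x[j + m])
    return total / (len(vecs) - 1)

random.seed(3)
x = [0.4]
for _ in range(299):            # deterministic series: chaotic logistic map
    x.append(3.9 * x[-1] * (1.0 - x[-1]))
surrogate = x[:]                # shuffling destroys the determinism
random.shuffle(surrogate)
e_det, e_sur = nn_prediction_error(x), nn_prediction_error(surrogate)
print(e_det, e_sur)
```

The deterministic series is far more predictable than its surrogate; scoring the observed predictability against an ensemble of such surrogates is the generic template the paper's rank-based approach refines for point processes.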

  20. Hybrid deterministic/stochastic simulation of complex biochemical systems.

    PubMed

    Lecca, Paola; Bagagiolo, Fabio; Scarpa, Marina

    2017-11-21

In a biological cell, cellular functions and the genetic regulatory apparatus are implemented and controlled by complex networks of chemical reactions involving genes, proteins, and enzymes. Accurate computational models are indispensable means for understanding the mechanisms behind the evolution of a complex system, not always explored with wet-lab experiments. To serve their purpose, computational models should be able to describe and simulate the complexity of a biological system in many of its aspects. Moreover, they should be implemented by efficient algorithms requiring the shortest possible execution time, to avoid excessively enlarging the time elapsing between data analysis and any subsequent experiment. Besides the features of their topological structure, the complexity of biological networks also refers to their dynamics, which are often non-linear and stiff. The stiffness is due to the presence of molecular species whose abundance fluctuates by many orders of magnitude. A fully stochastic simulation of a stiff system is computationally expensive. On the other hand, continuous models are less costly, but they fail to capture the stochastic behaviour of small populations of molecular species. We introduce a new efficient hybrid stochastic-deterministic computational model and the software tool MoBioS (MOlecular Biology Simulator) implementing it. The mathematical model of MoBioS uses continuous differential equations to describe the deterministic reactions and a Gillespie-like algorithm to describe the stochastic ones. Unlike the majority of current hybrid methods, the MoBioS algorithm divides the reaction set into fast reactions, moderate reactions, and slow reactions and implements a hysteresis switching between the stochastic model and the deterministic model. Fast reactions are approximated as continuous-deterministic processes and modelled by deterministic rate equations. 
Moderate reactions are those whose reaction waiting time is greater than the fast-reaction waiting time but smaller than the slow-reaction waiting time. A moderate reaction is approximated as a stochastic (deterministic) process if it was classified as a stochastic (deterministic) process at the time at which it crossed the threshold of low (high) waiting time. A Gillespie First Reaction Method is implemented to select and execute the slow reactions. The performance of MoBioS was tested on a typical example of hybrid dynamics: DNA transcription regulation. The simulated dynamic profile of the reagents' abundance and the estimate of the error introduced by the fully deterministic approach were used to evaluate the consistency of the computational model and that of the software tool.
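The two ingredients named above, waiting-time-based classification and Gillespie's First Reaction Method, can be sketched on a toy two-reaction system. The rates and thresholds are assumed for illustration; this is not the MoBioS implementation:

```python
import math, random

def first_reaction_step(state, reactions, rates):
    """One step of Gillespie's First Reaction Method: draw a tentative
    exponential firing time for every reaction, execute the earliest."""
    times = []
    for propensity, update in reactions:
        a = propensity(state, rates)
        t = math.inf if a == 0 else random.expovariate(a)
        times.append((t, update))
    t_min, update = min(times, key=lambda p: p[0])
    return update(state), t_min

def classify(propensity_value, fast_thr, slow_thr):
    """Partition by expected waiting time 1/a, as hybrid schemes do:
    fast -> deterministic ODE, slow -> exact SSA, in between -> moderate."""
    if propensity_value == 0:
        return "slow"
    wait = 1.0 / propensity_value
    if wait < fast_thr:
        return "fast"
    if wait > slow_thr:
        return "slow"
    return "moderate"

random.seed(7)
# toy reversible system: A -> B (rate k1*A), B -> A (rate k2*B)
rates = {"k1": 0.5, "k2": 0.1}
reactions = [
    (lambda s, r: r["k1"] * s["A"], lambda s: {**s, "A": s["A"] - 1, "B": s["B"] + 1}),
    (lambda s, r: r["k2"] * s["B"], lambda s: {**s, "A": s["A"] + 1, "B": s["B"] - 1}),
]
state, t = {"A": 100, "B": 0}, 0.0
for _ in range(200):
    state, dt = first_reaction_step(state, reactions, rates)
    t += dt
print(state, t, classify(rates["k1"] * state["A"], 0.01, 10.0))
```

A hysteresis switch, as described in the abstract, would additionally keep a moderate reaction in its previous class until it crosses the opposite threshold, preventing rapid flip-flopping between the two solvers.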

  1. Scattering effects of machined optical surfaces

    NASA Astrophysics Data System (ADS)

    Thompson, Anita Kotha

    1998-09-01

Optical fabrication is one of the most labor-intensive industries in existence. Lensmakers use pitch to affix glass blanks to metal chucks that hold the glass as they grind it with tools that have not changed much in fifty years. Recent demands placed on traditional optical fabrication processes in terms of surface accuracy, smoothness, and cost effectiveness have resulted in the exploitation of precision machining technology to develop a new generation of computer numerically controlled (CNC) optical fabrication equipment. This new kind of precision machining process is called deterministic microgrinding. The most conspicuous feature of optical surfaces manufactured by precision machining processes (such as single-point diamond turning or deterministic microgrinding) is the presence of residual cutting tool marks. These residual tool marks exhibit a highly structured topography of periodic azimuthal or radial deterministic marks in addition to random microroughness. These distinct topographic features give rise to surface scattering effects that can significantly degrade optical performance. In this dissertation project we investigate the scattering behavior of machined optical surfaces and their imaging characteristics. In particular, we characterize the residual optical fabrication errors and relate the resulting scattering behavior to the tool and machine parameters in order to evaluate and improve the deterministic microgrinding process. Other desired information derived from the investigation of scattering behavior is the set of optical fabrication tolerances necessary to satisfy specific image quality requirements. Optical fabrication tolerances are a major cost driver for any precision optical manufacturing technology. 
The derivation and control of the optical fabrication tolerances necessary for different applications and operating wavelength regimes will play a central role in establishing deterministic microgrinding as a preferred and cost-effective optical fabrication process. Other well-understood optical fabrication processes will also be reviewed, and a performance comparison with the conventional grinding and polishing technique will be made to determine any inherent advantages in the optical quality of surfaces produced by other techniques.
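Periodic tool marks of the kind described above scatter much like a diffraction grating, so the propagating scattered orders follow the grating equation. The wavelength and mark period below are hypothetical:

```python
import math

def grating_orders(wavelength_um, period_um, theta_i_deg=0.0):
    """Propagating diffracted orders from periodic tool marks modelled as
    a grating: sin(theta_m) = sin(theta_i) + m * wavelength / period."""
    s0 = math.sin(math.radians(theta_i_deg))
    orders = {}
    for m in range(-5, 6):
        s = s0 + m * wavelength_um / period_um
        if -1.0 <= s <= 1.0:           # evanescent orders are discarded
            orders[m] = math.degrees(math.asin(s))
    return orders

# hypothetical: 0.633 um HeNe illumination on 2 um-period turning marks
angles = grating_orders(0.633, 2.0)
print(angles)
```

Shrinking the mark period or roughening the surface pushes orders evanescent or spreads energy diffusely, which is why the tool feed per revolution enters directly into scatter-based tolerancing.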

  2. The development of the deterministic nonlinear PDEs in particle physics to stochastic case

    NASA Astrophysics Data System (ADS)

    Abdelrahman, Mahmoud A. E.; Sohaly, M. A.

    2018-06-01

In the present work, an accurate method, the Riccati-Bernoulli sub-ODE technique, is used for solving the deterministic and stochastic cases of the Phi-4 equation and the nonlinear foam drainage equation. In addition, control of the random input is studied with respect to the stability of the stochastic process solution.

  3. Probabilistic vs. deterministic fiber tracking and the influence of different seed regions to delineate cerebellar-thalamic fibers in deep brain stimulation.

    PubMed

    Schlaier, Juergen R; Beer, Anton L; Faltermeier, Rupert; Fellner, Claudia; Steib, Kathrin; Lange, Max; Greenlee, Mark W; Brawanski, Alexander T; Anthofer, Judith M

    2017-06-01

This study compared tractography approaches for identifying cerebellar-thalamic fiber bundles relevant to planning target sites for deep brain stimulation (DBS). In particular, probabilistic and deterministic tracking of the dentate-rubro-thalamic tract (DRTT) and differences between the spatial courses of the DRTT and the cerebello-thalamo-cortical (CTC) tract were compared. Six patients with movement disorders were examined by magnetic resonance imaging (MRI), including two sets of diffusion-weighted images (12 and 64 directions). Probabilistic and deterministic tractography was applied to each diffusion-weighted dataset to delineate the DRTT. Results were compared with regard to their sensitivity in revealing the DRTT and additional fiber tracts, and to processing time. Two sets of regions-of-interest (ROIs) guided deterministic tractography of the DRTT or the CTC, respectively. Tract distances to an atlas-based reference target were compared. Probabilistic fiber tracking with 64 orientations detected the DRTT in all twelve hemispheres. Deterministic tracking detected the DRTT in nine (12 directions) and in only two (64 directions) hemispheres. Probabilistic tracking was more sensitive in detecting additional fibers (e.g. ansa lenticularis and medial forebrain bundle) than deterministic tracking. Probabilistic tracking also took substantially longer than deterministic tracking. Deterministic tracking was more sensitive in detecting the CTC than the DRTT. CTC tracts were located adjacent but consistently more posterior to DRTT tracts. These results suggest that probabilistic tracking is more sensitive and robust in detecting the DRTT but harder to implement than deterministic approaches. Although the sensitivity of deterministic tracking is higher for the CTC than the DRTT, targets for DBS based on these tracts likely differ. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  4. Chaos emerging in soil failure patterns observed during tillage: Normalized deterministic nonlinear prediction (NDNP) and its application.

    PubMed

    Sakai, Kenshi; Upadhyaya, Shrinivasa K; Andrade-Sanchez, Pedro; Sviridova, Nina V

    2017-03-01

Real-world processes are often combinations of deterministic and stochastic processes. Soil failure observed during farm tillage is one example of this phenomenon. In this paper, we investigated the nonlinear features of soil failure patterns in a farm tillage process. We demonstrate determinism emerging in soil failure patterns from stochastic processes under specific soil conditions. We normalized the deterministic nonlinear prediction to account for autocorrelation and propose this as a robust way of extracting a nonlinear dynamical system from noise-contaminated motion. Soil is a typical granular material, so the results obtained here are expected to be applicable to granular materials in general. From the global scale to the nanoscale, granular materials feature in seismology, geotechnology, soil mechanics, and particle technology, and the results and discussion presented here are applicable across these research areas. The proposed method and our findings are useful for applying nonlinear dynamics to investigate complex motions generated by granular materials.

  5. Deterministic quantum splitter based on time-reversed Hong-Ou-Mandel interference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jun; Lee, Kim Fook; Kumar, Prem

    2007-09-15

By utilizing a fiber-based indistinguishable photon-pair source in the 1.55 μm telecommunications band [J. Chen et al., Opt. Lett. 31, 2798 (2006)], we present the first, to the best of our knowledge, deterministic quantum splitter based on the principle of time-reversed Hong-Ou-Mandel quantum interference. The indistinguishability of the deterministically separated identical photons is then verified by using a conventional Hong-Ou-Mandel quantum interference measurement, which exhibits a near-unity dip visibility of 94±1%, making this quantum splitter useful for various quantum information processing applications.

  6. Sources of shaking and flooding during the Tohoku-Oki earthquake: a mixture of rupture styles

    USGS Publications Warehouse

    Wei, Shengji; Graves, Robert; Helmberger, Don; Avouac, Jean-Philippe; Jiang, Junle

    2012-01-01

    Modeling strong ground motions from great subduction zone earthquakes is one of the great challenges of computational seismology. To separate the rupture characteristics from complexities caused by 3D sub-surface geology requires an extraordinary data set such as provided by the recent Mw9.0 Tohoku-Oki earthquake. Here we combine deterministic inversion and dynamically guided forward simulation methods to model over one thousand high-rate GPS and strong motion observations from 0 to 0.25 Hz across the entire Honshu Island. Our results display distinct styles of rupture with a deeper generic interplate event (~Mw8.5) transitioning to a shallow tsunamigenic earthquake (~Mw9.0) at about 25 km depth in a process driven by a strong dynamic weakening mechanism, possibly thermal pressurization. This source model predicts many important features of the broad set of seismic, geodetic and seafloor observations providing a major advance in our understanding of such great natural hazards.

  7. Stochasticity, succession, and environmental perturbations in a fluidic ecosystem.

    PubMed

    Zhou, Jizhong; Deng, Ye; Zhang, Ping; Xue, Kai; Liang, Yuting; Van Nostrand, Joy D; Yang, Yunfeng; He, Zhili; Wu, Liyou; Stahl, David A; Hazen, Terry C; Tiedje, James M; Arkin, Adam P

    2014-03-04

Unraveling the drivers of community structure and succession in response to environmental change is a central goal in ecology. Although the mechanisms shaping community structure have been intensively examined, those controlling ecological succession remain elusive. To understand the relative importance of stochastic and deterministic processes in mediating microbial community succession, a unique framework composed of four different cases was developed for fluidic and nonfluidic ecosystems. The framework was then tested for one fluidic ecosystem: a groundwater system perturbed by adding emulsified vegetable oil (EVO) for uranium immobilization. Our results revealed that the groundwater microbial community diverged substantially from its initial state after EVO amendment and eventually converged to a new community state, which clustered closely with the initial state although its composition and structure differed significantly. Null model analysis indicated that both deterministic and stochastic processes played important roles in controlling the assembly and succession of the groundwater microbial community, but their relative importance was time dependent. Additionally, consistent with the proposed conceptual framework but contrary to conventional wisdom, the community succession responding to EVO amendment was primarily controlled by stochastic rather than deterministic processes. During the middle phase of the succession, the role of stochastic processes in controlling community composition increased substantially, ranging from 81.3% to 92.0%. Finally, few successional studies are available to support the different cases in the conceptual framework, and further well-replicated, explicit time-series experiments are needed to understand the relative importance of deterministic and stochastic processes in controlling community succession.
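The null-model logic used in such studies can be illustrated with a crude permutation null for community dissimilarity. The abundances below are hypothetical, and published analyses use more sophisticated nulls (e.g. abundance-weighted Raup-Crick), but the comparison of observed versus null-expected dissimilarity is the same:

```python
import random

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / (sum(a) + sum(b))

def null_dissimilarity(a, b, n_perm=200, seed=0):
    """Crude permutation null: shuffle which taxon holds which abundance
    in each community, then average the resulting dissimilarity."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_perm):
        ap, bp = a[:], b[:]
        rng.shuffle(ap)
        rng.shuffle(bp)
        total += bray_curtis(ap, bp)
    return total / n_perm

comm1 = [30, 20, 10, 5, 0, 0]   # hypothetical taxon abundances, sample 1
comm2 = [28, 22, 8, 6, 1, 0]    # hypothetical taxon abundances, sample 2
obs = bray_curtis(comm1, comm2)
null = null_dissimilarity(comm1, comm2)
print(obs, null)
```

An observed dissimilarity far below the null expectation implicates deterministic selection (shared filtering), whereas observed values indistinguishable from the null point to stochastic assembly.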

  8. Use of a remotely piloted aircraft system for hazard assessment in a rocky mining area (Lucca, Italy)

    NASA Astrophysics Data System (ADS)

    Salvini, Riccardo; Mastrorocco, Giovanni; Esposito, Giuseppe; Di Bartolo, Silvia; Coggan, John; Vanneschi, Claudio

    2018-01-01

The use of remote sensing techniques is now common practice in different working environments, including engineering geology. Moreover, in recent years the development of structure from motion (SfM) methods, together with rapid technological improvement, has allowed the widespread use of cost-effective remotely piloted aircraft systems (RPAS) for acquiring detailed and accurate geometrical information even in evolving environments, such as mining contexts. Indeed, the acquisition of remotely sensed data from hazardous areas provides accurate 3-D models and high-resolution orthophotos minimizing the risk for operators. The quality and quantity of the data obtainable from RPAS surveys can then be used for inspection of mining areas, audit of mining design, rock mass characterizations, stability analysis investigations and monitoring activities. Despite the widespread use of RPAS, its potential and limitations still have to be fully understood. In this paper a case study is shown where a RPAS was used for the engineering geological investigation of a closed marble mine area in Italy; direct ground-based techniques could not be applied for safety reasons. In view of the re-activation of mining operations, high-resolution images taken from different positions and heights were acquired and processed using SfM techniques to obtain an accurate and detailed 3-D model of the area. The geometrical and radiometrical information was subsequently used for a deterministic rock mass characterization, which led to the identification of two large marble blocks that pose a potential significant hazard issue for the future workforce. A preliminary stability analysis, with a focus on investigating the contribution of potential rock bridges, was then performed in order to demonstrate the potential use of RPAS information in engineering geological contexts for geohazard identification, awareness and reduction.

  9. Deterministic influences exceed dispersal effects on hydrologically-connected microbiomes: Deterministic assembly of hyporheic microbiomes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graham, Emily B.; Crump, Alex R.; Resch, Charles T.

    2017-03-28

Subsurface zones of groundwater and surface water mixing (hyporheic zones) are regions of enhanced rates of biogeochemical cycling, yet ecological processes governing hyporheic microbiome composition and function through space and time remain unknown. We sampled attached and planktonic microbiomes in the Columbia River hyporheic zone across seasonal hydrologic change, and employed statistical null models to infer mechanisms generating temporal changes in microbiomes within three hydrologically-connected, physicochemically-distinct geographic zones (inland, nearshore, river). We reveal that microbiomes remain dissimilar through time across all zones and habitat types (attached vs. planktonic) and that deterministic assembly processes regulate microbiome composition in all data subsets. The consistent presence of heterotrophic taxa and members of the Planctomycetes-Verrucomicrobia-Chlamydiae (PVC) superphylum nonetheless suggests common selective pressures for physiologies represented in these groups. Further, co-occurrence networks were used to provide insight into taxa most affected by deterministic assembly processes. We identified network clusters to represent groups of organisms that correlated with seasonal and physicochemical change. Extended network analyses identified keystone taxa within each cluster that we propose are central in microbiome composition and function. Finally, the abundance of one network cluster of nearshore organisms exhibited a seasonal shift from heterotrophic to autotrophic metabolisms and correlated with microbial metabolism, possibly indicating an ecological role for these organisms as foundational species in driving biogeochemical reactions within the hyporheic zone. Taken together, our research demonstrates a predominant role for deterministic assembly across highly-connected environments and provides insight into niche dynamics associated with seasonal changes in hyporheic microbiome composition and metabolism.

  10. Disentangling Mechanisms That Mediate the Balance Between Stochastic and Deterministic Processes in Microbial Succession

    DOE PAGES

    Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan D.; ...

    2015-03-17

Despite growing recognition that deterministic and stochastic factors simultaneously influence bacterial communities, little is known about mechanisms shifting their relative importance. To better understand underlying mechanisms, we developed a conceptual model linking ecosystem development during primary succession to shifts in the stochastic/deterministic balance. To evaluate the conceptual model we coupled spatiotemporal data on soil bacterial communities with environmental conditions spanning 105 years of salt marsh development. At the local scale there was a progression from stochasticity to determinism due to Na accumulation with increasing ecosystem age, supporting a main element of the conceptual model. At the regional scale, soil organic matter (SOM) governed the relative influence of stochasticity and the type of deterministic ecological selection, suggesting scale-dependency in how deterministic ecological selection is imposed. Analysis of a new ecological simulation model supported these conceptual inferences. Looking forward, we propose an extended conceptual model that integrates primary and secondary succession in microbial systems.

  11. Estimate Tsunami Flow Conditions and Large-Debris Tracks for the Design of Coastal Infrastructures along Coastlines of the U.S. Pacific Northwest

    NASA Astrophysics Data System (ADS)

    Wei, Y.; Thomas, S.; Zhou, H.; Arcas, D.; Titov, V. V.

    2017-12-01

The increasing potential for tsunami hazards poses great challenges for infrastructure along the coastlines of the U.S. Pacific Northwest. Tsunami impact at a coastal site is usually assessed from deterministic scenarios based on 10,000 years of geological records in the Cascadia Subduction Zone (CSZ). Aside from these deterministic methods, the new ASCE 7-16 tsunami provisions provide engineering design criteria for tsunami loads on buildings based on a probabilistic approach. This work develops a site-specific model near Newport, OR using high-resolution grids, and computes tsunami inundation depth and velocities at the study site resulting from credible probabilistic and deterministic earthquake sources in the Cascadia Subduction Zone. Three Cascadia scenarios, namely two deterministic scenarios (XXL1 and L1) and a 2,500-yr probabilistic scenario compliant with the new ASCE 7-16 standard, are simulated using a combination of a depth-averaged shallow-water model for offshore propagation and a Boussinesq-type model for onshore inundation. We discuss the methods and procedure used to obtain the 2,500-year probabilistic scenario for Newport that is compliant with the ASCE 7-16 tsunami provisions. We provide details of the model results, particularly the inundation depth and flow speed for a new building at Newport, Oregon, which will also be designated as a tsunami vertical evacuation shelter. We show that the ASCE 7-16 consistent hazards lie between those obtained from the deterministic L1 and XXL1 scenarios, and that the greatest impact on the building may come from later waves. As a further step, we utilize the inundation model results to numerically compute tracks of large vessels in the vicinity of the building site and estimate whether these vessels would impact the building site during the extreme XXL1 and ASCE 7-16 hazard-consistent scenarios. A two-step study is carried out, first tracking massless particles and then large vessels with assigned mass, accounting for drag force, inertial force, ship grounding, and mooring. The simulation results show that none of the large vessels impact the building site in any of the tested scenarios.

  12. Computer modeling of dynamic necking in bars

    NASA Astrophysics Data System (ADS)

    Partom, Yehuda; Lindenfeld, Avishay

    2017-06-01

Necking of thin bodies (bars, plates, shells) is one form of strain localization in ductile materials that may lead to fracture. The phenomenon of necking has been studied extensively, initially for quasistatic loading and later for dynamic loading. Nevertheless, many issues concerning necking are still unclear, among them: 1) is necking a random or deterministic process; 2) how does the specimen choose the final neck location; 3) to what extent do perturbations (material or geometrical) influence the neck-forming process; and 4) how do various parameters (material, geometrical, loading) influence the neck-forming process. Here we address these issues and others using computer simulations with a hydrocode. Among other things we find that: 1) neck formation is a deterministic process, and by changing one of the parameters influencing it monotonically, the final neck location moves monotonically as well; 2) the final neck location is sensitive to the radial velocity of the end boundaries, and as the motion of these boundaries is not fully controlled in tests, this may be why neck formation is sometimes regarded as a random process; and 3) neck formation is insensitive to small perturbations, which is probably why it is a deterministic process.

  13. Heart rate variability as determinism with jump stochastic parameters.

    PubMed

    Zheng, Jiongxuan; Skufca, Joseph D; Bollt, Erik M

    2013-08-01

We use measured heart rate information (RR intervals) to develop a one-dimensional nonlinear map that describes short-term deterministic behavior in the data. Our study suggests that there is a stochastic parameter with persistence which causes the heart rate and rhythm system to wander about a bifurcation point. We propose a modified circle map with a jump-process noise term as a model which can qualitatively capture this behavior of low-dimensional transient determinism with occasional (stochastically defined) jumps from one deterministic system to another within a one-parameter family of deterministic systems.
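The model class described above can be sketched as a standard circle map whose frequency parameter undergoes rare but persistent jumps. The parameter values below are assumed for illustration, not fitted to RR-interval data:

```python
import math, random

def circle_map(theta, omega, k):
    """Standard circle map: theta' = theta + omega - (k/2pi)*sin(2pi*theta), mod 1."""
    return (theta + omega - k / (2.0 * math.pi) * math.sin(2.0 * math.pi * theta)) % 1.0

random.seed(2)
omega, k = 0.28, 1.0            # assumed values near a mode-locking boundary
theta, traj = 0.1, []
for _ in range(1000):
    if random.random() < 0.01:  # rare jump: the parameter hops to a new value
        omega = random.uniform(0.25, 0.35)   # ...and persists there, moving the
    theta = circle_map(theta, omega, k)      # system within a one-parameter family
    traj.append(theta)
print(len(traj), min(traj), max(traj))
```

Between jumps the trajectory follows one deterministic member of the family (phase-locked or quasiperiodic); each jump transiently re-selects the dynamics, mimicking low-dimensional determinism punctuated by stochastic parameter changes.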

  14. Predicting the Stochastic Properties of the Shallow Subsurface for Improved Geophysical Modeling

    NASA Astrophysics Data System (ADS)

    Stroujkova, A.; Vynne, J.; Bonner, J.; Lewkowicz, J.

    2005-12-01

    Strong ground motion data from numerous explosive field experiments and from moderate to large earthquakes show significant variations in amplitude and waveform shape with respect to both azimuth and range. Attempts to model these variations using deterministic models have often been unsuccessful. It has been hypothesized that a stochastic description of the geological medium is a more realistic approach. To estimate the stochastic properties of the shallow subsurface, we use Measurement While Drilling (MWD) data, which are routinely collected by mines in order to facilitate design of blast patterns. The parameters, such as rotation speed of the drill, torque, and penetration rate, are used to compute the rock's Specific Energy (SE), which is then related to a blastability index. We use values of SE measured at two different mines and calibrated to laboratory measurements of rock properties to determine correlation lengths of the subsurface rocks in 2D, needed to obtain 2D and 3D stochastic models. The stochastic models are then combined with the deterministic models and used to compute synthetic seismic waveforms.
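Drilling specific energy is commonly computed from MWD parameters with Teale's (1965) relation, combining a thrust term and a rotary term. The readings below are hypothetical:

```python
import math

def specific_energy(thrust_n, torque_nm, rotation_rps, penetration_mps, bit_area_m2):
    """Drilling specific energy after Teale (1965), in J/m^3:
    SE = F/A + 2*pi*N*T/(A*u)  (thrust term + rotary term)."""
    thrust_term = thrust_n / bit_area_m2
    rotary_term = 2.0 * math.pi * rotation_rps * torque_nm / (bit_area_m2 * penetration_mps)
    return thrust_term + rotary_term

area = math.pi * 0.1**2   # hypothetical 200 mm diameter bit
se = specific_energy(thrust_n=5e4, torque_nm=800.0, rotation_rps=1.5,
                     penetration_mps=0.01, bit_area_m2=area)
print(se / 1e6, "MJ/m^3")
```

Mapping SE values along many adjacent blastholes gives the spatially dense rock-strength proxy from which horizontal and vertical correlation lengths of the subsurface can be estimated.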

  15. Probabilistic risk assessment for CO2 storage in geological formations: robust design and support for decision making under uncertainty

    NASA Astrophysics Data System (ADS)

    Oladyshkin, Sergey; Class, Holger; Helmig, Rainer; Nowak, Wolfgang

    2010-05-01

    CO2 storage in geological formations is currently being discussed intensively as a technology for mitigating CO2 emissions. However, any large-scale application requires a thorough analysis of the potential risks. Current numerical simulation models are too expensive for probabilistic risk analysis and for stochastic approaches based on brute-force repeated simulation. Even single deterministic simulations may require parallel high-performance computing. The multiphase flow processes involved are too non-linear for quasi-linear error propagation and other simplified stochastic tools. As an alternative approach, we propose a massive stochastic model reduction based on the probabilistic collocation method. The model response is projected onto an orthogonal basis of higher-order polynomials to approximate dependence on uncertain parameters (porosity, permeability etc.) and design parameters (injection rate, depth etc.). This allows for a non-linear propagation of model uncertainty affecting the predicted risk, ensures fast computation and provides a powerful tool for combining design variables and uncertain variables into one approach based on an integrative response surface. Thus, the design task of finding optimal injection regimes explicitly includes uncertainty, which leads to robust designs of the non-linear system that minimize failure probability and provide valuable support for risk-informed management decisions. We validate our proposed stochastic approach by Monte Carlo simulation using a common 3D benchmark problem (Class et al. Computational Geosciences 13, 2009). A reasonable compromise between computational efforts and precision was reached already with second-order polynomials. In our case study, the proposed approach yields a significant computational speedup by a factor of 100 compared to Monte Carlo simulation.
We demonstrate that, due to the non-linearity of the flow and transport processes during CO2 injection, including uncertainty in the analysis leads to a systematic and significant shift of predicted leakage rates towards higher values compared with deterministic simulations, affecting both risk estimates and the design of injection scenarios. This implies that neglecting uncertainty can be a strong simplification for modeling CO2 injection, and the consequences can be stronger than when neglecting several physical phenomena (e.g. phase transition, convective mixing, capillary forces etc.). The authors would like to thank the German Research Foundation (DFG) for financial support of the project within the Cluster of Excellence in Simulation Technology (EXC 310/1) at the University of Stuttgart. Keywords: polynomial chaos; CO2 storage; multiphase flow; porous media; risk assessment; uncertainty; integrative response surfaces
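
A drastically simplified sketch of the response-surface idea: fit a second-order polynomial in the uncertain and design parameters to a toy model at sampled points, then evaluate the cheap surrogate in place of the expensive simulator for Monte Carlo. The toy "leakage" function and the least-squares fitting are assumptions for illustration, not the authors' collocation scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

def model(k, q):
    """Toy nonlinear 'leakage' response to permeability k and rate q."""
    return np.exp(0.5 * k) * (1.0 + 0.3 * q + 0.2 * q**2)

# Evaluate the (expensive) model at a modest number of sample points
k = rng.standard_normal(50)
q = rng.standard_normal(50)
y = model(k, q)

# Second-order polynomial basis: the "integrative response surface"
X = np.column_stack([np.ones_like(k), k, q, k * q, k**2, q**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Cheap surrogate: thousands of Monte Carlo evaluations are now trivial
ks, qs = rng.standard_normal(10000), rng.standard_normal(10000)
Xs = np.column_stack([np.ones_like(ks), ks, qs, ks * qs, ks**2, qs**2])
surrogate = Xs @ coef
```

This mirrors the abstract's finding that second-order polynomials can already give a reasonable accuracy/cost compromise, replacing brute-force repeated simulation with surrogate evaluations.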

  16. Using stochastic models to incorporate spatial and temporal variability [Exercise 14]

    Treesearch

    Carolyn Hull Sieg; Rudy M. King; Fred Van Dyke

    2003-01-01

    To this point, our analysis of population processes and viability in the western prairie fringed orchid has used only deterministic models. In this exercise, we conduct a similar analysis, using a stochastic model instead. This distinction is of great importance to population biology in general and to conservation biology in particular. In deterministic models,...

  17. Stochastic and Deterministic Models for the Metastatic Emission Process: Formalisms and Crosslinks.

    PubMed

    Gomez, Christophe; Hartung, Niklas

    2018-01-01

    Although the detection of metastases radically changes prognosis of and treatment decisions for a cancer patient, clinically undetectable micrometastases hamper a consistent classification into localized or metastatic disease. This chapter discusses mathematical modeling efforts that could help to estimate the metastatic risk in such a situation. We focus on two approaches: (1) a stochastic framework describing metastatic emission events at random times, formalized via Poisson processes, and (2) a deterministic framework describing the micrometastatic state through a size-structured density function in a partial differential equation model. Three aspects are addressed in this chapter. First, a motivation for the Poisson process framework is presented and modeling hypotheses and mechanisms are introduced. Second, we extend the Poisson model to account for secondary metastatic emission. Third, we highlight an inherent crosslink between the stochastic and deterministic frameworks and discuss its implications. For increased accessibility the chapter is split into an informal presentation of the results using a minimum of mathematical formalism and a rigorous mathematical treatment for more theoretically interested readers.
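
The stochastic framework's core ingredient — metastatic emission events at random times, formalized via Poisson processes — can be sketched with a homogeneous Poisson process: exponential inter-arrival times at a constant rate. The rate and horizon below are arbitrary; realistic models would use a growth-dependent (inhomogeneous) intensity and the secondary-emission extension the chapter describes.

```python
import numpy as np

rng = np.random.default_rng(3)

# Emission times of a homogeneous Poisson process on (0, T]:
# inter-arrival times are exponential with mean 1/lam.
lam, T = 0.5, 20.0   # emissions per unit time, observation horizon
times, t = [], 0.0
while True:
    t += rng.exponential(1.0 / lam)
    if t > T:
        break
    times.append(t)

print(len(times), "emission events before T =", T)
```

Each emission time would seed a new (micro)metastasis; the deterministic counterpart replaces these random events with a size-structured density evolving under a partial differential equation.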

  18. A variational method for analyzing limit cycle oscillations in stochastic hybrid systems

    NASA Astrophysics Data System (ADS)

    Bressloff, Paul C.; MacLaurin, James

    2018-06-01

    Many systems in biology can be modeled through ordinary differential equations that are piecewise continuous and switch between different states according to a Markov jump process; such a system is known as a stochastic hybrid system or piecewise deterministic Markov process (PDMP). In the fast switching limit, the dynamics converges to a deterministic ODE. In this paper, we develop a phase reduction method for stochastic hybrid systems that support a stable limit cycle in the deterministic limit. A classic example is the Morris-Lecar model of a neuron, where the switching Markov process is the number of open ion channels and the continuous process is the membrane voltage. We outline a variational principle for the phase reduction, yielding an exact analytic expression for the resulting phase dynamics. We demonstrate that this decomposition is accurate over timescales that are exponential in the switching rate ɛ⁻¹. That is, we show that for a constant C, the probability that the time to leave an O(a) neighborhood of the limit cycle is less than T scales as T exp(-Ca/ɛ).
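
A minimal PDMP sketch (deliberately not the Morris-Lecar model): a scalar ODE whose drift depends on a two-state Markov process switching at rate 1/ɛ. In the fast-switching limit the trajectory hugs the deterministic ODE obtained by averaging over the discrete states. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Piecewise deterministic Markov process: dx/dt = -x + b[state],
# with the discrete state flipping at rate 1/eps (fast switching).
b = {0: 0.0, 1: 2.0}        # drift targets for the two discrete states
eps = 0.05                  # small eps -> near-deterministic averaged ODE
x, state, t, dt = 0.0, 0, 0.0, 1e-3
while t < 10.0:
    if rng.random() < dt / eps:      # Markov switch with rate 1/eps
        state = 1 - state
    x += dt * (-x + b[state])        # Euler step of the continuous part
    t += dt

print("final x:", x)   # hovers near the averaged fixed point x = 1.0
```

Between switches the evolution is fully deterministic; all randomness enters through the jump times of the discrete state, which is the defining feature of a PDMP.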

  19. An Evaluation Tool for CONUS-Scale Estimates of Components of the Water Balance

    NASA Astrophysics Data System (ADS)

    Saxe, S.; Hay, L.; Farmer, W. H.; Markstrom, S. L.; Kiang, J. E.

    2016-12-01

    Numerous research groups are independently developing data products to represent various components of the water balance (e.g. runoff, evapotranspiration, recharge, snow water equivalent, soil moisture, and climate) at the scale of the conterminous United States. These data products are derived from a range of sources, including direct measurement, remotely-sensed measurement, and statistical and deterministic model simulations. An evaluation tool is needed to compare these data products and the components of the water balance they contain in order to identify the gaps in the understanding and representation of continental-scale hydrologic processes. An ideal tool will be an objective, universally agreed upon, framework to address questions related to closing the water balance. This type of generic, model agnostic evaluation tool would facilitate collaboration amongst different hydrologic research groups and improve modeling capabilities with respect to continental-scale water resources. By adopting a comprehensive framework to consider hydrologic modeling in the context of a complete water balance, it is possible to identify weaknesses in process modeling, data product representation and regional hydrologic variation. As part of its National Water Census initiative, the U.S. Geological Survey is facilitating this dialogue to develop prototype evaluation tools.

  20. Stochasticity, succession, and environmental perturbations in a fluidic ecosystem

    PubMed Central

    Zhou, Jizhong; Deng, Ye; Zhang, Ping; Xue, Kai; Liang, Yuting; Van Nostrand, Joy D.; Yang, Yunfeng; He, Zhili; Wu, Liyou; Stahl, David A.; Hazen, Terry C.; Tiedje, James M.; Arkin, Adam P.

    2014-01-01

    Unraveling the drivers of community structure and succession in response to environmental change is a central goal in ecology. Although the mechanisms shaping community structure have been intensively examined, those controlling ecological succession remain elusive. To understand the relative importance of stochastic and deterministic processes in mediating microbial community succession, a unique framework composed of four different cases was developed for fluidic and nonfluidic ecosystems. The framework was then tested for one fluidic ecosystem: a groundwater system perturbed by adding emulsified vegetable oil (EVO) for uranium immobilization. Our results revealed that the groundwater microbial community diverged substantially away from the initial community after EVO amendment and eventually converged to a new community state, which was closely clustered with its initial state. However, the composition and structure of the final and initial communities were significantly different from each other. Null model analysis indicated that both deterministic and stochastic processes played important roles in controlling the assembly and succession of the groundwater microbial community, but their relative importance was time dependent. Additionally, consistent with the proposed conceptual framework but contradictory to conventional wisdom, the community succession responding to EVO amendment was primarily controlled by stochastic rather than deterministic processes. During the middle phase of the succession, the roles of stochastic processes in controlling community composition increased substantially, ranging from 81.3% to 92.0%. Finally, few successional studies are available to support the different cases in the conceptual framework, and further well-replicated explicit time-series experiments are needed to understand the relative importance of deterministic and stochastic processes in controlling community succession. PMID:24550501

  1. Spatio-Temporal Data Analysis at Scale Using Models Based on Gaussian Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, Michael

    Gaussian processes are the most commonly used statistical model for spatial and spatio-temporal processes that vary continuously. They are broadly applicable in the physical sciences and engineering and are also frequently used to approximate the output of complex computer models, deterministic or stochastic. We undertook research related to theory, computation, and applications of Gaussian processes as well as some work on estimating extremes of distributions for which a Gaussian process assumption might be inappropriate. Our theoretical contributions include the development of new classes of spatial-temporal covariance functions with desirable properties and new results showing that certain covariance models lead to predictions with undesirable properties. To understand how Gaussian process models behave when applied to deterministic computer models, we derived what we believe to be the first significant results on the large sample properties of estimators of parameters of Gaussian processes when the actual process is a simple deterministic function. Finally, we investigated some theoretical issues related to maxima of observations with varying upper bounds and found that, depending on the circumstances, standard large sample results for maxima may or may not hold. Our computational innovations include methods for analyzing large spatial datasets when observations fall on a partially observed grid and methods for estimating parameters of a Gaussian process model from observations taken by a polar-orbiting satellite. In our application of Gaussian process models to deterministic computer experiments, we carried out some matrix computations that would have been infeasible using even extended precision arithmetic by focusing on special cases in which all elements of the matrices under study are rational and using exact arithmetic.
The applications we studied include total column ozone as measured from a polar-orbiting satellite, sea surface temperatures over the Pacific Ocean, and annual temperature extremes at a site in New York City. In each of these applications, our theoretical and computational innovations were directly motivated by the challenges posed by analyzing these and similar types of data.
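
A minimal Gaussian-process prediction sketch with a squared-exponential kernel, standing in for the covariance models discussed above: train on a toy "computer model" output and check that a near-noise-free GP interpolates its training data. The kernel choice, length scale, and data are assumptions for illustration only.

```python
import numpy as np

def k(a, b, ell=1.0):
    """Squared-exponential covariance between two 1-D input arrays."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

xtr = np.array([0.0, 1.0, 2.0, 3.0])
ytr = np.sin(xtr)                          # toy deterministic model output
xte = np.array([1.0, 1.5])                 # one training point, one new point

K = k(xtr, xtr) + 1e-8 * np.eye(len(xtr))  # tiny jitter for stability
mean = k(xte, xtr) @ np.linalg.solve(K, ytr)

# With negligible noise, the GP posterior mean interpolates training data
print(abs(mean[0] - np.sin(1.0)) < 1e-4)
```

This interpolation property is exactly what makes GPs natural emulators of deterministic computer models: the surrogate reproduces every run it has seen and smoothly predicts in between.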

  2. Using Uncertainty Quantification to Guide Development and Improvements of a Regional-Scale Model of the Coastal Lowlands Aquifer System Spanning Texas, Louisiana, Mississippi, Alabama and Florida

    NASA Astrophysics Data System (ADS)

    Foster, L. K.; Clark, B. R.; Duncan, L. L.; Tebo, D. T.; White, J.

    2017-12-01

    Several historical groundwater models exist within the Coastal Lowlands Aquifer System (CLAS), which spans the Gulf Coastal Plain in Texas, Louisiana, Mississippi, Alabama, and Florida. The largest of these models, called the Gulf Coast Regional Aquifer System Analysis (RASA) model, has been brought into a new framework using the Newton formulation for MODFLOW-2005 (MODFLOW-NWT) and serves as the starting point of a new investigation underway by the U.S. Geological Survey to improve understanding of the CLAS and provide predictions of future groundwater availability within an uncertainty quantification (UQ) framework. The use of an UQ framework will not only provide estimates of water-level observation worth, hydraulic parameter uncertainty, boundary-condition uncertainty, and uncertainty of future potential predictions, but it will also guide the model development process. Traditionally, model development proceeds from dataset construction to the process of deterministic history matching, followed by deterministic predictions using the model. This investigation will combine the use of UQ with existing historical models of the study area to assess in a quantitative framework the effect model package and property improvements have on the ability to represent past-system states, as well as the effect on the model's ability to make certain predictions of water levels, water budgets, and base-flow estimates. Estimates of hydraulic property information and boundary conditions from the existing models and literature, forming the prior, will be used to make initial estimates of model forecasts and their corresponding uncertainty, along with an uncalibrated groundwater model run within an unconstrained Monte Carlo analysis. First-Order Second-Moment (FOSM) analysis will also be used to investigate parameter and predictive uncertainty, and guide next steps in model development prior to rigorous history matching by using PEST++ parameter estimation code.
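
First-Order Second-Moment analysis, mentioned above, linearizes the model around a parameter estimate and propagates the parameter covariance through the Jacobian. This toy two-parameter sketch (hypothetical model and covariances, not the CLAS model) shows the mechanics.

```python
import numpy as np

def model(p):
    """Toy forecast vector from parameters (k, r); purely illustrative."""
    k, r = p
    return np.array([k * r, k + r**2])

p0 = np.array([2.0, 0.5])          # calibrated parameter estimate
C = np.diag([0.1**2, 0.05**2])     # prior parameter covariance

# Finite-difference Jacobian of the forecasts w.r.t. the parameters
eps = 1e-6
J = np.column_stack([
    (model(p0 + eps * np.eye(2)[j]) - model(p0)) / eps for j in range(2)
])

# FOSM: forecast covariance ~ J C J^T (first-order propagation)
pred_cov = J @ C @ J.T
print(np.sqrt(np.diag(pred_cov)))  # forecast standard deviations
```

Because FOSM needs only one Jacobian rather than thousands of model runs, it is a cheap first look at parameter and predictive uncertainty before committing to Monte Carlo or rigorous history matching.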

  3. Soil heterogeneity in Mojave Desert shrublands: Biotic and abiotic processes

    NASA Astrophysics Data System (ADS)

    Caldwell, Todd G.; Young, Michael H.; McDonald, Eric V.; Zhu, Jianting

    2012-09-01

    Geological and ecological processes play critical roles in the evolution of desert piedmonts. Feedback between fast cyclic biotic and slow cumulative pedogenic processes on arid alluvial fan systems results in a heterogeneous landscape of interspace and canopy microsites. Defining the spatial extent of these processes will allow a better connection to ecosystem services and climate change. We use a soil chronosequence in the Mojave Desert and high spatial resolution infiltrometer measurements along transects radiating from canopies of perennial shrubs to assess the extent of biotic and abiotic processes and the heterogeneity of soil properties in arid shrublands. Results showed higher saturated conductivity under vegetation regardless of surface age, but it was more conspicuous on older, developed soils. At proximal locations to the shrub, bulk density, soil structure grade, silt, and clay content significantly increased radially from the canopy, while sand and organic material decreased. Soil properties at distal locations 2-5 times the canopy radius had no significant spatial correlation. The extent of the biotic influence of the shrub was 1.34 ± 0.32 times the canopy radius. Hydraulic properties were weakly correlated in space, but 75% of the variance could be attributed to sand content, soil structure grade, mean-particle diameter, and soil organic material, none of which are exclusively biotic or abiotic. The fast cyclic biotic processes occurring under vegetation are clearly overprinted on slow cumulative abiotic processes, resulting in the deterministic variability observed at the plant scale.

  4. Introduction to the special issue on the 2004 Parkfield earthquake and the Parkfield earthquake prediction experiment

    USGS Publications Warehouse

    Harris, R.A.; Arrowsmith, J.R.

    2006-01-01

    The 28 September 2004 M 6.0 Parkfield earthquake, a long-anticipated event on the San Andreas fault, is the world's best recorded earthquake to date, with state-of-the-art data obtained from geologic, geodetic, seismic, magnetic, and electrical field networks. This has allowed the preearthquake and postearthquake states of the San Andreas fault in this region to be analyzed in detail. Analyses of these data provide views into the San Andreas fault that show a complex geologic history, fault geometry, rheology, and response of the nearby region to the earthquake-induced ground movement. Although aspects of San Andreas fault zone behavior in the Parkfield region can be modeled simply over geological time frames, the Parkfield Earthquake Prediction Experiment and the 2004 Parkfield earthquake indicate that predicting the fine details of future earthquakes is still a challenge. Instead of a deterministic approach, forecasting future damaging behavior, such as that caused by strong ground motions, will likely continue to require probabilistic methods. However, the Parkfield Earthquake Prediction Experiment and the 2004 Parkfield earthquake have provided ample data to understand most of what did occur in 2004, culminating in significant scientific advances.

  5. A Comparison of Traditional, Step-Path, and Geostatistical Techniques in the Stability Analysis of a Large Open Pit

    NASA Astrophysics Data System (ADS)

    Mayer, J. M.; Stead, D.

    2017-04-01

    With the increased drive towards deeper and more complex mine designs, geotechnical engineers are often forced to reconsider traditional deterministic design techniques in favour of probabilistic methods. These alternative techniques allow for the direct quantification of uncertainties within a risk and/or decision analysis framework. However, conventional probabilistic practices typically discretize geological materials into discrete, homogeneous domains, with attributes defined by spatially constant random variables, despite the fact that geological media display inherent heterogeneous spatial characteristics. This research directly simulates this phenomenon using a geostatistical approach, known as sequential Gaussian simulation. The method utilizes the variogram which imposes a degree of controlled spatial heterogeneity on the system. Simulations are constrained using data from the Ok Tedi mine site in Papua New Guinea and designed to randomly vary the geological strength index and uniaxial compressive strength using Monte Carlo techniques. Results suggest that conventional probabilistic techniques have a fundamental limitation compared to geostatistical approaches, as they fail to account for the spatial dependencies inherent to geotechnical datasets. This can result in erroneous model predictions, which are overly conservative when compared to the geostatistical results.

  6. Inherent Conservatism in Deterministic Quasi-Static Structural Analysis

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1997-01-01

    The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because mean and variations of the tolerance limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniform safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and of its improvement and transition to absolute reliability.

  7. Class III Mid-Term Project, "Increasing Heavy Oil Reserves in the Wilmington Oil Field Through Advanced Reservoir Characterization and Thermal Production Technologies"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott Hara

    2007-03-31

    The overall objective of this project was to increase heavy oil reserves in slope and basin clastic (SBC) reservoirs through the application of advanced reservoir characterization and thermal production technologies. The project involved improving thermal recovery techniques in the Tar Zone of Fault Blocks II-A and V (Tar II-A and Tar V) of the Wilmington Field in Los Angeles County, near Long Beach, California. A primary objective has been to transfer technology that can be applied in other heavy oil formations of the Wilmington Field and other SBC reservoirs, including those under waterflood. The first budget period addressed several producibility problems in the Tar II-A and Tar V thermal recovery operations that are common in SBC reservoirs. A few of the advanced technologies developed include a three-dimensional (3-D) deterministic geologic model, a 3-D deterministic thermal reservoir simulation model to aid in reservoir management and subsequent post-steamflood development work, and a detailed study on the geochemical interactions between the steam and the formation rocks and fluids. State of the art operational work included drilling and performing a pilot steam injection and production project via four new horizontal wells (2 producers and 2 injectors), implementing a hot water alternating steam (WAS) drive pilot in the existing steamflood area to improve thermal efficiency, installing a 2400-foot insulated, subsurface harbor channel crossing to supply steam to an island location, testing a novel alkaline steam completion technique to control well sanding problems, and starting on an advanced reservoir management system through computer-aided access to production and geologic data to integrate reservoir characterization, engineering, monitoring, and evaluation.
The second budget period phase (BP2) continued to implement state-of-the-art operational work to optimize thermal recovery processes, improve well drilling and completion practices, and evaluate the geomechanical characteristics of the producing formations. The objectives were to further improve reservoir characterization of the heterogeneous turbidite sands, test the proficiency of the three-dimensional geologic and thermal reservoir simulation models, identify the high permeability thief zones to reduce water breakthrough and cycling, and analyze the nonuniform distribution of the remaining oil in place. This work resulted in the redevelopment of the Tar II-A and Tar V post-steamflood projects by drilling several new wells and converting idle wells to improve injection sweep efficiency and more effectively drain the remaining oil reserves. Reservoir management work included reducing water cuts, maintaining or increasing oil production, and evaluating and minimizing further thermal-related formation compaction. The BP2 project utilized all the tools and knowledge gained throughout the DOE project to maximize recovery of the oil in place.

  8. Finite-size effects and switching times for Moran process with mutation.

    PubMed

    DeVille, Lee; Galiardi, Meghan

    2017-04-01

    We consider the Moran process with two populations competing under an iterated Prisoner's Dilemma in the presence of mutation, and concentrate on the case where there are multiple evolutionarily stable strategies. We perform a complete bifurcation analysis of the deterministic system which arises in the infinite population size limit. We also study the Master equation and obtain asymptotics for the invariant distribution and metastable switching times for the stochastic process in the case of large but finite population. Finally, we show that the stochastic system has asymmetries in the form of a skew for parameter values where the deterministic limit is symmetric.
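
A minimal Moran-process-with-mutation sketch: at each step one individual reproduces fitness-proportionally, the offspring mutates with a small probability, and one individual chosen uniformly dies. Constant fitnesses stand in for the iterated-Prisoner's-Dilemma payoffs; population size, mutation rate, and fitness values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

N, mu, steps = 100, 0.01, 5000
f = {0: 1.0, 1: 1.05}      # toy constant fitnesses for the two types
i = 50                     # current count of type-1 individuals

for _ in range(steps):
    # Birth: choose the reproducing type fitness-proportionally
    p_birth1 = i * f[1] / (i * f[1] + (N - i) * f[0])
    birth = 1 if rng.random() < p_birth1 else 0
    if rng.random() < mu:  # offspring mutates to the other type
        birth = 1 - birth
    # Death: choose a victim uniformly at random
    death = 1 if rng.random() < i / N else 0
    i += (birth == 1) - (death == 1)
    i = min(max(i, 0), N)  # guard the boundaries

print("type-1 count after", steps, "steps:", i)
```

With mutation the boundary states are no longer absorbing, so the chain has a nontrivial invariant distribution; the metastable switching the abstract analyzes appears when that distribution has several well-separated modes.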

  9. Spatial Scaling of Floods in Atlantic Coastal Watersheds

    NASA Astrophysics Data System (ADS)

    Plank, C.

    2013-12-01

    Climate and land use changes are altering global, regional and local hydrologic cycles. As a result, past events may not accurately represent the events that will occur in the future. Methods for hydrologic prediction, both statistical and deterministic, require adequate data for calibration. Streamflow gauges tend to be located on large rivers. As a result, statistical flood frequency analysis, which relies on gauge data, is biased towards large watersheds. Conversely, the complexity of parameterizing watershed processes in deterministic hydrological models limits these to small watersheds. Spatial scaling relationships between drainage basin area and discharge can be used to bridge these two methodologies and provide new approaches to hydrologic prediction. The relationship of discharge (Q) to drainage basin area (A) can be expressed as a power function: Q = αA^θ. This study compares scaling exponents (θ) and coefficients (α) for floods of varying magnitude across a selection of major Atlantic Coast watersheds. Comparisons are made by normalizing flood discharges to a reference area bankfull discharge for each watershed. These watersheds capture the geologic and geomorphic transitions along the Atlantic Coast from narrow bedrock-dominated river valleys to wide coastal plain watersheds. Additionally, there is a range of hydrometeorological events that cause major floods in these basins including tropical storms, thunderstorm systems and winter-spring storms. The mix of flood-producing events changes along a gradient as well, with tropical storms and hurricanes increasing in dominance from north to south as a significant cause of major floods. Scaling exponents and coefficients were determined for both flood quantile estimates (e.g. 1.5-, 10-, 100-year floods) and selected hydrometeorological events (e.g. hurricanes, summer thunderstorms, winter-spring storms).
Initial results indicate that southern coastal plain watersheds have lower scaling exponents (θ) than northern watersheds. However, the relative magnitudes of 100-year and other large floods are higher in the coastal plain rivers. In the transition zone between northern and southern watersheds, basins like the Potomac in the Mid-Atlantic region have similar scaling exponents as northern river basins, but relative flood magnitudes comparable to the southern coastal plain watersheds. These differences reflect variations in both geologic/geomorphic and climatic settings. Understanding these variations is important to appropriately use these relationships to improve flood risk models and analyses.
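
The power law Q = αA^θ is typically estimated by linear regression in log-log space, since log Q = log α + θ log A. The sketch below recovers assumed values of α and θ from synthetic noisy basin data; all numbers are illustrative, not the study's results.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic basin areas (spanning 10 to 100,000 km^2) and discharges
# generated from Q = alpha * A**theta with multiplicative noise
area = 10 ** rng.uniform(1, 5, size=60)
alpha_true, theta_true = 2.0, 0.75
Q = alpha_true * area ** theta_true * np.exp(rng.normal(0, 0.1, size=60))

# Fit the scaling exponent (slope) and coefficient (intercept) in log space
theta, log_alpha = np.polyfit(np.log(area), np.log(Q), 1)
alpha = np.exp(log_alpha)
print("theta ~", round(theta, 2), " alpha ~", round(alpha, 1))
```

Fitting separate (θ, α) pairs per region or per flood quantile, as the study does, then reduces to repeating this regression on each subset of basins.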

  10. 3D Dynamic Rupture Simulations along Dipping Faults, with a focus on the Wasatch Fault Zone, Utah

    NASA Astrophysics Data System (ADS)

    Withers, K.; Moschetti, M. P.

    2017-12-01

    We study dynamic rupture and ground motion from dip-slip faults in regions that have high seismic hazard, such as the Wasatch fault zone, Utah. Previous numerical simulations have modeled deterministic ground motion along segments of this fault in the heavily populated regions near Salt Lake City but were restricted to low frequencies (≤ 1 Hz). We seek to better understand the rupture process and assess broadband ground motions and variability from the Wasatch Fault Zone by extending deterministic ground motion prediction to higher frequencies (up to 5 Hz). We perform simulations along a dipping normal fault (40 x 20 km along strike and width, respectively) with characteristics derived from geologic observations to generate a suite of ruptures > Mw 6.5. This approach utilizes dynamic simulations (fully physics-based models, where the initial stress drop and friction law are imposed) using a summation by parts (SBP) method. The simulations include rough-fault topography following a self-similar fractal distribution (over length scales from 100 m to the size of the fault) in addition to off-fault plasticity. Energy losses from heat and other mechanisms, modeled as anelastic attenuation, are also included, as well as free-surface topography, which can significantly affect ground motion patterns. We compare the effects that material structure and the choice of friction law (rate-and-state versus slip-weakening) have on rupture propagation. The simulations show reduced slip and moment release in the near surface with the inclusion of plasticity, better agreeing with observations of shallow slip deficit. Long-wavelength fault geometry imparts a non-uniform stress distribution along both dip and strike, influencing the preferred rupture direction and hypocenter location, potentially important for seismic hazard estimation.

  11. Deterministic quantum state transfer and remote entanglement using microwave photons.

    PubMed

    Kurpiers, P; Magnard, P; Walter, T; Royer, B; Pechal, M; Heinsoo, J; Salathé, Y; Akin, A; Storz, S; Besse, J-C; Gasparinetti, S; Blais, A; Wallraff, A

    2018-06-01

    Sharing information coherently between nodes of a quantum network is fundamental to distributed quantum information processing. In this scheme, the computation is divided into subroutines and performed on several smaller quantum registers that are connected by classical and quantum channels 1 . A direct quantum channel, which connects nodes deterministically rather than probabilistically, achieves larger entanglement rates between nodes and is advantageous for distributed fault-tolerant quantum computation 2 . Here we implement deterministic state-transfer and entanglement protocols between two superconducting qubits fabricated on separate chips. Superconducting circuits 3 constitute a universal quantum node 4 that is capable of sending, receiving, storing and processing quantum information 5-8 . Our implementation is based on an all-microwave cavity-assisted Raman process 9 , which entangles or transfers the qubit state of a transmon-type artificial atom 10 with a time-symmetric itinerant single photon. We transfer qubit states by absorbing these itinerant photons at the receiving node, with a probability of 98.1 ± 0.1 per cent, achieving a transfer-process fidelity of 80.02 ± 0.07 per cent for a protocol duration of only 180 nanoseconds. We also prepare remote entanglement on demand with a fidelity as high as 78.9 ± 0.1 per cent at a rate of 50 kilohertz. Our results are in excellent agreement with numerical simulations based on a master-equation description of the system. This deterministic protocol has the potential to be used for quantum computing distributed across different nodes of a cryogenic network.

  12. A random walk on water (Henry Darcy Medal Lecture)

    NASA Astrophysics Data System (ADS)

    Koutsoyiannis, D.

    2009-04-01

    Randomness and uncertainty had been well appreciated in hydrology and water resources engineering in their initial steps as scientific disciplines. However, this changed through the years and, following other geosciences, hydrology adopted a naïve view of randomness in natural processes. Such a view separates natural phenomena into two mutually exclusive types, random or stochastic, and deterministic. When a classification of a specific process into one of these two types fails, then a separation of the process into two different, usually additive, parts is typically devised, each of which may be further subdivided into subparts (e.g., deterministic subparts such as periodic and aperiodic or trends). This dichotomous logic is typically combined with a Manichean perception, in which the deterministic part supposedly represents cause-effect relationships and thus is physics and science (the "good"), whereas randomness has little relationship with science and no relationship with understanding (the "evil"). Probability theory and statistics, which traditionally provided the tools for dealing with randomness and uncertainty, have been regarded by some as the "necessary evil" but not as an essential part of hydrology and geophysics. Some took a step further to banish them from hydrology, replacing them with deterministic sensitivity analysis and fuzzy-logic representations. Others attempted to demonstrate that irregular fluctuations observed in natural processes are au fond manifestations of underlying chaotic deterministic dynamics with low dimensionality, thus attempting to render probabilistic descriptions unnecessary. Some of the above recent developments are simply flawed because they make erroneous use of probability and statistics (which, remarkably, provide the tools for such analyses), whereas the entire underlying logic is just a false dichotomy. 
To see this, it suffices to recall that Pierre Simon Laplace, perhaps the most famous proponent of determinism in the history of philosophy of science (cf. Laplace's demon), is, at the same time, one of the founders of probability theory, which he regarded as "nothing but common sense reduced to calculation". This harmonizes with James Clerk Maxwell's view that "the true logic for this world is the calculus of Probabilities" and was more recently and epigrammatically formulated in the title of Edwin Thompson Jaynes's book "Probability Theory: The Logic of Science" (2003). Abandoning dichotomous logic, either on ontological or epistemic grounds, we can identify randomness or stochasticity with unpredictability. Admitting that (a) uncertainty is an intrinsic property of nature; (b) causality implies dependence of natural processes in time and thus suggests predictability; but, (c) even the tiniest uncertainty (e.g., in initial conditions) may result in unpredictability after a certain time horizon, we may shape a stochastic representation of natural processes that is consistent with Karl Popper's indeterministic world view. In this representation, probability quantifies uncertainty according to the Kolmogorov system, in which probability is a normalized measure, i.e., a function that maps sets (areas where the initial conditions or the parameter values lie) to real numbers (in the interval [0, 1]). In such a representation, predictability (suggested by deterministic laws) and unpredictability (randomness) coexist, are not separable or additive components, and it is a matter of specifying the time horizon of prediction to decide which of the two dominates. An elementary numerical example has been devised to illustrate the above ideas and demonstrate that they offer a pragmatic and useful guide for practice, rather than just pertaining to philosophical discussions. 
A chaotic model, with fully and a priori known deterministic dynamics and deterministic inputs (without any random agent), is assumed to represent the hydrological balance in an area partly covered by vegetation. Experimentation with this toy model demonstrates, inter alia, that: (1) for short time horizons the deterministic dynamics is able to give good predictions; but (2) these predictions become extremely inaccurate and useless for long time horizons; (3) for such horizons a naïve statistical prediction (the average of past data), which fully neglects the deterministic dynamics, is more skilful; and (4) if this statistical prediction, in addition to past data, is combined with probability theory (the principle of maximum entropy, in particular), it can provide a more informative prediction. The toy model also shows that the trajectories of the system state (and derivative properties thereof) resemble neither a regular (e.g., periodic) deterministic process nor a purely random process, but exhibit patterns indicating anti-persistence and persistence (where the latter statistically complies with Hurst-Kolmogorov behaviour). If the process is averaged over long time scales, the anti-persistent behaviour improves predictability, whereas the persistent behaviour substantially deteriorates it. A stochastic representation of this deterministic system, which incorporates dynamics, is not only possible but also powerful, as it provides good predictions for both short and long horizons and helps to decide when the deterministic dynamics should be considered or neglected. Obviously, a natural system is far more complex than this simple toy model, and hence unpredictability is naturally even more prominent in the former. 
In addition, in a complex natural system, we can never know the exact dynamics and must infer it from past data, which implies additional uncertainty and an additional role of stochastics in the process of formulating the system equations and estimating the involved parameters. Data also offer the only solid grounds to test any hypothesis about the dynamics, and failure to perform such testing against evidence from data renders the hypothesised dynamics worthless. If this perception of natural phenomena is adequately plausible, then it may help in studying interesting fundamental questions regarding the current state and the trends of hydrological and water resources research and their promising future paths. For instance: (i) Will it ever be possible to achieve a fully "physically based" modelling of hydrological systems that will not depend on data or stochastic representations? (ii) To what extent can hydrological uncertainty be reduced, and what are the effective means for such reduction? (iii) Are current stochastic methods in hydrology consistent with observed natural behaviours? What paths should we explore for their advancement? (iv) Can deterministic methods provide solid scientific grounds for water resources engineering and management? In particular, can there be risk-free hydraulic engineering and water management? (v) Is the current (particularly important) interface between hydrology and climate satisfactory? In particular, should hydrology rely on climate models that are not properly validated (i.e., for periods and scales not used in calibration)? In effect, is the evolution of climate and its impacts on water resources deterministically predictable?
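
    The toy-model argument can be reproduced with a minimal chaotic map (a sketch with an assumed logistic map, not the lecture's actual water-balance model): a tiny initial-condition error leaves short-horizon deterministic predictions accurate, destroys long-horizon ones, and makes the climatological mean the more skilful long-horizon predictor.

```python
import numpy as np

def logistic(x, r=4.0):
    """Fully deterministic chaotic dynamics (logistic map)."""
    return r * x * (1.0 - x)

def trajectory(x0, n):
    xs = np.empty(n)
    xs[0] = x0
    for i in range(1, n):
        xs[i] = logistic(xs[i - 1])
    return xs

truth = trajectory(0.2, 200)
model = trajectory(0.2 + 1e-9, 200)  # same dynamics, tiny initial-condition error

short_err = np.abs(truth[:10] - model[:10]).mean()          # deterministic forecast: excellent
long_err = np.abs(truth[100:] - model[100:]).mean()         # deterministic forecast: useless
clim_err = np.abs(truth[100:] - truth[:100].mean()).mean()  # naive statistical forecast
```

    With the error roughly doubling per step, the 1e-9 perturbation saturates after a few dozen steps, beyond which the average of past data outperforms the deterministic forecast.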

  13. Integrating geological uncertainty in long-term open pit mine production planning by ant colony optimization

    NASA Astrophysics Data System (ADS)

    Gilani, Seyed-Omid; Sattarvand, Javad

    2016-02-01

    Meeting production targets in terms of ore quantity and quality is critical for a successful mining operation. In-situ grade uncertainty causes both deviations from production targets and general financial deficits. A new stochastic optimization algorithm based on the ant colony optimization (ACO) approach is developed herein to integrate geological uncertainty, described through a series of simulated orebodies. Two different strategies were developed, based on a single predefined probability value (Prob) and on multiple probability values (Prob_nt), respectively, in order to improve the initial solutions created by the deterministic ACO procedure. Application at the Sungun copper mine in northwest Iran demonstrates the ability of the stochastic approach to create a single schedule, control the risk of deviating from production targets over time, and increase the project value. A comparison between the two strategies and the traditional approach illustrates that the multiple-probability strategy is able to produce better schedules, whereas the single predefined probability is more practical in projects requiring a high degree of flexibility.
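
    The single-probability strategy can be illustrated with a toy feasibility check (hypothetical numbers, not the Sungun data or the authors' ACO implementation): a candidate period schedule is accepted only if the fraction of simulated orebodies meeting the grade target reaches the predefined probability Prob.

```python
import numpy as np

rng = np.random.default_rng(0)
# 50 simulated orebodies x 10 blocks: equally probable grade realizations
sims = rng.normal(loc=1.0, scale=0.3, size=(50, 10))

target_grade = 0.9   # required mean ore grade for the period
prob = 0.8           # single predefined probability threshold (Prob)

schedule = [0, 3, 7]                             # blocks mined this period
period_grade = sims[:, schedule].mean(axis=1)    # grade under each realization
p_meet = (period_grade >= target_grade).mean()   # fraction of simulations on target
feasible = p_meet >= prob                        # accept/reject under uncertainty
```

    A full ACO run would score many candidate schedules this way, guiding the pheromone update toward schedules with an acceptable risk of missing the target.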

  14. Evidence for determinism in species diversification and contingency in phenotypic evolution during adaptive radiation.

    PubMed

    Burbrink, Frank T; Chen, Xin; Myers, Edward A; Brandley, Matthew C; Pyron, R Alexander

    2012-12-07

    Adaptive radiation (AR) theory predicts that groups sharing the same source of ecological opportunity (EO) will experience deterministic species diversification and morphological evolution. Thus, deterministic ecological and morphological evolution should be correlated with deterministic patterns in the tempo and mode of speciation for groups in similar habitats and time periods. We test this hypothesis using well-sampled phylogenies of four squamate groups that colonized the New World (NW) in the Late Oligocene. We use both standard and coalescent models to assess species diversification, as well as likelihood models to examine morphological evolution. All squamate groups show similar early pulses of speciation, as well as diversity-dependent ecological limits on clade size at a continental scale. In contrast, processes of morphological evolution are not easily predictable and do not show similar pulses of early and rapid change. Patterns of morphological and species diversification thus appear uncoupled across these groups. This indicates that the processes that drive diversification and disparification are not mechanistically linked, even among similar groups of taxa experiencing the same sources of EO. It also suggests that processes of phenotypic diversification cannot be predicted solely from the existence of an AR or knowledge of the process of diversification.

  15. Local-scale Partitioning of Functional and Phylogenetic Beta Diversity in a Tropical Tree Assemblage.

    PubMed

    Yang, Jie; Swenson, Nathan G; Zhang, Guocheng; Ci, Xiuqin; Cao, Min; Sha, Liqing; Li, Jie; Ferry Slik, J W; Lin, Luxiang

    2015-08-03

    The relative degree to which stochastic and deterministic processes underpin community assembly is a central problem in ecology. Quantifying local-scale phylogenetic and functional beta diversity may shed new light on this problem. We used species distribution, soil, trait and phylogenetic data to quantify whether environmental distance, geographic distance or their combination are the strongest predictors of phylogenetic and functional beta diversity on local scales in a 20-ha tropical seasonal rainforest dynamics plot in southwest China. The patterns of phylogenetic and functional beta diversity were generally consistent. The phylogenetic and functional dissimilarity between subplots (10 × 10 m, 20 × 20 m, 50 × 50 m and 100 × 100 m) was often higher than that expected by chance. The turnover of lineages and species function within habitats was generally slower than that across habitats. Partitioning the variation in phylogenetic and functional beta diversity showed that environmental distance was generally a better predictor of beta diversity than geographic distance thereby lending relatively more support for deterministic environmental filtering over stochastic processes. Overall, our results highlight that deterministic processes play a stronger role than stochastic processes in structuring community composition in this diverse assemblage of tropical trees.
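
    The environmental-versus-geographic comparison can be sketched as a Mantel-style correlation on toy data (hypothetical subplot values, not the 20-ha plot data): if functional dissimilarity tracks environmental distance more closely than geographic distance, deterministic filtering is favoured.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
env = rng.normal(size=n)               # one soil variable per subplot
xy = rng.uniform(0, 100, size=(n, 2))  # subplot coordinates
# toy trait driven by the environment (deterministic filtering) plus noise
trait = 2.0 * env + rng.normal(scale=0.5, size=n)

def pairwise(v):
    """Euclidean distance matrix for 1-D or 2-D coordinates."""
    v = np.asarray(v, dtype=float)
    if v.ndim == 1:
        v = v[:, None]
    return np.sqrt(((v[:, None, :] - v[None, :, :]) ** 2).sum(-1))

beta = pairwise(trait)      # functional dissimilarity between subplots
d_env = pairwise(env)       # environmental distance
d_geo = pairwise(xy)        # geographic distance

iu = np.triu_indices(n, 1)  # each subplot pair once
r_env = np.corrcoef(beta[iu], d_env[iu])[0, 1]
r_geo = np.corrcoef(beta[iu], d_geo[iu])[0, 1]
```

    Here r_env comes out large and r_geo near zero, the toy analogue of environmental distance being the better predictor of beta diversity; a real analysis would also partial out the shared variation.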

  16. Evidence for determinism in species diversification and contingency in phenotypic evolution during adaptive radiation

    PubMed Central

    Burbrink, Frank T.; Chen, Xin; Myers, Edward A.; Brandley, Matthew C.; Pyron, R. Alexander

    2012-01-01

    Adaptive radiation (AR) theory predicts that groups sharing the same source of ecological opportunity (EO) will experience deterministic species diversification and morphological evolution. Thus, deterministic ecological and morphological evolution should be correlated with deterministic patterns in the tempo and mode of speciation for groups in similar habitats and time periods. We test this hypothesis using well-sampled phylogenies of four squamate groups that colonized the New World (NW) in the Late Oligocene. We use both standard and coalescent models to assess species diversification, as well as likelihood models to examine morphological evolution. All squamate groups show similar early pulses of speciation, as well as diversity-dependent ecological limits on clade size at a continental scale. In contrast, processes of morphological evolution are not easily predictable and do not show similar pulses of early and rapid change. Patterns of morphological and species diversification thus appear uncoupled across these groups. This indicates that the processes that drive diversification and disparification are not mechanistically linked, even among similar groups of taxa experiencing the same sources of EO. It also suggests that processes of phenotypic diversification cannot be predicted solely from the existence of an AR or knowledge of the process of diversification. PMID:23034709

  17. UAV, LiDAR & ground-based surveying from Stackpole Quay: best practice for accuracy of virtual outcrops and structural models

    NASA Astrophysics Data System (ADS)

    Cawood, A.; Bond, C. E.; Howell, J.; Totake, Y.

    2016-12-01

    Virtual outcrops derived from techniques such as LiDAR and SfM (digital photogrammetry) provide a viable and potentially powerful addition or alternative to traditional field studies, given the large amounts of raw data that can be acquired rapidly and safely. The use of these digital representations of outcrops as a source of geological data has increased greatly in the past decade, and as such, the accuracy and precision of these new acquisition methods applied to geological problems has been addressed by a number of authors. Little work has been done, however, on the integration of virtual outcrops into fundamental structural geology workflows, or on systematically studying the fidelity of the data derived from them. Here, we use the classic Stackpole Quay syncline outcrop in South Wales to quantitatively evaluate the accuracy of three virtual outcrop models (LiDAR, aerial and terrestrial digital photogrammetry) compared to data collected directly in the field. Using these structural data, we have built 2D and 3D geological models which make predictions of fold geometries. We examine the fidelity of virtual outcrops generated using different acquisition techniques to outcrop geology and how these affect model building and final outcomes. Finally, we utilize newly acquired data to deterministically test model validity. Based upon these results, we find that acquisition of digital imagery by UAV (unmanned aerial vehicle) yields highly accurate virtual outcrops when compared to terrestrial methods, allowing the construction of robust data-driven predictive models. Careful planning, survey design and choice of a suitable acquisition method are, however, of key importance for best results.

  18. Deterministic seismogenic scenarios based on asperities spatial distribution to assess tsunami hazard on northern Chile (18°S to 24°S)

    NASA Astrophysics Data System (ADS)

    González-Carrasco, J. F.

    2016-12-01

    Southern Peru and northern Chile coastal areas, extending between 12°S and 24°S, have been recognized as a mature seismic gap with high seismogenic potential, associated with the seismic moment deficit accumulated since 1877. An important scientific question, relevant from a hazard-assessment perspective, is what the rupture pattern of a future megathrust earthquake will be. During the last decade, the occurrence of three major subduction earthquakes has made it possible to acquire outstanding geophysical and geological information on the behavior of these phenomena. An interesting result is the relationship between the maximum-slip areas and the spatial distribution of asperities in subduction zones. In this contribution, we propose a methodology to identify a regional pattern of main asperities in order to construct reliable seismogenic scenarios in a seismic gap. We follow a deterministic approach to explore the distribution of asperity segmentation using geophysical and geodetic data such as trench-parallel gravity anomaly (TPGA), interseismic coupling (ISC), b-value, historical moment release, and residual bathymetric and gravity anomalies. The combined information represents physical constraints on short- and long-term suitable regions for future mega-earthquakes. To illuminate the asperity distribution, we construct profiles of all proxies in fault coordinates, along strike and down dip, to define the boundaries of major asperities (> 100 km). The geometry of a major asperity is useful to define a finite set of deterministic seismogenic scenarios for evaluating tsunamigenic hazard in the main cities of northern Chile (18°S to 24°S).

  19. Population density equations for stochastic processes with memory kernels

    NASA Astrophysics Data System (ADS)

    Lai, Yi Ming; de Kamps, Marc

    2017-06-01

    We present a method for solving population density equations (PDEs), a mean-field technique describing homogeneous populations of uncoupled neurons, where the populations can be subject to non-Markov noise for arbitrary distributions of jump sizes. The method combines recent developments in two different disciplines that traditionally have had limited interaction: computational neuroscience and the theory of random networks. The method uses a geometric binning scheme, based on the method of characteristics, to capture the deterministic neurodynamics of the population, separating the deterministic and stochastic processes cleanly. We can independently vary the choice of the deterministic model and the model for the stochastic process, leading to a highly modular numerical solution strategy. We demonstrate this by replacing the master equation implicit in many formulations of the PDE formalism by a generalization called the generalized Montroll-Weiss equation, a recent result from random network theory describing a random walker subject to transitions realized by a non-Markovian process. We demonstrate the method for leaky and quadratic integrate-and-fire neurons subject to spike trains with Poisson and gamma-distributed interspike intervals. We are able to accurately model the jump responses of both neuron models to excitatory and inhibitory input, under the assumption that all inputs are generated by one renewal process.
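
    The renewal-process inputs handled by the method can be illustrated by comparing spike trains with exponential (Poisson) versus gamma-distributed interspike intervals, matched in mean rate but not in regularity (assumed parameters):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Poisson input: exponential ISIs, coefficient of variation (CV) = 1
isi_poisson = rng.exponential(scale=1.0, size=n)
# Gamma(shape=4) input with the same mean ISI of 1.0, CV = 1/sqrt(4) = 0.5
isi_gamma = rng.gamma(shape=4.0, scale=0.25, size=n)

cv = lambda x: x.std() / x.mean()
spike_times_gamma = np.cumsum(isi_gamma)  # a renewal (non-Markov) spike train
```

    Both trains drive the neuron at the same mean rate, but the gamma train is markedly more regular (CV 0.5 versus 1), which is exactly the kind of non-Markov input that breaks the standard master-equation assumption.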

  20. Deterministic Migration-Based Separation of White Blood Cells.

    PubMed

    Kim, Byeongyeon; Choi, Young Joon; Seo, Hyekyung; Shin, Eui-Cheol; Choi, Sungyoung

    2016-10-01

    Functional and phenotypic analyses of peripheral white blood cells provide useful clinical information. However, separation of white blood cells from peripheral blood requires a time-consuming, inconvenient process, and thus analyses of separated white blood cells are limited in clinical settings. To overcome this limitation, a microfluidic separation platform is developed to enable deterministic migration of white blood cells, directing the cells into designated positions according to a ridge pattern. The platform uses slant ridge structures on the channel top to induce the deterministic migration, which allows efficient and high-throughput separation of white blood cells from unprocessed whole blood. The extent of the deterministic migration under various rheological conditions is explored, enabling highly efficient migration of white blood cells in whole blood and achieving high-throughput separation of the cells (processing 1 mL of whole blood in less than 7 min). In the separated cell population, the composition of lymphocyte subpopulations is well preserved, and T cells secrete cytokines without any functional impairment. On the basis of these results, this microfluidic platform is a promising tool for the rapid enrichment of white blood cells, and it is useful for functional and phenotypic analyses of peripheral white blood cells. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Nitrogen enrichment suppresses other environmental drivers and homogenizes salt marsh leaf microbiome

    DOE PAGES

    Daleo, Pedro; Alberti, Juan; Jumpponen, Ari; ...

    2018-04-12

    Microbial community assembly is affected by a combination of forces that act simultaneously, but the mechanisms underpinning their relative influences remain elusive. This gap strongly limits our ability to predict human impacts on microbial communities and the processes they regulate. Here, we experimentally demonstrate that increased salinity stress, food web alteration and nutrient loading interact to drive outcomes in salt marsh fungal leaf communities. Both salinity stress and food web alterations drove communities to deterministically diverge, resulting in distinct fungal communities. Increased nutrient loads, nevertheless, partially suppressed the influence of other factors as determinants of fungal assembly. Using a null model approach, we found that increased nutrient loads enhanced the relative importance of stochastic over deterministic divergent processes; without increased nutrient loads, samples from different treatments showed a relatively (deterministic) divergent community assembly, whereas increased nutrient loads drove the system to more stochastic assemblies, suppressing the effect of other treatments. These results demonstrate that common anthropogenic modifications can interact to control fungal community assembly. Furthermore, our results suggest that when the environmental conditions are spatially heterogeneous (as in our case, caused by specific combinations of experimental treatments), increased stochasticity caused by greater nutrient inputs can reduce the importance of deterministic filters that otherwise caused divergence, thus driving microbial community homogenization.

  2. Nitrogen enrichment suppresses other environmental drivers and homogenizes salt marsh leaf microbiome.

    PubMed

    Daleo, Pedro; Alberti, Juan; Jumpponen, Ari; Veach, Allison; Ialonardi, Florencia; Iribarne, Oscar; Silliman, Brian

    2018-06-01

    Microbial community assembly is affected by a combination of forces that act simultaneously, but the mechanisms underpinning their relative influences remain elusive. This gap strongly limits our ability to predict human impacts on microbial communities and the processes they regulate. Here, we experimentally demonstrate that increased salinity stress, food web alteration and nutrient loading interact to drive outcomes in salt marsh fungal leaf communities. Both salinity stress and food web alterations drove communities to deterministically diverge, resulting in distinct fungal communities. Increased nutrient loads, nevertheless, partially suppressed the influence of other factors as determinants of fungal assembly. Using a null model approach, we found that increased nutrient loads enhanced the relative importance of stochastic over deterministic divergent processes; without increased nutrient loads, samples from different treatments showed a relatively (deterministic) divergent community assembly, whereas increased nutrient loads drove the system to more stochastic assemblies, suppressing the effect of other treatments. These results demonstrate that common anthropogenic modifications can interact to control fungal community assembly. Furthermore, our results suggest that when the environmental conditions are spatially heterogeneous (as in our case, caused by specific combinations of experimental treatments), increased stochasticity caused by greater nutrient inputs can reduce the importance of deterministic filters that otherwise caused divergence, thus driving microbial community homogenization. © 2018 by the Ecological Society of America.
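
    The null-model logic can be sketched as a label-permutation test on a toy community matrix (hypothetical counts, not the authors' data): a standardized effect size (SES) near zero indicates stochastic assembly, while a strongly positive SES indicates deterministic divergence between treatments.

```python
import numpy as np

rng = np.random.default_rng(2)
# toy community matrix: 20 samples (10 per treatment) x 30 taxa
comm = rng.poisson(1.0, size=(20, 30)).astype(float)
comm[:10, :10] += 6.0  # treatment A deterministically enriches taxa 0-9
labels = np.array([0] * 10 + [1] * 10)

def bray_curtis(a, b):
    return np.abs(a - b).sum() / (a + b).sum()

def mean_between(comm, labels):
    """Mean Bray-Curtis dissimilarity across the two labelled groups."""
    g0, g1 = comm[labels == 0], comm[labels == 1]
    return np.mean([bray_curtis(x, y) for x in g0 for y in g1])

obs = mean_between(comm, labels)
# null distribution: shuffle treatment labels, recompute between-group dissimilarity
null = np.array([mean_between(comm, rng.permutation(labels)) for _ in range(200)])
ses = (obs - null.mean()) / null.std()  # standardized effect size
```

    Removing the `+= 6.0` enrichment (i.e., a purely stochastic community) would pull the SES back toward zero, mimicking the homogenizing effect of nutrient loading described above.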

  3. Modeling the within-host dynamics of cholera: bacterial-viral interaction.

    PubMed

    Wang, Xueying; Wang, Jin

    2017-08-01

    Novel deterministic and stochastic models are proposed in this paper for the within-host dynamics of cholera, with a focus on the bacterial-viral interaction. The deterministic model is a system of differential equations describing the interaction among the two types of vibrios and the viruses. The stochastic model is a system of Markov jump processes that is derived based on the dynamics of the deterministic model. The multitype branching process approximation is applied to estimate the extinction probability of bacteria and viruses within a human host during the early stage of the bacterial-viral infection. Accordingly, a closed-form expression is derived for the disease extinction probability, and analytic estimates are validated with numerical simulations. The local and global dynamics of the bacterial-viral interaction are analysed using the deterministic model, and the result indicates that there is a sharp disease threshold characterized by the basic reproduction number R0: if R0 < 1, vibrios ingested from the environment into the human body will not cause cholera infection; if R0 > 1, vibrios will grow with increased toxicity and persist within the host, leading to human cholera. In contrast, the stochastic model indicates, more realistically, that there is always a positive probability of disease extinction within the human host.
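
    The branching-process extinction probability has a fixed-point characterization that is easy to sketch for a single type with Poisson offspring (generic parameters, not the paper's cholera-specific multitype model): the extinction probability q is the smallest root of q = G(q), where G is the offspring probability generating function.

```python
import math

def extinction_prob(m, iters=10_000, tol=1e-12):
    """Extinction probability of a branching process with Poisson(m) offspring.

    Iterates q <- G(q) = exp(m * (q - 1)) from q = 0, which converges to the
    smallest fixed point in [0, 1].
    """
    q = 0.0
    for _ in range(iters):
        q_new = math.exp(m * (q - 1.0))
        if abs(q_new - q) < tol:
            return q_new
        q = q_new
    return q

q_sub = extinction_prob(0.5)    # subcritical: extinction certain (-> 1.0)
q_super = extinction_prob(2.0)  # supercritical: about 0.203
```

    This mirrors the abstract's point: even in the supercritical regime, where the deterministic threshold predicts growth, the stochastic model retains a positive probability of within-host extinction.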

  4. Nitrogen enrichment suppresses other environmental drivers and homogenizes salt marsh leaf microbiome

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daleo, Pedro; Alberti, Juan; Jumpponen, Ari

    Microbial community assembly is affected by a combination of forces that act simultaneously, but the mechanisms underpinning their relative influences remain elusive. This gap strongly limits our ability to predict human impacts on microbial communities and the processes they regulate. Here, we experimentally demonstrate that increased salinity stress, food web alteration and nutrient loading interact to drive outcomes in salt marsh fungal leaf communities. Both salinity stress and food web alterations drove communities to deterministically diverge, resulting in distinct fungal communities. Increased nutrient loads, nevertheless, partially suppressed the influence of other factors as determinants of fungal assembly. Using a null model approach, we found that increased nutrient loads enhanced the relative importance of stochastic over deterministic divergent processes; without increased nutrient loads, samples from different treatments showed a relatively (deterministic) divergent community assembly, whereas increased nutrient loads drove the system to more stochastic assemblies, suppressing the effect of other treatments. These results demonstrate that common anthropogenic modifications can interact to control fungal community assembly. Furthermore, our results suggest that when the environmental conditions are spatially heterogeneous (as in our case, caused by specific combinations of experimental treatments), increased stochasticity caused by greater nutrient inputs can reduce the importance of deterministic filters that otherwise caused divergence, thus driving microbial community homogenization.

  5. Chance or Necessity: Modeling Origins of Life

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew

    2006-01-01

    The fundamental nature of processes that led to the emergence of life has been a subject of long-standing debate. One view holds that the origin of life is an event governed by chance, and the result of so many random events is unpredictable. This view was eloquently expressed by Jacques Monod in his book Chance and Necessity. In an alternative view, the origin of life is considered a deterministic event. Its details need not be deterministic in every respect, but the overall behavior is predictable. A corollary to the deterministic view is that the emergence of life must have been determined primarily by universal chemistry and biochemistry rather than by subtle details of environmental conditions. In my lecture I will explore two different paradigms for the emergence of life and discuss their implications for the predictability and universality of life-forming processes. The dominant approach is that the origin of life was guided by information stored in nucleic acids (the RNA World hypothesis). In this view, selection of improved combinations of nucleic acids obtained through random mutations drove the evolution of biological systems from their conception. An alternative hypothesis states that the formation of protocellular metabolism was driven by non-genomic processes. Even though these processes were highly stochastic, the outcome was largely deterministic, strongly constrained by the laws of chemistry. I will argue that self-replication of macromolecules was not required at the early stages of evolution; the reproduction of cellular functions alone was sufficient for self-maintenance of protocells. In fact, the precise transfer of information between successive generations of the earliest protocells was unnecessary and could have impeded the discovery of cellular metabolism. I will also show that such concepts as speciation and fitness to the environment, developed in the context of genomic evolution, also hold in the absence of a genome.

  6. Estimating predictive hydrological uncertainty by dressing deterministic and ensemble forecasts; a comparison, with application to Meuse and Rhine

    NASA Astrophysics Data System (ADS)

    Verkade, J. S.; Brown, J. D.; Davids, F.; Reggiani, P.; Weerts, A. H.

    2017-12-01

    Two statistical post-processing approaches for estimation of predictive hydrological uncertainty are compared: (i) 'dressing' of a deterministic forecast by adding a single, combined estimate of both hydrological and meteorological uncertainty and (ii) 'dressing' of an ensemble streamflow forecast by adding an estimate of hydrological uncertainty to each individual streamflow ensemble member. Both approaches aim to produce an estimate of the 'total uncertainty' that captures both the meteorological and hydrological uncertainties. They differ in the degree to which they make use of statistical post-processing techniques. In the 'lumped' approach, both sources of uncertainty are lumped by post-processing deterministic forecasts using their verifying observations. In the 'source-specific' approach, the meteorological uncertainties are estimated by an ensemble of weather forecasts. These ensemble members are routed through a hydrological model and a realization of the probability distribution of hydrological uncertainties (only) is then added to each ensemble member to arrive at an estimate of the total uncertainty. The techniques are applied to one location in the Meuse basin and three locations in the Rhine basin. Resulting forecasts are assessed for their reliability and sharpness, as well as compared in terms of multiple verification scores including the relative mean error, Brier Skill Score, Mean Continuous Ranked Probability Skill Score, Relative Operating Characteristic Score and Relative Economic Value. The dressed deterministic forecasts are generally more reliable than the dressed ensemble forecasts, but the latter are sharper. On balance, however, they show similar quality across a range of verification metrics, with the dressed ensembles coming out slightly better. Some additional analyses are suggested. 
Notably, these include statistical post-processing of the meteorological forecasts in order to increase their reliability, thus increasing the reliability of the streamflow forecasts produced with ensemble meteorological forcings.
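
    The 'lumped' dressing approach can be sketched in a few lines (illustrative numbers only, not the Meuse/Rhine data): historical forecast errors supply an empirical error distribution that is added to today's deterministic forecast to yield predictive quantiles.

```python
import numpy as np

# Historical errors (observation minus forecast) at this station, in m3/s
past_errors = np.array([-12.0, -5.0, -2.0, 0.0, 1.0, 3.0, 6.0, 9.0, 15.0])

det_forecast = 250.0  # today's deterministic streamflow forecast, m3/s

# Dressed (predictive) quantiles: hydrological + meteorological uncertainty lumped
q10, q50, q90 = det_forecast + np.quantile(past_errors, [0.1, 0.5, 0.9])
```

    In the source-specific variant, an analogous error sample representing hydrological uncertainty alone would instead be added to each meteorological ensemble member.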

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiebenga, J. H.; Atzema, E. H.; Boogaard, A. H. van den

    Robust design of forming processes using numerical simulations is gaining attention throughout the industry. In this work, it is demonstrated how robust optimization can assist in further stretching the limits of metal forming processes. A deterministic and a robust optimization study are performed, considering a stretch-drawing process of a hemispherical cup product. For the robust optimization study, both the effect of material and process scatter are taken into account. For quantifying the material scatter, samples of 41 coils of a drawing quality forming steel have been collected. The stochastic material behavior is obtained by a hybrid approach, combining mechanical testing and texture analysis, and efficiently implemented in a metamodel based optimization strategy. The deterministic and robust optimization results are subsequently presented and compared, demonstrating an increased process robustness and decreased number of product rejects by application of the robust optimization approach.
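
    The deterministic-versus-robust contrast can be sketched with a toy response function (hypothetical, not the hemispherical-cup FE model): the deterministic optimum minimizes the nominal response, while the robust optimum minimizes the mean plus three standard deviations under sampled material scatter, shifting the chosen process setting.

```python
import numpy as np

rng = np.random.default_rng(4)

def thinning(x, m):
    """Toy sheet-thinning response at process setting x with material scatter m."""
    return (x - 1.0) ** 2 + 0.3 * m * x

m_samples = rng.normal(0.0, 1.0, size=2000)  # sampled material scatter
xs = np.linspace(0.0, 2.0, 201)              # candidate process settings

det_obj = np.array([thinning(x, 0.0) for x in xs])  # nominal (deterministic) objective
rob_obj = np.array([thinning(x, m_samples).mean() + 3.0 * thinning(x, m_samples).std()
                    for x in xs])                   # mean + 3 sigma (robust) objective

x_det = xs[det_obj.argmin()]
x_rob = xs[rob_obj.argmin()]
```

    Because scatter sensitivity grows with x in this toy response, the robust optimum retreats to a smaller setting than the deterministic one, the same qualitative trade-off the study reports.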

  8. Positive dwell time algorithm with minimum equal extra material removal in deterministic optical surfacing technology.

    PubMed

    Li, Longxiang; Xue, Donglin; Deng, Weijie; Wang, Xu; Bai, Yang; Zhang, Feng; Zhang, Xuejun

    2017-11-10

    In deterministic computer-controlled optical surfacing, accurate dwell time execution by computer numerical control (CNC) machines is crucial in guaranteeing a high convergence ratio for the optical surface error. It is necessary to consider machine dynamics limitations in numerical dwell time algorithms. In this paper, these constraints on the dwell time distribution are analyzed, and a model of equal extra material removal is established. A positive dwell time algorithm with minimum equal extra material removal is developed. Results of simulations based on deterministic magnetorheological finishing demonstrate the necessity of considering machine dynamics performance and illustrate the validity of the proposed algorithm. Indeed, the algorithm effectively facilitates the determinacy of sub-aperture optical surfacing processes.
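
    The positivity constraint central to such dwell-time algorithms can be sketched as a nonnegative least-squares solve by projected gradient descent (a toy 1-D problem with an assumed tool influence function, not the authors' algorithm):

```python
import numpy as np

# tool influence function (TIF): removal per unit dwell time, 3 samples wide
tif = np.array([0.2, 0.6, 0.2])
n = 8
A = np.zeros((n, n))  # influence matrix: removal = A @ dwell
for i in range(n):
    for j in range(n):
        k = i - j + 1
        if 0 <= k < tif.size:
            A[i, j] = tif[k]

target = np.full(n, 1.0)  # desired removal depth

# projected gradient descent enforcing the positivity constraint dwell >= 0
t = np.zeros(n)
lr = 0.5
for _ in range(5000):
    t = np.maximum(t - lr * (A.T @ (A @ t - target)), 0.0)

extra = np.maximum(A @ t - target, 0.0)  # material removed beyond the target
```

    Clipping to zero at every step keeps the dwell time physically realizable; in practice the target map is offset so that the minimal uniform extra removal absorbs the residual.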

  9. Deterministic magnetorheological finishing of optical aspheric mirrors

    NASA Astrophysics Data System (ADS)

    Song, Ci; Dai, Yifan; Peng, Xiaoqiang; Li, Shengyi; Shi, Feng

    2009-05-01

    Magnetorheological finishing (MRF) is applied as a new method for deterministic finishing of optical aspheric mirrors, overcoming disadvantages of conventional polishing such as low finishing efficiency, long iterative times, and unstable convergence. After an introduction to the basic principle of MRF, the key techniques required to implement deterministic MRF are discussed. To demonstrate the method, a 200 mm diameter K9 glass concave asphere with a vertex radius of 640 mm was figured on an MRF polishing tool we developed. In a single process of about two hours, the surface accuracy was improved from 0.216λ to 0.179λ peak-to-valley (PV) and from 0.027λ to 0.017λ root-mean-square (RMS) (λ = 0.6328 µm). This high-precision, high-efficiency convergence of the aspheric surface error shows that MRF is an advanced optical manufacturing method with a high convergence ratio of surface figure, high surfacing precision, and a stable, controllable finishing process. Deterministic finishing of optical aspheric mirrors with MRF is therefore reliable and stable, and its advantages extend to other element types such as plane and spherical mirrors.

  10. Three-dimensional modelling and geothermal process simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burns, K.L.

    1990-01-01

    The subsurface geological model or 3-D GIS is constructed from three kinds of objects, which are a lithotope (in boundary representation), a number of fault systems, and volumetric textures (vector fields). The chief task of the model is to yield an estimate of the conductance tensors (fluid permeability and thermal conductivity) throughout an array of voxels. This is input as material properties to a FEHM numerical physical process model. The main task of the FEHM process model is to distinguish regions of convective from regions of conductive heat flow, and to estimate the fluid phase, pressure and flow paths. The temperature, geochemical, and seismic data provide the physical constraints on the process. The conductance tensors in the Franciscan Complex are to be derived by the addition of two components. The isotropic component is a stochastic spatial variable due to disruption of lithologies in melange. The deviatoric component is deterministic, due to smoothness and continuity in the textural vector fields. This decomposition probably also applies to the engineering hydrogeological properties of shallow terrestrial fluvial systems. However there are differences in quantity. The isotropic component is much more variable in the Franciscan, to the point where volumetric averages are misleading, and it may be necessary to select that component from several, discrete possible states. The deviatoric component is interpolated using a textural vector field. The Franciscan field is much more complicated, and contains internal singularities. 27 refs., 10 figs.

  11. Topic III - Infiltration and Drainage: A section in Joint US Geological Survey, US Nuclear Regulatory Commission workshop on research related to low-level radioactive waste disposal, May 4-6, 1993, National Center, Reston, Virginia; Proceedings (WRI 95-4015)

    USGS Publications Warehouse

    Prudic, David E.; Gee, Glendon; Stevens, Peter R.; Nicholson, Thomas J.

    1996-01-01

    Infiltration into and drainage from facilities for the disposal of low-level radioactive wastes is considered the major process by which non-volatile contaminants are transported away from the facilities. The session included 10 papers related to the processes of infiltration and drainage, and to the simulation of flow and transport through the unsaturated zone. The first paper, presented by David Stonestrom, was an overview regarding the application of unsaturated flow theory to infiltration and drainage. Stonestrom posed three basic questions: How well do we know the relevant processes affecting flow and transport? How well can we measure the parametric functions used to quantify flow and transport? How do we treat complexities inherent in field settings? The other nine papers presented during the session gave some insight into these questions. Topics included: laboratory measurement of unsaturated hydraulic conductivities at low water contents, by John Nimmo; use of environmental tracers to identify preferential flow through fractured media and to quantify drainage, by Edmund Prych and Edwin Weeks; field experiments to evaluate relevant processes affecting infiltration and drainage, by Brian Andraski, Glendon Gee, and Peter Wierenga; and the use of deterministic and stochastic models for simulating flow and transport through heterogeneous sediments, by Richard Hills, Lynn Gelhar, and Shlomo Neuman.

  12. Deterministic bead-in-droplet ejection utilizing an integrated plug-in bead dispenser for single bead-based applications

    NASA Astrophysics Data System (ADS)

    Kim, Hojin; Choi, In Ho; Lee, Sanghyun; Won, Dong-Joon; Oh, Yong Suk; Kwon, Donghoon; Sung, Hyung Jin; Jeon, Sangmin; Kim, Joonwon

    2017-04-01

    This paper presents a deterministic bead-in-droplet ejection (BIDE) technique that regulates the precise distribution of microbeads in an ejected droplet. The deterministic BIDE was realized through the effective integration of a microfluidic single-particle handling technique with a liquid dispensing system. The integrated bead dispenser facilitates the transfer of the desired number of beads into a dispensing volume and the on-demand ejection of bead-encapsulated droplets. Single bead-encapsulated droplets were ejected every 3 s without any failure. Multiple-bead dispensing with deterministic control of the number of beads was demonstrated to emphasize the originality and quality of the proposed dispensing technique. The dispenser was mounted using a plug-socket type connection, and the dispensing process was completely automated using a programmed sequence without any microscopic observation. To demonstrate a potential application of the technique, a bead-based streptavidin-biotin binding assay in an evaporating droplet was conducted using ultralow numbers of beads. The results showed that the number of beads in the droplet crucially influences the reliability of the assay. Therefore, the proposed deterministic bead-in-droplet technology can be utilized to deliver desired beads onto a reaction site, particularly to reliably and efficiently enrich and detect target biomolecules.

  13. Deterministic bead-in-droplet ejection utilizing an integrated plug-in bead dispenser for single bead-based applications.

    PubMed

    Kim, Hojin; Choi, In Ho; Lee, Sanghyun; Won, Dong-Joon; Oh, Yong Suk; Kwon, Donghoon; Sung, Hyung Jin; Jeon, Sangmin; Kim, Joonwon

    2017-04-10

    This paper presents a deterministic bead-in-droplet ejection (BIDE) technique that regulates the precise distribution of microbeads in an ejected droplet. The deterministic BIDE was realized through the effective integration of a microfluidic single-particle handling technique with a liquid dispensing system. The integrated bead dispenser facilitates the transfer of the desired number of beads into a dispensing volume and the on-demand ejection of bead-encapsulated droplets. Single bead-encapsulated droplets were ejected every 3 s without any failure. Multiple-bead dispensing with deterministic control of the number of beads was demonstrated to emphasize the originality and quality of the proposed dispensing technique. The dispenser was mounted using a plug-socket type connection, and the dispensing process was completely automated using a programmed sequence without any microscopic observation. To demonstrate a potential application of the technique, a bead-based streptavidin-biotin binding assay in an evaporating droplet was conducted using ultralow numbers of beads. The results showed that the number of beads in the droplet crucially influences the reliability of the assay. Therefore, the proposed deterministic bead-in-droplet technology can be utilized to deliver desired beads onto a reaction site, particularly to reliably and efficiently enrich and detect target biomolecules.

  14. Deterministic bead-in-droplet ejection utilizing an integrated plug-in bead dispenser for single bead–based applications

    PubMed Central

    Kim, Hojin; Choi, In Ho; Lee, Sanghyun; Won, Dong-Joon; Oh, Yong Suk; Kwon, Donghoon; Sung, Hyung Jin; Jeon, Sangmin; Kim, Joonwon

    2017-01-01

    This paper presents a deterministic bead-in-droplet ejection (BIDE) technique that regulates the precise distribution of microbeads in an ejected droplet. The deterministic BIDE was realized through the effective integration of a microfluidic single-particle handling technique with a liquid dispensing system. The integrated bead dispenser facilitates the transfer of the desired number of beads into a dispensing volume and the on-demand ejection of bead-encapsulated droplets. Single bead–encapsulated droplets were ejected every 3 s without any failure. Multiple-bead dispensing with deterministic control of the number of beads was demonstrated to emphasize the originality and quality of the proposed dispensing technique. The dispenser was mounted using a plug-socket type connection, and the dispensing process was completely automated using a programmed sequence without any microscopic observation. To demonstrate a potential application of the technique, a bead-based streptavidin–biotin binding assay in an evaporating droplet was conducted using ultralow numbers of beads. The results showed that the number of beads in the droplet crucially influences the reliability of the assay. Therefore, the proposed deterministic bead-in-droplet technology can be utilized to deliver desired beads onto a reaction site, particularly to reliably and efficiently enrich and detect target biomolecules. PMID:28393911

  15. Statistically Qualified Neuro-Analytic System and Method for Process Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    1998-11-04

    An apparatus and method for monitoring a process involves development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.

  16. Price-Dynamics of Shares and Bohmian Mechanics: Deterministic or Stochastic Model?

    NASA Astrophysics Data System (ADS)

    Choustova, Olga

    2007-02-01

    We apply the mathematical formalism of Bohmian mechanics to describe the dynamics of shares. The main distinguishing feature of the financial Bohmian model is the possibility to take into account market psychology by describing the expectations of traders by the pilot wave. We also discuss some objections (coming from the conventional financial mathematics of stochastic processes) against the deterministic Bohmian model. In particular, there is the objection that such a model contradicts the efficient market hypothesis, which is the cornerstone of modern market ideology. Another objection is of a purely mathematical nature: it is related to the quadratic variation of price trajectories. One possibility for replying to this critique is to consider the stochastic Bohm-Vigier model instead of the deterministic one. We do this in the present note.

  17. Velocity Model Analysis Based on Integrated Well and Seismic Data of East Java Basin

    NASA Astrophysics Data System (ADS)

    Mubin, Fathul; Widya, Aviandy; Eka Nurcahya, Budi; Nurul Mahmudah, Erma; Purwaman, Indro; Radityo, Aryo; Shirly, Agung; Nurwani, Citra

    2018-03-01

    Time-to-depth conversion is an important process in seismic interpretation for identifying hydrocarbon prospectivity. The main objectives of this research are to minimize the risk of errors in geometry and in time-to-depth conversion. Since it uses a large amount of data over a large study area, this research can be classified as regional in scale. The research focused on three time-interpreted horizons: Top Kujung I, Top Ngimbang and Basement, located in the offshore and onshore areas of the East Java basin. These three horizons were selected because they are assumed to be equivalent to the rock formations that have always been the main objectives of oil and gas exploration in the East Java Basin. As additional value, there was no previous work on velocity modeling at a regional scale using geological parameters in the East Java basin. Lithology and interval thickness were identified as the geological factors that affected the velocity distribution in the East Java Basin. Therefore, a three-layer geological model was generated, defined by lithology type: carbonate (layer 1: Top Kujung I), shale (layer 2: Top Ngimbang) and Basement. A statistical method using the three horizons is able to predict the velocity distribution from sparse well data at a regional scale. The average velocity range for Top Kujung I is 400 m/s - 6000 m/s, for Top Ngimbang is 500 m/s - 8200 m/s and for Basement is 600 m/s - 8000 m/s. Some velocity anomalies were found in the Madura sub-basin area, caused by a geological factor identified as a thick shale deposit with high density values. The results of the velocity and depth modeling analysis can be used to define the volume range deterministically and to build detailed geological models for prospect generation based on geological concepts.
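    The basic operation behind such a study can be sketched in a few lines: converting a horizon's two-way travel time to depth via an average velocity, z = v·t/2. The horizon names match the abstract, but the travel times and velocities below are hypothetical illustration values, not results from the study.

```python
# Sketch: time-to-depth conversion with an average-velocity model.
# Times and velocities are illustrative, not values from the paper.

def twt_to_depth(twt_s, v_avg_ms):
    """Reflector depth (m) from two-way travel time (s) and average velocity (m/s)."""
    return v_avg_ms * twt_s / 2.0

# Hypothetical horizon picks: (two-way time in s, average velocity in m/s)
horizons = {
    "Top Kujung I": (1.2, 3000.0),
    "Top Ngimbang": (2.0, 3500.0),
    "Basement": (2.6, 4000.0),
}
depths = {name: twt_to_depth(t, v) for name, (t, v) in horizons.items()}
# e.g. Top Kujung I: 3000 m/s * 1.2 s / 2 = 1800 m
```

The factor of two accounts for the wave traveling down to the reflector and back; the study's contribution is predicting the spatially varying v_avg away from well control.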

  18. Precision production: enabling deterministic throughput for precision aspheres with MRF

    NASA Astrophysics Data System (ADS)

    Maloney, Chris; Entezarian, Navid; Dumas, Paul

    2017-10-01

    Aspherical lenses offer advantages over spherical optics by improving image quality or reducing the number of elements necessary in an optical system. Aspheres are no longer used exclusively in high-end optical systems but are now replacing spherical optics in many applications. The need for a method of production manufacturing of precision aspheres has emerged and is part of the reason that the optics industry is shifting away from artisan-based techniques towards more deterministic methods. Not only does magnetorheological finishing (MRF) empower deterministic figure correction for the most demanding aspheres, but it also enables deterministic and efficient throughput for series production of aspheres. The Q-flex MRF platform is designed to support batch production in a simple and user-friendly manner. Thorlabs routinely utilizes the advancements of this platform and has provided results from using MRF to finish a batch of aspheres as a case study. We have developed an analysis notebook to evaluate the specifications necessary for implementing quality-control metrics. MRF brings confidence to optical manufacturing by ensuring high throughput for batch processing of aspheres.

  19. Incorporating uncertainty into mercury-offset decisions with a probabilistic network for National Pollutant Discharge Elimination System permit holders: an interim report

    USGS Publications Warehouse

    Wood, Alexander

    2004-01-01

    This interim report describes an alternative approach for evaluating the efficacy of using mercury (Hg) offsets to improve water quality. Hg-offset programs may allow dischargers facing higher pollution-control costs to meet their regulatory obligations by making more cost-effective pollutant-reduction decisions. Efficient Hg management requires methods to translate that science and economics into a regulatory decision framework. This report documents the work in progress by the U.S. Geological Survey's Western Geographic Science Center, in collaboration with Stanford University, toward developing this decision framework to help managers, regulators, and other stakeholders decide whether offsets can cost-effectively meet the Hg total maximum daily load (TMDL) requirements in the Sacramento River watershed. Two key approaches being considered are: (1) a probabilistic approach that explicitly incorporates scientific uncertainty, cost information, and value judgments; and (2) a quantitative approach that captures uncertainty in testing the feasibility of Hg offsets. Current fate- and transport-process models commonly attempt to predict chemical transformations and transport pathways deterministically. However, the physical, chemical, and biologic processes controlling the fate and transport of Hg in aquatic environments are complex and poorly understood. Deterministic models of Hg environmental behavior contain large uncertainties, reflecting this lack of understanding. The uncertainty in these underlying physical processes may produce similarly large uncertainties in the decisionmaking process. However, decisions about control strategies are still being made despite the large uncertainties in current Hg loadings, the relations between total Hg (HgT) loading and methylmercury (MeHg) formation, and the relations between control efforts and Hg content in fish. 
The research presented here focuses on an alternative analytical approach to the current use of safety factors and deterministic methods for Hg TMDL decision support, one that is fully compatible with an adaptive management approach. This alternative approach uses empirical data and informed judgment to provide a scientific and technical basis for helping National Pollutant Discharge Elimination System (NPDES) permit holders make management decisions. An Hg-offset system would be an option if a wastewater-treatment plant could not achieve NPDES permit requirements for HgT reduction. We develop a probabilistic decision-analytical model consisting of three submodels for HgT loading, MeHg, and cost mitigation within a Bayesian network that integrates information of varying rigor and detail into a simple model of a complex system. Hg processes are identified and quantified by using a combination of historical data, statistical models, and expert judgment. Such an integrated approach to uncertainty analysis allows easy updating of prediction and inference when observations of model variables are made. We demonstrate our approach with data from the Cache Creek watershed (a subbasin of the Sacramento River watershed). The empirical models used to generate the needed probability distributions are based on the same empirical models currently being used by the Central Valley Regional Water Quality Control Board Cache Creek Hg TMDL working group. The significant difference is that input uncertainty and error are explicitly included in the model and propagated throughout its algorithms. This work demonstrates how to integrate uncertainty into the complex and highly uncertain Hg TMDL decisionmaking process. The various sources of uncertainty are propagated as decision risk, which allows decisionmakers to simultaneously consider uncertainties in remediation/implementation costs while attempting to meet environmental/ecologic targets. We must note that this research is ongoing. 
As more data are collected, the HgT and cost-mitigation submodels are updated and the uncer

  20. Effects of Noise on Ecological Invasion Processes: Bacteriophage-mediated Competition in Bacteria

    NASA Astrophysics Data System (ADS)

    Joo, Jaewook; Harvill, Eric; Albert, Reka

    2007-03-01

    Pathogen-mediated competition, through which an invasive species carrying and transmitting a pathogen can be a superior competitor to a more vulnerable resident species, is one of the principal driving forces influencing biodiversity in nature. Using an experimental system of bacteriophage-mediated competition in bacterial populations and a deterministic model, we have shown in [Joo et al 2005] that the competitive advantage conferred by the phage depends only on the relative phage pathology and is independent of the initial phage concentration and other phage and host parameters such as the infection-causing contact rate, the spontaneous and infection-induced lysis rates, and the phage burst size. Here we investigate the effects of stochastic fluctuations on bacterial invasion facilitated by bacteriophage, and examine the validity of the deterministic approach. We use both numerical and analytical methods of stochastic processes to identify the source of noise and assess its magnitude. We show that the conclusions obtained from the deterministic model are robust against stochastic fluctuations, yet deviations become prominently large when the phage are more pathological to the invading bacterial strain.
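    The deterministic-versus-stochastic comparison at issue here can be illustrated on a much simpler system than the paper's phage-competition model: a linear birth-death process, simulated exactly with the Gillespie algorithm and compared against its deterministic (mean-field) prediction. All rates and population sizes below are assumed toy values.

```python
import random

# Sketch: deterministic vs stochastic treatment of a linear birth-death
# process (a generic illustration, not the paper's phage model).
# With per-capita birth rate b and death rate d, the deterministic mean
# grows as N(t) = N0 * exp((b - d) * t).

def gillespie(n0, b, d, t_end, rng):
    """One exact stochastic (Gillespie) trajectory; returns N at t_end."""
    n, t = n0, 0.0
    while t < t_end and n > 0:
        total_rate = (b + d) * n
        t += rng.expovariate(total_rate)      # time to next event
        if t >= t_end:
            break
        # Birth with probability b/(b+d), otherwise death.
        n += 1 if rng.random() < b / (b + d) else -1
    return n

rng = random.Random(42)
b, d, n0, t_end = 1.0, 0.5, 100, 2.0
runs = [gillespie(n0, b, d, t_end, rng) for _ in range(200)]
mean = sum(runs) / len(runs)
det = n0 * 2.718281828459045 ** ((b - d) * t_end)  # deterministic prediction
# For a large initial population the stochastic mean tracks the
# deterministic value, echoing the robustness result in the abstract.
assert abs(mean - det) / det < 0.15
```

Fluctuations around the deterministic curve scale roughly with the square root of population size, which is why deviations become prominent only when one subpopulation is driven small, as the abstract notes for highly pathological phage.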

  1. Nonlinear dynamics in flow through unsaturated fractured-porous media: Status and perspectives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faybishenko, Boris

    2002-11-27

    The need has long been recognized to improve predictions of flow and transport in partially saturated heterogeneous soils and fractured rock of the vadose zone for many practical applications, such as remediation of contaminated sites, nuclear waste disposal in geological formations, and climate predictions. Until recently, flow and transport processes in heterogeneous subsurface media with oscillating irregularities were assumed to be random and were not analyzed using methods of nonlinear dynamics. The goals of this paper are to review the theoretical concepts, present the results, and provide perspectives on investigations of flow and transport in unsaturated heterogeneous soils and fractured rock, using the methods of nonlinear dynamics and deterministic chaos. The results of laboratory and field investigations indicate that the nonlinear dynamics of flow and transport processes in unsaturated soils and fractured rocks arise from the dynamic feedback and competition between various nonlinear physical processes along with complex geometry of flow paths. Although direct measurements of variables characterizing the individual flow processes are not technically feasible, their cumulative effect can be characterized by analyzing time series data using the models and methods of nonlinear dynamics and chaos. Identifying flow through soil or rock as a nonlinear dynamical system is important for developing appropriate short- and long-time predictive models, evaluating prediction uncertainty, assessing the spatial distribution of flow characteristics from time series data, and improving chemical transport simulations. Inferring the nature of flow processes through the methods of nonlinear dynamics could become widely used in different areas of the earth sciences.
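    A standard diagnostic from this toolbox is the largest Lyapunov exponent of a time series: a positive value indicates deterministic chaos rather than pure randomness. As a minimal sketch (using the textbook logistic map, not the paper's vadose-zone data), the exponent of the fully chaotic logistic map can be estimated from its trajectory and compared with the known value ln 2.

```python
import math

# Sketch: diagnosing deterministic chaos in a time series. For the logistic
# map x -> r*x*(1-x) with r = 4, the largest Lyapunov exponent is ln 2;
# a positive estimate flags chaotic, not random, dynamics.

def logistic_series(x0, r, n):
    """Generate n iterates of the logistic map starting from x0."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def lyapunov_logistic(xs, r):
    """Average log of the local stretching rate |f'(x)| = |r*(1 - 2x)|."""
    terms = [math.log(abs(r * (1.0 - 2.0 * x)))
             for x in xs if abs(1.0 - 2.0 * x) > 1e-12]  # skip derivative ~ 0
    return sum(terms) / len(terms)

xs = logistic_series(0.3, 4.0, 20000)
lam = lyapunov_logistic(xs, 4.0)
assert lam > 0.0  # positive exponent: deterministic chaos
```

For field data the derivative of the underlying map is unknown, so in practice the exponent is estimated from the divergence of nearby trajectory segments reconstructed by time-delay embedding; the principle, sensitivity growing exponentially at rate λ, is the same.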

  2. INCREASING HEAVY OIL RESERVES IN THE WILMINGTON OIL FIELD THROUGH ADVANCED RESERVOIR CHARACTERIZATION AND THERMAL PRODUCTION TECHNOLOGIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott Hara

    2000-02-18

    The project involves using advanced reservoir characterization and thermal production technologies to improve thermal recovery techniques and lower operating and capital costs in a slope and basin clastic (SBC) reservoir in the Wilmington field, Los Angeles Co., CA. Through March 1999, project work has been completed related to data preparation, basic reservoir engineering, developing a deterministic three-dimensional (3-D) geologic model, a 3-D deterministic reservoir simulation model, and a rock-log model, well drilling and completions, and surface facilities. Work is continuing on the stochastic geologic model, developing a 3-D stochastic thermal reservoir simulation model of the Fault Block IIA Tar (Tar II-A) Zone, and operational work and research studies to prevent thermal-related formation compaction. Thermal-related formation compaction is a concern of the project team due to observed surface subsidence in the local area above the steamflood project. Last quarter on January 12, the steamflood project lost its inexpensive steam source from the Harbor Cogeneration Plant as a result of the recent deregulation of electrical power rates in California. An operational plan was developed and implemented to mitigate the effects of the two situations. Seven water injection wells were placed in service in November and December 1998 on the flanks of the Phase 1 steamflood area to pressure up the reservoir to fill up the existing steam chest. Intensive reservoir engineering and geomechanics studies are continuing to determine the best ways to shut down the steamflood operations in Fault Block II while minimizing any future surface subsidence. The new 3-D deterministic thermal reservoir simulator model is being used to provide sensitivity cases to optimize production, steam injection, future flank cold water injection and reservoir temperature and pressure. 
According to the model, reservoir fill up of the steam chest at the current injection rate of 28,000 BPD and gross and net oil production rates of 7,700 BPD and 750 BOPD (injection to production ratio of 4) will occur in October 1999. At that time, the reservoir should act more like a waterflood and production and cold water injection can be operated at lower net injection rates to be determined. Modeling runs developed this quarter found that varying individual well injection rates to meet added production and local pressure problems by sub-zone could reduce steam chest fill-up by up to one month.

  3. Deterministic analysis of processes at corroding metal surfaces and the study of electrochemical noise in these systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Latanision, R.M.

    1990-12-01

    Electrochemical corrosion is pervasive in virtually all engineering systems and in virtually all industrial circumstances. Although engineers now understand how to design systems to minimize corrosion in many instances, many fundamental questions remain poorly understood and, therefore, the development of corrosion control strategies is based more on empiricism than on a deep understanding of the processes by which metals corrode in electrolytes. Fluctuations in potential, or current, in electrochemical systems have been observed for many years. To date, all investigations of this phenomenon have utilized non-deterministic analyses. In this work it is proposed to study electrochemical noise from a deterministic viewpoint by comparison of experimental parameters, such as first and second order moments (non-deterministic), with computer simulation of corrosion at metal surfaces. In this way it is proposed to analyze the origins of these fluctuations and to elucidate the relationship between these fluctuations and kinetic parameters associated with metal dissolution and cathodic reduction reactions. This research program addresses in essence two areas of interest: (a) computer modeling of corrosion processes in order to study the electrochemical processes on an atomistic scale, and (b) experimental investigations of fluctuations in electrochemical systems and correlation of experimental results with computer modeling. In effect, the noise generated by mathematical modeling will be analyzed and compared to experimental noise in electrochemical systems. 1 fig.

  4. On convergence of the unscented Kalman-Bucy filter using contraction theory

    NASA Astrophysics Data System (ADS)

    Maree, J. P.; Imsland, L.; Jouffroy, J.

    2016-06-01

    Contraction theory entails a theoretical framework in which convergence of a nonlinear system can be analysed differentially in an appropriate contraction metric. This paper is concerned with utilising stochastic contraction theory to conclude on exponential convergence of the unscented Kalman-Bucy filter. The underlying process and measurement models of interest are Itô-type stochastic differential equations. In particular, statistical linearisation techniques are employed in a virtual-actual systems framework to establish deterministic contraction of the estimated expected mean of process values. Under mild conditions of bounded process noise, we extend the results on deterministic contraction to stochastic contraction of the estimated expected mean of the process state. It follows that for the regions of contraction, a result on convergence, and thereby incremental stability, is concluded for the unscented Kalman-Bucy filter. The theoretical concepts are illustrated in two case studies.

  5. Tag-mediated cooperation with non-deterministic genotype-phenotype mapping

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Chen, Shu

    2016-01-01

    Tag-mediated cooperation provides a helpful framework for resolving evolutionary social dilemmas. However, most of the previous studies have not taken into account genotype-phenotype distinction in tags, which may play an important role in the process of evolution. To take this into consideration, we introduce non-deterministic genotype-phenotype mapping into a tag-based model with spatial prisoner's dilemma. By our definition, the similarity between genotypic tags does not directly imply the similarity between phenotypic tags. We find that the non-deterministic mapping from genotypic tag to phenotypic tag has non-trivial effects on tag-mediated cooperation. Although we observe that high levels of cooperation can be established under a wide variety of conditions, especially when the decisiveness is moderate, the uncertainty in the determination of phenotypic tags may have a detrimental effect on the tag mechanism by disturbing the homophilic interaction structure which can explain the promotion of cooperation in tag systems. Furthermore, the non-deterministic mapping may undermine the robustness of the tag mechanism with respect to various factors such as the structure of the tag space and the tag flexibility. This observation warns us about the danger of applying the classical tag-based models to the analysis of empirical phenomena if genotype-phenotype distinction is significant in the real world. Non-deterministic genotype-phenotype mapping thus provides a new perspective on the understanding of tag-mediated cooperation.

  6. Stochastic modelling of microstructure formation in solidification processes

    NASA Astrophysics Data System (ADS)

    Nastac, Laurentiu; Stefanescu, Doru M.

    1997-07-01

    To relax many of the assumptions used in continuum approaches, a general stochastic model has been developed. The stochastic model can be used not only for an accurate description of the fraction of solid evolution, and therefore accurate cooling curves, but also for simulation of microstructure formation in castings. The advantage of using the stochastic approach is to give a time- and space-dependent description of solidification processes. Time- and space-dependent processes can also be described by partial differential equations. Unlike a differential formulation which, in most cases, has to be transformed into a difference equation and solved numerically, the stochastic approach is essentially a direct numerical algorithm. The stochastic model is comprehensive, since the competition between various phases is considered. Furthermore, grain impingement is directly included through the structure of the model. In the present research, all grain morphologies are simulated with this procedure. The relevance of the stochastic approach is that the simulated microstructures can be directly compared with microstructures obtained from experiments. The computer becomes a 'dynamic metallographic microscope'. A comparison between deterministic and stochastic approaches has been performed. An important objective of this research was to answer the following general questions: (1) 'Would fully deterministic approaches continue to be useful in solidification modelling?' and (2) 'Would stochastic algorithms be capable of entirely replacing purely deterministic models?'
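    The kind of stochastic microstructure model described above can be sketched with a minimal cellular automaton: random nucleation assigns grain identities to a few cells, and growth lets grains capture neighboring liquid cells until they impinge. This is a toy illustration of the approach, not the paper's model, which couples nucleation and growth kinetics to the thermal field.

```python
import random

# Sketch: stochastic cellular-automaton grain formation. Nucleation is
# random in space and identity; impingement emerges naturally because a
# cell, once captured by one grain, blocks the others. Toy parameters.

def solidify(size, n_nuclei, rng):
    grid = [[0] * size for _ in range(size)]  # 0 = liquid
    # Stochastic nucleation: random sites get distinct grain IDs.
    for gid in range(1, n_nuclei + 1):
        grid[rng.randrange(size)][rng.randrange(size)] = gid
    changed = True
    while changed:
        changed = False
        captures = []
        for i in range(size):
            for j in range(size):
                if grid[i][j] == 0:
                    # Solidified neighbors in the 4-cell neighborhood.
                    nbrs = [grid[x][y] for x, y in
                            ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                            if 0 <= x < size and 0 <= y < size and grid[x][y]]
                    if nbrs:
                        captures.append((i, j, rng.choice(nbrs)))
        for i, j, gid in captures:  # apply captures synchronously
            grid[i][j] = gid
            changed = True
    return grid

rng = random.Random(7)
grid = solidify(20, 5, rng)
assert all(cell != 0 for row in grid for cell in row)  # fully solidified
```

Each pass of the while-loop is one growth step; the final grid is a map of grain IDs that can be compared directly with an experimental micrograph, which is what the abstract means by the computer becoming a 'dynamic metallographic microscope'.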

  7. Large conditional single-photon cross-phase modulation

    NASA Astrophysics Data System (ADS)

    Beck, Kristin; Hosseini, Mahdi; Duan, Yiheng; Vuletic, Vladan

    2016-05-01

    Deterministic optical quantum logic requires a nonlinear quantum process that alters the phase of a quantum optical state by π through interaction with only one photon. Here, we demonstrate a large conditional cross-phase modulation between a signal field, stored inside an atomic quantum memory, and a control photon that traverses a high-finesse optical cavity containing the atomic memory. This approach avoids fundamental limitations associated with multimode effects for traveling optical photons. We measure a conditional cross-phase shift of up to π/3 between the retrieved signal and control photons, and confirm deterministic entanglement between the signal and control modes by extracting a positive concurrence. With a moderate improvement in cavity finesse, our system can reach a coherent phase shift of π at low loss, enabling deterministic and universal photonic quantum logic. Preprint: arXiv:1512.02166 [quant-ph]

  8. Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frambati, S.; Frignani, M.

    2012-07-01

    We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes, and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in computer-aided design for radiation transport code users in the nuclear field, in particular in core design and radiation analysis.

  9. Inversion using a new low-dimensional representation of complex binary geological media based on a deep neural network

    NASA Astrophysics Data System (ADS)

    Laloy, Eric; Hérault, Romain; Lee, John; Jacques, Diederik; Linde, Niklas

    2017-12-01

    Efficient and high-fidelity prior sampling and inversion for complex geological media is still a largely unsolved challenge. Here, we use a deep neural network of the variational autoencoder type to construct a parametric low-dimensional base model parameterization of complex binary geological media. For inversion purposes, it has the attractive feature that random draws from an uncorrelated standard normal distribution yield model realizations with spatial characteristics that are in agreement with the training set. In comparison with the most commonly used parametric representations in probabilistic inversion, we find that our dimensionality reduction (DR) approach outperforms principal component analysis (PCA), optimization-PCA (OPCA) and discrete cosine transform (DCT) DR techniques for unconditional geostatistical simulation of a channelized prior model. For the considered examples, large compression ratios (200-500) are achieved. Given that the construction of our parameterization requires a training set of several tens of thousands of prior model realizations, our DR approach is more suited for probabilistic (or deterministic) inversion than for unconditional (or point-conditioned) geostatistical simulation. Probabilistic inversions of 2D steady-state and 3D transient hydraulic tomography data are used to demonstrate the DR-based inversion. For the 2D case study, the performance is superior compared to current state-of-the-art multiple-point statistics inversion by sequential geostatistical resampling (SGR). Inversion results for the 3D application are also encouraging.
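
    The key property described above, that draws from an uncorrelated standard normal yield binary model realizations, can be illustrated with a toy linear "decoder" (a trained VAE would use learned deep-network weights in place of the random `W` and `b` below; dimensions are arbitrary assumptions):

```python
import numpy as np

def decode(z, W, b):
    """Toy decoder: map a low-dimensional latent vector z to a binary
    field via a linear layer + sigmoid + 0.5 threshold. Stands in for
    the learned deep decoder of the variational autoencoder."""
    logits = W @ z + b
    probs = 1.0 / (1.0 + np.exp(-logits))
    return (probs > 0.5).astype(int)

rng = np.random.default_rng(0)
latent_dim, field_dim = 20, 64 * 64          # compression ratio ~200
W = rng.normal(size=(field_dim, latent_dim)) # placeholder for trained weights
b = rng.normal(size=field_dim)
z = rng.standard_normal(latent_dim)          # prior draw: uncorrelated N(0, 1)
realization = decode(z, W, b)                # one binary model realization
```

    In an inversion, only the 20 latent coordinates would be perturbed and each proposal decoded back to a full binary field, which is what makes the low-dimensional representation attractive for MCMC-style sampling.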

  10. Trend analysis of Arctic sea ice extent

    NASA Astrophysics Data System (ADS)

    Silva, M. E.; Barbosa, S. M.; Antunes, Luís; Rocha, Conceição

    2009-04-01

    The extent of Arctic sea ice is a fundamental parameter of Arctic climate variability. In the context of climate change, the area covered by ice in the Arctic is a particularly useful indicator of recent changes in the Arctic environment. Climate models are in near universal agreement that Arctic sea ice extent will decline through the 21st century as a consequence of global warming, and many studies predict an ice-free Arctic as soon as 2012. Time series of satellite passive microwave observations allow assessment of the temporal changes in the extent of Arctic sea ice. Much of the analysis of the ice extent time series, as in most climate studies from observational data, has focused on the computation of deterministic linear trends by ordinary least squares. However, many different processes, including deterministic, unit-root and long-range dependent processes, can engender trend-like features in a time series. Several parametric tests have been developed, mainly in econometrics, to discriminate between stationarity (no trend), deterministic trends and stochastic trends. Here, these tests are applied in the trend analysis of the sea ice extent time series available at the National Snow and Ice Data Center. The parametric stationarity tests, Augmented Dickey-Fuller (ADF), Phillips-Perron (PP) and KPSS, do not support an overall deterministic trend in the time series of Arctic sea ice extent. Therefore, alternative parametrizations such as long-range dependence should be considered for characterising long-term Arctic sea ice variability.
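
    As a sketch of the kind of unit-root test mentioned above, a minimal Dickey-Fuller regression for the constant-plus-trend case (no lag augmentation; in practice one would use a library implementation of ADF, PP and KPSS with proper critical values):

```python
import numpy as np

def dickey_fuller_t(y):
    """t-statistic of the Dickey-Fuller regression
        dy_t = alpha + beta*t + rho*y_{t-1} + e_t   (constant + trend).
    A strongly negative statistic rejects a unit root in favour of trend
    stationarity. Minimal sketch: no lag augmentation (the 'A' in ADF)."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    t = np.arange(1, len(y), dtype=float)
    X = np.column_stack([np.ones_like(t), t, y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - X.shape[1])      # residual variance
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[2, 2])  # std. error of rho
    return beta[2] / se

# A trend-stationary series gives a strongly negative statistic, rejecting
# the unit root (the asymptotic 5% critical value for this case is ~ -3.41).
rng = np.random.default_rng(1)
trend_series = 0.5 * np.arange(300) + rng.normal(size=300)
stat = dickey_fuller_t(trend_series)
```

    A random walk, by contrast, typically yields a statistic well above the critical value, so the unit root cannot be rejected; this is the distinction the abstract draws between deterministic and stochastic trends.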

  11. Deterministic ion beam material adding technology for high-precision optical surfaces.

    PubMed

    Liao, Wenlin; Dai, Yifan; Xie, Xuhui; Zhou, Lin

    2013-02-20

    Although ion beam figuring (IBF) provides a highly deterministic method for the precision figuring of optical components, several problems still need to be addressed, such as its limited capability to correct mid-to-high spatial frequency surface errors and its low machining efficiency for pit defects on surfaces. We propose a figuring method named deterministic ion beam material adding (IBA) technology to solve these problems in IBF. The current deterministic optical figuring mechanism, which is dedicated to removing local protuberances on optical surfaces, is enriched and developed by the IBA technology. Compared with IBF, this method can realize the uniform convergence of surface errors, where the particle-transferring effect generated in the IBA process can effectively correct the mid-to-high spatial frequency errors. In addition, IBA can rapidly correct pit defects on the surface and greatly improve the machining efficiency of the figuring process. Verification experiments were performed on our experimental setup to validate the feasibility of the IBA method. First, a fused silica sample with a rectangular pit defect was figured using IBA. Through two iterations within only 47.5 min, this highly steep pit was effectively corrected, and the surface error improved from the original 24.69 nm root mean square (RMS) to a final 3.68 nm RMS. A second experiment demonstrated the correcting capability of IBA for mid-to-high spatial frequency surface errors; the final results indicate that surface accuracy and surface quality can be improved simultaneously.

  12. Integrated Risk-Informed Decision-Making for an ALMR PRISM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muhlheim, Michael David; Belles, Randy; Denning, Richard S.

    Decision-making is the process of identifying decision alternatives, assessing those alternatives based on predefined metrics, selecting an alternative (i.e., making a decision), and then implementing that alternative. The generation of decisions requires a structured, coherent process, or a decision-making process. The overall objective for this work is that the generalized framework be adopted into an autonomous decision-making framework and tailored to specific requirements for various applications. In this context, automation is the use of computing resources to make decisions and implement a structured decision-making process with limited or no human intervention. The overriding goal of automation is to replace or supplement human decision makers with reconfigurable decision-making modules that can perform a given set of tasks rationally, consistently, and reliably. Risk-informed decision-making requires a probabilistic assessment of the likelihood of success given the status of the plant/systems and component health, and a deterministic assessment between plant operating parameters and reactor protection parameters to prevent unnecessary trips and challenges to plant safety systems. The probabilistic portion of the decision-making engine of the supervisory control system is based on the control actions associated with an ALMR PRISM. Newly incorporated into the probabilistic models are the prognostic/diagnostic models developed by Pacific Northwest National Laboratory. These allow decisions to incorporate the health of components into the decision-making process. Once the control options are identified and ranked based on the likelihood of success, the supervisory control system transmits the options to the deterministic portion of the platform. The deterministic portion of the decision-making engine uses thermal-hydraulic modeling and components for an advanced liquid-metal reactor Power Reactor Inherently Safe Module.
The deterministic multi-attribute decision-making framework uses various sensor data (e.g., reactor outlet temperature, steam generator drum level) and calculates its position within the challenge state, its trajectory, and its margin within the controllable domain, using utility functions to evaluate current and projected plant state space for different control decisions. The metrics that are evaluated are based on reactor trip set points. The integration of the deterministic calculations using multi-physics analyses and probabilistic safety calculations allows for the examination and quantification of margin recovery strategies; the thermal-hydraulics analyses thereby validate the control options identified from the probabilistic assessment. Future work includes evaluating other possible metrics and computational efficiencies, and developing a user interface to mimic display panels at a modern nuclear power plant.
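
    The margin-to-setpoint idea behind the deterministic utility functions can be sketched as follows (the linear utility form, the parameter values and the option names are illustrative assumptions, not the actual PRISM utility functions):

```python
def margin_utility(value, trip_setpoint, nominal):
    """Utility in [0, 1] for one monitored parameter: 1 at the nominal
    value, 0 at the reactor-trip set point, linear in between.
    The linear form is an assumption made for illustration."""
    margin = (trip_setpoint - value) / (trip_setpoint - nominal)
    return max(0.0, min(1.0, margin))

def rank_control_options(options, trip_setpoint, nominal):
    """Rank (name, projected_parameter_value) control options by utility:
    higher utility means more margin to the trip set point."""
    return sorted(options,
                  key=lambda opt: margin_utility(opt[1], trip_setpoint, nominal),
                  reverse=True)

# e.g. reactor outlet temperature: nominal 500, trip set point 550 (arbitrary units)
ranked = rank_control_options([("option A", 540.0), ("option B", 510.0)], 550.0, 500.0)
```

    In the framework described above, each control option's projected plant state would be scored this way across several parameters, and options that recover margin (move utilities toward 1) would be preferred.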

  13. The MAFI Project: Mapping Active Faults in Italy by Using Microseismicity Data.

    NASA Astrophysics Data System (ADS)

    Chiarabba, C.; Amato, A.; Augliera, P.; Bagh, S.; Cattaneo, M.; Chiaraluce, L.; de Gori, P.; di Bartolomeo, P.; Govoni, A.; Michelini, A.; Moretti, M.; Piccinini, D.; Romanelli, M.

    2004-12-01

    In past years, earthquake forecasting and seismic hazard in Italy have been approached by using geological and geophysical data yielding only a partial definition of seismic release for the main active structures. In this project, we collect seismological and geodetic data to yield deterministic constraints for seismic hazard studies in areas where large earthquakes are expected to occur in the near future, called lacunae. The basic idea is to massively deploy arrays of instruments in the lacunae areas to acquire seismic and geodetic data with the goals of defining the location, geometry and kinematics of the active faults and possibly constraining their strain rate. We selected three target regions: two along the Apennines (Northern Umbria and Abruzzo) and one in the Southern Alps (Alpago-Cansiglio). These areas are characterized by different tectonics and different historical seismic release. We present results for the areas located along the Apennines, the Umbria 2000-2001 and the Abruzzo 2003-2004 experiments, while for the Alpago-Cansiglio area we are still collecting and processing data. Preliminary results for the Umbria lacuna show that the collected microearthquakes allow us to clearly recognize the fault system geometry and the deep structure (P- and S-wave velocity and attenuation).

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lefeuvre, F.E.; Wrolstad, K.H.; Zou, Ke Shan

    Total and Unocal estimated sand-shale ratios in gas reservoirs from the upper Tertiary clastics of Myanmar. They separately used deterministic pre-stack and statistical post-stack seismic attribute analysis, calibrated at two wells, to objectively extrapolate the lithologies and reservoir properties several kilometers away from the wells. The two approaches were then integrated and led to a unique distribution of the sands and shales in the reservoir which fits the known regional geological model. For the sands, the fluid distributions (gas and brine) were also estimated, as well as the porosity, water saturation, thickness and clay content of the sands. This was made possible by using precise elastic modeling based on the Biot-Gassmann equation in order to integrate the effects of reservoir properties on seismic signatures.

  15. 3D Dynamic Rupture Simulations along the Wasatch Fault, Utah, Incorporating Rough-fault Topography

    NASA Astrophysics Data System (ADS)

    Withers, Kyle; Moschetti, Morgan

    2017-04-01

    Studies have found that the Wasatch Fault has experienced successive large-magnitude (>Mw 7.2) earthquakes, with an average recurrence interval near 350 years. To date, no large-magnitude event has been recorded along the fault, with the last rupture along the Salt Lake City segment occurring 1300 years ago. Because of this, as well as the lack of strong ground motion records in basins and from normal-faulting earthquakes worldwide, seismic hazard in the region is not well constrained. Previous numerical simulations have modeled deterministic ground motion in the heavily populated regions of Utah, near Salt Lake City, but were primarily restricted to low frequencies (up to 1 Hz). Our goal is to better assess broadband ground motions from the Wasatch Fault Zone. Here, we extend deterministic ground motion prediction to higher frequencies (up to 5 Hz) in this region by using physics-based spontaneous dynamic rupture simulations along a normal fault with characteristics derived from geologic observations. We use a summation-by-parts finite difference code (Waveqlab3D) with rough-fault topography following a self-similar fractal distribution (over length scales from 100 m to the size of the fault) and include off-fault plasticity to simulate ruptures > Mw 6.5. Geometric complexity along fault planes has previously been shown to generate broadband sources with spectral energy matching that of observations. We investigate the impact of varying the hypocenter location, as well as the influence that multiple realizations of rough-fault topography have on the rupture process and resulting ground motion. We utilize Waveqlab3D's computational efficiency to model wave propagation to a significant distance from the fault with media heterogeneity at both long and short spatial wavelengths. These simulations generate a synthetic dataset of ground motions to compare with GMPEs, in terms of both the median and the inter- and intraevent variability.

  16. Deterministic Remote Entanglement of Superconducting Circuits through Microwave Two-Photon Transitions

    NASA Astrophysics Data System (ADS)

    Campagne-Ibarcq, P.; Zalys-Geller, E.; Narla, A.; Shankar, S.; Reinhold, P.; Burkhart, L.; Axline, C.; Pfaff, W.; Frunzio, L.; Schoelkopf, R. J.; Devoret, M. H.

    2018-05-01

    Large-scale quantum information processing networks will most probably require the entanglement of distant systems that do not interact directly. This can be done by performing entangling gates between standing information carriers, used as memories or local computational resources, and flying ones, acting as quantum buses. We report the deterministic entanglement of two remote transmon qubits by Raman stimulated emission and absorption of a traveling photon wave packet. We achieve a Bell state fidelity of 73%, well explained by losses in the transmission line and decoherence of each qubit.

  17. Deterministic Remote Entanglement of Superconducting Circuits through Microwave Two-Photon Transitions.

    PubMed

    Campagne-Ibarcq, P; Zalys-Geller, E; Narla, A; Shankar, S; Reinhold, P; Burkhart, L; Axline, C; Pfaff, W; Frunzio, L; Schoelkopf, R J; Devoret, M H

    2018-05-18

    Large-scale quantum information processing networks will most probably require the entanglement of distant systems that do not interact directly. This can be done by performing entangling gates between standing information carriers, used as memories or local computational resources, and flying ones, acting as quantum buses. We report the deterministic entanglement of two remote transmon qubits by Raman stimulated emission and absorption of a traveling photon wave packet. We achieve a Bell state fidelity of 73%, well explained by losses in the transmission line and decoherence of each qubit.

  18. Deterministic quantum teleportation and information splitting via a peculiar W-class state

    NASA Astrophysics Data System (ADS)

    Mei, Feng; Yu, Ya-Fei; Zhang, Zhi-Ming

    2010-02-01

    In the paper (Phys. Rev. A 74, 062320 (2006)), Agrawal et al. introduced a kind of W-class state which can be used for the quantum teleportation of a single-particle state via a three-particle von Neumann measurement, and they thought that the state could not be used to teleport an unknown state by making two-particle and one-particle measurements. Here we reconsider the features of the W-class state and the quantum teleportation process via the W-class state. We show that, by introducing a unitary operation, quantum teleportation can be achieved deterministically by making two-particle and one-particle measurements. In addition, our protocol is extended to the process of teleporting a two-particle state and splitting information.

  19. A Stochastic Tick-Borne Disease Model: Exploring the Probability of Pathogen Persistence.

    PubMed

    Maliyoni, Milliward; Chirove, Faraimunashe; Gaff, Holly D; Govinder, Keshlan S

    2017-09-01

    We formulate and analyse a stochastic epidemic model for the transmission dynamics of a tick-borne disease in a single population using a continuous-time Markov chain approach. The stochastic model is based on an existing deterministic metapopulation tick-borne disease model. We compare the disease dynamics of the deterministic and stochastic models in order to determine the effect of randomness in tick-borne disease dynamics. The probability of disease extinction and that of a major outbreak are computed and approximated using the multitype Galton-Watson branching process and numerical simulations, respectively. Analytical and numerical results show some significant differences in model predictions between the stochastic and deterministic models. In particular, we find that a disease outbreak is more likely if the disease is introduced by infected deer as opposed to infected ticks. These insights demonstrate the importance of host movement in the expansion of tick-borne diseases into new geographic areas.
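
    The extinction probability of a branching process, used above to approximate the chance that an introduction dies out rather than causing a major outbreak, is the smallest fixed point of the offspring probability-generating function. A minimal single-type sketch assuming Poisson-distributed offspring (the paper's multitype Galton-Watson process would use one pgf per host/tick type):

```python
import math

def extinction_probability(r0, tol=1e-12):
    """Extinction probability of a single-type Galton-Watson branching
    process with Poisson(r0) offspring: the smallest fixed point of the
    probability-generating function, q = exp(r0 * (q - 1)), found by
    fixed-point iteration from q = 0."""
    q = 0.0
    while True:
        q_next = math.exp(r0 * (q - 1.0))
        if abs(q_next - q) < tol:
            return q_next
        q = q_next

# Subcritical transmission (r0 < 1): extinction is certain (q = 1).
# Supercritical (r0 = 2): a major outbreak occurs with probability 1 - q.
q_sub, q_super = extinction_probability(0.5), extinction_probability(2.0)
```

    The multitype version would additionally distinguish whether the initial infectives are ticks or deer, which is exactly why the model finds different outbreak probabilities for the two introduction routes.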

  20. Large conditional single-photon cross-phase modulation

    PubMed Central

    Hosseini, Mahdi; Duan, Yiheng; Vuletić, Vladan

    2016-01-01

    Deterministic optical quantum logic requires a nonlinear quantum process that alters the phase of a quantum optical state by π through interaction with only one photon. Here, we demonstrate a large conditional cross-phase modulation between a signal field, stored inside an atomic quantum memory, and a control photon that traverses a high-finesse optical cavity containing the atomic memory. This approach avoids fundamental limitations associated with multimode effects for traveling optical photons. We measure a conditional cross-phase shift of π/6 (and up to π/3 by postselection on photons that remain in the system longer than average) between the retrieved signal and control photons, and confirm deterministic entanglement between the signal and control modes by extracting a positive concurrence. By upgrading to a state-of-the-art cavity, our system can reach a coherent phase shift of π at low loss, enabling deterministic and universal photonic quantum logic. PMID:27519798

  1. Community assembly of a euryhaline fish microbiome during salinity acclimation.

    PubMed

    Schmidt, Victor T; Smith, Katherine F; Melvin, Donald W; Amaral-Zettler, Linda A

    2015-05-01

    Microbiomes play a critical role in promoting a range of host functions. Microbiome function, in turn, is dependent on its community composition. Yet, how microbiome taxa are assembled from their regional species pool remains unclear. Many possible drivers have been hypothesized, including deterministic processes of competition, stochastic processes of colonization and migration, and physiological 'host-effect' habitat filters. The contribution of each to assembly in nascent or perturbed microbiomes is important for understanding host-microbe interactions and host health. In this study, we characterized the bacterial communities in a euryhaline fish and the surrounding tank water during salinity acclimation. To assess the relative influence of stochastic versus deterministic processes in fish microbiome assembly, we manipulated the bacterial species pool around each fish by changing the salinity of aquarium water. Our results show a complete and repeatable turnover of dominant bacterial taxa in the microbiomes from individuals of the same species after acclimation to the same salinity. We show that changes in fish microbiomes are not correlated with corresponding changes to abundant taxa in tank water communities and that the dominant taxa in fish microbiomes are rare in the aquatic surroundings, and vice versa. Our results suggest that bacterial taxa best able to compete within the unique host environment at a given salinity appropriate the most niche space, independent of their relative abundance in tank water communities. In this experiment, deterministic processes appear to drive fish microbiome assembly, with little evidence for stochastic colonization. © 2015 John Wiley & Sons Ltd.

  2. Deterministic mechanisms define the long-term anaerobic digestion microbiome and its functionality regardless of the initial microbial community.

    PubMed

    Peces, M; Astals, S; Jensen, P D; Clarke, W P

    2018-05-17

    The impact of the starting inoculum on long-term anaerobic digestion performance, process functionality and microbial community composition remains unclear. To understand the impact of the starting inoculum, active microbial communities from four different full-scale anaerobic digesters were each used to inoculate four continuous lab-scale anaerobic digesters, which were operated identically for 295 days. Digesters were operated at a 15-day solid retention time, an organic loading rate of 1 g COD L⁻¹ d⁻¹ (75:25 cellulose:casein) and 37 °C. Results showed that long-term process performance, metabolic rates (hydrolytic, acetogenic and methanogenic) and microbial community are independent of the inoculum source. Digester process performance converged after 80 days, while metabolic rates and microbial communities converged after 120-145 days. The convergence of the different microbial communities towards a core community proves that deterministic factors (process operational conditions) were a stronger driver than the initial microbial community composition. Indeed, the core community represented 72% of the relative abundance among the four digesters. Moreover, a number of positive correlations were observed between higher metabolic rates and the relative abundance of specific microbial groups. These correlations showed that both substrate consumers and suppliers trigger higher metabolic rates, expanding the knowledge of the nexus between microorganisms and functionality. Overall, these results support the view that deterministic factors control microbial communities in bioreactors independently of the inoculum source. Hence, it seems plausible that a desired microbial composition and functionality can be achieved by tuning process operational conditions. Copyright © 2018. Published by Elsevier Ltd.

  3. Robust Observation Detection for Single Object Tracking: Deterministic and Probabilistic Patch-Based Approaches

    PubMed Central

    Zulkifley, Mohd Asyraf; Rawlinson, David; Moran, Bill

    2012-01-01

    In video analytics, robust observation detection is very important because video content varies widely, especially in tracking implementations. In contrast to still-image processing, video analytics must routinely contend with blurring, moderate deformation, low-illumination surroundings, illumination change and homogeneous texture. Patch-Based Observation Detection (PBOD) is developed to improve detection robustness in complex scenes by fusing feature- and template-based recognition methods. While feature-based detectors are more distinctive, matching between frames is best achieved by a collection of points, as in template-based detectors. Two modes of PBOD, deterministic and probabilistic, have been tested to find the best mode of detection. Both algorithms start by building comparison vectors at each detected point of interest. The vectors are matched to build candidate patches based on their respective coordinates. For the deterministic method, patch matching is done in a two-level test where threshold-based position and size smoothing are applied to the patch with the highest correlation value. For the probabilistic approach, patch matching is done by modelling the histograms of the patches with Poisson distributions for both RGB and HSV colour models; maximum likelihood is then applied for position smoothing while a Bayesian approach is applied for size smoothing. The results showed that probabilistic PBOD outperforms the deterministic approach, with an average distance error of 10.03% compared with 21.03%. Owing to its heavy processing requirements, this algorithm is best implemented as a complement to other, simpler detection methods. PMID:23202226
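
    The probabilistic patch-matching step can be sketched as Poisson likelihood scoring of histogram bins (a simplified single-channel illustration of the idea, not the authors' RGB/HSV implementation):

```python
import math

def poisson_log_likelihood(counts, rates):
    """Log-likelihood that histogram bin counts were generated by
    independent Poisson bins with the given mean rates (a small epsilon
    guards empty bins). Single-channel simplification for illustration."""
    eps = 1e-9
    return sum(c * math.log(r + eps) - (r + eps) - math.lgamma(c + 1)
               for c, r in zip(counts, rates))

def best_match(reference_hist, candidate_hists):
    """Index of the candidate patch whose histogram is most likely under
    the reference patch's Poisson model (maximum likelihood)."""
    scores = [poisson_log_likelihood(h, reference_hist) for h in candidate_hists]
    return max(range(len(scores)), key=scores.__getitem__)
```

    A full tracker would score candidate patches in each new frame this way and then apply the position and size smoothing described in the abstract to the winning patch.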

  4. History of Modern Earthquake Hazard Mapping and Assessment in California Using a Deterministic or Scenario Approach

    NASA Astrophysics Data System (ADS)

    Mualchin, Lalliana

    2011-03-01

    Modern earthquake ground motion hazard mapping in California began following the 1971 San Fernando earthquake in the Los Angeles metropolitan area of southern California. Earthquake hazard assessment followed a traditional approach, later called Deterministic Seismic Hazard Analysis (DSHA) in order to distinguish it from the newer Probabilistic Seismic Hazard Analysis (PSHA). In DSHA, the seismic hazard in the event of the Maximum Credible Earthquake (MCE) magnitude from each of the known seismogenic faults within and near the state is assessed. The likely occurrence of the MCE has been assumed qualitatively by using late Quaternary and younger faults that are presumed to be seismogenic, without specifying when or within what time interval the MCE may occur. The MCE is the largest or upper-bound potential earthquake in moment magnitude, and it supersedes and automatically considers all other possible earthquakes on that fault. That moment magnitude is used for estimating ground motions by applying it to empirical attenuation relationships, and for calculating ground motions as in neo-DSHA (Zuccolo et al., 2008). The first deterministic California earthquake hazard map was published in 1974 by the California Division of Mines and Geology (CDMG), which has been called the California Geological Survey (CGS) since 2002, using the best available fault information and ground motion attenuation relationships at that time. The California Department of Transportation (Caltrans) later assumed responsibility for printing the refined and updated peak acceleration contour maps, which were heavily utilized by geologists, seismologists, and engineers for many years. Some engineers involved in the siting process of large, important projects, for example dams and nuclear power plants, continued to challenge the map(s).
The second edition map was completed in 1985, incorporating more faults, improving the MCE estimation method, and using new ground motion attenuation relationships from the latest published results at that time. CDMG eventually published the second edition map in 1992, following the Governor's Board of Inquiry on the 1989 Loma Prieta earthquake and at the demand of Caltrans. The third edition map was published by Caltrans in 1996, utilizing GIS technology to manage data that include a simplified three-dimensional geometry of faults and to facilitate efficient corrections and revisions of the data and the map. The spatial relationship of fault hazards to highways, bridges or any other attribute can now be efficiently managed and analyzed in GIS at Caltrans. There has been great confidence in using DSHA in bridge engineering and other applications in California, and it can be confidently applied in any other earthquake-prone region. Earthquake hazards defined by DSHA are: (1) transparent and stable, with robust MCE moment magnitudes; (2) flexible in their application to design considerations; (3) able to easily incorporate advances in ground motion simulations; and (4) economical. DSHA and neo-DSHA have the same approach and applicability. The accuracy of DSHA has proven quite reasonable for practical applications within engineering design, always exercised with professional judgment. In the final analysis, DSHA is a reality check for public safety and for PSHA results. Although PSHA has been acclaimed as a better approach for seismic hazard assessment, it is DSHA, not PSHA, that has actually been used in seismic hazard assessment for building and bridge engineering, particularly in California.
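
    The core DSHA calculation, applying the MCE moment magnitude to an empirical attenuation relationship, can be sketched with a generic functional form (the coefficients below are placeholders chosen for illustration, not a published ground-motion model):

```python
import math

def pga_deterministic(magnitude, distance_km, a=-3.5, b=0.9, c=1.2, d=10.0):
    """DSHA-style ground-motion estimate: apply the MCE moment magnitude
    to an empirical attenuation relationship of the generic form
        ln(PGA) = a + b*M - c*ln(R + d).
    Coefficients a, b, c, d are placeholder values; a real map would use
    a published, regionally calibrated relationship."""
    return math.exp(a + b * magnitude - c * math.log(distance_km + d))
```

    Contouring this estimate over a fault map, with each fault contributing its own MCE magnitude, is essentially how the peak acceleration contour maps described above are produced.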

  5. A Surrogate Technique for Investigating Deterministic Dynamics in Discrete Human Movement.

    PubMed

    Taylor, Paul G; Small, Michael; Lee, Kwee-Yum; Landeo, Raul; O'Meara, Damien M; Millett, Emma L

    2016-10-01

    Entropy is an effective tool for investigating human movement variability. However, before applying entropy, it can be beneficial to confirm that the observed data are not solely the result of stochastic processes. This can be achieved by contrasting observed data with data produced using surrogate methods. Unlike for continuous movement, no appropriate surrogate method has been available for discrete human movement. This article proposes a novel surrogate method for discrete movement data, outlining the process for determining its critical values. The proposed technique reliably generated surrogates for discrete joint-angle time series, destroying the fine-scale dynamics of the observed signal while maintaining its macro-structural characteristics. Comparison of entropy estimates indicated that the observed signals had greater regularity than their surrogates and were the result not only of stochastic but also of deterministic processes. The proposed surrogate method is both a valid and a reliable technique for investigating determinism in other discrete human movement time series.
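
    As a baseline illustration of the surrogate idea, a simple random-shuffle surrogate preserves the amplitude distribution of a signal while destroying its temporal structure (this is only the simplest generic variant, not the tailored method the paper proposes for discrete movement data):

```python
import math
import random

def shuffle_surrogate(series, seed=None):
    """Generic random-shuffle surrogate: keeps exactly the same sample
    values (amplitude distribution) but in random order, destroying any
    temporal (deterministic) structure in the observed series."""
    rng = random.Random(seed)
    surrogate = list(series)
    rng.shuffle(surrogate)
    return surrogate

def lag1_autocorr(x):
    """Lag-1 autocorrelation: high for a structured observed signal,
    near zero for its shuffled surrogate."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1))
    return cov / var

observed = [math.sin(0.1 * i) for i in range(200)]   # smooth, deterministic signal
surrogate = shuffle_surrogate(observed, seed=0)
```

    If an entropy or autocorrelation statistic computed on the observed signal falls outside the distribution of the same statistic over many surrogates, stochasticity alone cannot explain the data, which is the logic the abstract applies to discrete movement.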

  6. Aquatic bacterial assemblage structure in Pozas Azules, Cuatro Cienegas Basin, Mexico: Deterministic vs. stochastic processes.

    PubMed

    Espinosa-Asuar, Laura; Escalante, Ana Elena; Gasca-Pineda, Jaime; Blaz, Jazmín; Peña, Lorena; Eguiarte, Luis E; Souza, Valeria

    2015-06-01

    The aim of this study was to determine the contributions of stochastic vs. deterministic processes in the distribution of microbial diversity in four ponds (Pozas Azules) within a temporally stable aquatic system in the Cuatro Cienegas Basin, State of Coahuila, Mexico. A sampling strategy for sites that were geographically delimited and had low environmental variation was applied to avoid obscuring distance effects. Aquatic bacterial diversity was characterized following a culture-independent approach (16S sequencing of clone libraries). The results showed a correlation between bacterial beta diversity (1-Sorensen) and geographic distance (distance decay of similarity), which indicated the influence of stochastic processes related to dispersion in the assembly of the ponds' bacterial communities. Our findings are the first to show the influence of dispersal limitation in the prokaryotic diversity distribution of Cuatro Cienegas Basin. Copyright© by the Spanish Society for Microbiology and Institute for Catalan Studies.
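    The distance-decay signal reported here can be illustrated with a minimal sketch: compute the 1 − Sørensen dissimilarity between presence/absence taxon sets and correlate it with inter-pond distance. The ponds, taxa, and coordinates below are hypothetical, not the study's data:

    ```python
    import numpy as np
    from itertools import combinations

    def sorensen_dissimilarity(a, b):
        """1 - Sorensen similarity for two presence/absence taxon sets."""
        shared = len(a & b)
        return 1.0 - 2.0 * shared / (len(a) + len(b))

    # Hypothetical ponds along a transect: nearby ponds share more taxa
    ponds = {
        "P1": (0.0, {"t1", "t2", "t3", "t4", "t5"}),
        "P2": (1.0, {"t1", "t2", "t3", "t4", "t6"}),
        "P3": (5.0, {"t1", "t2", "t7", "t8", "t9"}),
        "P4": (9.0, {"t1", "t10", "t11", "t12", "t13"}),
    }

    dists, dissims = [], []
    for (na, (xa, sa)), (nb, (xb, sb)) in combinations(ponds.items(), 2):
        dists.append(abs(xa - xb))
        dissims.append(sorensen_dissimilarity(sa, sb))

    # Positive correlation = distance decay of similarity
    r = float(np.corrcoef(dists, dissims)[0, 1])
    print(round(r, 2))
    ```

    A significantly positive correlation between dissimilarity and distance, as found in the study, is the signature of dispersal limitation rather than purely deterministic (environmental) sorting.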

  7. Photonic Quantum Networks formed from NV− centers

    PubMed Central

    Nemoto, Kae; Trupke, Michael; Devitt, Simon J.; Scharfenberger, Burkhard; Buczak, Kathrin; Schmiedmayer, Jörg; Munro, William J.

    2016-01-01

    In this article we present a simple repeater scheme based on the negatively-charged nitrogen vacancy centre in diamond. Each repeater node is built from modules comprising an optical cavity containing a single NV−, with one nuclear spin from 15N as quantum memory. The module uses only deterministic processes and interactions to achieve high fidelity operations (>99%), and modules are connected by optical fiber. In the repeater node architecture, the photon-mediated processes between modules can in principle be deterministic; however, current limitations on optical components make these processes probabilistic but heralded. Our resource-modest repeater architecture contains two modules at each node, and the repeater nodes are then connected by entangled photon pairs. We discuss the performance of such a quantum repeater network with modest resources and then incorporate more resource-intense strategies step by step. Our architecture should allow large-scale quantum information networks with existing or near future technology. PMID:27215433

  8. Photonic Quantum Networks formed from NV(-) centers.

    PubMed

    Nemoto, Kae; Trupke, Michael; Devitt, Simon J; Scharfenberger, Burkhard; Buczak, Kathrin; Schmiedmayer, Jörg; Munro, William J

    2016-05-24

    In this article we present a simple repeater scheme based on the negatively-charged nitrogen vacancy centre in diamond. Each repeater node is built from modules comprising an optical cavity containing a single NV(-), with one nuclear spin from (15)N as quantum memory. The module uses only deterministic processes and interactions to achieve high fidelity operations (>99%), and modules are connected by optical fiber. In the repeater node architecture, the photon-mediated processes between modules can in principle be deterministic; however, current limitations on optical components make these processes probabilistic but heralded. Our resource-modest repeater architecture contains two modules at each node, and the repeater nodes are then connected by entangled photon pairs. We discuss the performance of such a quantum repeater network with modest resources and then incorporate more resource-intense strategies step by step. Our architecture should allow large-scale quantum information networks with existing or near future technology.

  9. Identification of gene regulation models from single-cell data

    NASA Astrophysics Data System (ADS)

    Weber, Lisa; Raymond, William; Munsky, Brian

    2018-09-01

    In quantitative analyses of biological processes, one may use many different scales of models (e.g. spatial or non-spatial, deterministic or stochastic, time-varying or at steady-state) or many different approaches to match models to experimental data (e.g. model fitting or parameter uncertainty/sloppiness quantification with different experiment designs). These different analyses can lead to surprisingly different results, even when applied to the same data and the same model. We use a simplified gene regulation model to illustrate many of these concerns, especially for ODE analyses of deterministic processes, chemical master equation and finite state projection analyses of heterogeneous processes, and stochastic simulations. For each analysis, we employ MATLAB and Python software to consider a time-dependent input signal (e.g. a kinase nuclear translocation) and several model hypotheses, along with simulated single-cell data. We illustrate different approaches (e.g. deterministic and stochastic) to identify the mechanisms and parameters of the same model from the same simulated data. For each approach, we explore how uncertainty in parameter space varies with respect to the chosen analysis approach or specific experiment design. We conclude with a discussion of how our simulated results relate to the integration of experimental and computational investigations to explore signal-activated gene expression models in yeast (Neuert et al 2013 Science 339 584–7) and human cells (Senecal et al 2014 Cell Rep. 8 75–83).
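    The contrast between deterministic and stochastic analyses of the same model can be sketched with a minimal birth-death gene expression model (rates hypothetical, far simpler than the study's model): the ODE gives the deterministic mean, while Gillespie's direct method samples single-cell trajectories whose average should agree with it.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    k, g = 10.0, 1.0  # hypothetical mRNA production and degradation rates

    def ode_trajectory(m0=0.0, dt=0.001, t_end=10.0):
        """Deterministic ODE dm/dt = k - g*m, solved with forward Euler."""
        m = m0
        for _ in range(int(t_end / dt)):
            m += dt * (k - g * m)
        return m

    def ssa_final(m0=0, t_end=10.0):
        """Gillespie direct method for the same birth-death process."""
        t, m = 0.0, m0
        while t < t_end:
            a1, a2 = k, g * m          # propensities: production, degradation
            a0 = a1 + a2
            t += rng.exponential(1.0 / a0)
            if t >= t_end:
                break
            m += 1 if rng.random() < a1 / a0 else -1
        return m

    ode_ss = ode_trajectory()                              # -> ~k/g = 10
    ssa_mean = np.mean([ssa_final() for _ in range(300)])  # ensemble mean ~10
    print(round(ode_ss, 1), round(float(ssa_mean), 1))
    ```

    The ODE matches the ensemble mean, but only the stochastic samples carry the cell-to-cell variability that single-cell data constrain.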

  10. Common features and peculiarities of the seismic activity at Phlegraean Fields, Long Valley, and Vesuvius

    USGS Publications Warehouse

    Marzocchi, W.; Vilardo, G.; Hill, D.P.; Ricciardi, G.P.; Ricco, C.

    2001-01-01

    We analyzed and compared the seismic activity that has occurred in the last two to three decades in three distinct volcanic areas: Phlegraean Fields, Italy; Vesuvius, Italy; and Long Valley, California. Our main goal is to identify and discuss common features and peculiarities in the temporal evolution of earthquake sequences that may reflect similarities and differences in the generating processes between these volcanic systems. In particular, we tried to characterize the time series of the number of events and of the seismic energy release in terms of stochastic, deterministic, and chaotic components. The time sequences from each area consist of thousands of earthquakes that allow a detailed quantitative analysis and comparison. The results obtained showed no evidence for either deterministic or chaotic components in the earthquake sequences in Long Valley caldera, which appears to be dominated by stochastic behavior. In contrast, earthquake sequences at Phlegraean Fields and Mount Vesuvius show a deterministic signal mainly consisting of a 24-hour periodicity. Our analysis suggests that the modulation in seismicity is in some way related to thermal diurnal processes, rather than luni-solar tidal effects. Independently from the process that generates these periodicities in the seismicity, it is suggested that the lack (or presence) of diurnal cycles in seismic swarms of volcanic areas could be closely linked to the presence (or lack) of magma motion.

  11. Efficient quantum computing using coherent photon conversion.

    PubMed

    Langford, N K; Ramelow, S; Prevedel, R; Munro, W J; Milburn, G J; Zeilinger, A

    2011-10-12

    Single photons are excellent quantum information carriers: they were used in the earliest demonstrations of entanglement and in the production of the highest-quality entanglement reported so far. However, current schemes for preparing, processing and measuring them are inefficient. For example, down-conversion provides heralded, but randomly timed, single photons, and linear optics gates are inherently probabilistic. Here we introduce a deterministic process--coherent photon conversion (CPC)--that provides a new way to generate and process complex, multiquanta states for photonic quantum information applications. The technique uses classically pumped nonlinearities to induce coherent oscillations between orthogonal states of multiple quantum excitations. One example of CPC, based on a pumped four-wave-mixing interaction, is shown to yield a single, versatile process that provides a full set of photonic quantum processing tools. This set satisfies the DiVincenzo criteria for a scalable quantum computing architecture, including deterministic multiqubit entanglement gates (based on a novel form of photon-photon interaction), high-quality heralded single- and multiphoton states free from higher-order imperfections, and robust, high-efficiency detection. It can also be used to produce heralded multiphoton entanglement, create optically switchable quantum circuits and implement an improved form of down-conversion with reduced higher-order effects. Such tools are valuable building blocks for many quantum-enabled technologies. Finally, using photonic crystal fibres we experimentally demonstrate quantum correlations arising from a four-colour nonlinear process suitable for CPC and use these measurements to study the feasibility of reaching the deterministic regime with current technology. 
Our scheme, which is based on interacting bosonic fields, is not restricted to optical systems but could also be implemented in optomechanical, electromechanical and superconducting systems with extremely strong intrinsic nonlinearities. Furthermore, exploiting higher-order nonlinearities with multiple pump fields yields a mechanism for multiparty mediation of the complex, coherent dynamics.

  12. Significant achievements in the Planetary Geology Program. [geologic processes, comparative planetology, and solar system evolution

    NASA Technical Reports Server (NTRS)

    Head, J. W. (Editor)

    1978-01-01

    Developments reported at a meeting of principal investigators for NASA's planetary geology program are summarized. Topics covered include: constraints on solar system formation; asteroids, comets, and satellites; constraints on planetary interiors; volatiles and regoliths; instrument development techniques; planetary cartography; geological and geochemical constraints on planetary evolution; fluvial processes and channel formation; volcanic processes; eolian processes; radar studies of planetary surfaces; cratering as a process, landform, and dating method; and the Tharsis region of Mars. Activities at a planetary geology field conference on eolian processes are reported, and techniques recommended for the presentation and analysis of crater size-frequency data are included.

  13. Development of a First-of-a-Kind Deterministic Decision-Making Tool for Supervisory Control System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cetiner, Sacit M; Kisner, Roger A; Muhlheim, Michael David

    2015-07-01

    Decision-making is the process of identifying and choosing alternatives where each alternative offers a different approach or path to move from a given state or condition to a desired state or condition. The generation of consistent decisions requires that a structured, coherent process be defined, immediately leading to a decision-making framework. The overall objective of the generalized framework is for it to be adopted into an autonomous decision-making framework and tailored to specific requirements for various applications. In this context, automation is the use of computing resources to make decisions and implement a structured decision-making process with limited or no human intervention. The overriding goal of automation is to replace or supplement human decision makers with reconfigurable decision-making modules that can perform a given set of tasks reliably. Risk-informed decision making requires a probabilistic assessment of the likelihood of success given the status of the plant/systems and component health, and a deterministic assessment between plant operating parameters and reactor protection parameters to prevent unnecessary trips and challenges to plant safety systems. The implementation of the probabilistic portion of the decision-making engine of the proposed supervisory control system was detailed in previous milestone reports. Once the control options are identified and ranked based on the likelihood of success, the supervisory control system transmits the options to the deterministic portion of the platform. The deterministic multi-attribute decision-making framework uses variable sensor data (e.g., outlet temperature) and calculates where it is within the challenge state, its trajectory, and margin within the controllable domain using utility functions to evaluate current and projected plant state space for different control decisions. Metrics to be evaluated include stability, cost, time to complete (action), power level, etc.
    The integration of deterministic calculations using multi-physics analyses (i.e., neutronics, thermal, and thermal-hydraulics) and probabilistic safety calculations allows for the examination and quantification of margin recovery strategies. This also provides validation of the control options identified from the probabilistic assessment; specifically, the thermal-hydraulics analyses are used to validate those options. Future work includes evaluating other possible metrics and computational efficiencies.
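    The multi-attribute utility ranking described in this record can be sketched as a weighted sum over normalized attribute scores. The options, attributes, and weights below are hypothetical illustrations, not values from the report:

    ```python
    # Hypothetical control options scored on normalized attributes (1.0 = best)
    options = {
        "reduce_power": {"stability": 0.9, "cost": 0.4, "time": 0.6},
        "trip_reactor": {"stability": 1.0, "cost": 0.1, "time": 0.9},
        "no_action":    {"stability": 0.3, "cost": 1.0, "time": 1.0},
    }
    # Hypothetical attribute weights (must sum to 1)
    weights = {"stability": 0.6, "cost": 0.25, "time": 0.15}

    def utility(attrs):
        """Weighted additive utility of one control option."""
        return sum(weights[k] * v for k, v in attrs.items())

    # Rank control options from highest to lowest utility
    ranked = sorted(options, key=lambda o: utility(options[o]), reverse=True)
    print(ranked)  # ['trip_reactor', 'reduce_power', 'no_action']
    ```

    Changing the weights shifts the ranking, which is why weight elicitation is itself part of a structured decision-making framework.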

  14. Seismic Hazard Assessment for a Characteristic Earthquake Scenario: Probabilistic-Deterministic Method

    NASA Astrophysics Data System (ADS)

    mouloud, Hamidatou

    2016-04-01

    The objective of this paper is to analyze the seismic activity and the statistical treatment of the seismicity catalog of the Constantine region between 1357 and 2014, which contains 7007 seismic events. Our research contributes to improving seismic risk management by evaluating the seismic hazard in northeastern Algeria. In the present study, earthquake hazard maps for the Constantine region are calculated. Probabilistic seismic hazard analysis (PSHA) is classically performed through the Cornell approach by using a uniform earthquake distribution over the source area and a given magnitude range. This study aims at extending the PSHA approach to the case of a characteristic earthquake scenario associated with an active fault. The approach integrates PSHA with a high-frequency deterministic technique for the prediction of peak and spectral ground motion parameters in a characteristic earthquake. The method is based on the site-dependent evaluation of the probability of exceedance for the chosen strong-motion parameter. We propose five seismotectonic zones. Five steps are necessary: (i) identification of potential sources of future earthquakes; (ii) assessment of their geological, geophysical and geometric characteristics; (iii) identification of the attenuation pattern of seismic motion; (iv) calculation of the hazard at a site; and finally (v) hazard mapping for a region. In this study, the earthquake hazard evaluation procedure developed by Kijko and Sellevoll (1992) is used to estimate seismic hazard parameters in the northern part of Algeria.
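    The probabilistic core of a Cornell-type analysis can be sketched for a single source: a truncated Gutenberg-Richter law gives the probability that an earthquake is large enough to exceed a target ground motion, and a Poisson occurrence model converts the annual rate into a probability of exceedance. All recurrence and attenuation parameters below are purely illustrative, not those of the Constantine study:

    ```python
    import math

    nu = 0.2                     # hypothetical annual rate of events with M >= Mmin
    Mmin, Mmax, b = 4.0, 7.5, 1.0

    def p_magnitude_exceeds(m):
        """Truncated Gutenberg-Richter: P(M > m | M >= Mmin)."""
        if m <= Mmin:
            return 1.0
        if m >= Mmax:
            return 0.0
        beta = b * math.log(10)
        num = math.exp(-beta * (m - Mmin)) - math.exp(-beta * (Mmax - Mmin))
        return num / (1.0 - math.exp(-beta * (Mmax - Mmin)))

    def magnitude_needed(pga_g, dist_km):
        """Invert a toy attenuation ln(PGA) = -3.5 + 0.9*M - 1.2*ln(R) for M."""
        return (math.log(pga_g) + 3.5 + 1.2 * math.log(dist_km)) / 0.9

    # Annual rate of exceeding 0.2 g at a site 30 km from the source
    lam = nu * p_magnitude_exceeds(magnitude_needed(0.2, 30.0))
    # Poisson probability of at least one exceedance in 50 years
    p50 = 1.0 - math.exp(-lam * 50.0)
    print(lam, p50)
    ```

    Repeating this over a grid of sites and target motions, and summing rates over all sources and zones, yields the hazard map; the characteristic-earthquake extension replaces the Gutenberg-Richter term for the fault source.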

  15. A Deterministic Transport Code for Space Environment Electrons

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamczyk, Anne M.

    2010-01-01

    A deterministic computational procedure has been developed to describe transport of space environment electrons in various shield media. This code is an upgrade and extension of an earlier electron code. Whereas the former code was formulated on the basis of parametric functions derived from limited laboratory data, the present code utilizes well established theoretical representations to describe the relevant interactions and transport processes. The shield material specification has been made more general, as have the pertinent cross sections. A combined mean free path and average trajectory approach has been used in the transport formalism. Comparisons with Monte Carlo calculations are presented.

  16. Deterministic and stochastic methods of calculation of polarization characteristics of radiation in natural environment

    NASA Astrophysics Data System (ADS)

    Strelkov, S. A.; Sushkevich, T. A.; Maksakova, S. V.

    2017-11-01

    We discuss world-class Russian achievements in the theory of radiation transfer, taking into account polarization in natural media, and the current scientific potential developing in Russia, which provides an adequate methodological basis for theoretical and computational research of radiation processes and radiation fields in natural media using supercomputers and massive parallelism. A new version of the matrix transfer operator is proposed for solving problems of polarized radiation transfer in heterogeneous media by the method of influence functions, where deterministic and stochastic methods can be combined.

  17. Simulation of daily streamflow for 12 river basins in western Iowa using the Precipitation-Runoff Modeling System

    USGS Publications Warehouse

    Christiansen, Daniel E.; Haj, Adel E.; Risley, John C.

    2017-10-24

    The U.S. Geological Survey, in cooperation with the Iowa Department of Natural Resources, constructed Precipitation-Runoff Modeling System models to estimate daily streamflow for 12 river basins in western Iowa that drain into the Missouri River. The Precipitation-Runoff Modeling System is a deterministic, distributed-parameter, physical-process-based modeling system developed to evaluate the response of streamflow and general drainage basin hydrology to various combinations of climate and land use. Calibration periods for each basin varied depending on the period of record available for daily mean streamflow measurements at U.S. Geological Survey streamflow-gaging stations. A geographic information system tool was used to delineate each basin and estimate initial values for model parameters based on basin physical and geographical features. A U.S. Geological Survey automatic calibration tool that uses a shuffled complex evolution algorithm was used for initial calibration, and then manual modifications were made to parameter values to complete the calibration of each basin model. The main objective of the calibration was to match daily discharge values of simulated streamflow to measured daily discharge values. The Precipitation-Runoff Modeling System model was calibrated at 42 sites located in the 12 river basins in western Iowa. The accuracy of the simulated daily streamflow values at the 42 calibration sites varied by river and by site. The models were satisfactory at 36 of the sites based on statistical results. Unsatisfactory performance at the six other sites can be attributed to several factors: (1) low flow, no flow, and flashy flow conditions in headwater subbasins having a small drainage area; (2) poor representation of the groundwater and storage components of flow within a basin; (3) lack of accounting for basin withdrawals and water use; and (4) limited availability and accuracy of meteorological input data.
The Precipitation-Runoff Modeling System models of 12 river basins in western Iowa will provide water-resource managers with a consistent and documented method for estimating streamflow at ungaged sites and aid in environmental studies, hydraulic design, water management, and water-quality projects.

  18. Geological, geomechanical and geostatistical assessment of rockfall hazard in San Quirico Village (Abruzzo, Italy)

    NASA Astrophysics Data System (ADS)

    Chiessi, Vittorio; D'Orefice, Maurizio; Scarascia Mugnozza, Gabriele; Vitale, Valerio; Cannese, Christian

    2010-07-01

    This paper describes the results of a rockfall hazard assessment for the village of San Quirico (Abruzzo region, Italy) based on an engineering-geological model. After the collection of geological, geomechanical, and geomorphological data, the rockfall hazard assessment was performed based on two separate approaches: i) simulation of detachment of rock blocks and their downhill movement using a GIS; and ii) application of geostatistical techniques to the analysis of georeferenced observations of previously fallen blocks, in order to assess the probability of arrival of blocks due to potential future collapses. The results show that the trajectographic analysis is significantly influenced by the input parameters, with particular reference to the coefficient-of-restitution values. In order to solve this problem, the model was calibrated based on repeated field observations. The geostatistical approach is useful because it gives the best estimation of point-source phenomena such as rockfalls; however, the sensitivity of results to basic assumptions, e.g. assessment of variograms and choice of a threshold value, may be problematic. Consequently, interpolations derived from different variograms were used and compared with one another; those showing the lowest errors were adopted. The data sets which were statistically analysed are relevant to both kinetic energy and surveyed rock blocks in the accumulation area. The obtained maps highlight areas susceptible to rock block arrivals, and show that the area accommodating the new settlement of S. Quirico Village has the highest level of hazard according to both probabilistic and deterministic methods.

  19. Application fields for the new Object Management Group (OMG) Standards Case Management Model and Notation (CMMN) and Decision Management Notation (DMN) in the perioperative field.

    PubMed

    Wiemuth, M; Junger, D; Leitritz, M A; Neumann, J; Neumuth, T; Burgert, O

    2017-08-01

    Medical processes can be modeled using different methods and notations. Currently used modeling systems such as Business Process Model and Notation (BPMN) are not capable of describing highly flexible and variable medical processes in sufficient detail. We combined two modeling approaches, Business Process Management (BPM) and Adaptive Case Management (ACM), to be able to model non-deterministic medical processes. We used the new standards Case Management Model and Notation (CMMN) and Decision Management Notation (DMN). First, we explain how CMMN, DMN and BPMN can be used to model non-deterministic medical processes. We applied this methodology to model 79 cataract operations provided by University Hospital Leipzig, Germany, and four cataract operations provided by University Eye Hospital Tuebingen, Germany. Our model consists of 85 tasks and about 20 decisions in BPMN. We were able to expand the system with more complex situations that might appear during an intervention. Effective modeling of the cataract intervention is possible using the combination of BPM and ACM. The combination makes it possible to depict complex processes with complex decisions, which is a significant advantage for modeling perioperative processes.

  20. Hydraulic tomography of discrete networks of conduits and fractures in a karstic aquifer by using a deterministic inversion algorithm

    NASA Astrophysics Data System (ADS)

    Fischer, P.; Jardani, A.; Lecoq, N.

    2018-02-01

    In this paper, we present a novel inverse modeling method called Discrete Network Deterministic Inversion (DNDI) for mapping the geometry and properties of the discrete network of conduits and fractures in karstified aquifers. The DNDI algorithm is based on a coupled discrete-continuum concept to simulate water flows numerically in a model and a deterministic optimization algorithm to invert a set of observed piezometric data recorded during multiple pumping tests. In this method, the model is partitioned into subspaces piloted by a set of parameters (matrix transmissivity, and the geometry and equivalent transmissivity of the conduits) that are considered unknown. In this way, the deterministic optimization process can iteratively correct the geometry of the network and the values of the properties until it converges to a global network geometry in a solution model able to reproduce the set of data. An uncertainty analysis of this result can be performed from the maps of posterior uncertainties on the network geometry or on the property values. This method has been successfully tested on three different theoretical and simplified study cases, with hydraulic response data generated from hypothetical karstic models of increasing complexity in network geometry and matrix heterogeneity.

  1. Simulation of anaerobic digestion processes using stochastic algorithm.

    PubMed

    Palanichamy, Jegathambal; Palani, Sundarambal

    2014-01-01

    The Anaerobic Digestion (AD) processes involve numerous complex biological and chemical reactions occurring simultaneously. Appropriate and efficient models must be developed for the simulation of anaerobic digestion systems. Although several models have been developed, most suffer from poorly known constants, complexity, and weak generalization. The basis of the deterministic approach for modelling the physicochemical and biochemical reactions occurring in the AD system is the law of mass action, which gives a simple relationship between reaction rates and species concentrations. The assumptions made in deterministic models do not hold for reactions involving chemical species at low concentrations. The stochastic behaviour of the physicochemical processes can be modelled at the mesoscopic level by applying stochastic algorithms. In this paper, a stochastic algorithm (the Gillespie tau-leap method) implemented in MATLAB was applied to predict the concentrations of glucose, acids and methane at different time intervals; in this way, the performance of the digester system can be controlled. The processes given by ADM1 (Anaerobic Digestion Model 1) were taken for verification of the model. The proposed model was verified by comparing the results of Gillespie's algorithm with the deterministic solution for the conversion of glucose into methane through degraders. At higher values of 'τ' (timestep), the computational time required to reach the steady state is greater since the number of chosen reactions is smaller. When the simulation time step is reduced, the results are similar to those of an ODE solver. It was concluded that the stochastic algorithm is a suitable approach for the simulation of complex anaerobic digestion processes. The accuracy of the results depends on the optimal selection of the tau value.
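    The core step of tau-leaping, drawing Poisson-distributed reaction counts per time interval τ instead of simulating every event, can be sketched for a toy glucose → acids → methane chain. The first-order rates and population sizes below are hypothetical, far simpler than ADM1:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Toy two-step chain: glucose --k1--> acid --k2--> methane (hypothetical rates)
    k1, k2 = 0.5, 0.3

    def tau_leap(glucose0=1000, tau=0.05, t_end=30.0):
        """Tau-leap simulation; returns final (glucose, acid, methane) counts."""
        g, a, m = glucose0, 0, 0
        t = 0.0
        while t < t_end:
            r1 = rng.poisson(k1 * g * tau)   # glucose -> acid events in [t, t+tau)
            r2 = rng.poisson(k2 * a * tau)   # acid -> methane events
            r1, r2 = min(r1, g), min(r2, a)  # clamp to avoid negative populations
            g, a, m = g - r1, a + r1 - r2, m + r2
            t += tau
        return g, a, m

    g, a, m = tau_leap()
    print(g, a, m, g + a + m)  # total is conserved at 1000
    ```

    Larger τ means fewer, coarser leaps (and more Poisson approximation error); as τ shrinks, the trajectory converges toward the exact Gillespie/ODE behaviour, which is the trade-off the abstract describes.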

  2. Asymmetrical Damage Partitioning in Bacteria: A Model for the Evolution of Stochasticity, Determinism, and Genetic Assimilation

    PubMed Central

    Chao, Lin; Rang, Camilla Ulla; Proenca, Audrey Menegaz; Chao, Jasper Ubirajara

    2016-01-01

    Non-genetic phenotypic variation is common in biological organisms. The variation is potentially beneficial if the environment is changing. If the benefit is large, selection can favor the evolution of genetic assimilation, the process by which the expression of a trait is transferred from environmental to genetic control. Genetic assimilation is an important evolutionary transition, but it is poorly understood because the fitness costs and benefits of variation are often unknown. Here we show that the partitioning of damage by a mother bacterium to its two daughters can evolve through genetic assimilation. Bacterial phenotypes are also highly variable. Because gene-regulating elements can have low copy numbers, the variation is attributed to stochastic sampling. Extant Escherichia coli partition asymmetrically and deterministically more damage to the old daughter, the one receiving the mother’s old pole. By modeling in silico damage partitioning in a population, we show that deterministic asymmetry is advantageous because it increases fitness variance and hence the efficiency of natural selection. However, we find that symmetrical but stochastic partitioning can be similarly beneficial. To examine why bacteria evolved deterministic asymmetry, we modeled the effect of damage anchored to the mother’s old pole. While anchored damage strengthens selection for asymmetry by creating additional fitness variance, it has the opposite effect on symmetry. The difference results because anchored damage reinforces the polarization of partitioning in asymmetric bacteria. In symmetric bacteria, it dilutes the polarization. Thus, stochasticity alone may have protected early bacteria from damage, but deterministic asymmetry has evolved to be equally important in extant bacteria. We estimate that 47% of damage partitioning is deterministic in E. coli. 
We suggest that the evolution of deterministic asymmetry from stochasticity offers an example of Waddington’s genetic assimilation. Our model is able to quantify the evolution of the assimilation because it characterizes the fitness consequences of variation. PMID:26761487
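    As a toy illustration of the abstract's central claim (not the authors' in silico model; split fractions and damage units are hypothetical), one can compare the variance in inherited damage across daughters under three partitioning rules. Since fitness variance scales with variance in inherited damage, deterministic asymmetry and stochastic symmetric partitioning both generate variance that deterministic symmetric partitioning lacks:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def daughter_damage_variance(mode, units=100, n=10000):
        """Variance of damage fractions across daughters when n mothers
        each split `units` of damage between two daughters (toy model)."""
        if mode == "deterministic_symmetric":
            old = np.full(n, 0.5)                      # exact 50/50 split
        elif mode == "deterministic_asymmetric":
            old = np.full(n, 0.75)                     # old daughter keeps 75%
        elif mode == "stochastic_symmetric":
            old = rng.binomial(units, 0.5, n) / units  # unbiased but noisy split
        return float(np.concatenate([old, 1.0 - old]).var())

    v_sym = daughter_damage_variance("deterministic_symmetric")     # 0
    v_stoch = daughter_damage_variance("stochastic_symmetric")      # small, > 0
    v_asym = daughter_damage_variance("deterministic_asymmetric")   # largest here
    print(v_sym, v_stoch, v_asym)
    ```

    In this sketch the stochastic variance shrinks as the number of damage units grows (binomial noise averages out), which hints at why a deterministic bias can maintain selectable variance where stochasticity alone cannot.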

  3. Asymmetrical Damage Partitioning in Bacteria: A Model for the Evolution of Stochasticity, Determinism, and Genetic Assimilation.

    PubMed

    Chao, Lin; Rang, Camilla Ulla; Proenca, Audrey Menegaz; Chao, Jasper Ubirajara

    2016-01-01

    Non-genetic phenotypic variation is common in biological organisms. The variation is potentially beneficial if the environment is changing. If the benefit is large, selection can favor the evolution of genetic assimilation, the process by which the expression of a trait is transferred from environmental to genetic control. Genetic assimilation is an important evolutionary transition, but it is poorly understood because the fitness costs and benefits of variation are often unknown. Here we show that the partitioning of damage by a mother bacterium to its two daughters can evolve through genetic assimilation. Bacterial phenotypes are also highly variable. Because gene-regulating elements can have low copy numbers, the variation is attributed to stochastic sampling. Extant Escherichia coli partition asymmetrically and deterministically more damage to the old daughter, the one receiving the mother's old pole. By modeling in silico damage partitioning in a population, we show that deterministic asymmetry is advantageous because it increases fitness variance and hence the efficiency of natural selection. However, we find that symmetrical but stochastic partitioning can be similarly beneficial. To examine why bacteria evolved deterministic asymmetry, we modeled the effect of damage anchored to the mother's old pole. While anchored damage strengthens selection for asymmetry by creating additional fitness variance, it has the opposite effect on symmetry. The difference results because anchored damage reinforces the polarization of partitioning in asymmetric bacteria. In symmetric bacteria, it dilutes the polarization. Thus, stochasticity alone may have protected early bacteria from damage, but deterministic asymmetry has evolved to be equally important in extant bacteria. We estimate that 47% of damage partitioning is deterministic in E. coli. 
We suggest that the evolution of deterministic asymmetry from stochasticity offers an example of Waddington's genetic assimilation. Our model is able to quantify the evolution of the assimilation because it characterizes the fitness consequences of variation.

  4. Integrating urban recharge uncertainty into standard groundwater modeling practice: A case study on water main break predictions for the Barton Springs segment of the Edwards Aquifer, Austin, Texas

    NASA Astrophysics Data System (ADS)

    Sinner, K.; Teasley, R. L.

    2016-12-01

    Groundwater models serve as integral tools for understanding flow processes and informing stakeholders and policy makers in management decisions. Historically, these models tended towards a deterministic nature, relying on historical data to predict and inform future decisions based on model outputs. This research works towards developing a stochastic method of modeling recharge inputs from pipe main break predictions in an existing groundwater model, which subsequently generates desired outputs incorporating future uncertainty rather than deterministic data. The case study for this research is the Barton Springs segment of the Edwards Aquifer near Austin, Texas. Researchers and water resource professionals have modeled the Edwards Aquifer for decades due to its high water quality, fragile ecosystem, and stakeholder interest. The original case study and model that this research is built upon was developed as a co-design problem with regional stakeholders, and the model outcomes are generated specifically for communication with policy makers and managers. Recently, research in the Barton Springs segment demonstrated a significant contribution of urban, or anthropogenic, recharge to the aquifer, particularly during dry periods, using deterministic data sets. Due to the social and ecological importance of urban water loss to recharge, this study develops an evaluation method to help predict pipe breaks and their related recharge contribution within the Barton Springs segment of the Edwards Aquifer. To benefit groundwater management decision processes, the performance measures captured in the model results, such as springflow, head levels, storage, and others, were determined by previous work in elicitation of problem framing to determine stakeholder interests and concerns. The results of the previous deterministic model and the stochastic model are compared to determine gains to stakeholder knowledge through the additional modeling.

  5. Deterministic influences exceed dispersal effects on hydrologically-connected microbiomes.

    PubMed

    Graham, Emily B; Crump, Alex R; Resch, Charles T; Fansler, Sarah; Arntzen, Evan; Kennedy, David W; Fredrickson, Jim K; Stegen, James C

    2017-04-01

    Subsurface groundwater-surface water mixing zones (hyporheic zones) have enhanced biogeochemical activity, but assembly processes governing subsurface microbiomes remain a critical uncertainty in understanding hyporheic biogeochemistry. To address this obstacle, we investigated (a) biogeographical patterns in attached and waterborne microbiomes across three hydrologically-connected, physicochemically-distinct zones (inland hyporheic, nearshore hyporheic and river); (b) assembly processes that generated these patterns; (c) groups of organisms that corresponded to deterministic changes in the environment; and (d) correlations between these groups and hyporheic metabolism. All microbiomes remained dissimilar through time, but consistent presence of similar taxa suggested dispersal and/or common selective pressures among zones. Further, we demonstrated a pronounced impact of deterministic assembly in all microbiomes as well as seasonal shifts from heterotrophic to autotrophic microorganisms associated with increases in groundwater discharge. The abundance of one statistical cluster of organisms increased with active biomass and respiration, revealing organisms that may strongly influence hyporheic biogeochemistry. Based on our results, we propose a conceptualization of hyporheic zone metabolism in which increased organic carbon concentrations during surface water intrusion support heterotrophy, which succumbs to autotrophy under groundwater discharge. These results provide new opportunities to enhance microbially-explicit ecosystem models describing hyporheic zone biogeochemistry and its influence over riverine ecosystem function. © 2017 Society for Applied Microbiology and John Wiley & Sons Ltd.

  6. Multi-Scale Modeling of the Gamma Radiolysis of Nitrate Solutions.

    PubMed

    Horne, Gregory P; Donoclift, Thomas A; Sims, Howard E; Orr, Robin M; Pimblott, Simon M

    2016-11-17

    A multiscale modeling approach has been developed for the long-term radiolysis of aqueous systems over extended time scales. The approach uses a combination of stochastic track structure and track chemistry as well as deterministic homogeneous chemistry techniques and involves four key stages: radiation track structure simulation, the subsequent physicochemical processes, nonhomogeneous diffusion-reaction kinetic evolution, and homogeneous bulk chemistry modeling. The first three components model the physical and chemical evolution of an isolated radiation chemical track and provide radiolysis yields, within the extremely low dose isolated track paradigm, as the input parameters for a bulk deterministic chemistry model. This approach to radiation chemical modeling has been tested by comparison with the experimentally observed yield of nitrite from the gamma radiolysis of sodium nitrate solutions. This is a complex radiation chemical system which is strongly dependent on secondary reaction processes. The concentration of nitrite is not just dependent upon the evolution of radiation track chemistry and the scavenging of the hydrated electron and its precursors but also on the subsequent reactions of the products of these scavenging reactions with other water radiolysis products. Without the inclusion of intratrack chemistry, the deterministic component of the multiscale model is unable to correctly predict experimental data, highlighting the importance of intratrack radiation chemistry in the chemical evolution of the irradiated system.

  7. Probabilistic Design and Analysis Framework

    NASA Technical Reports Server (NTRS)

    Strack, William C.; Nagpal, Vinod K.

    2010-01-01

    PRODAF is a software package designed to aid analysts and designers in conducting probabilistic analysis of components and systems. PRODAF can integrate multiple analysis programs to ease the tedious process of conducting a complex analysis that requires the use of multiple software packages. The work uses a commercial finite element analysis (FEA) program with modules from NESSUS to conduct a probabilistic analysis of a hypothetical turbine blade, disk, and shaft model. PRODAF applies the response surface method at the component level and extrapolates the component-level responses to the system level. Hypothetical components of a gas turbine engine are first deterministically modeled using FEA. Variations in selected geometrical dimensions and loading conditions are analyzed to determine their effects on the stress state within each component. Geometric variations include the chord length and height for the blade, and the inner radius, outer radius, and thickness for the disk. Probabilistic analysis is carried out using developing software packages like System Uncertainty Analysis (SUA) and PRODAF. PRODAF was used with a commercial deterministic FEA program in conjunction with modules from the probabilistic analysis program, NESTEM, to perturb loads and geometries to provide a reliability and sensitivity analysis. PRODAF simplified the handling of data among the various programs involved, and will work with many commercial and open-source deterministic programs, probabilistic programs, or modules.
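
The response-surface step described above can be sketched as follows. This is a minimal illustration, not PRODAF itself: `fea_stress` is a hypothetical stand-in for an expensive FEA run, and the input distributions and stress limit are invented for the example.

```python
import numpy as np

# Hypothetical stand-in for an expensive FEA stress evaluation:
# peak stress as a function of blade chord length (m) and load factor.
def fea_stress(chord, load):
    return 200.0 + 50.0 * load / chord + 10.0 * load**2

# 1. Sample the "FEA" on a small design-of-experiments grid.
rng = np.random.default_rng(0)
chords = np.linspace(0.9, 1.1, 5)
loads = np.linspace(0.8, 1.2, 5)
X = np.array([(c, l) for c in chords for l in loads])
y = np.array([fea_stress(c, l) for c, l in X])

# 2. Fit a quadratic response surface (polynomial surrogate) by least squares.
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 1]**2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def surrogate(chord, load):
    return (coef[0] + coef[1] * chord + coef[2] * load +
            coef[3] * chord**2 + coef[4] * load**2 + coef[5] * chord * load)

# 3. Cheap Monte Carlo on the surrogate instead of on the FEA model:
#    probability that stress exceeds an (illustrative) allowable limit.
chord_s = rng.normal(1.0, 0.02, 100_000)
load_s = rng.normal(1.0, 0.05, 100_000)
p_fail = np.mean(surrogate(chord_s, load_s) > 275.0)
```

The surrogate is fitted once from a handful of deterministic runs; the hundred thousand Monte Carlo evaluations then cost only polynomial arithmetic.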

  8. Jitter and phase noise of ADPLL due to PSN with deterministic frequency

    NASA Astrophysics Data System (ADS)

    Deng, Xiaoying; Yang, Jun; Wu, Jianhui

    2011-09-01

    In this article, jitter and phase noise of all-digital phase-locked loop due to power supply noise (PSN) with deterministic frequency are analysed. It leads to the conclusion that jitter and phase noise heavily depend on the noise frequency. Compared with jitter, phase noise is much less affected by the deterministic PSN. Our method is utilised to study a CMOS ADPLL designed and simulated in SMIC 0.13 µm standard CMOS process. A comparison between the results obtained by our method and those obtained by simulation and measurement proves the accuracy of the predicted model. When the digital controlled oscillator was corrupted by PSN with 100 mVpk-pk, the measured jitters were 33.9 ps at the rate of fG = 192 MHz and 148.5 ps at the rate of fG = 40 MHz. However, the measured phase noise was exactly the same except for two impulses appearing at 192 and 40 MHz, respectively.

  9. Extended method of moments for deterministic analysis of stochastic multistable neurodynamical systems

    NASA Astrophysics Data System (ADS)

    Deco, Gustavo; Martí, Daniel

    2007-03-01

    The analysis of transitions in stochastic neurodynamical systems is essential to understand the computational principles that underlie those perceptual and cognitive processes involving multistable phenomena, like decision making and bistable perception. To investigate the role of noise in a multistable neurodynamical system described by coupled differential equations, one usually considers numerical simulations, which are time consuming because of the need for sufficiently many trials to capture the statistics of the influence of the fluctuations on that system. An alternative analytical approach involves the derivation of deterministic differential equations for the moments of the distribution of the activity of the neuronal populations. However, the application of the method of moments is restricted by the assumption that the distribution of the state variables of the system takes on a unimodal Gaussian shape. In this paper, we extend the classical method of moments to the case of a bimodal distribution of the state variables, such that a reduced system of deterministic coupled differential equations can be derived for the desired regime of multistability.
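
For orientation, the standard unimodal closure reads as follows for a one-dimensional Langevin equation (notation is mine, not the paper's): with mean \(\mu = \langle x\rangle\) and variance \(\nu\), a Gaussian ansatz closes the moment hierarchy. The paper's contribution replaces the single Gaussian with a two-Gaussian ansatz, yielding a larger but still deterministic moment system; that extension is not reproduced here.

```latex
\begin{aligned}
dx &= f(x)\,dt + \sigma\,dW(t),\\[2pt]
\frac{d\mu}{dt} &= \langle f(x)\rangle \;\approx\; f(\mu) + \tfrac{1}{2}\,f''(\mu)\,\nu,\\[2pt]
\frac{d\nu}{dt} &= 2\,\langle (x-\mu)\,f(x)\rangle + \sigma^{2} \;\approx\; 2\,f'(\mu)\,\nu + \sigma^{2}.
\end{aligned}
```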

  10. Nonclassical point of view of the Brownian motion generation via fractional deterministic model

    NASA Astrophysics Data System (ADS)

    Gilardi-Velázquez, H. E.; Campos-Cantón, E.

    In this paper, we present a dynamical system based on the Langevin equation without a stochastic term and using fractional derivatives that exhibits properties of Brownian motion, i.e. a deterministic model to generate Brownian motion is proposed. The stochastic process is replaced by considering an additional degree of freedom in the second-order Langevin equation. Thus, it is transformed into a system of three first-order linear differential equations; additionally, α-fractional derivatives are considered, which allow us to obtain better statistical properties. Switching surfaces are established as part of the fluctuating acceleration. The final system of three α-order linear differential equations does not contain a stochastic term, so the system generates motion in a deterministic way. Nevertheless, from time series analysis, we found that the behavior of the system exhibits statistical properties of Brownian motion, such as a linear growth in time of the mean square displacement and a Gaussian distribution of displacements. Furthermore, we use detrended fluctuation analysis to confirm the Brownian character of this motion.
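
The two statistical signatures of Brownian motion cited above (linear growth of the mean square displacement and Gaussian increments) can be checked numerically as sketched below. Since the fractional switched system itself is not specified in the abstract, an ordinary Gaussian random walk stands in for the candidate trajectory; the same checks would be applied to output of the deterministic model.

```python
import numpy as np

# Ensemble of trajectories; a plain random walk stands in for any
# candidate x(t) whose Brownian character is to be tested.
rng = np.random.default_rng(42)
n_steps, n_walkers = 2000, 500
steps = rng.normal(0.0, 1.0, (n_walkers, n_steps))
x = np.cumsum(steps, axis=1)

# Check 1: mean square displacement. Brownian motion gives MSD(t) ~ 2*D*t,
# so a linear fit of MSD against time should have a stable slope.
t = np.arange(1, n_steps + 1)
msd = np.mean(x**2, axis=0)
slope, intercept = np.polyfit(t, msd, 1)

# Check 2: increment distribution. For Brownian motion the increments are
# Gaussian, so the excess kurtosis should be close to zero.
increments = np.diff(x, axis=1).ravel()
excess_kurtosis = np.mean(increments**4) / np.mean(increments**2)**2 - 3.0
```

For the deterministic model these checks are nontrivial: the trajectory contains no random term, yet the fitted MSD slope and the near-zero excess kurtosis are what certify its Brownian statistics.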

  11. Delay compensation in integrated communication and control systems. I - Conceptual development and analysis

    NASA Technical Reports Server (NTRS)

    Luck, Rogelio; Ray, Asok

    1990-01-01

    A procedure for compensating for the effects of distributed network-induced delays in integrated communication and control systems (ICCS) is proposed. The problem of analyzing systems with time-varying and possibly stochastic delays could be circumvented by use of a deterministic observer which is designed to perform under certain restrictive but realistic assumptions. The proposed delay-compensation algorithm is based on a deterministic state estimator and a linear state-variable-feedback control law. The deterministic observer can be replaced by a stochastic observer without any structural modifications of the delay compensation algorithm. However, if a feedforward-feedback control law is chosen instead of the state-variable feedback control law, the observer must be modified as a conventional nondelayed system would be. Under these circumstances, the delay compensation algorithm would be accordingly changed. The separation principle of the classical Luenberger observer holds true for the proposed delay compensator. The algorithm is suitable for ICCS in advanced aircraft, spacecraft, manufacturing automation, and chemical process applications.
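
The observer-plus-state-feedback structure that the abstract relies on (and for which the separation principle holds) can be sketched in discrete time as below. This is a minimal Luenberger observer with hypothetical plant matrices and gains, not the ICCS-specific delay-compensation algorithm; the handling of network-induced delays is not reproduced.

```python
import numpy as np

# Hypothetical discrete-time plant (double-integrator-like) with
# state feedback computed from the observer estimate, not the true state.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.6], [0.9]])   # observer gain: eig(A - L C) = 0.7, 0.7
K = np.array([[2.0, 3.0]])     # feedback gain: eig(A - B K) inside unit circle

x = np.array([[1.0], [0.0]])   # true state (unknown to the controller)
xh = np.zeros((2, 1))          # observer estimate
for _ in range(200):
    u = -K @ xh                          # control law uses the estimate
    y = C @ x                            # measurement
    xh = A @ xh + B @ u + L @ (y - C @ xh)   # Luenberger observer update
    x = A @ x + B @ u                    # plant update

err = float(np.linalg.norm(x - xh))      # estimation error -> 0
```

By the separation principle, the observer error dynamics (governed by A - LC) and the regulation dynamics (governed by A - BK) can be designed independently; both decay here, so estimate and state converge together.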

  12. Analysis of deterministic swapping of photonic and atomic states through single-photon Raman interaction

    NASA Astrophysics Data System (ADS)

    Rosenblum, Serge; Borne, Adrien; Dayan, Barak

    2017-03-01

    The long-standing goal of deterministic quantum interactions between single photons and single atoms was recently realized in various experiments. Among these, an appealing demonstration relied on single-photon Raman interaction (SPRINT) in a three-level atom coupled to a single-mode waveguide. In essence, the interference-based process of SPRINT deterministically swaps the qubits encoded in a single photon and a single atom, without the need for additional control pulses. It can also be harnessed to construct passive entangling quantum gates, and can therefore form the basis for scalable quantum networks in which communication between the nodes is carried out only by single-photon pulses. Here we present an analytical and numerical study of SPRINT, characterizing its limitations and defining parameters for its optimal operation. Specifically, we study the effect of losses, imperfect polarization, and the presence of multiple excited states. In all cases we discuss strategies for restoring the operation of SPRINT.

  13. Space Radiation Transport Methods Development

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.

    2002-01-01

    Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary design concepts to the final design. In particular, we will discuss the progress towards a full three-dimensional and computationally efficient deterministic code for which the current HZETRN evaluates the lowest order asymptotic term. HZETRN is the first deterministic solution to the Boltzmann equation allowing field mapping within the International Space Station (ISS) in tens of minutes using standard Finite Element Method (FEM) geometry common to engineering design practice, enabling the development of integrated multidisciplinary design optimization methods. A single ray trace in the ISS FEM geometry requires 14 milliseconds, which severely limits the application of Monte Carlo methods to such engineering models. A potential means of improving the Monte Carlo efficiency in coupling to spacecraft geometry is given in terms of reconfigurable computing and could be utilized in the final design as verification of the deterministic method optimized design.

  14. Reports of Planetary Geology Program, 1982

    NASA Technical Reports Server (NTRS)

    Holt, H. E. (Compiler)

    1982-01-01

    Work conducted in the Planetary Geology program is summarized. The following categories are presented: outer solar system satellites; asteroids and comets; Venus; cratering processes and landform development; volcanic processes and landforms; eolian processes and landforms; fluvial processes and landform development; periglacial and permafrost processes; structure, tectonics and stratigraphy; remote sensing and regolith studies; geologic mapping, cartography and geodesy.

  15. Ecological Succession Pattern of Fungal Community in Soil along a Retreating Glacier

    PubMed Central

    Tian, Jianqing; Qiao, Yuchen; Wu, Bing; Chen, Huai; Li, Wei; Jiang, Na; Zhang, Xiaoling; Liu, Xingzhong

    2017-01-01

    Retreating glaciers, accelerated by global climate change, leave behind soil chronosequences of primary succession. Current knowledge of primary succession comes mainly from studies of vegetation dynamics, whereas information about belowground microbes remains limited. Here, we combined shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. We investigated fungal succession and community assembly via high-throughput sequencing along a well-established glacier forefront chronosequence that spans 2–188 years of deglaciation. Shannon diversity and evenness peaked at a distance of 370 m and declined afterwards. The response of fungal diversity to distance varied among phyla: Basidiomycota Shannon diversity significantly decreased with distance, while the pattern of Rozellomycota Shannon diversity was unimodal. The abundance of the most frequent OTU2 (Cryptococcus terricola) increased with successional distance, whereas that of OTU65 (Tolypocladium tundrense) decreased. Based on null deviation analyses, the composition of the fungal community was initially strongly governed by deterministic processes, with the deterministic influence weakening later in the succession. Our results revealed that distance, altitude, soil microbial biomass carbon, soil microbial biomass nitrogen and NH4+–N significantly correlated with fungal community composition along the chronosequence. These results suggest that the drivers of fungal community composition are dynamic along a glacier chronosequence, which may relate to fungal ecophysiological traits and adaptation in an evolving ecosystem. This information will help in understanding the mechanistic underpinnings of microbial community assembly during ecosystem succession across different scales and scenarios. PMID:28649234

  16. Ecological Succession Pattern of Fungal Community in Soil along a Retreating Glacier.

    PubMed

    Tian, Jianqing; Qiao, Yuchen; Wu, Bing; Chen, Huai; Li, Wei; Jiang, Na; Zhang, Xiaoling; Liu, Xingzhong

    2017-01-01

    Retreating glaciers, accelerated by global climate change, leave behind soil chronosequences of primary succession. Current knowledge of primary succession comes mainly from studies of vegetation dynamics, whereas information about belowground microbes remains limited. Here, we combined shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. We investigated fungal succession and community assembly via high-throughput sequencing along a well-established glacier forefront chronosequence that spans 2-188 years of deglaciation. Shannon diversity and evenness peaked at a distance of 370 m and declined afterwards. The response of fungal diversity to distance varied among phyla: Basidiomycota Shannon diversity significantly decreased with distance, while the pattern of Rozellomycota Shannon diversity was unimodal. The abundance of the most frequent OTU2 (Cryptococcus terricola) increased with successional distance, whereas that of OTU65 (Tolypocladium tundrense) decreased. Based on null deviation analyses, the composition of the fungal community was initially strongly governed by deterministic processes, with the deterministic influence weakening later in the succession. Our results revealed that distance, altitude, soil microbial biomass carbon, soil microbial biomass nitrogen and NH4+-N significantly correlated with fungal community composition along the chronosequence. These results suggest that the drivers of fungal community composition are dynamic along a glacier chronosequence, which may relate to fungal ecophysiological traits and adaptation in an evolving ecosystem. This information will help in understanding the mechanistic underpinnings of microbial community assembly during ecosystem succession across different scales and scenarios.

  17. Predicting Lg Coda Using Synthetic Seismograms and Media With Stochastic Heterogeneity

    NASA Astrophysics Data System (ADS)

    Tibuleac, I. M.; Stroujkova, A.; Bonner, J. L.; Mayeda, K.

    2005-12-01

    Recent examinations of the characteristics of coda-derived Sn and Lg spectra for yield estimation have shown that the spectral peak of Nevada Test Site (NTS) explosion spectra is depth-of-burial dependent, and that this peak is shifted to higher frequencies for Lop Nor explosions at the same depths. To confidently use coda-based yield formulas, we need to understand and predict coda spectral shape variations with depth, source media, velocity structure, topography, and geological heterogeneity. We present results of a coda modeling study to predict Lg coda. During the initial stages of this research, we have acquired and parameterized a deterministic 6 deg. x 6 deg. velocity and attenuation model centered on the Nevada Test Site. Near-source data are used to constrain density and attenuation profiles for the upper five km. The upper crust velocity profiles are quilted into a background velocity profile at depths greater than five km. The model is parameterized for use in a modified version of the Generalized Fourier Method in two dimensions (GFM2D). We modify this model to include stochastic heterogeneities of varying correlation lengths within the crust. Correlation length, Hurst number and fractional velocity perturbation of the heterogeneities are used to construct different realizations of the random media. We use nuclear explosion and earthquake cluster waveform analysis, as well as well log and geological information to constrain the stochastic parameters for a path between the NTS and the seismic stations near Mina, Nevada. Using multiple runs, we quantify the effects of variations in the stochastic parameters, of heterogeneity location in the crust and attenuation on coda amplitude and spectral characteristics. We calibrate these parameters by matching synthetic earthquake Lg coda envelopes to coda envelopes of local earthquakes with well-defined moments and mechanisms. 
We generate explosion synthetics for these calibrated deterministic and stochastic models. Secondary effects, including a compensated linear vector dipole source, are superposed on the synthetics in order to adequately characterize the Lg generation. We use this technique to characterize the effects of depth of burial on the coda spectral shapes.

  18. A deterministic global optimization using smooth diagonal auxiliary functions

    NASA Astrophysics Data System (ADS)

    Sergeyev, Yaroslav D.; Kvasov, Dmitri E.

    2015-04-01

    In many practical decision-making problems, the functions involved in the optimization process are black-box, with unknown analytical representations, and are hard to evaluate. In this paper, a global optimization problem is considered where both the goal function f(x) and its gradient f′(x) are black-box functions. It is supposed that f′(x) satisfies the Lipschitz condition over the search hyperinterval with an unknown Lipschitz constant K. A new deterministic 'Divide-the-Best' algorithm based on efficient diagonal partitions and smooth auxiliary functions is proposed in its basic version, its convergence conditions are studied, and numerical experiments executed on eight hundred test functions are presented.
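
The core idea, a Lipschitz lower bound refined at its current minimizer, can be sketched in one dimension. The paper's algorithm uses diagonal partitions and smooth gradient-based auxiliary functions in higher dimensions; `lipschitz_minimize` below is a simplified Piyavskii-Shubert-style illustration with a known Lipschitz constant, and the test function is an invented multimodal example.

```python
import numpy as np

def lipschitz_minimize(f, a, b, K, n_iter=60):
    """Saw-tooth lower-bound global minimization of a K-Lipschitz f on [a, b]."""
    xs, ys = [a, b], [f(a), f(b)]
    for _ in range(n_iter):
        order = np.argsort(xs)
        xs = [xs[i] for i in order]
        ys = [ys[i] for i in order]
        best_lb, best_x = np.inf, None
        # In each subinterval the piecewise-linear lower bound attains its
        # minimum where the two cones from the endpoints intersect.
        for x0, y0, x1, y1 in zip(xs, ys, xs[1:], ys[1:]):
            xm = 0.5 * (x0 + x1) + (y0 - y1) / (2.0 * K)
            lb = 0.5 * (y0 + y1) - 0.5 * K * (x1 - x0)
            if lb < best_lb:
                best_lb, best_x = lb, xm
        xs.append(best_x)       # evaluate f where the bound is lowest
        ys.append(f(best_x))
    i = int(np.argmin(ys))
    return xs[i], ys[i]

# Example: multimodal function on [0, 6] with |f'| <= 2, so K = 2 is valid.
f = lambda x: np.sin(x) + np.sin(3.0 * x) / 3.0
x_best, f_best = lipschitz_minimize(f, 0.0, 6.0, K=2.0)
```

Because each new evaluation lands at the global minimizer of the current lower bound, sampling concentrates automatically around the global minima rather than on a uniform grid.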

  19. Stochastic Community Assembly: Does It Matter in Microbial Ecology?

    PubMed

    Zhou, Jizhong; Ning, Daliang

    2017-12-01

    Understanding the mechanisms controlling community diversity, functions, succession, and biogeography is a central, but poorly understood, topic in ecology, particularly in microbial ecology. Although stochastic processes are believed to play nonnegligible roles in shaping community structure, their importance relative to deterministic processes is hotly debated. The importance of ecological stochasticity in shaping microbial community structure is far less appreciated. Some of the main reasons for this intense debate are the difficulty of defining stochasticity and the diversity of methods used for delineating it. Here, we provide a critical review and synthesis of data from the most recent studies on stochastic community assembly in microbial ecology. We then describe both stochastic and deterministic components embedded in various ecological processes, including selection, dispersal, diversification, and drift. We also describe different approaches for inferring stochasticity from observational diversity patterns and highlight experimental approaches for delineating ecological stochasticity in microbial communities. In addition, we highlight research challenges, gaps, and future directions for microbial community assembly research. Copyright © 2017 American Society for Microbiology.

  20. Optimization strategies based on sequential quadratic programming applied for a fermentation process for butanol production.

    PubMed

    Pinto Mariano, Adriano; Bastos Borba Costa, Caliane; de Franceschi de Angelis, Dejanira; Maugeri Filho, Francisco; Pires Atala, Daniel Ibraim; Wolf Maciel, Maria Regina; Maciel Filho, Rubens

    2009-11-01

    In this work, the mathematical optimization of a continuous flash fermentation process for the production of biobutanol was studied. The process consists of three interconnected units, as follows: fermentor, cell-retention system (tangential microfiltration), and vacuum flash vessel (responsible for the continuous recovery of butanol from the broth). The objective of the optimization was to maximize butanol productivity for a desired substrate conversion. Two strategies were compared for the optimization of the process. In one of them, the process was represented by a deterministic model with kinetic parameters determined experimentally and, in the other, by a statistical model obtained using the factorial design technique combined with simulation. For both strategies, the problem was written as a nonlinear programming problem and was solved with the sequential quadratic programming technique. Both strategies produced very similar solutions. However, the problems encountered with the deterministic-model strategy, such as lack of convergence and high computational time, make the statistical-model strategy, which proved robust and fast, more suitable for the flash fermentation process; it is therefore recommended for real-time applications coupling optimization and control.
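
The problem structure described (maximize productivity subject to a substrate-conversion constraint, solved by sequential quadratic programming) can be sketched with SciPy's SLSQP solver. The `productivity` and `conversion` models below are hypothetical toys in two decision variables, not the paper's fermentation kinetics or statistical model; only the NLP formulation is illustrated.

```python
import numpy as np
from scipy.optimize import minimize

# Toy process model: v = [dilution rate D (1/h), recycle ratio r].
def productivity(v):
    D, r = v
    return D * 8.0 * (1.0 - np.exp(-3.0 * r)) / (1.0 + D)

def conversion(v):
    D, r = v
    return 1.0 - D / (2.0 * (0.5 + r))   # fraction of substrate consumed

# Maximize productivity subject to a fixed 90% substrate conversion,
# using an SQP-type method (SciPy's SLSQP).
res = minimize(lambda v: -productivity(v), x0=[0.3, 0.5],
               method="SLSQP",
               bounds=[(0.05, 1.0), (0.0, 1.0)],
               constraints=[{"type": "eq",
                             "fun": lambda v: conversion(v) - 0.90}])
D_opt, r_opt = res.x
```

Whether the objective and constraint come from a mechanistic (deterministic) model or a fitted factorial-design surface, the NLP and the SQP solution step are identical; only the cost and reliability of each function evaluation differ, which is the paper's point of comparison.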

  1. Using Snow to Teach Geology.

    ERIC Educational Resources Information Center

    Roth, Charles

    1991-01-01

    A lesson plan, directed at middle school students and older, describes using snow to study the geological processes of solidification of molten material, sedimentation, and metamorphosis. Provides background information on these geological processes. (MCO)

  2. A Comparison of Deterministic and Stochastic Modeling Approaches for Biochemical Reaction Systems: On Fixed Points, Means, and Modes.

    PubMed

    Hahl, Sayuri K; Kremling, Andreas

    2016-01-01

    In the mathematical modeling of biochemical reactions, a convenient standard approach is to use ordinary differential equations (ODEs) that follow the law of mass action. However, this deterministic ansatz is based on simplifications; in particular, it neglects noise, which is inherent to biological processes. In contrast, the stochasticity of reactions is captured in detail by the discrete chemical master equation (CME). Therefore, the CME is frequently applied to mesoscopic systems, where copy numbers of involved components are small and random fluctuations are thus significant. Here, we compare those two common modeling approaches, aiming at identifying parallels and discrepancies between deterministic variables and possible stochastic counterparts like the mean or modes of the state space probability distribution. To that end, a mathematically flexible reaction scheme of autoregulatory gene expression is translated into the corresponding ODE and CME formulations. We show that in the thermodynamic limit, deterministic stable fixed points usually correspond well to the modes in the stationary probability distribution. However, this connection might be disrupted in small systems. The discrepancies are characterized and systematically traced back to the magnitude of the stoichiometric coefficients and to the presence of nonlinear reactions. These factors are found to synergistically promote large and highly asymmetric fluctuations. As a consequence, bistable but unimodal, and monostable but bimodal systems can emerge. This clearly challenges the role of ODE modeling in the description of cellular signaling and regulation, where some of the involved components usually occur in low copy numbers. Nevertheless, systems whose bimodality originates from deterministic bistability are found to sustain a more robust separation of the two states compared to bimodal, but monostable systems. 
In regulatory circuits that require precise coordination, ODE modeling is thus still expected to provide relevant indications on the underlying dynamics.
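
The "bistable but unimodal" case mentioned above can be demonstrated with a hypothetical positive-feedback birth-death scheme (a stand-in for the autoregulatory model, not the paper's exact reaction network): the rate law below gives three deterministic fixed points, yet the exact stationary CME solution, obtained from detailed balance, has a single mode at low copy number.

```python
import numpy as np

# Hypothetical birth rate with positive feedback; degradation rate is n.
def b(n):
    return 0.4 + 12.0 * n**2 / (36.0 + n**2)

# Deterministic picture: fixed points of dn/dt = b(n) - n.
grid = np.linspace(0.0, 20.0, 20001)
f = b(grid) - grid
n_fixed = int(np.sum(np.diff(np.sign(f)) != 0))   # 3 roots -> bistable

# Stochastic picture: exact stationary CME solution of the one-step
# birth-death chain via detailed balance, P(n+1)/P(n) = b(n)/(n+1).
N = 60
P = np.empty(N)
P[0] = 1.0
for n in range(N - 1):
    P[n + 1] = P[n] * b(n) / (n + 1)
P /= P.sum()

# Count local maxima (modes) of the stationary distribution.
modes = [n for n in range(N)
         if (n == 0 or P[n] >= P[n - 1]) and (n == N - 1 or P[n] > P[n + 1])]
n_modes = len(modes)
```

Here `n_fixed` is 3 (two stable ODE states) while `n_modes` is 1: because the birth/death ratio b(n)/(n+1) never exceeds one, the distribution decreases monotonically, so the deterministic bistability leaves no bimodal trace at these low copy numbers.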

  3. Deterministic reshaping of single-photon spectra using cross-phase modulation.

    PubMed

    Matsuda, Nobuyuki

    2016-03-01

    The frequency conversion of light has proved to be a crucial technology for communication, spectroscopy, imaging, and signal processing. In the quantum regime, it also offers great potential for realizing quantum networks incorporating disparate physical systems and quantum-enhanced information processing over a large computational space. The frequency conversion of quantum light, such as single photons, has been extensively investigated for the last two decades using all-optical frequency mixing, with the ultimate goal of realizing lossless and noiseless conversion. I demonstrate another route to this target using frequency conversion induced by cross-phase modulation in a dispersion-managed photonic crystal fiber. Owing to the deterministic and all-optical nature of the process, the lossless and low-noise spectral reshaping of a single-photon wave packet in the telecommunication band has been readily achieved with a modulation bandwidth as large as 0.4 THz. I further demonstrate that the scheme is applicable to manipulations of a nonclassical frequency correlation, wave packet interference, and entanglement between two photons. This approach presents a new coherent frequency interface for photons for quantum information processing.

  4. Deterministic reshaping of single-photon spectra using cross-phase modulation

    PubMed Central

    Matsuda, Nobuyuki

    2016-01-01

    The frequency conversion of light has proved to be a crucial technology for communication, spectroscopy, imaging, and signal processing. In the quantum regime, it also offers great potential for realizing quantum networks incorporating disparate physical systems and quantum-enhanced information processing over a large computational space. The frequency conversion of quantum light, such as single photons, has been extensively investigated for the last two decades using all-optical frequency mixing, with the ultimate goal of realizing lossless and noiseless conversion. I demonstrate another route to this target using frequency conversion induced by cross-phase modulation in a dispersion-managed photonic crystal fiber. Owing to the deterministic and all-optical nature of the process, the lossless and low-noise spectral reshaping of a single-photon wave packet in the telecommunication band has been readily achieved with a modulation bandwidth as large as 0.4 THz. I further demonstrate that the scheme is applicable to manipulations of a nonclassical frequency correlation, wave packet interference, and entanglement between two photons. This approach presents a new coherent frequency interface for photons for quantum information processing. PMID:27051862

  5. Hands-on-Entropy, Energy Balance with Biological Relevance

    NASA Astrophysics Data System (ADS)

    Reeves, Mark

    2015-03-01

    Entropy changes underlie the physics that dominates biological interactions. Indeed, introductory biology courses often begin with an exploration of the qualities of water that are important to living systems. However, one idea that is not explicitly addressed in most introductory physics or biology textbooks is the important contribution of entropy in driving fundamental biological processes towards equilibrium. From diffusion to cell-membrane formation, to electrostatic binding in protein folding, to the functioning of nerve cells, entropic effects often act to counterbalance deterministic forces such as electrostatic attraction and, in so doing, allow for effective molecular signaling. A small group of biology, biophysics and computer science faculty have worked together for the past five years to develop curricular modules (based on SCALE-UP pedagogy). This has enabled students to create models of stochastic and deterministic processes. Our students are first-year engineering and science students in the calculus-based physics course and they are not expected to know biology beyond the high-school level. In our class, they learn to reduce complex biological processes and structures in order to model them mathematically, accounting for both deterministic and probabilistic processes. The students test these models in simulations and in laboratory experiments that are biologically relevant such as diffusion, ionic transport, and ligand-receptor binding. Moreover, the students confront random forces and traditional forces in problems, simulations, and in laboratory exploration throughout the year-long course as they move from traditional kinematics through thermodynamics to electrostatic interactions. 
This talk will present a number of these exercises, with particular focus on the hands-on experiments done by the students, and will give examples of the tangible material that our students work with throughout the two-semester sequence of their course on introductory physics with a bio focus. Supported by NSF DUE.

  6. Evolution with Stochastic Fitness and Stochastic Migration

    PubMed Central

    Rice, Sean H.; Papadopoulos, Anthony

    2009-01-01

    Background Migration between local populations plays an important role in evolution, influencing local adaptation, speciation, extinction, and the maintenance of genetic variation. Like other evolutionary mechanisms, migration is a stochastic process, involving both random and deterministic elements. Many models of evolution have incorporated migration, but these have all been based on simplifying assumptions, such as low migration rate, weak selection, or large population size. We thus have no truly general and exact mathematical description of evolution that incorporates migration. Methodology/Principal Findings We derive an exact equation for directional evolution, essentially a stochastic Price equation with migration, that encompasses all processes, both deterministic and stochastic, contributing to directional change in an open population. Using this result, we show that increasing the variance in migration rates reduces the impact of migration relative to selection. This means that models that treat migration as a single parameter tend to be biased, overestimating the relative impact of immigration. We further show that selection and migration interact in complex ways, one result being that a strategy for which fitness is negatively correlated with migration rates (high fitness when migration is low) will tend to increase in frequency, even if it has lower mean fitness than do other strategies. Finally, we derive an equation for the effective migration rate, which allows some of the complex stochastic processes that we identify to be incorporated into models with a single migration parameter. Conclusions/Significance As has previously been shown with selection, the role of migration in evolution is determined by the entire distributions of immigration and emigration rates, not just by the mean values. 
The interactions of stochastic migration with stochastic selection produce evolutionary processes that are invisible to deterministic evolutionary theory. PMID:19816580

  7. Bayesian deterministic decision making: a normative account of the operant matching law and heavy-tailed reward history dependency of choices.

    PubMed

    Saito, Hiroshi; Katahira, Kentaro; Okanoya, Kazuo; Okada, Masato

    2014-01-01

    The decision making behaviors of humans and animals adapt and come to satisfy an "operant matching law" in certain types of tasks. This was first pointed out by Herrnstein in his foraging experiments on pigeons. The matching law has been a landmark for elucidating the underlying processes of decision making and learning in the brain. An interesting question is whether decisions are made deterministically or probabilistically. Conventional learning models of the matching law are based on the latter idea; they assume that subjects learn choice probabilities for the respective alternatives and decide stochastically according to those probabilities. However, it has been unknown whether the matching law can be accounted for by a deterministic strategy. To answer this question, we propose several deterministic Bayesian decision making models that hold certain incorrect beliefs about the environment. We show that a simple model produces behavior satisfying the matching law in static settings of a foraging task, but not in dynamic settings. We found that a model holding the belief that the environment is volatile works well in the dynamic foraging task and exhibits undermatching, a slight deviation from the matching law observed in many experiments. This model also demonstrates the double-exponential reward history dependency of choices and a heavier-tailed run-length distribution, as has recently been reported in experiments on monkeys.

  8. The concerted calculation of the BN-600 reactor for the deterministic and stochastic codes

    NASA Astrophysics Data System (ADS)

    Bogdanova, E. V.; Kuznetsov, A. N.

    2017-01-01

    The solution of the problem of increasing the safety of nuclear power plants implies the existence of complete and reliable information about the processes occurring in the core of a working reactor. Nowadays the Monte Carlo method is the most general-purpose method used to calculate the neutron-physical characteristics of a reactor, but it requires long computation times. Therefore, it may be useful to carry out coupled calculations with stochastic and deterministic codes. This article presents the results of research into the possibility of combining stochastic and deterministic algorithms in calculations of the BN-600 reactor. This is only one part of the work, which was carried out in the framework of the graduation project at the NRC “Kurchatov Institute” in cooperation with S. S. Gorodkov and M. A. Kalugin. It considers a 2-D layer of the BN-600 reactor core from the international benchmark test published in the report IAEA-TECDOC-1623. Calculations of the reactor were performed with the MCU code and then with a standard operative diffusion algorithm with constants taken from the Monte Carlo computation. Macroscopic cross-sections, diffusion coefficients, the effective multiplication factor and the distributions of neutron flux and power were obtained in 15 energy groups. Reasonable agreement between the stochastic and deterministic calculations of the BN-600 is observed.

  9. Solving deterministic non-linear programming problem using Hopfield artificial neural network and genetic programming techniques

    NASA Astrophysics Data System (ADS)

    Vasant, P.; Ganesan, T.; Elamvazuthi, I.

    2012-11-01

    In the past, fairly reasonable results were obtained for non-linear engineering problems using optimization techniques such as neural networks, genetic algorithms, and fuzzy logic independently. Increasingly, hybrid techniques are being used to solve non-linear problems to obtain better output. This paper discusses the use of a neuro-genetic hybrid technique to optimize geological structure mapping, known as seismic surveying. It involves the minimization of an objective function subject to geophysical and operational constraints. In this work, the optimization was initially performed using genetic programming, followed by the hybrid neuro-genetic programming approach. Comparative studies and analysis were then carried out on the optimized results. The results indicate that the hybrid neuro-genetic technique produced better results than the stand-alone genetic programming method.
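The paper does not give its seismic objective function, so the sketch below substitutes a stand-in non-linear function with a penalty-handled constraint, minimized by a simple real-coded genetic algorithm; the objective, bounds, and GA settings are all illustrative assumptions:

```python
import numpy as np

# Hedged sketch of constrained non-linear optimization by a genetic
# algorithm: minimize a stand-in objective f(x, y) subject to x + y <= 3,
# enforced with a quadratic penalty term.
rng = np.random.default_rng(1)

def fitness(pop):
    x, y = pop[:, 0], pop[:, 1]
    f = (x - 2) ** 2 + (y - 1) ** 2 + np.sin(3 * x)    # stand-in objective
    penalty = 100.0 * np.maximum(0.0, x + y - 3) ** 2  # constraint penalty
    return f + penalty

def run_ga(pop_size=60, n_gen=120, mut=0.1):
    pop = rng.uniform(-5, 5, size=(pop_size, 2))
    for _ in range(n_gen):
        order = np.argsort(fitness(pop))
        parents = pop[order[: pop_size // 2]]               # truncation selection
        kids = parents[rng.integers(0, len(parents), pop_size - len(parents))]
        kids = kids + rng.normal(0, mut, kids.shape)        # Gaussian mutation
        pop = np.vstack([parents, kids])
    return pop[np.argsort(fitness(pop))[0]]

best = run_ga()
print(best)
```

Keeping the best half of the population unchanged each generation gives implicit elitism, so the best cost never worsens; a neuro-genetic hybrid such as the paper's would add a neural-network stage rather than rely on mutation alone.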

  10. Modern Workflows for Fracture Rock Hydrogeology

    NASA Astrophysics Data System (ADS)

    Doe, T.

    2015-12-01

    Discrete Fracture Network (DFN) modeling is a numerical simulation approach that represents a conducting fracture network using geologically realistic geometries and single-conductor hydraulic and transport properties. In terms of diffusion analogues, equivalent porous media derive from heat conduction in continuous media, while DFN simulation is more similar to electrical flow and diffusion in circuits with discrete pathways. DFN modeling grew out of the pioneering work of David Snow in the late 1960s, with additional impetus in the 1970s from the development of stochastic approaches for describing fracture geometric and hydrologic properties. Research in underground test facilities for radioactive waste disposal developed the necessary linkages between characterization technologies and simulation, as well as bringing about a hybrid deterministic-stochastic approach. Over the past 40 years, DFN simulation and characterization methods have moved from the research environment into practical, commercial application. The key geologic, geophysical and hydrologic tools provide the required DFN inputs of conductive fracture intensity, orientation, and transmissivity. Flow logging, either using downhole tools or detailed packer testing, identifies the locations of conducting features in boreholes, and image logging provides information on the geology and geometry of the conducting features. Multi-zone monitoring systems isolate the individual conductors and, with perturbations from subsequent drilling and characterization, help to recognize connectivity and compartmentalization in the fracture network. Tracer tests and core analysis provide critical information on transport properties, especially matrix diffusion and unidentified conducting pathways. Well test analyses incorporating flow dimension and boundary effects provide further constraints on the conducting geometry of the fracture network.
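The stochastic description of fracture geometric and hydrologic properties mentioned above can be sketched as follows; the Poisson intensity, Pareto length exponent, and lognormal transmissivity parameters are illustrative assumptions, not site data:

```python
import numpy as np

# Hedged sketch of stochastic 2-D DFN generation: fracture centers from a
# Poisson process, power-law (Pareto) trace lengths, uniform strikes, and
# lognormal transmissivities.
rng = np.random.default_rng(2)

def generate_dfn(intensity=0.02, domain=100.0, lmin=2.0, alpha=2.5):
    n = rng.poisson(intensity * domain ** 2)        # expected ~200 fractures
    centers = rng.uniform(0, domain, size=(n, 2))
    # inverse-CDF sampling of a Pareto length distribution, pdf ~ l^(-alpha)
    lengths = lmin * (1 - rng.uniform(size=n)) ** (-1 / (alpha - 1))
    strikes = rng.uniform(0, np.pi, size=n)         # uniform orientations
    log10_T = rng.normal(-6.0, 1.0, size=n)         # lognormal transmissivity
    return centers, lengths, strikes, 10.0 ** log10_T

centers, lengths, strikes, T = generate_dfn()
print(len(centers), float(lengths.min()))
```

In a real workflow the intensity, length, orientation, and transmissivity distributions would be conditioned on the borehole flow-logging and image-logging data described in the abstract, with deterministic conductors fixed where observed.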

  11. Stochastic associative memory

    NASA Astrophysics Data System (ADS)

    Baumann, Erwin W.; Williams, David L.

    1993-08-01

    Artificial neural networks capable of learning and recalling stochastic associations between non-deterministic quantities have received relatively little attention to date. One potential application of such stochastic associative networks is the generation of sensory 'expectations' based on arbitrary subsets of sensor inputs to support anticipatory and investigative behavior in sensor-based robots. Another application of this type of associative memory is the prediction of how a scene will look in one spectral band, including noise, based upon its appearance in several other wavebands. This paper describes a semi-supervised neural network architecture composed of self-organizing maps associated through stochastic inter-layer connections. This 'Stochastic Associative Memory' (SAM) can learn and recall non-deterministic associations between multi-dimensional probability density functions. The stochastic nature of the network also enables it to represent noise distributions that are inherent in any true sensing process. The SAM architecture, training process, and initial application to sensor image prediction are described. Relationships to Fuzzy Associative Memory (FAM) are discussed.

  12. An Extended Deterministic Dendritic Cell Algorithm for Dynamic Job Shop Scheduling

    NASA Astrophysics Data System (ADS)

    Qiu, X. N.; Lau, H. Y. K.

    The problem of job shop scheduling in a dynamic environment, where random perturbations exist in the system, is studied. In this paper, an extended deterministic Dendritic Cell Algorithm (dDCA) is proposed to solve such a dynamic Job Shop Scheduling Problem (JSSP), in which unexpected events occur randomly. The algorithm is designed based on dDCA and makes improvements by considering all types of signals and the magnitude of the output values. To evaluate the algorithm, ten benchmark problems are chosen and different kinds of disturbances are injected randomly. The results show that the algorithm performs competitively: it is capable of triggering the rescheduling process optimally, with much less run time for deciding the rescheduling action. As such, the proposed algorithm is able to minimize the number of reschedulings under the defined objective and to keep the scheduling process stable and efficient.

  13. Intelligent Manufacturing of Commercial Optics Final Report CRADA No. TC-0313-92

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, J. S.; Pollicove, H.

    The project combined the research and development efforts of LLNL and the University of Rochester Center for Optics Manufacturing (COM) to develop a new generation of flexible, computer-controlled optics grinding machines. COM's principal near-term development effort is to commercialize the OPTICAM-SM, a new prototype spherical grinding machine. A crucial requirement for commercializing the OPTICAM-SM is the development of a predictable and repeatable material removal process (deterministic micro-grinding) that yields high-quality surfaces and minimizes non-deterministic polishing. OPTICAM machine tools and the fabrication process development studies are part of COM's response to the DOD (ARPA) request to implement a modernization strategy for revitalizing the U.S. optics manufacturing base.

  14. Deterministic realization of collective measurements via photonic quantum walks.

    PubMed

    Hou, Zhibo; Tang, Jun-Feng; Shang, Jiangwei; Zhu, Huangjun; Li, Jian; Yuan, Yuan; Wu, Kang-Da; Xiang, Guo-Yong; Li, Chuan-Feng; Guo, Guang-Can

    2018-04-12

    Collective measurements on identically prepared quantum systems can extract more information than local measurements, thereby enhancing information-processing efficiency. Although this nonclassical phenomenon has been known for two decades, it has remained a challenging task to demonstrate the advantage of collective measurements in experiments. Here, we introduce a general recipe for performing deterministic collective measurements on two identically prepared qubits based on quantum walks. Using photonic quantum walks, we experimentally realize an optimized collective measurement with fidelity 0.9946 without postselection. As an application, we achieve the highest tomographic efficiency in qubit state tomography to date. Our work offers an effective recipe for beating the precision limit of local measurements in quantum state tomography and metrology. In addition, our study opens an avenue for harnessing the power of collective measurements in quantum information processing and for exploring the intriguing physics behind this power.

  15. Data-driven gradient algorithm for high-precision quantum control

    NASA Astrophysics Data System (ADS)

    Wu, Re-Bing; Chu, Bing; Owens, David H.; Rabitz, Herschel

    2018-04-01

    In the quest to achieve scalable quantum information processing technologies, gradient-based optimal control algorithms (e.g., GRAPE) are broadly used for implementing high-precision quantum gates, but their performance is often hindered by deterministic or random errors in the system model and the control electronics. In this paper, we show that GRAPE can be taught to be more effective by jointly learning from the design model and the experimental data obtained from process tomography. The resulting data-driven gradient optimization algorithm (d-GRAPE) can in principle correct all deterministic gate errors, with a mild efficiency loss. The d-GRAPE algorithm may become more powerful with broadband controls that involve a large number of control parameters, while other algorithms usually slow down due to the increased size of the search space. These advantages are demonstrated by simulating the implementation of a two-qubit controlled-NOT gate.
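A minimal sketch of GRAPE-style gradient ascent on gate fidelity, assuming a single qubit with piecewise-constant sigma-x controls and an X-gate target; the d-GRAPE step of learning from tomography data is not modeled, and finite-difference gradients stand in for the analytic GRAPE gradient:

```python
import numpy as np

# Hedged sketch: optimize control amplitudes u_k so that the product of
# slice unitaries exp(-i * u_k * dt * sigma_x) implements an X gate.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
target = sx                                    # ideal X gate
dt, n_slices = 0.1, 20

def slice_unitary(theta):
    # exp(-i * theta * sigma_x) in closed form
    return np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * sx

def propagate(u):
    U = np.eye(2, dtype=complex)
    for uk in u:
        U = slice_unitary(uk * dt) @ U
    return U

def fidelity(u):
    # phase-insensitive gate fidelity |tr(target^dagger U)| / 2
    return abs(np.trace(target.conj().T @ propagate(u))) / 2

u = np.full(n_slices, 0.3)                     # initial control guess
for _ in range(200):
    grad = np.array([(fidelity(u + e) - fidelity(u)) / 1e-6
                     for e in np.eye(n_slices) * 1e-6])
    u = u + 2.0 * grad                         # fixed-step gradient ascent
print(round(fidelity(u), 4))
```

For this toy problem the optimum is any pulse with total area pi/2, so the ascent converges quickly; d-GRAPE's contribution is replacing the design-model fidelity with one estimated from process tomography of the real device.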

  16. The Stochastic Parcel Model: A deterministic parameterization of stochastically entraining convection

    DOE PAGES

    Romps, David M.

    2016-03-01

    Convective entrainment is a process that is poorly represented in existing convective parameterizations. By many estimates, convective entrainment is the leading source of error in global climate models. As a potential remedy, an Eulerian implementation of the Stochastic Parcel Model (SPM) is presented here as a convective parameterization that treats entrainment in a physically realistic and computationally efficient way. Drawing on evidence that convecting clouds comprise air parcels subject to Poisson-process entrainment events, the SPM calculates the deterministic limit of an infinite number of such parcels. For computational efficiency, the SPM groups parcels at each height by their purity, which is a measure of their total entrainment up to that height. This reduces the calculation of convective fluxes to a sequence of matrix multiplications. The SPM is implemented in a single-column model and compared with a large-eddy simulation of deep convection.

  17. Stochastic empirical loading and dilution model for analysis of flows, concentrations, and loads of highway runoff constituents

    USGS Publications Warehouse

    Granato, Gregory E.; Jones, Susan C.

    2014-01-01

    In cooperation with FHWA, the U.S. Geological Survey developed the stochastic empirical loading and dilution model (SELDM) to supersede the 1990 FHWA runoff quality model. The SELDM tool is designed to transform disparate and complex scientific data into meaningful information about the adverse risks of runoff on receiving waters, the potential need for mitigation measures, and the potential effectiveness of such measures for reducing such risks. The SELDM tool is easy to use because much of the information and data needed to run it are embedded in the model and obtained by defining the site location and five simple basin properties. Information and data from thousands of sites across the country were compiled to facilitate the use of the SELDM tool. A case study illustrates how to use the SELDM tool for conducting the types of sensitivity analyses needed to properly assess water quality risks. For example, the use of deterministic values to model upstream stormflows instead of representative variations in prestorm flow and runoff may substantially overestimate the proportion of highway runoff in downstream flows. Also, the risks for total phosphorus excursions are substantially affected by the selected criteria and the modeling methods used. For example, if a single deterministic concentration is used rather than a stochastic population of values to model upstream concentrations, then the percentage of water quality excursions in the downstream receiving waters may depend entirely on the selected upstream concentration.
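The closing point, that a single deterministic upstream concentration can badly misstate excursion risk compared with a stochastic population, can be illustrated with a toy mixing model; the criterion, distributions, and dilution range below are assumptions for illustration, not SELDM values:

```python
import numpy as np

# Hedged sketch: downstream total phosphorus as a flow-weighted mix of
# runoff and upstream water. Compare excursion percentages for fixed
# deterministic upstream concentrations vs. a stochastic population.
rng = np.random.default_rng(3)
n = 10_000
criterion = 0.10                                    # mg/L, assumed criterion
runoff_conc = rng.lognormal(np.log(0.25), 0.6, n)   # assumed runoff TP
dilution = rng.uniform(0.05, 0.5, n)                # runoff fraction of flow

def excursion_pct(upstream):
    downstream = dilution * runoff_conc + (1 - dilution) * upstream
    return 100.0 * np.mean(downstream > criterion)

stochastic_upstream = rng.lognormal(np.log(0.05), 0.8, n)
print(excursion_pct(np.full(n, 0.02)),    # low deterministic choice
      excursion_pct(np.full(n, 0.09)),    # high deterministic choice
      excursion_pct(stochastic_upstream)) # stochastic population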

  18. Identifying Flow Networks in a Karstified Aquifer by Application of the Cellular Automata-Based Deterministic Inversion Method (Lez Aquifer, France)

    NASA Astrophysics Data System (ADS)

    Fischer, P.; Jardani, A.; Wang, X.; Jourde, H.; Lecoq, N.

    2017-12-01

    The distributed modeling of flow paths within karstic and fractured fields remains a complex task because the hydraulic responses depend strongly on the relative locations between observational boreholes and the interconnected fractures and karstic conduits that control the main flow of the hydrosystem. The inverse problem in a distributed model is one alternative approach for interpreting hydraulic test data by mapping the karstic networks and fractured areas. In this work, we developed a Bayesian inversion approach, the Cellular Automata-based Deterministic Inversion (CADI) algorithm, to infer the spatial distribution of hydraulic properties in a structurally constrained model. This method distributes hydraulic properties along linear structures (i.e., flow conduits) and iteratively modifies the structural geometry of this conduit network to progressively match the observed hydraulic data to the modeled data. As a result, this method produces a conductivity model composed of a discrete conduit network embedded in the background matrix, capable of producing the same flow behavior as the investigated hydrologic system. The method is applied to invert a set of multiborehole hydraulic tests collected from a hydraulic tomography experiment conducted at the Terrieu field site in the Lez aquifer, Southern France. The emergent model shows high consistency with field observations of hydraulic connections between boreholes. Furthermore, it provides a geologically realistic pattern of flow conduits. This method is therefore of considerable value toward an enhanced distributed modeling of fractured and karstified aquifers.

  19. How Geoscience Novices Reason about Temporal Duration: The Role of Spatial Thinking and Large Numbers

    ERIC Educational Resources Information Center

    Cheek, Kim A.

    2013-01-01

    Research about geologic time conceptions generally focuses on the placement of events on the geologic timescale, with few studies dealing with the duration of geologic processes or events. Those studies indicate that students often have very poor conceptions about temporal durations of geologic processes, but the reasons for that are relatively…

  20. 30 CFR 251.11 - Submission, inspection, and selection of geological data and information collected under a permit...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... geological data and information collected under a permit and processed by permittees or third parties. 251.11... GEOLOGICAL AND GEOPHYSICAL (G&G) EXPLORATIONS OF THE OUTER CONTINENTAL SHELF § 251.11 Submission, inspection, and selection of geological data and information collected under a permit and processed by permittees...

  1. Beyond data collection in digital mapping: interpretation, sketching and thought process elements in geological map making

    NASA Astrophysics Data System (ADS)

    Watkins, Hannah; Bond, Clare; Butler, Rob

    2016-04-01

    Geological mapping techniques have advanced significantly in recent years, from paper fieldslips to Toughbook, smartphone and tablet mapping; but how do the methods used to create a geological map affect the thought processes that result in the final map interpretation? Geological maps have many key roles in the geosciences, including understanding geological processes and geometries in 3D, interpreting geological histories and understanding stratigraphic relationships in 2D and 3D. Here we consider the impact of the methods used to create a map on the thought processes that result in the final interpretation. As mapping technology has advanced, the way in which we produce geological maps has also changed. Traditional geological mapping is undertaken using paper fieldslips, pencils and compass clinometers. The map interpretation evolves through time as data are collected. This interpretive process is often supported by recording observations, ideas and alternative geological models in a field notebook, explored with the use of sketches and evolutionary diagrams. In combination, the field map and notebook can be used to challenge the map interpretation and consider its uncertainties. These uncertainties, and the balance of data to interpretation, are often lost in the creation of published 'fair copy' geological maps. The advent of Toughbooks, smartphones and tablets has changed the process of map creation. Digital data collection, particularly through the use of inbuilt gyrometers in phones and tablets, has turned smartphones into geological mapping tools that can be used to collect large amounts of geological data quickly. With GPS functionality this data is also geospatially located, assuming good GPS connectivity, and can be linked to georeferenced in-field photography. 
In contrast, line drawing, for example for lithological boundary interpretation and sketching, has yet to find the digital flow that is achieved with pencil on a notebook page or map. Free-form integrated sketching and notebook functionality in geological mapping software packages is in its infancy. Hence, the result is a tendency for digital geological mapping to focus on the ease of data collection rather than on the thoughts and careful observations that come from notebook sketching and interpreting boundaries on a map in the field. The final digital geological map can be assessed for when and where data were recorded, but the thought processes of the mapper are less easily assessed, and the use of observations and sketching to generate ideas and interpretations may be inhibited by reliance on digital mapping methods. All mapping methods have their own distinct advantages and disadvantages, and with more recent technologies both hardware and software issues have arisen. We present field examples of conventional fieldslip mapping and compare these with more advanced technologies to highlight the main advantages and disadvantages of each method, and discuss where geological mapping may be going in the future.

  2. Geological research for public outreach and education in Lithuania

    NASA Astrophysics Data System (ADS)

    Skridlaite, Grazina; Guobyte, Rimante

    2013-04-01

    Successful IYPE activities and the implementation of a Geoheritage day in Lithuania increased public awareness of geology. A series of projects introducing geology to the general public and youth, supported by EU funds and local communities, was initiated. Researchers from the scientific and applied geology institutions of Lithuania participated in these projects and provided the geological data. In one case, the Lithuanian Survey of Protected Areas supported the installation of a series of geological exhibitions in several regional and national parks. An animation demonstrating glacial processes was chosen for most of these because the Lithuanian surface is largely covered with sedimentary deposits of the Nemunas (Weichselian) glaciation. Researchers from the Lithuanian Geological Survey used the mapping results to demonstrate real glacial processes for each chosen area. In another case, 3D models showing underground structures of different localities were based on detailed geological maps and profiles obtained for the area. In the case of the Sartai regional park, the results of previous geological research projects made it possible to create a movie depicting the ca. 2 Ga geological evolution of the region. The movie starts with the accretion of volcanic island arcs onto the earlier continental margin at ca. 2 Ga and deciphers later Precambrian tectonic and magmatic events. The reconstruction is based on numerous scientific articles and interpretation of geophysical data. Later Paleozoic activity and subsequent erosion sculpted the surface, which was covered by several ice sheets during the Quaternary. For educational purposes, a collection of minerals and rocks at the Forestry Institute was used to create an exhibition called "Cycle of geological processes". Forestry scientists and their students are able to study the interactions of geodiversity and biodiversity and to understand ancient and modern geological processes leading to soil formation. 
An aging exposition at the Museum of Erratic Boulders in NW Lithuania is being rearranged for educational purposes, to show the major rock types and their origins more clearly. The new exhibition is supplemented with computer portals presenting geological processes, geological quizzes, animations, etc. Magmatism, metamorphism, sedimentation and other geological processes are demonstrated using erratic boulders brought by glaciers from Scandinavia and northern Russia. A part of the exhibition is devoted to glaciation processes and the arrival of ice sheets in Lithuania. Visitors are able to examine large erratic boulder groups in the surrounding park and to enjoy the beautiful environment. The exhibition also demonstrates the mineral resources of Lithuania, different fossils, and stones from the human body. In all cases it was recognised that a lack of geological information limits the use of geology for public outreach. Ongoing scientific research is essential in many places, as is a mediator's job of interpreting highly specialised research results and adapting them for public consumption.

  3. New figuring model based on surface slope profile for grazing-incidence reflective optics

    DOE PAGES

    Zhou, Lin; Huang, Lei; Bouet, Nathalie; ...

    2016-08-09

    Surface slope profile is widely used in the metrology of grazing-incidence reflective optics instead of surface height profile. Nevertheless, the theoretical and experimental model currently used in deterministic optical figuring processes is based on surface height, not on surface slope. This means that the raw slope profile data from metrology need to be converted to a height profile to perform the current height-based figuring processes. The inevitable measurement noise in the raw slope data will introduce significant cumulative error in the resultant height profiles. As a consequence, this conversion will degrade the determinism of the figuring processes and will have an impact on the ultimate surface figuring results. To overcome this problem, an innovative figuring model is proposed, which directly uses the raw slope profile data instead of the usual height data as input for the deterministic process. In this article, first the influence of the measurement noise on the resultant height profile is analyzed, then a new model is presented; finally, a demonstration experiment is carried out using a one-dimensional ion beam figuring process to demonstrate the validity of our approach.
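The cumulative-error argument above is easy to check numerically: integrating noise-only slope data produces a random-walk height error whose standard deviation grows like the square root of scan length. The units and noise level here are illustrative assumptions:

```python
import numpy as np

# Sketch: slope-to-height conversion of pure measurement noise.
# Height error accumulates as a random walk, so its std at the end of the
# scan is about twice its std at the quarter-scan point (sqrt(4) = 2).
rng = np.random.default_rng(4)
n_points, dx, slope_noise = 2000, 1e-3, 1e-6   # illustrative units (m, rad)

trials = rng.normal(0.0, slope_noise, size=(500, n_points))  # noise-only slopes
heights = np.cumsum(trials * dx, axis=1)                     # slope -> height

ratio = heights[:, -1].std() / heights[:, n_points // 4].std()
print(round(ratio, 2))
```

This is why a slope-domain figuring model avoids the degradation: the noise stays flat in the slope data and is never integrated into a growing height error.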

  4. Discrete Deterministic and Stochastic Petri Nets

    NASA Technical Reports Server (NTRS)

    Zijal, Robert; Ciardo, Gianfranco

    1996-01-01

    Petri nets augmented with timing specifications gained wide acceptance in the area of performance and reliability evaluation of complex systems exhibiting concurrency, synchronization, and conflicts. The state space of time-extended Petri nets is mapped onto its basic underlying stochastic process, which can be shown to be Markovian under the assumption of exponentially distributed firing times. The integration of exponentially and non-exponentially distributed timing is still one of the major problems for the analysis, and was first attacked for continuous-time Petri nets at the cost of structural or analytical restrictions. We propose a discrete deterministic and stochastic Petri net (DDSPN) formalism with no imposed structural or analytical restrictions, where transitions can fire either in zero time or according to arbitrary firing times that can be represented as the time to absorption in a finite absorbing discrete-time Markov chain (DTMC). Exponentially distributed firing times are then approximated arbitrarily well by geometric distributions. Deterministic firing times are a special case of the geometric distribution. The underlying stochastic process of a DDSPN is then also a DTMC, from which the transient and stationary solutions can be obtained by standard techniques. A comprehensive algorithm and some state space reduction techniques for the analysis of DDSPNs are presented, comprising the automatic detection of conflicts and confusions, which removes a major obstacle for the analysis of discrete-time models.
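The geometric approximation of exponential firing times noted above can be checked directly: with per-step success probability p = 1 - exp(-lambda*dt), the mean firing time dt/p approaches the exponential mean 1/lambda as the discretization step shrinks (rate value illustrative):

```python
import numpy as np

# Sketch: a geometric firing-time distribution with p = 1 - exp(-lam*dt)
# approximates an exponential with rate lam arbitrarily well as dt -> 0.
lam = 2.0  # firing rate; exponential mean is 1/lam = 0.5

def geometric_mean_time(dt):
    p = 1.0 - np.exp(-lam * dt)
    return dt / p          # mean step count of the geometric, times dt

for dt in (0.5, 0.1, 0.01):
    print(dt, round(geometric_mean_time(dt), 4))
```

The printed means decrease toward 0.5 as dt shrinks, illustrating why the DDSPN can embed exponential timing in a purely discrete-time (DTMC) framework.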

  5. Efficient analysis of stochastic gene dynamics in the non-adiabatic regime using piecewise deterministic Markov processes

    PubMed Central

    2018-01-01

    Single-cell experiments show that gene expression is stochastic and bursty, a feature that can emerge from slow switching between promoter states with different activities. In addition to slow chromatin and/or DNA looping dynamics, one source of long-lived promoter states is the slow binding and unbinding kinetics of transcription factors to promoters, i.e. the non-adiabatic binding regime. Here, we introduce a simple analytical framework, known as a piecewise deterministic Markov process (PDMP), that accurately describes the stochastic dynamics of gene expression in the non-adiabatic regime. We illustrate the utility of the PDMP on a non-trivial dynamical system by analysing the properties of a titration-based oscillator in the non-adiabatic limit. We first show how to transform the underlying chemical master equation into a PDMP where the slow transitions between promoter states are stochastic, but whose rates depend upon the faster deterministic dynamics of the transcription factors regulated by these promoters. We show that the PDMP accurately describes the observed periods of stochastic cycles in activator and repressor-based titration oscillators. We then generalize our PDMP analysis to more complicated versions of titration-based oscillators to explain how multiple binding sites lengthen the period and improve coherence. Last, we show how noise-induced oscillation previously observed in a titration-based oscillator arises from non-adiabatic and discrete binding events at the promoter site. PMID:29386401
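A minimal sketch of the PDMP idea, assuming a two-state promoter with illustrative rates: switching times are drawn stochastically, while the protein level follows its deterministic ODE solution between switches:

```python
import numpy as np

# Hedged sketch of a two-state PDMP for bursty gene expression: the
# promoter flips on/off at exponential times (stochastic part), while
# protein x obeys dx/dt = beta*s - gamma*x deterministically in between.
rng = np.random.default_rng(5)
beta, gamma = 10.0, 1.0        # synthesis and decay rates (illustrative)
k_on, k_off = 0.2, 0.2         # slow switching: the non-adiabatic regime

def simulate(t_end=500.0):
    t, x, s, xs = 0.0, 0.0, 0, []
    while t < t_end:
        rate = k_on if s == 0 else k_off
        tau = rng.exponential(1.0 / rate)            # time to next switch
        x_inf = beta * s / gamma                     # fixed point for state s
        x = x_inf + (x - x_inf) * np.exp(-gamma * tau)  # exact ODE solution
        xs.append(x)
        s = 1 - s                                    # promoter flips
        t += tau
    return np.array(xs)

x = simulate()
print(round(float(x.mean()), 2))
```

Because the dwell times here are long compared with 1/gamma, the sampled levels cluster near 0 and near beta/gamma, the bimodality characteristic of non-adiabatic switching that a fast-switching (adiabatic) approximation would miss.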

  6. Effect of quantum noise on deterministic remote state preparation of an arbitrary two-particle state via various quantum entangled channels

    NASA Astrophysics Data System (ADS)

    Qu, Zhiguo; Wu, Shengyao; Wang, Mingming; Sun, Le; Wang, Xiaojun

    2017-12-01

    As one of the important research branches of quantum communication, deterministic remote state preparation (DRSP) plays a significant role in quantum networks. Quantum noises are prevalent in quantum communication, and they can seriously affect the safety and reliability of a quantum communication system. In this paper, we study the effect of quantum noise on deterministic remote state preparation of an arbitrary two-particle state via different quantum channels including the χ state, Brown state and GHZ state. Firstly, the output states and fidelities of three DRSP algorithms via different quantum entangled channels in four noisy environments, including amplitude-damping, phase-damping, bit-flip and depolarizing noise, are presented, respectively. Then, the effects of the noises on the three kinds of preparation algorithms in the same noisy environment are discussed. Finally, the theoretical analysis proves that the effect of noise in the process of quantum state preparation is related only to the noise type and the magnitude of the noise factor, and is independent of the choice of entangled quantum channel. Furthermore, another important conclusion is that the effect of noise is also independent of how the intermediate particles are distributed for implementing DRSP through quantum measurement during the concrete preparation process. These conclusions will be very helpful for improving the efficiency and safety of quantum communication in a noisy environment.
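    The fidelity calculations referenced above can be illustrated on a much smaller example: a single qubit in the |+⟩ state passed through an amplitude-damping channel via its Kraus operators. This is only a one-qubit sketch, not the paper's two-particle channels, and the damping factor is a free parameter.

```python
import math

def mat_mul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_T(A):
    """2x2 transpose (entries are real here)."""
    return [[A[j][i] for i in range(2)] for j in range(2)]

def amplitude_damping_fidelity(eta):
    """Fidelity of |+> through a single-qubit amplitude-damping channel
    with damping factor eta (single-qubit illustration only)."""
    K0 = [[1.0, 0.0], [0.0, math.sqrt(1.0 - eta)]]   # Kraus operators
    K1 = [[0.0, math.sqrt(eta)], [0.0, 0.0]]
    a = 1.0 / math.sqrt(2.0)
    rho = [[a * a, a * a], [a * a, a * a]]           # |+><+|
    out = [[0.0, 0.0], [0.0, 0.0]]
    for K in (K0, K1):                               # rho' = sum_k K rho K^T
        term = mat_mul(mat_mul(K, rho), mat_T(K))
        out = [[out[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    # F = <+|rho'|+> = (sum of all entries) / 2
    return sum(out[i][j] for i in range(2) for j in range(2)) / 2.0
```

For this channel the closed form is F = (1 + sqrt(1 - eta)) / 2, so fidelity degrades smoothly from 1 (no noise) to 1/2 (full damping), mirroring the noise-factor dependence the abstract describes.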

  7. Probabilistic Evaluation of Advanced Ceramic Matrix Composite Structures

    NASA Technical Reports Server (NTRS)

    Abumeri, Galib H.; Chamis, Christos C.

    2003-01-01

    The objective of this report is to summarize the deterministic and probabilistic structural evaluation results for two structures made with advanced ceramic matrix composites (CMC): an internally pressurized tube and a uniformly loaded flange. The deterministic structural evaluation includes stress, displacement, and buckling analyses. It is carried out using the finite element code MHOST, developed for the 3-D inelastic analysis of structures that are made with advanced materials. The probabilistic evaluation is performed using the integrated probabilistic assessment of composite structures computer code IPACS. The effects of uncertainties in primitive variables related to the material, fabrication process, and loadings on the material properties and structural response behavior are quantified. The primitive variables considered are: thermo-mechanical properties of fiber and matrix, fiber and void volume ratios, use temperature, and pressure. The probabilistic structural analysis and probabilistic strength results are used by IPACS to perform reliability and risk evaluation of the two structures. The results show that the sensitivity information obtained for the two composite structures from the computational simulation can be used to alter the design process to meet desired service requirements. In addition to the detailed probabilistic analysis of the two structures, the following were performed specifically on the CMC tube: (1) prediction of the failure load and the buckling load, (2) coupled non-deterministic multi-disciplinary structural analysis, and (3) demonstration that probabilistic sensitivities can be used to select a reduced set of design variables for optimization.

  8. Seismic Hazard Analysis — Quo vadis?

    NASA Astrophysics Data System (ADS)

    Klügel, Jens-Uwe

    2008-05-01

    The paper is dedicated to the review of methods of seismic hazard analysis currently in use, analyzing the strengths and weaknesses of different approaches. The review is performed from the perspective of a user of the results of seismic hazard analysis for different applications, such as the design of critical and general (non-critical) civil infrastructures and technical and financial risk analysis. A set of criteria is developed for, and applied to, an objective assessment of the capabilities of different analysis methods. It is demonstrated that traditional probabilistic seismic hazard analysis (PSHA) methods have significant deficiencies that limit their practical application. These deficiencies have their roots in the use of inadequate probabilistic models and an insufficient understanding of modern concepts of risk analysis, as has been revealed in some recent large-scale studies. They result in an inability to treat dependencies between physical parameters correctly and, finally, in an incorrect treatment of uncertainties. As a consequence, results of PSHA studies have been found to be unrealistic in comparison with empirical information from the real world. The attempt to compensate for these problems by systematic use of expert elicitation has, so far, not improved the situation. It is also shown that scenario earthquakes developed by disaggregation from the results of a traditional PSHA may not be conservative with respect to energy conservation and should not be used for the design of critical infrastructures without validation. Because the assessment of both technical and financial risks associated with potential earthquake damage requires a risk analysis, current methods are based on a probabilistic approach that carries these unsolved deficiencies.
Traditional deterministic or scenario-based seismic hazard analysis methods provide a reliable and generally robust design basis for applications such as the design of critical infrastructures, especially when combined with systematic sensitivity analyses based on validated phenomenological models. Deterministic seismic hazard analysis incorporates uncertainties in safety factors derived from experience as well as from expert judgment. Deterministic methods associated with high safety factors may lead to overly conservative results, especially when applied to generally short-lived civil structures. Scenarios used in deterministic seismic hazard analysis have a clear physical basis: they are related to seismic sources discovered by geological, geomorphological, geodetic and seismological investigations or derived from historical references. Scenario-based methods can be expanded for risk analysis applications with an extended data analysis providing the frequency of seismic events. Such an extension provides a better-informed risk model that is suitable for risk-informed decision making.

  9. A surface wave reflector in Southwestern Japan

    NASA Astrophysics Data System (ADS)

    Mak, S.; Koketsu, K.; Miyake, H.; Obara, K.; Sekine, S.

    2009-12-01

    Surface waves at short periods (< 35 s) are severely affected by heterogeneities in the crust and the uppermost mantle. When the scale of heterogeneity is sufficiently large, its effect can be studied in a deterministic way using conventional concepts of reflection and refraction. A well-known example is surface wave refraction at continental margins. We present a case study that investigates the composition of surface wave coda with a deterministic approach. A long duration of surface wave coda with a predominant period of 20 s is observed during various strong earthquakes around Japan. The coda shows an unambiguous propagation direction, implying a deterministic nature. Beamforming and particle motion analysis suggest that the surface wave later arrivals could be explained by Love wave reflections from a point reflector located offshore southeast of Kyushu. The reflection demonstrates a seemingly incidence-independent favorable azimuth in emitting strength. In addition to beamforming, we use a new regional crustal velocity model to perform a grid-search ray tracing, under the assumption of a point reflector, to further constrain the location of coda generation. Because strong velocity anomalies exist near the zone of interest, we use a network shortest-path ray-tracing method, instead of analytical methods such as shooting and bending, to avoid problems of convergence, shadow zones, and smooth-model assumptions. Two geological features are found to be related to the formation of the coda. The primary one is the intersection between the Kyushu-Palau Ridge and the Nankai Trough offshore southeast of Kyushu (hereafter referred to as "KPR-NT"), which may act as a point reflector. There is a strong Love wave phase velocity anomaly at KPR-NT but not at other parts of the ridge, implying that topography is irrelevant.
Rayleigh wave phase velocity does not show a strong anomaly there, which is consistent with the absence of Rayleigh wave reflections implied by the observed particle motions. The secondary feature is a low phase velocity (< 2 km/s for T = 20 s) at the accretionary wedge of the Nankai Trough due to the thick sediment. Such a long and narrow low-velocity zone, with its southwest tip at KPR-NT, is a potential waveguide to channel waves towards KPR-NT. The longer duration of the deterministic later arrivals relative to the direct arrival is partially explained by multi-pathing due to the waveguide. The surface wave coda is observable for earthquakes whose propagation path does not include the accretionary wedge, implying that the wedge enhances, but is not indispensable to, the formation of the observed coda.
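    The delay-and-sum beamforming used in this study to recover the coda's propagation direction can be sketched as follows. The array geometry, slowness, and synthetic 20-s plane wave below are hypothetical stand-ins for the real network data, and only the azimuth is grid-searched (the horizontal slowness is assumed known).

```python
import math

def backazimuth_by_beamforming(stations, waveforms, dt, slowness, n_az=360):
    """Delay-and-sum beamforming sketch over a 1-degree azimuth grid.
    stations: (x, y) positions in km; slowness: horizontal slowness in s/km.
    Returns the azimuth (degrees) maximizing stacked beam power."""
    n = len(waveforms[0])
    best_az, best_power = 0.0, -1.0
    for k in range(n_az):
        az = 2.0 * math.pi * k / n_az
        ux, uy = slowness * math.sin(az), slowness * math.cos(az)
        stack = [0.0] * n
        for (x, y), w in zip(stations, waveforms):
            shift = int(round((ux * x + uy * y) / dt))  # delay in samples
            for i in range(n):
                if 0 <= i + shift < n:
                    stack[i] += w[i + shift]
        power = sum(s * s for s in stack)
        if power > best_power:
            best_az, best_power = float(k) * 360.0 / n_az, power
    return best_az

# synthetic 20-s-period plane wave arriving from azimuth 90 degrees
dt, slowness = 0.5, 0.25
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
ux = slowness * math.sin(math.radians(90.0))
uy = slowness * math.cos(math.radians(90.0))
waveforms = [[math.sin(2.0 * math.pi * (i * dt - (ux * x + uy * y)) / 20.0)
              for i in range(400)] for (x, y) in stations]
az_est = backazimuth_by_beamforming(stations, waveforms, dt, slowness)
```

A resolvable backazimuth requires the array aperture to be a sizeable fraction of the wavelength, which is why long-period coda studies rely on regional-scale networks.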

  10. Control of Finite-State, Finite Memory Stochastic Systems

    NASA Technical Reports Server (NTRS)

    Sandell, Nils R.

    1974-01-01

    A generalized problem of stochastic control is discussed in which multiple controllers with different data bases are present. The vehicle for the investigation is the finite-state, finite-memory (FSFM) stochastic control problem. Optimality conditions are obtained by deriving an equivalent deterministic optimal control problem. A FSFM minimum principle is obtained via the equivalent deterministic problem. The minimum principle suggests the development of a numerical optimization algorithm, the min-H algorithm. The relationship between the sufficiency of the minimum principle and the informational properties of the problem is investigated. A problem of hypothesis testing with 1-bit memory is investigated to illustrate the application of control-theoretic techniques to information processing problems.

  11. Deterministic Impulsive Vacuum Foundations for Quantum-Mechanical Wavefunctions

    NASA Astrophysics Data System (ADS)

    Valentine, John S.

    2013-09-01

    By assuming that a fermion de-constitutes immediately at its source, that its constituents, as bosons, propagate uniformly as scalar vacuum terms with phase (radial) symmetry, and that fermions are unique solutions for specific phase conditions, we find a model that self-quantizes matter from continuous waves, unifying boson and fermion ontologies in a single basis, in a constitution-invariant process. Vacuum energy has a wavefunction context, as a mass-energy term that enables wave collapse and increases its amplitude, with the gravitational field as the gradient of the flux density. Gravitational and charge-based force effects emerge as statistics without special treatment. Confinement, entanglement, vacuum statistics, forces, and wavefunction terms emerge from the model's deterministic foundations.

  12. An Overview of the National Weather Service National Water Model

    NASA Astrophysics Data System (ADS)

    Cosgrove, B.; Gochis, D.; Clark, E. P.; Cui, Z.; Dugger, A. L.; Feng, X.; Karsten, L. R.; Khan, S.; Kitzmiller, D.; Lee, H. S.; Liu, Y.; McCreight, J. L.; Newman, A. J.; Oubeidillah, A.; Pan, L.; Pham, C.; Salas, F.; Sampson, K. M.; Sood, G.; Wood, A.; Yates, D. N.; Yu, W.

    2016-12-01

    The National Weather Service (NWS) Office of Water Prediction (OWP), in conjunction with the National Center for Atmospheric Research (NCAR) and the NWS National Centers for Environmental Prediction (NCEP), recently implemented version 1.0 of the National Water Model (NWM) into operations. This model is an hourly cycling, uncoupled analysis and forecast system that provides streamflow for 2.7 million river reaches and other hydrologic information on 1-km and 250-m grids. It will provide complementary hydrologic guidance at current NWS river forecast locations and significantly expand guidance coverage and type in underserved locations. The core of this system is the NCAR-supported community Weather Research and Forecasting (WRF)-Hydro hydrologic model. It ingests forcing from a variety of sources including Multi-Radar Multi-Sensor (MRMS) radar-gauge observed precipitation data and High-Resolution Rapid Refresh (HRRR), Rapid Refresh (RAP), Global Forecast System (GFS) and Climate Forecast System (CFS) forecast data. WRF-Hydro is configured to use the Noah-Multi-Parameterization (Noah-MP) Land Surface Model (LSM) to simulate land surface processes. Separate water routing modules perform diffusive wave surface routing and saturated subsurface flow routing on a 250-m grid, and Muskingum-Cunge channel routing down National Hydrography Dataset Plus V2 (NHDPlusV2) stream reaches. River analyses and forecasts are provided across a domain encompassing the Continental United States (CONUS) and hydrologically contributing areas, while land surface output is available on a larger domain that extends beyond the CONUS into Canada and Mexico (roughly from latitude 19N to 58N). The system includes an analysis and assimilation configuration along with three forecast configurations: a short-range 15-hour deterministic forecast, a medium-range 10-day deterministic forecast and a long-range 30-day 16-member ensemble forecast.
United States Geological Survey (USGS) streamflow observations are assimilated into the analysis and assimilation configuration, and all four configurations benefit from the inclusion of 1,260 reservoirs. An overview of the National Water Model will be given, along with information on ongoing evaluation activities and plans for future NWM enhancements.

  13. Deterministic estimation of hydrological thresholds for shallow landslide initiation and slope stability models: case study from the Somma-Vesuvius area of southern Italy

    USGS Publications Warehouse

    Baum, Rex L.; Godt, Jonathan W.; De Vita, P.; Napolitano, E.

    2012-01-01

    Rainfall-induced debris flows involving ash-fall pyroclastic deposits that cover steep mountain slopes surrounding the Somma-Vesuvius volcano are natural events and a source of risk for urban settlements located at the footslopes in the area. This paper describes experimental methods and modelling results for shallow landslides that occurred on 5–6 May 1998 in selected areas of the Sarno Mountain Range. Stratigraphical surveys carried out in initiation areas show that ash-fall pyroclastic deposits are discontinuously distributed along slopes, with total thicknesses that vary from a maximum value on slopes inclined less than 30° to near-zero thickness on slopes inclined greater than 50°. This distribution of cover thickness influences the stratigraphical setting and leads to downward thinning and the pinching out of pyroclastic horizons. Three engineering geological settings were identified in which most of the initial landslides that triggered the May 1998 debris flows occurred; these can be classified as (1) knickpoints, characterised by a downward progressive thinning of the pyroclastic mantle; (2) rocky scarps that abruptly interrupt the pyroclastic mantle; and (3) road cuts in the pyroclastic mantle that occur in a critical range of slope angle. Detailed topographic and stratigraphical surveys coupled with field and laboratory tests were conducted to define the geometric, hydraulic and mechanical features of pyroclastic soil horizons in the source areas and to carry out hydrological numerical modelling of hillslopes under different rainfall conditions. The slope stability for three representative cases was calculated considering the real sliding surface of the initial landslides and the pore pressures during the infiltration process.
The hydrological modelling of hillslopes demonstrated localised increase of pore pressure, up to saturation, where pyroclastic horizons with higher hydraulic conductivity pinch out and the thickness of pyroclastic mantle reduces or is interrupted. These results lead to the identification of a comprehensive hydrogeomorphological model of susceptibility to initial landslides that links morphological, stratigraphical and hydrological conditions. The calculation of intensities and durations of rainfall necessary for slope instability allowed the identification of deterministic hydrological thresholds that account for uncertainty in properties and observed rainfall intensities.
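    The coupling of pore pressure to slope stability described above is conventionally captured by an infinite-slope factor of safety. The sketch below uses that standard formula with entirely hypothetical parameter values (they are not the Somma-Vesuvius field data), showing how a rise in pore-water pressure alone can push a marginally stable slope below the failure threshold FS = 1.

```python
import math

def factor_of_safety(c, phi_deg, gamma, z, beta_deg, u):
    """Infinite-slope factor of safety with pore-water pressure u (kPa).
    c: effective cohesion (kPa), phi_deg: friction angle (deg),
    gamma: unit weight (kN/m^3), z: slip-surface depth (m),
    beta_deg: slope angle (deg). All values below are illustrative."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    resisting = c + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# same slope, dry vs. elevated pore pressure during infiltration
dry = factor_of_safety(5.0, 35.0, 13.0, 2.0, 40.0, 0.0)
wet = factor_of_safety(5.0, 35.0, 13.0, 2.0, 40.0, 12.0)
```

Sweeping u (or the rainfall intensity-duration pairs that produce it) until FS falls to 1 is one simple way to extract the kind of deterministic hydrological threshold the abstract describes.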

  14. Groundwater in geologic processes, 2nd edition

    USGS Publications Warehouse

    Ingebritsen, Steven E.; Sanford, Ward E.; Neuzil, Christopher E.

    2006-01-01

    Interest in the role of groundwater in geologic processes has increased steadily over the past few decades. Hydrogeologists and geologists are now actively exploring the role of groundwater and other subsurface fluids in such fundamental geologic processes as crustal heat transfer, ore deposition, hydrocarbon migration, earthquakes, tectonic deformation, diagenesis, and metamorphism. Groundwater in Geologic Processes is the first comprehensive treatment of this body of inquiry. Chapters 1 to 4 develop the basic theories of groundwater motion, hydromechanics, solute transport, and heat transport. Chapter 5 applies these theories to regional groundwater flow systems in a generic sense, and Chapters 6 to 13 focus on particular geologic processes and environments. Relative to the first edition of Groundwater in Geologic Processes, this second edition includes a much more comprehensive treatment of hydromechanics (the coupling of groundwater flow and deformation). It also includes new chapters on "compaction and diagenesis," "metamorphism," and "subsea hydrogeology." Finally, it takes advantage of the substantial body of published research that has appeared since the first edition in 1998. The systematic presentation of theory and application, and the problem sets that conclude each chapter, make this book ideal for undergraduate- and graduate-level geology courses (assuming that the students have some background in calculus and introductory chemistry). It also serves as an invaluable reference for researchers and other professionals in the field.

  15. Assessing Seismic Hazards - Algorithms, Maps, and Emergency Scenarios

    NASA Astrophysics Data System (ADS)

    Ferriz, H.

    2007-05-01

    Public officials in charge of building codes, land use planning, and emergency response need sound estimates of seismic hazards. Sources may be well defined (e.g., active faults that have a surface trace) or diffuse (e.g., a subduction zone or a blind-thrust belt), but in both cases one can use a deterministic or worst-case scenario approach. For each scenario, a design earthquake is selected based on historic data or the known length of Holocene ruptures (as determined by geologic mapping). Horizontal ground accelerations (HGAs) can then be estimated at different distances from the earthquake epicenter using published attenuation relations (e.g., Seismological Res. Letters, v. 68, 1997) and estimates of the elastic properties of the substrate materials. No good algorithms are available to take into account reflection of elastic waves across other fault planes (e.g., a common effect in California, where there are many strands of the San Andreas fault), or amplification of waves in water-saturated alluvial and lacustrine basins (e.g., the Mexico City basin), but empirical relations can be developed by correlating historic damage patterns with predicted HGAs. The ultimate result is a map of HGAs. With this map, and with additional data on depth to groundwater and geotechnical properties of local soils, a liquefaction susceptibility map can be prepared, using published algorithms (e.g., J. of Geotech. Geoenv. Eng., v. 127, p. 817-833, 2001; Eng. Geology Practice in N. California, p. 579-594, 2001). Finally, the HGA estimates, digital elevation models, geologic structural data, and geotechnical properties of local geologic units can be used to prepare a slope failure susceptibility map (e.g., Eng. Geology Practice in N. California, p. 77-94, 2001). Seismic hazard maps are used by: (1) Building officials to determine areas of the city where special construction codes have to be implemented, and where existing buildings may need to be retrofitted. 
(2) Planning officials to evaluate plans for new growth (though in most cities land use patterns are historically established). (3) Emergency response officials to plan emergency operations. (4) Insurance commissioners to estimate losses and insurance claims (e.g., with FEMA's software HAZUS).
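    The attenuation step described above (estimating horizontal ground acceleration as a function of magnitude and distance) can be sketched with a generic regression form. The coefficients below are illustrative placeholders only; real hazard work uses published, region-specific relations such as those in the cited Seismological Research Letters issue.

```python
import math

def peak_ground_accel(magnitude, r_km, a=-1.0, b=0.5, c=1.0, h=10.0):
    """Generic attenuation form: ln(PGA) = a + b*M - c*ln(sqrt(R^2 + h^2)).
    a, b, c, h are hypothetical coefficients, not a published regression;
    h folds focal depth into an effective distance."""
    r_eff = math.sqrt(r_km ** 2 + h ** 2)   # effective source-site distance
    return math.exp(a + b * magnitude - c * math.log(r_eff))

# predicted PGA grows with magnitude and decays with distance
near = peak_ground_accel(7.0, 5.0)
far = peak_ground_accel(7.0, 100.0)
```

Evaluating such a relation on a grid of sites around the design earthquake is what produces the HGA map the abstract builds its liquefaction and slope-failure products on.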

  16. Generalization of Faustmann's Formula for Stochastic Forest Growth and Prices with Markov Decision Process Models

    Treesearch

    Joseph Buongiorno

    2001-01-01

    Faustmann's formula gives the land value, or the forest value of land with trees, under deterministic assumptions regarding future stand growth and prices, over an infinite horizon. Markov decision process (MDP) models generalize Faustmann's approach by recognizing that future stand states and prices are known only as probabilistic distributions. The...
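    The deterministic Faustmann land value that the MDP models generalize can be sketched directly. The yield curve, stumpage price, planting cost, and discount rate below are all hypothetical; the point is only the formula LEV = (p·V(T)·e^(-rT) - c) / (1 - e^(-rT)) and the grid search for the optimal rotation T.

```python
import math

def land_expectation_value(T, price=50.0, cost=1000.0, r=0.03):
    """Deterministic Faustmann land value for rotation length T (years):
    LEV = (price*V(T)*e^(-rT) - cost) / (1 - e^(-rT)), summing an infinite
    series of identical rotations. V(T) is a hypothetical yield curve."""
    V = 500.0 * (1.0 - math.exp(-0.05 * T)) ** 3   # stand volume, m^3/ha
    d = math.exp(-r * T)                           # one-rotation discount
    return (price * V * d - cost) / (1.0 - d)

# grid search for the rotation maximizing land value under these assumptions
best_T = max(range(10, 120), key=land_expectation_value)
```

An MDP formulation replaces the fixed V(T) and price with transition probabilities over stand states and prices, which is exactly the generalization the abstract describes.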

  17. Structural Deterministic Safety Factors Selection Criteria and Verification

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1992-01-01

    Though current deterministic safety factors are arbitrarily and unaccountably specified, their ratios are rooted in resistive and applied stress probability distributions. This study approached the deterministic method from a probabilistic concept, leading to a more systematic and coherent philosophy and criterion for designing more uniform and reliable high-performance structures. The deterministic method was noted to consist of three safety factors: a standard deviation multiplier of the applied stress distribution; a K-factor for the A- or B-basis material ultimate stress; and the conventional safety factor to ensure that the applied stress does not operate in the inelastic zone of metallic materials. The conventional safety factor is specifically defined as the ratio of ultimate-to-yield stresses. A deterministic safety index of the combined safety factors was derived, from which the corresponding reliability showed that the deterministic method is not reliability-sensitive. The bases for selecting safety factors are presented and verification requirements are discussed. The suggested deterministic approach is applicable to all NASA, DOD, and commercial high-performance structures under static stresses.
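    The probabilistic view behind a safety index can be sketched with the standard first-order formula for independent, normally distributed resistive and applied stresses. The stress statistics below are illustrative numbers, not values from the report.

```python
import math

def safety_index(mu_r, sigma_r, mu_s, sigma_s):
    """First-order safety (reliability) index for normal resistive stress R
    and applied stress S: beta = (mu_R - mu_S) / sqrt(sig_R^2 + sig_S^2)."""
    return (mu_r - mu_s) / math.sqrt(sigma_r ** 2 + sigma_s ** 2)

def reliability(beta):
    """P(R > S) = Phi(beta), via the standard normal CDF."""
    return 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))

# illustrative stress distributions (units arbitrary)
beta = safety_index(mu_r=100.0, sigma_r=8.0, mu_s=60.0, sigma_s=6.0)
```

Two designs with the same deterministic safety factor mu_r/mu_s can have very different beta values if their scatters differ, which is one way to see the abstract's point that the deterministic method is not reliability-sensitive.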

  18. Randomized central limit theorems: A unified theory.

    PubMed

    Eliazar, Iddo; Klafter, Joseph

    2010-08-01

    The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles' aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles' extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic: all ensemble components are scaled by a common deterministic scale. However, there are "random environment" settings in which the underlying scaling schemes are stochastic: the ensemble components are scaled by different random scales. Examples of such settings include Holtsmark's law for gravitational fields and the Stretched Exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs), in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes, and present "randomized counterparts" to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.

  19. A space radiation transport method development

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.

    2004-01-01

    Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary design concepts to the final design. In particular, we discuss progress towards a full three-dimensional and computationally efficient deterministic code, for which the current HZETRN evaluates the lowest-order asymptotic term. HZETRN is the first deterministic solution to the Boltzmann equation; it allows field mapping within the International Space Station (ISS) in tens of minutes using standard finite element method (FEM) geometry common to engineering design practice, enabling development of integrated multidisciplinary design optimization methods. A single ray trace in ISS FEM geometry requires 14 ms and severely limits application of Monte Carlo methods to such engineering models. A potential means of improving the Monte Carlo efficiency in coupling to spacecraft geometry is given in terms of re-configurable computing and could be utilized in the final design as verification of the deterministic method's optimized design. Published by Elsevier Ltd on behalf of COSPAR.

  20. Effect of nonlinearity in hybrid kinetic Monte Carlo-continuum models.

    PubMed

    Balter, Ariel; Lin, Guang; Tartakovsky, Alexandre M

    2012-01-01

    Recently there has been interest in developing efficient ways to model heterogeneous surface reactions with hybrid computational models that couple a kinetic Monte Carlo (KMC) model for a surface to a finite-difference model for bulk diffusion in a continuous domain. We consider two representative problems that validate a hybrid method and show that this method captures the combined effects of nonlinearity and stochasticity. We first validate a simple deposition-dissolution model with a linear rate showing that the KMC-continuum hybrid agrees with both a fully deterministic model and its analytical solution. We then study a deposition-dissolution model including competitive adsorption, which leads to a nonlinear rate, and show that in this case the KMC-continuum hybrid and fully deterministic simulations do not agree. However, we are able to identify the difference as a natural result of the stochasticity coming from the KMC surface process. Because KMC captures inherent fluctuations, we consider it to be more realistic than a purely deterministic model. Therefore, we consider the KMC-continuum hybrid to be more representative of a real system.
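    The linear deposition-dissolution surface model validated in this work can be sketched with a minimal Gillespie-style KMC loop. Rates, lattice size, and seed below are hypothetical; the mean-field steady-state coverage k_a/(k_a + k_d) is the deterministic value the stochastic run fluctuates around.

```python
import random

def kmc_deposition_dissolution(n_sites=200, k_a=1.0, k_d=1.0,
                               t_end=50.0, seed=7):
    """Gillespie-style KMC sketch of a linear deposition/dissolution surface:
    each empty site fills at rate k_a, each filled site empties at rate k_d.
    Returns the final occupied fraction (hypothetical parameters)."""
    rng = random.Random(seed)
    n, t = 0, 0.0                        # occupied sites, elapsed time
    while t < t_end:
        r_dep = k_a * (n_sites - n)      # total deposition propensity
        r_dis = k_d * n                  # total dissolution propensity
        total = r_dep + r_dis
        t += rng.expovariate(total)      # exponential waiting time
        n += 1 if rng.random() < r_dep / total else -1
    return n / n_sites

theta = kmc_deposition_dissolution()
# deterministic mean-field steady state: k_a / (k_a + k_d) = 0.5
```

With a linear rate the stochastic mean tracks the deterministic solution, matching the paper's first validation case; it is the nonlinear (competitive adsorption) rate where fluctuations make the two diverge.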

  1. Effect of Nonlinearity in Hybrid Kinetic Monte Carlo-Continuum Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balter, Ariel I.; Lin, Guang; Tartakovsky, Alexandre M.

    2012-04-23

    Recently there has been interest in developing efficient ways to model heterogeneous surface reactions with hybrid computational models that couple a KMC model for a surface to a finite difference model for bulk diffusion in a continuous domain. We consider two representative problems that validate a hybrid method and also show that this method captures the combined effects of nonlinearity and stochasticity. We first validate a simple deposition/dissolution model with a linear rate showing that the KMC-continuum hybrid agrees with both a fully deterministic model and its analytical solution. We then study a deposition/dissolution model including competitive adsorption, which leads to a nonlinear rate, and show that, in this case, the KMC-continuum hybrid and fully deterministic simulations do not agree. However, we are able to identify the difference as a natural result of the stochasticity coming from the KMC surface process. Because KMC captures inherent fluctuations, we consider it to be more realistic than a purely deterministic model. Therefore, we consider the KMC-continuum hybrid to be more representative of a real system.

  2. Development of DCGLs by using both probabilistic and deterministic analyses in RESRAD (onsite) and RESRAD-OFFSITE codes.

    PubMed

    Kamboj, Sunita; Yu, Charley; Johnson, Robert

    2013-05-01

    The Derived Concentration Guideline Levels for two building areas previously used in waste processing and storage at Argonne National Laboratory were developed using both probabilistic and deterministic radiological environmental pathway analysis. Four scenarios were considered. The two current uses considered were on-site industrial use and off-site residential use with farming. The two future uses (i.e., after an institutional control period of 100 y) were on-site recreational use and on-site residential use with farming. The RESRAD-OFFSITE code was used for the current-use off-site residential/farming scenario and RESRAD (onsite) was used for the other three scenarios. Contaminants of concern were identified from the past operations conducted in the buildings and the actual characterization done at the site. Derived Concentration Guideline Levels were developed for all four scenarios using deterministic and probabilistic approaches, which include both "peak-of-the-means" and "mean-of-the-peaks" analyses. The future-use on-site residential/farming scenario resulted in the most restrictive Derived Concentration Guideline Levels for most radionuclides.

  3. Integrated Arrival and Departure Schedule Optimization Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Xue, Min; Zelinski, Shannon

    2014-01-01

    In terminal airspace, integrating arrivals and departures with shared waypoints provides the potential of improving operational efficiency by allowing direct routes when possible. Incorporating stochastic evaluation as a post-analysis process of deterministic optimization, and imposing a safety buffer in deterministic optimization, are two ways to learn and alleviate the impact of uncertainty and to avoid unexpected outcomes. This work presents a third and direct way to take uncertainty into consideration during the optimization. The impact of uncertainty was incorporated into cost evaluations when searching for optimal solutions. The controller intervention count was computed using a heuristic model and served as another stochastic cost besides total delay. Costs under uncertainty were evaluated using Monte Carlo simulations. The Pareto fronts that contain a set of solutions were identified, and the trade-off between delays and controller intervention count was shown. Solutions that shared similar delays but had different intervention counts were investigated. The results showed that optimization under uncertainty could identify compromise solutions on Pareto fronts, which is better than deterministic optimization with extra safety buffers. It helps decision-makers reduce controller intervention while achieving low delays.
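    Identifying a Pareto front over the two costs named above (total delay and controller intervention count) reduces to filtering out dominated solutions. This is a generic sketch of that filter for minimization; the candidate tuples are hypothetical, not results from the paper.

```python
def pareto_front(solutions):
    """Return the non-dominated subset for minimizing (delay, interventions).
    A solution is dominated if some other solution is no worse in both
    objectives (weak dominance; assumes no duplicate tuples)."""
    front = []
    for s in solutions:
        dominated = any(o != s and o[0] <= s[0] and o[1] <= s[1]
                        for o in solutions)
        if not dominated:
            front.append(s)
    return front

# hypothetical (total delay, intervention count) candidates
candidates = [(120, 9), (150, 4), (130, 7), (160, 8), (140, 5)]
front = pareto_front(candidates)
```

A decision-maker then picks a compromise point along the front, trading a little delay for fewer interventions, which is the selection step the abstract describes.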

  4. Data as a Service: A Seismic Web Service Pipeline

    NASA Astrophysics Data System (ADS)

    Martinez, E.

    2016-12-01

    Publishing data as a service pipeline provides an improved, dynamic approach over static data archives. A service pipeline is a collection of micro web services that each perform a specific task and expose the results of that task. Structured request/response formats allow micro web services to be chained together into a service pipeline to provide more complex results. The U.S. Geological Survey adopted service pipelines to publish seismic hazard and design data supporting both specific and generalized audiences. The seismic web service pipeline starts at source data and exposes probabilistic and deterministic hazard curves, response spectra, risk-targeted ground motions, and seismic design provision metadata. This pipeline supports public/private organizations and individual engineers/researchers. Publishing data as a service pipeline provides a variety of benefits. Exposing the component services enables advanced users to inspect or use the data at each processing step. Exposing a composite service enables new users quick access to published data with a very low barrier to entry. Advanced users may re-use micro web services by chaining them in new ways or injecting new micro services into the pipeline. This allows users to test hypotheses and compare their results to published results. Exposing data at each step in the pipeline enables users to review and validate the data and process more quickly and accurately. Making the source code open source, per USGS policy, further enables this transparency. Each micro service may be scaled independently of any other micro service. This ensures data remains available and timely in a cost-effective manner regardless of load. Additionally, if a new or more efficient approach to processing the data is discovered, the new approach may replace the old at any time, keeping the pipeline running while not affecting other micro services.

  5. Progress in ion figuring large optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allen, L.N.

    1995-12-31

    Ion figuring is an optical fabrication method that provides deterministic surface figure error correction of previously polished surfaces by using a directed, inert and neutralized ion beam to physically sputter material from the optic surface. Considerable process development has been completed and numerous large optical elements have been successfully final figured using this process. The process has been demonstrated to be highly deterministic, capable of completing complex-shaped optical element configurations in only a few process iterations, and capable of achieving high-quality surface figure accuracies. A review of the neutral ion beam figuring process will be provided, along with discussion of processing results for several large optics. Most notably, processing of Keck 10 meter telescope primary mirror segments and correction of one other large optic, where a convergence ratio greater than 50 was demonstrated during the past year, will be discussed. Also, the process has been demonstrated on various optical materials, including fused silica, ULE, Zerodur, silicon and chemically vapor deposited (CVD) silicon carbide. Where available, results of surface finish changes caused by the ion bombardment process will be discussed. Most data have shown only limited degradation of the optic surface finish, and that it is generally a function of the quality of mechanical polishing prior to ion figuring. Removals of 5 to 10 μm on some materials are acceptable without adversely altering the surface finish specularity.

  6. The USGS role in mapping the nation's submerged lands

    USGS Publications Warehouse

    Schwab, Bill; Haines, John

    2004-01-01

    The seabed provides habitat for a diverse marine life having commercial, recreational, and intrinsic value. The habitat value of the seabed is largely a function of the geological structure and related geological, biological, oceanologic, and geochemical processes. Of equal importance, the nation's submerged lands contain energy and mineral resources and are utilized for the siting of offshore infrastructure and waste disposal. Seabed character and processes influence the safety and viability of offshore operations. Seabed and subseabed characterization is a prerequisite for the assessment, protection, and utilization of both living and non-living marine resources. A comprehensive program to characterize and understand the nation's submerged lands requires scientific expertise in the fields of geology, biology, hydrography, and oceanography. The U.S. Geological Survey (USGS) has long experience as the Federal agency charged with conducting geologic research and mapping in both coastal and offshore regions. The USGS Coastal and Marine Geology Program (CMGP) leads the nation in expertise related to characterization of seabed and subseabed geology, geological processes, seabed dynamics, and (in collaboration with the National Oceanic and Atmospheric Administration (NOAA) and international partners) habitat geoscience. Numerous USGS studies show that sea-floor geology and processes determine the character and distribution of biological habitats, control coastal evolution, influence the coastal response to storm events and human alterations, and determine the occurrence and concentration of natural resources.

  7. Reports of planetary geology program, 1983

    NASA Technical Reports Server (NTRS)

    Holt, H. E. (Compiler)

    1984-01-01

    Several areas of the Planetary Geology Program were addressed including outer solar system satellites, asteroids, comets, Venus, cratering processes and landform development, volcanic processes, aeolian processes, fluvial processes, periglacial and permafrost processes, geomorphology, remote sensing, tectonics and stratigraphy, and mapping.

  8. Boise Hydrogeophysical Research Site: Control Volume/Test Cell and Community Research Asset

    NASA Astrophysics Data System (ADS)

    Barrash, W.; Bradford, J.; Malama, B.

    2008-12-01

    The Boise Hydrogeophysical Research Site (BHRS) is a research wellfield or field-scale test facility developed in a shallow, coarse, fluvial aquifer with the objectives of supporting: (a) development of cost-effective, non- or minimally-invasive quantitative characterization and imaging methods in heterogeneous aquifers using hydrologic and geophysical techniques; (b) examination of fundamental relationships and processes at multiple scales; (c) testing theories and models for groundwater flow and solute transport; and (d) educating and training students in multidisciplinary subsurface science and engineering. The design of the wells and the wellfield supports modular use and reoccupation of wells for a wide range of single-well, cross-hole, multiwell and multilevel hydrologic, geophysical, and combined hydrologic-geophysical experiments. Efforts to date by Boise State researchers and collaborators have largely focused on: (a) establishing the 3D distributions of geologic, hydrologic, and geophysical parameters, which can then be used as the basis for jointly inverting hard and soft data to return the 3D K distribution, and (b) developing subsurface measurement and imaging methods, including tomographic characterization and imaging methods. At this point the hydrostratigraphic framework of the BHRS is known to be a hierarchical multi-scale system which includes layers and lenses that are recognized with geologic, hydrologic, radar, seismic, and EM methods; details are now emerging which may allow 3D deterministic characterization of zones and/or material variations at the meter scale in the central wellfield. Also, the site design and subsurface framework have supported a variety of testing configurations for joint hydrologic and geophysical experiments. 
Going forward we recognize the opportunity to increase the R&D returns from use of the BHRS with additional infrastructure (especially for monitoring the vadose zone and surface water-groundwater interactions), more collaborative activity, and greater access to site data. Our broader goal of becoming more available as a research asset for the scientific community also supports the long-term business plan of increasing funding opportunities to maintain and operate the site.

  9. Simulations of hydrologic response in the Apalachicola-Chattahoochee-Flint River Basin, Southeastern United States

    USGS Publications Warehouse

    LaFontaine, Jacob H.; Jones, L. Elliott; Painter, Jaime A.

    2017-12-29

    A suite of hydrologic models has been developed for the Apalachicola-Chattahoochee-Flint River Basin (ACFB) as part of the National Water Census, a U.S. Geological Survey research program that focuses on developing new water accounting tools and assessing water availability and use at the regional and national scales. Seven hydrologic models were developed using the Precipitation-Runoff Modeling System (PRMS), a deterministic, distributed-parameter, process-based system that simulates the effects of precipitation, temperature, land cover, and water use on basin hydrology. A coarse-resolution PRMS model was developed for the entire ACFB, and six fine-resolution PRMS models were developed for six subbasins of the ACFB. The coarse-resolution model was loosely coupled with a groundwater model to better assess the effects of water use on streamflow in the lower ACFB, a complex geologic setting with karst features. The PRMS coarse-resolution model was used to provide inputs of recharge to the groundwater model, which in turn provided simulations of groundwater flow that were aggregated with PRMS-based simulations of surface runoff and shallow-subsurface flow. Simulations without the effects of water use were developed for each model for at least the calendar years 1982–2012, with longer periods for the Potato Creek subbasin (1942–2012) and the Spring Creek subbasin (1952–2012). Water-use-affected flows were simulated for 2008–12. Water budget simulations showed heterogeneous distributions of precipitation, actual evapotranspiration, recharge, runoff, and storage change across the ACFB. Streamflow volume differences between no-water-use and water-use simulations were largest along the main stem of the Apalachicola and Chattahoochee River Basins, with streamflow percentage differences largest in the upper Chattahoochee and Flint River Basins and Spring Creek in the lower Flint River Basin. 
Water-use information at a shorter time step and a fully coupled simulation in the lower ACFB may further improve water availability estimates and hydrologic simulations in the basin.

  10. A hybrid symplectic principal component analysis and central tendency measure method for detection of determinism in noisy time series with application to mechanomyography

    NASA Astrophysics Data System (ADS)

    Xie, Hong-Bo; Dokos, Socrates

    2013-06-01

    We present a hybrid symplectic geometry and central tendency measure (CTM) method for detection of determinism in noisy time series. CTM is effective for detecting determinism in short time series and has been applied in many areas of nonlinear analysis. However, its performance significantly degrades in the presence of strong noise. In order to circumvent this difficulty, we propose to use symplectic principal component analysis (SPCA), a new chaotic signal de-noising method, as the first step to recover the system dynamics. CTM is then applied to determine whether the time series arises from a stochastic process or has a deterministic component. Results from numerical experiments, ranging from six benchmark deterministic models to 1/f noise, suggest that the hybrid method can significantly improve detection of determinism in noisy time series by about 20 dB when the data are contaminated by Gaussian noise. Furthermore, we apply our algorithm to study the mechanomyographic (MMG) signals arising from contraction of human skeletal muscle. Results obtained from the hybrid symplectic principal component analysis and central tendency measure demonstrate that the skeletal muscle motor unit dynamics can indeed be deterministic, in agreement with previous studies. However, the conventional CTM method was not able to definitely detect the underlying deterministic dynamics. This result on MMG signal analysis is helpful in understanding neuromuscular control mechanisms and developing MMG-based engineering control applications.
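The central tendency measure at the core of the method can be sketched in a few lines. The test signals and the radius below are illustrative, and the paper's SPCA de-noising step is omitted:

```python
import math
import random

def central_tendency_measure(series, radius):
    # Second-order difference plot: scatter the successive differences
    # d[i+1] versus d[i], where d[i] = x[i+1] - x[i]. The CTM is the
    # fraction of points falling within a circle of the given radius
    # around the origin; smooth (deterministic) dynamics concentrate
    # there, while noise scatters widely.
    d = [b - a for a, b in zip(series, series[1:])]
    points = list(zip(d, d[1:]))
    inside = sum(1 for u, v in points if math.hypot(u, v) < radius)
    return inside / len(points)

t = [0.01 * i for i in range(2000)]
deterministic = [math.sin(x) for x in t]
random.seed(0)
noise = [random.gauss(0, 1) for _ in t]

ctm_det = central_tendency_measure(deterministic, 0.1)
ctm_noise = central_tendency_measure(noise, 0.1)
print(ctm_det, ctm_noise)  # high for the smooth signal, low for noise
```

This also makes the paper's motivation visible: once strong noise is superimposed on the deterministic signal, the difference points spread out and the two CTM values converge, which is why a de-noising step such as SPCA is needed first.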

  11. A hybrid symplectic principal component analysis and central tendency measure method for detection of determinism in noisy time series with application to mechanomyography.

    PubMed

    Xie, Hong-Bo; Dokos, Socrates

    2013-06-01

    We present a hybrid symplectic geometry and central tendency measure (CTM) method for detection of determinism in noisy time series. CTM is effective for detecting determinism in short time series and has been applied in many areas of nonlinear analysis. However, its performance significantly degrades in the presence of strong noise. In order to circumvent this difficulty, we propose to use symplectic principal component analysis (SPCA), a new chaotic signal de-noising method, as the first step to recover the system dynamics. CTM is then applied to determine whether the time series arises from a stochastic process or has a deterministic component. Results from numerical experiments, ranging from six benchmark deterministic models to 1/f noise, suggest that the hybrid method can significantly improve detection of determinism in noisy time series by about 20 dB when the data are contaminated by Gaussian noise. Furthermore, we apply our algorithm to study the mechanomyographic (MMG) signals arising from contraction of human skeletal muscle. Results obtained from the hybrid symplectic principal component analysis and central tendency measure demonstrate that the skeletal muscle motor unit dynamics can indeed be deterministic, in agreement with previous studies. However, the conventional CTM method was not able to definitely detect the underlying deterministic dynamics. This result on MMG signal analysis is helpful in understanding neuromuscular control mechanisms and developing MMG-based engineering control applications.

  12. Plenary: Progress in Regional Landslide Hazard Assessment—Examples from the USA

    USGS Publications Warehouse

    Baum, Rex L.; Schulz, William; Brien, Dianne L.; Burns, William J.; Reid, Mark E.; Godt, Jonathan W.

    2014-01-01

    Landslide hazard assessment at local and regional scales contributes to mitigation of landslides in developing and densely populated areas by providing information for (1) land development and redevelopment plans and regulations, (2) emergency preparedness plans, and (3) economic analysis to (a) set priorities for engineered mitigation projects and (b) define areas of similar levels of hazard for insurance purposes. US Geological Survey (USGS) research on landslide hazard assessment has explored a range of methods that can be used to estimate temporal and spatial landslide potential and probability for various scales and purposes. Cases taken primarily from our work in the U.S. Pacific Northwest illustrate and compare a sampling of methods, approaches, and progress. For example, landform mapping using high-resolution topographic data resulted in identification of about four times more landslides in Seattle, Washington, than previous efforts using aerial photography. Susceptibility classes based on the landforms captured 93 % of all historical landslides (all types) throughout the city. A deterministic model for rainfall infiltration and shallow landslide initiation, TRIGRS, was able to identify locations of 92 % of historical shallow landslides in southwest Seattle. The potentially unstable areas identified by TRIGRS occupied only 26 % of the slope areas steeper than 20°. Addition of an unsaturated infiltration model to TRIGRS expands the applicability of the model to areas of highly permeable soils. Replacement of the single cell, 1D factor of safety with a simple 3D method of columns improves accuracy of factor of safety predictions for both saturated and unsaturated infiltration models. A 3D deterministic model for large, deep landslides, SCOOPS, combined with a three-dimensional model for groundwater flow, successfully predicted instability in steep areas of permeable outwash sand and topographic reentrants. 
These locations are consistent with locations of large, deep, historically active landslides. For an area in Seattle, a composite of the three maps illustrates how maps produced by different approaches might be combined to assess overall landslide potential. Examples from Oregon, USA, illustrate how landform mapping and deterministic analysis for shallow landslide potential have been adapted into standardized methods for efficiently producing detailed landslide inventory and shallow landslide susceptibility maps that have consistent content and format statewide.
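The shallow-landslide calculation behind models like TRIGRS reduces, at each grid cell, to an infinite-slope factor of safety that falls as rainfall raises the pore pressure head. The sketch below uses a commonly cited form of that equation; the parameter values are hypothetical, chosen only to illustrate the wet-versus-dry contrast:

```python
import math

def infinite_slope_fs(slope_deg, depth_m, cohesion_pa, phi_deg,
                      pressure_head_m, unit_weight_soil=20e3,
                      unit_weight_water=9.81e3):
    """Infinite-slope factor of safety of the form used in TRIGRS-style
    analyses (FS < 1 indicates predicted instability):
    FS = tan(phi)/tan(delta)
         + (c - psi * gw * tan(phi)) / (gs * Z * sin(delta) * cos(delta))
    where delta is slope angle, phi the friction angle, c cohesion,
    psi the pressure head at depth Z, and gw, gs unit weights."""
    delta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    frictional = math.tan(phi) / math.tan(delta)
    numerator = cohesion_pa - pressure_head_m * unit_weight_water * math.tan(phi)
    denominator = unit_weight_soil * depth_m * math.sin(delta) * math.cos(delta)
    return frictional + numerator / denominator

# Rising pore pressure (psi) during rainfall infiltration lowers FS:
dry = infinite_slope_fs(35, 1.5, 4e3, 33, pressure_head_m=0.0)
wet = infinite_slope_fs(35, 1.5, 4e3, 33, pressure_head_m=1.0)
print(round(dry, 2), round(wet, 2))  # stable when dry, unstable when wet
```

The transient part of TRIGRS, not shown here, supplies psi(Z, t) from an infiltration model driven by the rainfall history.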

  13. Modelling uncertainties in the diffusion-advection equation for radon transport in soil using interval arithmetic.

    PubMed

    Chakraverty, S; Sahoo, B K; Rao, T D; Karunakar, P; Sapra, B K

    2018-02-01

    Modelling radon transport in the Earth's crust is a useful tool for investigating changes in geophysical processes prior to an earthquake event. Radon transport is generally modelled through the deterministic advection-diffusion equation. However, in order to determine the magnitudes of parameters governing these processes from experimental measurements, it is necessary to investigate the role of uncertainties in these parameters. The present paper investigates this aspect by introducing interval uncertainties in transport parameters, such as soil diffusivity and advection velocity, occurring in the radon transport equation as applied to the soil matrix. The predictions made with interval arithmetic have been compared with the results of the classical deterministic model and discussed. The practical applicability of the model is demonstrated through a case study involving radon flux measurements at the soil surface with an accumulator deployed in steady-state mode. It is possible to detect the presence of very low levels of advection processes by applying uncertainty bounds on the variations in the observed concentration data in the accumulator. Copyright © 2017 Elsevier Ltd. All rights reserved.
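The core idea of replacing a point estimate with an interval can be sketched with a minimal interval class. The toy expression below is the diffusion-only surface flux J = C_inf * sqrt(lambda * D); the porosity and advection terms of the full transport model are omitted, and all numbers are hypothetical:

```python
import math

class Interval:
    # Minimal interval arithmetic: carry [lo, hi] bounds through each
    # operation so uncertainty in the inputs propagates to the output.
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))
    def sqrt(self):
        # Valid for non-negative intervals, as here.
        return Interval(math.sqrt(self.lo), math.sqrt(self.hi))
    def __repr__(self):
        return f"[{self.lo:.3e}, {self.hi:.3e}]"

# The diffusivity D is the uncertain parameter, so the predicted flux J
# comes out as an interval rather than a single number.
lam = Interval(2.1e-6, 2.1e-6)     # radon decay constant, 1/s
D = Interval(1e-6, 5e-6)           # soil diffusivity, m^2/s (uncertain)
C_inf = Interval(2e4, 3e4)         # deep-soil radon concentration, Bq/m^3
J = C_inf * (lam * D).sqrt()
print(J)  # flux bounds in Bq/(m^2 s)
```

Measured fluxes falling outside such bounds are then a signal that a process missing from the expression, such as advection, is at work, which mirrors the detection argument of the paper.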

  14. Deterministic or Probabilistic - Robustness or Resilience: How to Respond to Climate Change?

    NASA Astrophysics Data System (ADS)

    Plag, H.; Earnest, D.; Jules-Plag, S.

    2013-12-01

    Our response to climate change is dominated by a deterministic approach that emphasizes the interaction between only the natural and the built environment. But in the non-ergodic world of unprecedented climate change, social factors drive recovery from unforeseen Black Swans much more than natural or built ones. The sea level rise discussion, in particular, focuses on deterministic predictions, accounting for uncertainties in major driving processes with a set of forcing scenarios and public deliberations on which of the plausible trajectories is most likely. Science focuses on the prediction of future climate change, and policies focus on mitigation of both climate change itself and its impacts. The deterministic approach is based on two basic assumptions: 1) climate change is an ergodic process; 2) the urban coast is a robust system. Evidence suggests that these assumptions may not hold. Anthropogenic changes are pushing key parameters of the climate system outside of the natural range of variability of the last 1 million years, creating the potential for environmental Black Swans. A probabilistic approach allows for non-ergodic processes and focuses more on resilience, and hence does not depend on the two assumptions. Recent experience with hurricanes revealed threshold limitations of the built environment of the urban coast, which, once exceeded, brought to the forefront the importance of the social fabric and social networking in evaluating resilience. Resilience strongly depends on social capital, and building social capital that can create resilience must be a key element in our response to climate change. Although social capital cannot mitigate hazards, social scientists have found that communities rich in strong norms of cooperation recover more quickly than communities without social capital. There is growing evidence that the built environment can affect the social capital of a community, for example, public health and perceptions of public safety. 
This suggests an intriguing hypothesis: disaster risk reduction programs need to account for whether they also facilitate the public trust, cooperation, and communication needed to recover from a disaster. Our work in the Hampton Roads area, where the probability of hazardous flooding and inundation events exceeding the thresholds of the infrastructure is high, suggests that to facilitate the paradigm shift from the deterministic to a probabilistic approach, natural sciences have to focus on hazard probabilities, while engineering and social sciences have to work together to understand how interactions of the built and social environments impact robustness and resilience. The current science-policy relationship needs to be augmented by social structures that can learn from previous unexpected events. In this response to climate change, science does not have the primary goal to reduce uncertainties and prediction errors, but rather to develop processes that can utilize uncertainties and surprises to increase robustness, strengthen resilience, and reduce fragility of the social systems during times when infrastructure fails.

  15. Recent Geologic Mapping Results for the Polar Regions of Mars

    NASA Technical Reports Server (NTRS)

    Tanaka, K. L.; Kolb, E. J.

    2008-01-01

    The polar regions of Mars include the densest data coverage for the planet because of the polar orbits of MGS, ODY, and MEX. Because the geology of the polar plateaus has been among the most dynamic on the planet in recent geologic time, the data enable the most detailed and complex geologic investigations of any regions on Mars, superseding previous, even recent, mapping efforts [e.g., 1-3]. Geologic mapping at regional and local scales is revealing that the stratigraphy and modificational histories of polar materials by various processes are highly complex at both poles. Here, we describe some of our recent results in polar geologic mapping and how they address the geologic processes involved and implications for polar climate history.

  16. VASP-4096: a very high performance programmable device for digital media processing applications

    NASA Astrophysics Data System (ADS)

    Krikelis, Argy

    2001-03-01

    Over the past few years, technology drivers for microprocessors have changed significantly. Media data delivery and processing--such as telecommunications, networking, video processing, speech recognition and 3D graphics--is increasing in importance and will soon dominate the processing cycles consumed in computer-based systems. This paper presents the architecture of the VASP-4096 processor. VASP-4096 provides high media performance with low energy consumption by integrating associative SIMD parallel processing with embedded microprocessor technology. The major innovation in VASP-4096 is the integration of thousands of processing units on a single chip, capable of supporting software-programmable high-performance mathematical functions as well as abstract data processing. In addition to 4096 processing units, VASP-4096 integrates on a single chip a RISC controller that is an implementation of the SPARC architecture, 128 Kbytes of data memory, and I/O interfaces. The SIMD processing in VASP-4096 implements the ASProCore architecture, a proprietary implementation of SIMD processing, and operates at 266 MHz with program instructions issued by the RISC controller. The device also integrates a 64-bit synchronous main memory interface operating at 133 MHz (double data rate), and a 64-bit 66 MHz PCI interface. Compared with other processor architectures that support media processing, VASP-4096 offers true performance scalability, support for deterministic and non-deterministic data processing on a single device, and software programmability that can be re-used in future chip generations.

  17. Stochastic switching in biology: from genotype to phenotype

    NASA Astrophysics Data System (ADS)

    Bressloff, Paul C.

    2017-03-01

    There has been a resurgence of interest in non-equilibrium stochastic processes in recent years, driven in part by the observation that the number of molecules (genes, mRNA, proteins) involved in gene expression is often of order 1-1000. This means that deterministic mass-action kinetics tends to break down, and one needs to take into account the discrete, stochastic nature of biochemical reactions. One of the major consequences of molecular noise is the occurrence of stochastic biological switching at both the genotypic and phenotypic levels. For example, individual gene regulatory networks can switch between graded and binary responses, exhibit translational/transcriptional bursting, and support metastability (noise-induced switching between states that are stable in the deterministic limit). If random switching persists at the phenotypic level then this can confer certain advantages to cell populations growing in a changing environment, as exemplified by bacterial persistence in response to antibiotics. Gene expression at the single-cell level can also be regulated by changes in cell density at the population level, a process known as quorum sensing. In contrast to noise-driven phenotypic switching, the switching mechanism in quorum sensing is stimulus-driven and thus noise tends to have a detrimental effect. A common approach to modeling stochastic gene expression is to assume a large but finite system and to approximate the discrete processes by continuous processes using a system-size expansion. However, there is a growing need to have some familiarity with the theory of stochastic processes that goes beyond the standard topics of chemical master equations, the system-size expansion, Langevin equations and the Fokker-Planck equation. Examples include stochastic hybrid systems (piecewise deterministic Markov processes), large deviations and the Wentzel-Kramers-Brillouin (WKB) method, adiabatic reductions, and queuing/renewal theory. 
The major aim of this review is to provide a self-contained survey of these mathematical methods, mainly within the context of biological switching processes at both the genotypic and phenotypic levels. However, applications to other examples of biological switching are also discussed, including stochastic ion channels, diffusion in randomly switching environments, bacterial chemotaxis, and stochastic neural networks.
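The discrete, stochastic picture behind the chemical master equation can be sampled exactly with Gillespie's algorithm. A minimal sketch for a two-state promoter with bursty expression follows; all rate values are hypothetical:

```python
import random

def gillespie_gene(t_end, k_on=0.05, k_off=0.05, beta=10.0, gamma=1.0, seed=1):
    # Exact stochastic simulation (Gillespie) of a two-state promoter:
    # the gene toggles OFF <-> ON, protein is produced at rate beta only
    # while ON, and degrades at rate gamma. Slow switching (k_on, k_off
    # much smaller than gamma) yields the bursty, switch-like expression
    # described above.
    rng = random.Random(seed)
    t, gene_on, n = 0.0, 0, 0
    while t < t_end:
        rates = [k_on if gene_on == 0 else k_off,  # promoter switch
                 beta if gene_on else 0.0,         # protein production
                 gamma * n]                        # protein degradation
        total = sum(rates)
        t += rng.expovariate(total)                # time to next event
        u = rng.uniform(0.0, total)                # which event fires
        if u < rates[0]:
            gene_on = 1 - gene_on
        elif u < rates[0] + rates[1]:
            n += 1
        else:
            n -= 1
    return n

samples = [gillespie_gene(200.0, seed=s) for s in range(50)]
print(min(samples), max(samples))  # spread reflects switching noise
```

A deterministic mass-action model with the same mean parameters would settle at a single protein level; the wide spread across runs is the molecular noise the review is concerned with.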

  18. Behind the Scenery.

    ERIC Educational Resources Information Center

    Scanlon, Andrew, Ed.; And Others

    Knowledge of the physiographic evolution of the Tasmanian landscape is still very far from complete; however, all aspects of the landscape are governed by definable processes acting on the rock medley which is the heritage of Tasmania's geological history. This book explains Tasmania's landforms and geology in terms of geologic processes. Chapters…

  19. Cognitive Developmental Biology: History, Process and Fortune's Wheel

    ERIC Educational Resources Information Center

    Balaban, Evan

    2006-01-01

    Biological contributions to cognitive development continue to be conceived predominantly along deterministic lines, with proponents of different positions arguing about the preponderance of gene-based versus experience-based influences that organize brain circuits irreversibly during prenatal or early postnatal life, and evolutionary influences…

  20. Path Selection in the Growth of Wormholes

    NASA Astrophysics Data System (ADS)

    Yang, Yi; Bruns, Stefan; Stipp, Susan; Sørensen, Henning

    2017-04-01

    Spontaneous growth of wormholes in natural porous media often leads to generation of highly complex flow systems with fractal morphologies. Despite extensive investigations, the underpinning mechanism for path selection during wormholing remains elusive. Here we introduce the concept of cumulative surface (CS) and show that the trajectory of a growing wormhole is one with minimum CS. Theoretical analysis shows that the CS determines the position of the dissolution front. We then show, using numerical simulation based on greyscale data of the fine-grained carbonate rock chalk, that the tip of an advancing pore always follows the migration of the most far-reaching dissolution front determined from the CS. The predicted dissolution behavior was verified by experimental observation of wormhole growth in chalk using in situ microtomography. The results suggest that wormholing is deterministic in nature rather than stochastic. This insight sheds light on the engineering of artificial flow systems in geologic formations by exploiting self-organization in natural porous materials.

  1. Application of a process-based shallow landslide hazard model over a broad area in Central Italy

    USGS Publications Warehouse

    Gioia, Eleonora; Speranza, Gabriella; Ferretti, Maurizio; Godt, Jonathan W.; Baum, Rex L.; Marincioni, Fausto

    2015-01-01

    Process-based models are widely used for rainfall-induced shallow landslide forecasting. Previous studies have successfully applied the U.S. Geological Survey’s Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability (TRIGRS) model (Baum et al. 2002) to compute infiltration-driven changes in the hillslopes’ factor of safety on small scales (i.e., tens of square kilometers). Soil data input for such models are difficult to obtain across larger regions. This work describes a novel methodology for the application of TRIGRS over broad areas with relatively uniform hydrogeological properties. The study area is a 550-km2 region in Central Italy covered by post-orogenic Quaternary sediments. Due to the lack of field data, we assigned mechanical and hydrological property values through a statistical analysis based on literature review of soils matching the local lithologies. We calibrated the model using rainfall data from 25 historical rainfall events that triggered landslides. We compared the variation of pressure head and factor of safety with the landslide occurrence to identify the best fitting input conditions. Using calibrated inputs and a soil depth model, we ran TRIGRS for the study area. Receiver operating characteristic (ROC) analysis, comparing the model’s output with a shallow landslide inventory, shows that TRIGRS effectively simulated the instability conditions in the post-orogenic complex during historical rainfall scenarios. The implication of this work is that rainfall-induced landslides over large regions may be predicted by a deterministic model, even where data on geotechnical and hydraulic properties as well as temporal changes in topography or subsurface conditions are not available.
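The ROC validation step described above reduces, for a binary susceptibility map, to counting hits and false alarms cell by cell. The sketch below uses toy cell lists, not the actual Italian inventory:

```python
def roc_point(predicted_unstable, observed_landslide):
    # One ROC point for a binary susceptibility map: compare the cells
    # the model flags as unstable (e.g. factor of safety < 1) against an
    # inventory of observed landslide cells. Inputs are parallel lists
    # of booleans over the same grid cells.
    pairs = list(zip(predicted_unstable, observed_landslide))
    tp = sum(1 for p, o in pairs if p and o)
    fp = sum(1 for p, o in pairs if p and not o)
    fn = sum(1 for p, o in pairs if not p and o)
    tn = sum(1 for p, o in pairs if not p and not o)
    tpr = tp / (tp + fn)   # fraction of observed slides captured
    fpr = fp / (fp + tn)   # fraction of stable area flagged anyway
    return tpr, fpr

# Toy 8-cell grid: model flags three cells, inventory records three slides.
predicted = [True, True, False, False, True, False, False, False]
observed  = [True, False, False, False, True, True, False, False]
tpr, fpr = roc_point(predicted, observed)
print(tpr, fpr)
```

Sweeping a threshold on the modelled factor of safety (or rainfall input) traces out a full ROC curve from points like this one, which is how the calibration in the paper selects the best fitting input conditions.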

  2. Publications - RI 2000-1B | Alaska Division of Geological & Geophysical

    Science.gov Websites

    Keywords: Formations; Fossils; Geologic; Geologic Map; Geology; Glacial Processes; Kemik Sandstone; Marine; Tectonics; Tertiary; Trace Fossils; Turbidites; Volcanic Ash

  3. Deterministic quantum dense coding networks

    NASA Astrophysics Data System (ADS)

    Roy, Saptarshi; Chanda, Titas; Das, Tamoghna; Sen(De), Aditi; Sen, Ujjwal

    2018-07-01

    We consider the scenario of deterministic classical information transmission between multiple senders and a single receiver, when they a priori share a multipartite quantum state - an attempt towards building a deterministic dense coding network. Specifically, we prove that in the case of two or three senders and a single receiver, generalized Greenberger-Horne-Zeilinger (gGHZ) states are not beneficial for sending classical information deterministically beyond the classical limit, except when the shared state is the GHZ state itself. On the other hand, three- and four-qubit generalized W (gW) states with specific parameters as well as the four-qubit Dicke states can provide a quantum advantage of sending the information in deterministic dense coding. Interestingly however, numerical simulations in the three-qubit scenario reveal that the percentage of states from the GHZ-class that are deterministic dense codeable is higher than that of states from the W-class.

  4. Significant achievements in the planetary geology program, 1981

    NASA Technical Reports Server (NTRS)

    Holt, H. E. (Editor)

    1981-01-01

    Recent developments in planetology research as reported at the 1981 NASA Planetary Geology Principal Investigators meeting are summarized. The evolution of the solar system, comparative planetology, and geologic processes active on other planets are considered. Galilean satellites and small bodies, Venus, geochemistry and regoliths, volcanic and aeolian processes and landforms, fluvial and periglacial processes, and planetary impact cratering, remote sensing, and cartography are discussed.

  5. Dynamic Routing of Aircraft in the Presence of Adverse Weather Using a POMDP Framework

    NASA Technical Reports Server (NTRS)

    Balaban, Edward; Roychoudhury, Indranil; Spirkovska, Lilly; Sankararaman, Shankar; Kulkarni, Chetan; Arnon, Tomer

    2017-01-01

    Each year, weather-related airline delays result in hundreds of millions of dollars in additional fuel burn, maintenance, and lost revenue, not to mention passenger inconvenience. The current approaches for aircraft route planning in the presence of adverse weather still rely mainly on deterministic methods. In contrast, this work addresses the problem using a Partially Observable Markov Decision Process (POMDP) framework, which allows for reasoning over uncertainty (including uncertainty in weather evolution over time) and results in solutions that are more robust to disruptions. The POMDP-based decision support system is demonstrated on several scenarios involving convective weather cells and is benchmarked against a deterministic planning system with functionality similar to that of systems currently in use or under development.
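The mechanism that lets a POMDP reason over uncertain weather evolution is the Bayes-filter belief update, b'(s') proportional to O(obs | s') * sum over s of T(s' | s, a) * b(s). The two-state weather-cell model below is a toy illustration, not the paper's actual routing formulation:

```python
def belief_update(belief, action, observation, T, O):
    # One Bayes-filter step of a POMDP belief update:
    # predict forward through the transition model T, then reweight by
    # the observation likelihood O and renormalize.
    states = list(belief)
    predicted = {s2: sum(T[(s, action)][s2] * belief[s] for s in states)
                 for s2 in states}
    unnormalized = {s2: O[s2][observation] * predicted[s2] for s2 in states}
    z = sum(unnormalized.values())
    return {s2: v / z for s2, v in unnormalized.items()}

# Two hidden states for a convective cell: it will "grow" or "decay".
belief = {"grow": 0.5, "decay": 0.5}
T = {("grow", "wait"): {"grow": 0.8, "decay": 0.2},
     ("decay", "wait"): {"grow": 0.3, "decay": 0.7}}
O = {"grow": {"strong_echo": 0.9, "weak_echo": 0.1},
     "decay": {"strong_echo": 0.2, "weak_echo": 0.8}}
belief = belief_update(belief, "wait", "strong_echo", T, O)
print(belief)  # probability mass shifts toward "grow"
```

A POMDP policy then chooses routing actions against this belief rather than against a single deterministic weather forecast, which is what makes the resulting plans robust to forecast error.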

  6. A Deterministic Computational Procedure for Space Environment Electron Transport

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamcyk, Anne M.

    2010-01-01

    A deterministic computational procedure for describing the transport of electrons in condensed media is formulated to simulate the effects and exposures from spectral distributions typical of electrons trapped in planetary magnetic fields. The primary purpose for developing the procedure is to provide a means of rapidly performing numerous repetitive transport calculations essential for electron radiation exposure assessments for complex space structures. The present code utilizes well-established theoretical representations to describe the relevant interactions and transport processes. A combined mean free path and average trajectory approach is used in the transport formalism. For typical space environment spectra, several favorable comparisons with Monte Carlo calculations are made which have indicated that accuracy is not compromised at the expense of the computational speed.

  7. Efficient analysis of stochastic gene dynamics in the non-adiabatic regime using piecewise deterministic Markov processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Yen Ting; Buchler, Nicolas E.

    Single-cell experiments show that gene expression is stochastic and bursty, a feature that can emerge from slow switching between promoter states with different activities. In addition to slow chromatin and/or DNA looping dynamics, one source of long-lived promoter states is the slow binding and unbinding kinetics of transcription factors to promoters, i.e. the non-adiabatic binding regime. Here, we introduce a simple analytical framework, known as a piecewise deterministic Markov process (PDMP), that accurately describes the stochastic dynamics of gene expression in the non-adiabatic regime. We illustrate the utility of the PDMP on a non-trivial dynamical system by analysing the properties of a titration-based oscillator in the non-adiabatic limit. We first show how to transform the underlying chemical master equation into a PDMP where the slow transitions between promoter states are stochastic, but whose rates depend upon the faster deterministic dynamics of the transcription factors regulated by these promoters. We show that the PDMP accurately describes the observed periods of stochastic cycles in activator and repressor-based titration oscillators. We then generalize our PDMP analysis to more complicated versions of titration-based oscillators to explain how multiple binding sites lengthen the period and improve coherence. Finally, we show how noise-induced oscillation previously observed in a titration-based oscillator arises from non-adiabatic and discrete binding events at the promoter site.

  8. Efficient analysis of stochastic gene dynamics in the non-adiabatic regime using piecewise deterministic Markov processes

    DOE PAGES

    Lin, Yen Ting; Buchler, Nicolas E.

    2018-01-31

    Single-cell experiments show that gene expression is stochastic and bursty, a feature that can emerge from slow switching between promoter states with different activities. In addition to slow chromatin and/or DNA looping dynamics, one source of long-lived promoter states is the slow binding and unbinding kinetics of transcription factors to promoters, i.e. the non-adiabatic binding regime. Here, we introduce a simple analytical framework, known as a piecewise deterministic Markov process (PDMP), that accurately describes the stochastic dynamics of gene expression in the non-adiabatic regime. We illustrate the utility of the PDMP on a non-trivial dynamical system by analysing the properties of a titration-based oscillator in the non-adiabatic limit. We first show how to transform the underlying chemical master equation into a PDMP where the slow transitions between promoter states are stochastic, but whose rates depend upon the faster deterministic dynamics of the transcription factors regulated by these promoters. We show that the PDMP accurately describes the observed periods of stochastic cycles in activator and repressor-based titration oscillators. We then generalize our PDMP analysis to more complicated versions of titration-based oscillators to explain how multiple binding sites lengthen the period and improve coherence. Finally, we show how noise-induced oscillation previously observed in a titration-based oscillator arises from non-adiabatic and discrete binding events at the promoter site.
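
The PDMP construction can be sketched for the simplest case: a single promoter that switches slowly between OFF and ON, while the protein level follows its deterministic rate equation exactly between stochastic switching events. The rates below are arbitrary illustrative choices, not the paper's titration-oscillator parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative rates: promoter switching is slow relative to protein turnover,
# which is the non-adiabatic regime discussed above.
k_on, k_off = 0.05, 0.05     # promoter OFF->ON and ON->OFF switching rates
beta, gamma = 5.0, 1.0       # synthesis rate (promoter ON) and degradation rate

def simulate_pdmp(t_end=200.0):
    """PDMP: deterministic flow dx/dt = beta*s - gamma*x between random switches."""
    t, s, x = 0.0, 0, 0.0    # time, promoter state (0=OFF, 1=ON), protein level
    traj = []
    while t < t_end:
        rate = k_on if s == 0 else k_off
        tau = min(rng.exponential(1.0 / rate), t_end - t)  # time to next switch
        xbar = beta * s / gamma                  # fixed point of the current flow
        x = xbar + (x - xbar) * np.exp(-gamma * tau)  # exact solution of the ODE
        t += tau
        s = 1 - s                                # stochastic promoter flip
        traj.append((t, s, x))
    return traj
```

Between flips the dynamics are solved in closed form, so no small time step is needed; only the promoter transitions are sampled, which is what makes the PDMP efficient compared to a full stochastic simulation.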

  9. Oregon Hazard Explorer for Lifelines Program (OHELP): A web-based geographic information system tool for assessing potential Cascadia earthquake hazard

    NASA Astrophysics Data System (ADS)

    Sharifi Mood, M.; Olsen, M. J.; Gillins, D. T.; Javadnejad, F.

    2016-12-01

    The Cascadia Subduction Zone (CSZ) can generate earthquakes as powerful as moment magnitude 9, capable of causing extensive damage to structures and facilities in Oregon. A series of deterministic earthquake analyses was performed for M9.0, M8.7, M8.4, and M8.1 scenarios, representing persistent, long-lasting shaking together with associated geologic threats such as landslides, liquefaction-induced ground deformation, vertical displacement from fault rupture, and tsunamis. These ground deformations endanger urban structures, foundations, bridges, roadways, pipelines, and other lifelines. Lifeline providers in Oregon, including the private and public organizations responsible for transportation, electric and gas utilities, water and wastewater, fuel, airports, and harbors, face an aging infrastructure that was built prior to a full understanding of this extreme seismic risk. As recently experienced in Chile and Japan, the three- to five-minute-long shaking expected from a Cascadia earthquake necessitates an entirely different approach to risk mitigation for these major lifelines than those developed for the shorter shaking of crustal earthquakes. A web-based geographic information system (GIS) tool was developed to assess the potential hazards posed to the region by Cascadia subduction zone earthquakes. The purpose of the website is to provide easy access over the web to the latest and best available hazard information, including work completed for the recent Oregon Resilience Plan (ORP) (OSSPAC, 2013) and other work by the Department of Geology and Mineral Industries (DOGAMI) and the United States Geological Survey (USGS). The tool is designed for engineers, planners, geologists, and others who need this information to make appropriate decisions, and it requires only minimal knowledge of GIS.

  10. Application of the precipitation-runoff model in the Warrior coal field, Alabama

    USGS Publications Warehouse

    Kidd, Robert E.; Bossong, C.R.

    1987-01-01

    A deterministic precipitation-runoff model, the Precipitation-Runoff Modeling System, was applied in two small basins located in the Warrior coal field, Alabama. Each basin has distinct geologic, hydrologic, and land-use characteristics. Bear Creek basin (15.03 square miles) is undisturbed, is underlain almost entirely by consolidated coal-bearing rocks of Pennsylvanian age (Pottsville Formation), and is drained by an intermittent stream. Turkey Creek basin (6.08 square miles) contains a surface coal mine and is underlain by both the Pottsville Formation and unconsolidated clay, sand, and gravel deposits of Cretaceous age (Coker Formation). Aquifers in the Coker Formation sustain flow through extended rainless periods. Preliminary daily and storm calibrations were developed for each basin. Initial parameter and variable values were determined according to techniques recommended in the user's manual for the modeling system and through field reconnaissance. Parameters with meaningful sensitivity were identified and adjusted to match hydrograph shapes and to compute realistic water year budgets. When the developed calibrations were applied to data exclusive of the calibration period as a verification exercise, results were comparable to those for the calibration period. The model calibrations included preliminary parameter values for the various categories of geology and land use in each basin. The parameter values for areas underlain by the Pottsville Formation in the Bear Creek basin were transferred directly to similar areas in the Turkey Creek basin, and these parameter values were held constant throughout the model calibration. Parameter values for all geologic and land-use categories addressed in the two calibrations can probably be used in ungaged basins where similar conditions exist. The parameter transfer worked well, as a good calibration was obtained for Turkey Creek basin.

  11. Tsunami Hazard Assessment of the Northern Oregon Coast: A Multi-Deterministic Approach Tested at Cannon Beach, Oregon

    NASA Astrophysics Data System (ADS)

    Priest, G. R.; Goldfinger, C.; Wang, K.; Witter, R. C.; Zhang, Y.; Baptista, A.

    2008-12-01

    To update the tsunami hazard assessment method for Oregon, we (1) evaluate geologically reasonable variability of the earthquake rupture process on the Cascadia megathrust, (2) compare those scenarios to geological and geophysical evidence for plate locking, (3) specify 25 deterministic earthquake sources, and (4) use the resulting vertical coseismic deformations as initial conditions for simulation of Cascadia tsunami inundation at Cannon Beach, Oregon. Because of the Cannon Beach focus, the north-south extent of the source scenarios is limited to the stretch between Neah Bay, Washington, and Florence, Oregon. We use the marine paleoseismic record to establish recurrence bins from the 10,000-year event record and select representative coseismic slips from these data. Assumed slips on the megathrust are 8.4 m (290 yrs of convergence), 15.2 m (525 years of convergence), 21.6 m (748 years of convergence), and 37.5 m (1298 years of convergence) which, if the sources were extended to the entire Cascadia margin, would give Mw varying from approximately 8.3 to 9.3. Additional parameters explored by these scenarios characterize ruptures with a buried megathrust versus splay faulting, local versus regional slip patches, and seaward-skewed versus symmetrical slip distribution. By assigning variable weights to the 25 source scenarios using a logic tree approach, we derived percentile inundation lines that express the confidence level (percentage) that a Cascadia tsunami will NOT exceed the line. Lines of 50, 70, 90, and 99 percent confidence correspond to maximum runup of 8.9, 10.5, 13.2, and 28.4 m (NAVD88). The tsunami source with the highest logic tree weight (preferred scenario) involved rupture of a splay fault with 15.2 m slip that produced tsunami inundation near the 70 percent confidence line. 
Minimum inundation consistent with the inland extent of three Cascadia tsunami sand layers deposited east of Cannon Beach within the last 1000 years suggests a minimum of 15.2 m of slip on buried megathrust ruptures. The largest tsunami runup at the 99 percent isoline was from 37.5 m of slip partitioned to a splay fault. This type of extreme event is considered very rare, perhaps once in 10,000 years based on offshore paleoseismic evidence, but it can produce waves rivaling the 2004 Indian Ocean tsunami. Cascadia coseismic deformation most similar to that of the Indian Ocean earthquake produced generally smaller tsunamis, due mostly to water depths about 1 km shallower on the Cascadia margin. Inundation from distant tsunami sources was assessed by simulating only two Mw 9.2 earthquakes in the Gulf of Alaska: a hypothetical worst case developed by the Tsunami Pilot Study Working Group (2006) and a historical worst case, the 1964 Prince William Sound earthquake; maximum runups were, respectively, 12.4 m and 7.5 m.
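
The logic-tree weighting used to derive the percentile lines can be illustrated with a toy calculation: sort the scenario runups, accumulate the weights, and read off the smallest runup whose cumulative weight reaches the desired confidence. The weights and runups below are hypothetical, not the study's 25-scenario values.

```python
# Hypothetical logic-tree branches: (weight, modeled maximum runup in m).
scenarios = [
    (0.30, 6.0), (0.25, 9.0), (0.20, 11.0), (0.15, 14.0), (0.10, 28.0),
]

def runup_at_confidence(scenarios, conf):
    """Smallest runup whose cumulative logic-tree weight reaches `conf`,
    i.e. the line a tsunami will not exceed with that confidence."""
    total = 0.0
    for w, r in sorted(scenarios, key=lambda s: s[1]):  # ascending runup
        total += w
        if total >= conf - 1e-12:                       # tolerate float roundoff
            return r
    return max(r for _, r in scenarios)
```

Reading off several confidence levels from the same weighted scenario set is exactly how a single suite of deterministic simulations yields the family of 50/70/90/99 percent inundation lines.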

  12. [Gene method for inconsistent hydrological frequency calculation. I: Inheritance, variability and evolution principles of hydrological genes].

    PubMed

    Xie, Ping; Wu, Zi Yi; Zhao, Jiang Yan; Sang, Yan Fang; Chen, Jie

    2018-04-01

    A stochastic hydrological process is influenced by both stochastic and deterministic factors. A hydrological time series contains not only pure random components reflecting its inheritance characteristics, but also deterministic components reflecting variability characteristics, such as jump, trend, period, and stochastic dependence. As a result, the stochastic hydrological process presents complicated evolution phenomena and rules. To better understand these complicated phenomena and rules, this study described the inheritance and variability characteristics of an inconsistent hydrological series from two aspects: stochastic process simulation and time series analysis. In addition, several frequency analysis approaches for inconsistent time series were compared to reveal the main problems in inconsistency study. Then, we proposed a new concept of hydrological genes, analogous to biological genes, to describe inconsistent hydrological processes. The hydrological genes were constructed using methods of moments, such as general moments, weight-function moments, probability weighted moments, and L-moments. Meanwhile, the five components of a stochastic hydrological process (jump, trend, periodic, dependence, and pure random) were defined as five hydrological bases. With this method, the inheritance and variability of inconsistent hydrological time series were synthetically considered and the inheritance, variability, and evolution principles were fully described. Our study would contribute to revealing the inheritance, variability, and evolution principles in the probability distribution of hydrological elements.

  13. Model of the material removal function and an experimental study on a magnetorheological finishing process using a small ball-end permanent-magnet polishing head.

    PubMed

    Chen, Mingjun; Liu, Henan; Cheng, Jian; Yu, Bo; Fang, Zhen

    2017-07-01

    In order to achieve the deterministic finishing of optical components with concave surfaces of a curvature radius less than 10 mm, a novel magnetorheological finishing (MRF) process using a small ball-end permanent-magnet polishing head with a diameter of 4 mm is introduced. The characteristics of material removal in the proposed MRF process are studied. The model of the material removal function for the proposed MRF process is established based on the three-dimensional hydrodynamics analysis and Preston's equation. The shear stress on the workpiece surface is calculated by means of resolving the presented mathematical model using a numerical solution method. The analysis result reveals that the material removal in the proposed MRF process shows a positive dependence on shear stress. Experimental research is conducted to investigate the effect of processing parameters on the material removal rate and improve the surface accuracy of a typical rotational symmetrical optical component. The experimental results show that the surface accuracy of the finished component of K9 glass material has been improved to 0.14 μm (PV) from the initial 0.8 μm (PV), and the finished surface roughness Ra is 0.0024 μm. It indicates that the proposed MRF process can be used to achieve the deterministic removal of surface material and perform the nanofinishing of small curvature radius concave surfaces.
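
The Preston relation invoked in the abstract has a simple closed form: removal rate is proportional to pressure and relative surface speed through an empirical coefficient, so removal depth scales with dwell time. A minimal sketch with placeholder values, not measured MRF parameters:

```python
# Preston's equation: dz/dt = k_p * p * v, where k_p is an empirical Preston
# coefficient, p the local contact pressure, and v the relative surface speed.
# All numbers used below are illustrative placeholders.

def preston_removal_depth(k_p, pressure, velocity, dwell_time):
    """Depth of material removed at a point under constant pressure and speed."""
    return k_p * pressure * velocity * dwell_time
```

In deterministic finishing, a measured removal function of this kind is deconvolved against the surface error map to compute the dwell-time distribution for the polishing head.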

  14. A probabilistic approach for channel initiation

    Treesearch

    Erkan Istanbulluoglu; David G. Tarboton; Robert T. Pack; Charles H. Luce

    2002-01-01

    The channel head represents an important transition point from hillslope to fluvial processes. There is a nonlinear threshold transition across the channel head with sediment transport much larger in channels than on hillslopes. Deterministic specific catchment area, a, thresholds for channel initiation, sometimes dependent on slope, S...

  15. ON JOINT DETERMINISTIC GRID MODELING AND SUB-GRID VARIABILITY CONCEPTUAL FRAMEWORK FOR MODEL EVALUATION

    EPA Science Inventory

    The general situation, (but exemplified in urban areas), where a significant degree of sub-grid variability (SGV) exists in grid models poses problems when comparing gridbased air quality modeling results with observations. Typically, grid models ignore or parameterize processes ...

  16. Analytical approximations for spatial stochastic gene expression in single cells and tissues

    PubMed Central

    Smith, Stephen; Cianci, Claudia; Grima, Ramon

    2016-01-01

    Gene expression occurs in an environment in which both stochastic and diffusive effects are significant. Spatial stochastic simulations are computationally expensive compared with their deterministic counterparts, and hence little is currently known of the significance of intrinsic noise in a spatial setting. Starting from the reaction–diffusion master equation (RDME) describing stochastic reaction–diffusion processes, we here derive expressions for the approximate steady-state mean concentrations which are explicit functions of the dimensionality of space, rate constants and diffusion coefficients. The expressions have a simple closed form when the system consists of one effective species. These formulae show that, even for spatially homogeneous systems, mean concentrations can depend on diffusion coefficients: this contradicts the predictions of deterministic reaction–diffusion processes, thus highlighting the importance of intrinsic noise. We confirm our theory by comparison with stochastic simulations, using the RDME and Brownian dynamics, of two models of stochastic and spatial gene expression in single cells and tissues. PMID:27146686

  17. Decision-making and evacuation planning for flood risk management in the Netherlands.

    PubMed

    Kolen, Bas; Helsloot, Ira

    2014-07-01

    A traditional view of decision-making for evacuation planning is that, given an uncertain threat, there is a deterministic way of defining the best decision. In other words, there is a linear relation between threat, decision, and execution consequences; alternatives and the impact of uncertainties are not taken into account. This study considers the 'top strategic decision-making' for mass evacuation owing to flooding in the Netherlands. It reveals that the top strategic decision-making process itself is probabilistic because of the decision-makers involved and their crisis managers (as advisers). The paper concludes that deterministic planning is not sufficient, and it recommends probabilistic planning that considers uncertainties in the decision-making process itself as well as other uncertainties, such as forecasts, citizens' responses, and the capacity of infrastructure. This results in less optimistic, but more realistic, strategies and a need to pay attention to alternative strategies. © 2014 The Author(s). Disasters © Overseas Development Institute, 2014.

  18. Usefulness of multiqubit W-type states in quantum information processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, P.; Adhikari, S.; Kumar, A., E-mail: atulk@iitj.ac.in

    We analyze the efficiency of multiqubit W-type states as resources for quantum information. For this, we identify and generalize four-qubit W-type states. Our results show that these states can be used as resources for deterministic quantum information processing. The utility of results, however, is limited by the availability of experimental setups to perform and distinguish multiqubit measurements. We therefore emphasize protocols where two users want to establish an optimal bipartite entanglement using the partially entangled W-type states. We find that for such practical purposes, four-qubit W-type states can be a better resource in comparison to three-qubit W-type states. For a dense coding protocol, our states can be used deterministically to send two bits of classical message by locally manipulating a single qubit. In addition, we also propose a realistic experimental method to prepare the four-qubit W-type states using standard unitary operations and weak measurements.

  19. Stochastic Optimization for Unit Commitment-A Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Qipeng P.; Wang, Jianhui; Liu, Andrew L.

    2015-07-01

    Optimization models have been widely used in the power industry to aid the decision-making process of scheduling and dispatching electric power generation resources, a process known as unit commitment (UC). Since UC's birth, there have been two major waves of revolution in UC research and real-life practice. The first wave made mixed integer programming stand out from the early solution and modeling approaches for deterministic UC, such as priority lists, dynamic programming, and Lagrangian relaxation. With the high penetration of renewable energy, increasing deregulation of the electricity industry, and growing demands on system reliability, the next wave is focused on transitioning from traditional deterministic approaches to stochastic optimization for unit commitment. Since the literature has grown rapidly in the past several years, this paper reviews the works that have contributed to the modeling and computational aspects of stochastic optimization (SO) based UC. Relevant lines of future research are also discussed to help transform research advances into real-world applications.

  20. Image-Based Airborne Sensors: A Combined Approach for Spectral Signatures Classification through Deterministic Simulated Annealing

    PubMed Central

    Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier

    2009-01-01

    The increasing technology of high-resolution image airborne sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is toward the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and Fuzzy Clustering. DSA is an optimization approach, which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used in the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989
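
Deterministic annealing of the kind underpinning the DSA framework can be sketched in a few lines: soft cluster memberships computed at a temperature T are gradually hardened as T is lowered, which helps the prototypes escape poor local minima of the energy. The one-dimensional data and cooling schedule below are toy choices, not the paper's spectral-signature setting.

```python
import numpy as np

# Toy 1-D data with two obvious clusters; placeholder, not airborne imagery.
data = np.array([0.0, 0.2, 0.1, 5.0, 5.2, 4.9])

def deterministic_annealing(data, T0=5.0, cooling=0.7, T_min=0.01, steps=40):
    """Two-prototype deterministic annealing: soft assignments hardened as T drops."""
    centers = np.array([data.min(), data.max()])       # two initial prototypes
    T = T0
    for _ in range(steps):
        d2 = (data[None, :] - centers[:, None]) ** 2   # squared distances (2 x N)
        p = np.exp(-d2 / T)
        p /= p.sum(axis=0, keepdims=True)              # soft memberships per point
        centers = (p * data).sum(axis=1) / p.sum(axis=1)  # weighted centroid update
        T = max(T * cooling, T_min)                    # anneal the temperature
    return np.sort(centers)
```

At high T every point contributes to every prototype (a smoothed energy landscape); as T falls, the memberships approach hard assignments, which is the mechanism DSA uses to avoid local minima.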

  1. A deterministic and stochastic velocity model for the Salton Trough/Basin and Range transition zone and constraints on magmatism during rifting

    NASA Astrophysics Data System (ADS)

    Larkin, Steven P.; Levander, Alan; Okaya, David; Goff, John A.

    1996-12-01

    As a high resolution addition to the 1992 Pacific to Arizona Crustal Experiment (PACE), a 45-km-long deep crustal seismic reflection profile was acquired across the Chocolate Mountains in southeastern California to illuminate crustal structure in the transition between the Salton Trough and the Basin and Range province. The complex seismic data are analyzed for both large-scale (deterministic) and fine-scale (stochastic) crustal features. A low-fold near-offset common-midpoint (CMP) stacked section shows the northeastward lateral extent of a high-velocity lower crustal body which is centered beneath the Salton Trough. Off-end shots record a high-amplitude diffraction from the point where the high velocity lower crust pinches out at the Moho. Above the high-velocity lower crust, moderate-amplitude reflections occur at midcrustal levels. These reflections display the coherency and frequency characteristics of reflections backscattered from a heterogeneous velocity field, which we model as horizontal intrusions with a von Kármán (fractal) distribution. The effects of upper crustal scattering are included by combining the mapped surface geology and laboratory measurements of exposed rocks within the Chocolate Mountains to reproduce the upper crustal velocity heterogeneity in our crustal velocity model. Viscoelastic finite difference simulations indicate that the volume of mafic material within the reflective zone necessary to produce the observed backscatter is about 5%. The presence of wavelength-scale heterogeneity within the near-surface, upper, and middle crust also produces a 0.5-s-thick zone of discontinuous reflections from a crust-mantle interface which is actually a first-order discontinuity.

  2. Simulation of earthquake ground motions in the eastern United States using deterministic physics‐based and site‐based stochastic approaches

    USGS Publications Warehouse

    Rezaeian, Sanaz; Hartzell, Stephen; Sun, Xiaodan; Mendoza, Carlos

    2017-01-01

    Earthquake ground‐motion recordings are scarce in the central and eastern United States (CEUS) for large‐magnitude events and at close distances. We use two different simulation approaches, a deterministic physics‐based method and a site‐based stochastic method, to simulate ground motions over a wide range of magnitudes. Drawing on previous results for the modeling of recordings from the 2011 Mw 5.8 Mineral, Virginia, earthquake and using the 2001 Mw 7.6 Bhuj, India, earthquake as a tectonic analog for a large magnitude CEUS event, we are able to calibrate the two simulation methods over this magnitude range. Both models show a good fit to the Mineral and Bhuj observations from 0.1 to 10 Hz. Model parameters are then adjusted to obtain simulations for Mw 6.5, 7.0, and 7.6 events in the CEUS. Our simulations are compared with the 2014 U.S. Geological Survey weighted combination of existing ground‐motion prediction equations in the CEUS. The physics‐based simulations show comparable response spectral amplitudes and a fairly similar attenuation with distance. The site‐based stochastic simulations suggest a slightly faster attenuation of the response spectral amplitudes with distance for larger magnitude events and, as a result, slightly lower amplitudes at distances greater than 200 km. Both models are plausible alternatives and, given the few available data points in the CEUS, can be used to represent the epistemic uncertainty in modeling of postulated CEUS large‐magnitude events.

  3. Introductory Geology From the Liberal Arts Approach: A Geology-Sociology Linked Course

    NASA Astrophysics Data System (ADS)

    Walsh, E. O.; Davis, E.

    2008-12-01

    Geology can be a hard sell to college students, especially to college students attending small, liberal arts institutions in localities that lack exaggerated topography. At these schools, Geology departments that wish to grow must work diligently to attract students to the major; professors must be able to convince a wider audience of students that geology is relevant to their everyday lives. Toward this end, a Physical Geology course was linked with an introductory Sociology course through the common theme of Consumption. The same students took the two courses in sequence, beginning with the Sociology course and ending with Physical Geology; thus, students began by discussing the role of consumption in society and ended by learning about the geological processes and implications of consumption. Students were able to ascertain the importance of geology in their daily lives by connecting Earth processes to specific products they consume, such as cell phones and bottled water. Students were also able to see the connection between seemingly disparate fields of study, which is a major goal of the liberal arts. As a theme, Consumption worked well to grab the attention of students interested in diverse issues, such as environmental science or social justice. A one-hour lecture illustrating the link between sociology and geology was developed for presentation to incoming freshmen and their parents to advertise the course. Initial response has been positive, showing an increase in awareness of geological processes among students with a wide range of interests.

  4. Adaptive correction of ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Pelosi, Anna; Battista Chirico, Giovanni; Van den Bergh, Joris; Vannitsem, Stephane

    2017-04-01

    Forecasts from numerical weather prediction (NWP) models often suffer from both systematic and non-systematic errors. These are present in both deterministic and ensemble forecasts, and originate from various sources such as model error and subgrid variability. Statistical post-processing techniques can partly remove such errors, which is particularly important when NWP outputs concerning surface weather variables are employed for site-specific applications. Many different post-processing techniques have been developed. For deterministic forecasts, adaptive methods such as the Kalman filter are often used, which sequentially post-process the forecasts by continuously updating the correction parameters as new ground observations become available. These methods are especially valuable when long training data sets do not exist. For ensemble forecasts, well-known techniques are ensemble model output statistics (EMOS) and so-called "member-by-member" approaches (MBM). Here, we introduce a new adaptive post-processing technique for ensemble predictions. The proposed method is a sequential Kalman filtering technique that fully exploits the information content of the ensemble. One correction equation is retrieved and applied to all members; however, the parameters of the regression equation are estimated by exploiting the second-order statistics of the forecast ensemble. We compare our new method with two other techniques: a simple method that makes use of a running bias correction of the ensemble mean, and an MBM post-processing approach that rescales the ensemble mean and spread, based on minimization of the Continuous Ranked Probability Score (CRPS). We perform a verification study for the region of Campania in southern Italy. 
We use two years (2014-2015) of daily meteorological observations of 2-meter temperature and 10-meter wind speed from 18 ground-based automatic weather stations distributed across the region, comparing them with the corresponding COSMO-LEPS ensemble forecasts. Deterministic verification scores (e.g., mean absolute error, bias) and probabilistic scores (e.g., CRPS) are used to evaluate the post-processing techniques. We conclude that the new adaptive method outperforms the simpler running bias-correction. The proposed adaptive method often outperforms the MBM method in removing bias. The MBM method has the advantage of correcting the ensemble spread, although it needs more training data.
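
The adaptive (Kalman filter) style of post-processing discussed above can be sketched for the scalar case: treat the forecast bias as a slowly varying hidden state following a random walk, and refine its estimate each time a new observation arrives. The noise variances here are illustrative choices, not tuned COSMO-LEPS values.

```python
# Scalar adaptive bias correction of a (ensemble-mean) forecast; q is the
# assumed bias-drift variance, r the observation-error variance (both toy values).

def kalman_bias_filter(forecasts, observations, q=0.1, r=1.0):
    """Track the slowly varying forecast bias b; issue forecast - b, then update b."""
    b, p = 0.0, 1.0                        # bias estimate and its variance
    corrected = []
    for f, y in zip(forecasts, observations):
        corrected.append(f - b)            # correct this forecast with current bias
        p += q                             # predict: bias follows a random walk
        k = p / (p + r)                    # Kalman gain
        b += k * ((f - y) - b)             # innovation: observed error minus bias
        p *= (1.0 - k)                     # update the estimate's variance
    return corrected
```

Because the gain shrinks as the estimate stabilizes, the filter adapts quickly at the start (useful when long training sets do not exist) and then settles into a slow tracking mode.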

  5. The relationship between stochastic and deterministic quasi-steady state approximations.

    PubMed

    Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R

    2015-11-23

    The quasi-steady-state approximation (QSSA) is frequently used to reduce deterministic models of biochemical networks. The resulting equations provide a simplified description of the network in terms of non-elementary reaction functions (e.g. Hill functions). Such deterministic reductions are frequently a basis for heuristic stochastic models in which non-elementary reaction functions are used to define reaction propensities. Despite their popularity, it remains unclear when such stochastic reductions are valid. It is frequently assumed that the stochastic reduction can be trusted whenever its deterministic counterpart is accurate. However, a number of recent examples show that this is not necessarily the case. Here we explain the origin of these discrepancies, and demonstrate a clear relationship between the accuracy of the deterministic and the stochastic QSSA for examples widely used in biological systems. With an analysis of a two-state promoter model, and numerical simulations for a variety of other models, we find that the stochastic QSSA is accurate whenever its deterministic counterpart provides an accurate approximation over a range of initial conditions which cover the likely fluctuations from the quasi steady state (QSS). We conjecture that this relationship provides a simple and computationally inexpensive way to test the accuracy of reduced stochastic models using deterministic simulations. The stochastic QSSA is one of the most popular multi-scale stochastic simulation methods. While the use of the QSSA and the resulting non-elementary functions has been justified in the deterministic case, it is not clear when their stochastic counterparts are accurate. In this study, we show how the accuracy of the stochastic QSSA can be tested using its deterministic counterpart, providing a concrete method to test when non-elementary rate functions can be used in stochastic simulations.
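
The deterministic side of the QSSA can be made concrete with the classic enzyme kinetics example: when the enzyme-substrate complex equilibrates quickly, the full mass-action model reduces to a single Michaelis-Menten (non-elementary) rate law. The rate constants below are illustrative, and a simple forward-Euler comparison stands in for a proper ODE solver.

```python
# Deterministic QSSA for E + S <-> C -> E + P (illustrative mass-action rates):
# the fast complex C is slaved to S, giving the Michaelis-Menten reduction.
k1, km1, k2 = 100.0, 100.0, 10.0       # binding, unbinding, catalysis rates
E_T = 0.1                               # total enzyme (small, so the QSSA holds)
KM = (km1 + k2) / k1                    # Michaelis constant

def full_model(S0=10.0, dt=1e-4, t_end=2.0):
    """Forward-Euler integration of the full two-variable mass-action model."""
    S, C = S0, 0.0
    for _ in range(int(t_end / dt)):
        dS = -k1 * (E_T - C) * S + km1 * C
        dC = k1 * (E_T - C) * S - (km1 + k2) * C
        S, C = S + dt * dS, C + dt * dC
    return S

def reduced_model(S0=10.0, dt=1e-4, t_end=2.0):
    """Reduced model: substrate decays via the non-elementary QSSA rate law."""
    S = S0
    for _ in range(int(t_end / dt)):
        S -= dt * k2 * E_T * S / (KM + S)   # v = Vmax * S / (KM + S)
    return S
```

The question the paper addresses is precisely when the non-elementary rate in `reduced_model` may also be used as a propensity in a stochastic (Gillespie-type) simulation.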

  6. The role of deterministic analyses and seismotectonic data in earthquake risk assessment, Istanbul, Turkey.

    NASA Astrophysics Data System (ADS)

    Pondard, Nicolas; Armijo, Rolando; King, Geoffrey C. P.; Meyer, Bertrand; Ucarkus, Gulsen

    2010-05-01

Seismotectonic methods allowing quantitative measures of the frequency and severity of earthquakes have greatly advanced over the last 30 years, aided by high-resolution imagery, digital topography and modern dating techniques. During the same period, deterministic models based on the physics of earthquakes (Coulomb stress interactions) have been extensively developed to explain the distribution of earthquakes in space and time. Seismotectonic data and Coulomb stress models provide valuable information on seismic hazard and could assist the public policy, disaster risk management and financial risk transfer communities in making more informed decisions around their strategic planning and risk management activities. The Sea of Marmara and Istanbul regions (North Anatolian Fault, NAF) are among the most appropriate on Earth for analysing seismic hazard, because reliable data almost completely cover two seismic cycles (the past ~500 years). Earthquake ruptures associated with historical events have been found in the direct vicinity of the city, on the Marmara sea floor. The MARMARASCARPS cruise, using an unmanned submersible (ROV), provides direct observations of the morphology and geology of those ruptures, their distribution and geometry. These observations are crucial for quantifying the magnitude of past earthquakes along the submarine fault system (e.g. 1894, 1912, 1999, M > 7). In particular, the identification of a break continuous over 60 km, with a right-lateral slip of 5 m, probably corresponding to the offshore extension of the Ganos earthquake rupture (1912, Ms 7.4), substantially modifies our understanding of the current state of loading along the NAF next to Istanbul. Coulomb stress analysis is used to characterise the loading evolution of well-identified fault segments, including secular loading from below and lateral loading imposed by the occurrence of previous earthquakes.
The 20th century earthquake sequence in the region of Istanbul is modelled using geological and geophysical records. For the 18th century M≥7.0 earthquake clusters, we construct scenarios consistent with the tectonic and historical data. Coulomb stress modeling including the 20th and 18th century historical events shows a current zone of maximum loading along a 70 km long strike-slip segment, south-west of Istanbul, with at least 4-5 m of slip deficit. That segment alone would be capable of generating a large magnitude earthquake (Mw 7.2). Other segments in Marmara appear less loaded.

  7. First principles pulse pile-up balance equation and fast deterministic solution

    NASA Astrophysics Data System (ADS)

    Sabbatucci, Lorenzo; Fernández, Jorge E.

    2017-08-01

Pulse pile-up (PPU) is an ever-present effect that introduces distortion into the spectrum measured with radiation detectors and that worsens as the emission rate of the radiation source increases. It is fully ascribable to the pulse-handling circuitry of the detector and is not included in the detector response function, which is well explained by a physical model. PPU changes both the number and the height of the recorded pulses, which are related, respectively, to the number of detected particles and their energy. In the present work, a first-principles balance equation for second-order PPU is derived to obtain a post-processing correction applicable to X-ray measurements. The balance equation is solved for the particular case of a rectangular pulse shape using a deterministic iterative procedure whose convergence is demonstrated. The proposed method, deterministic rectangular PPU (DRPPU), requires a minimum amount of information and, as an example, is applied to a solid-state Si detector with active or off-line PPU suppression circuitry. A comparison shows that the results obtained with this fast and simple approach are comparable to those from the more sophisticated procedure using precise detector pulse shapes.
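The iterative structure of such a correction can be illustrated with a toy fixed-point scheme (not the paper's DRPPU equations): if second-order pile-up adds a term proportional to the self-convolution of the true spectrum s, so that the measured spectrum is m ≈ s + τ(s⊗s) with a hypothetical pile-up factor τ, then s can be recovered by iterating s ← m − τ(s⊗s):

```python
import numpy as np

# Toy second-order pile-up model: m = s + tau * (s conv s), truncated to
# the spectrum length; tau is a hypothetical pile-up probability factor.
tau = 0.05
s_true = np.array([0.0, 0.6, 0.3, 0.1, 0.0, 0.0, 0.0, 0.0])
m = s_true + tau * np.convolve(s_true, s_true)[:len(s_true)]

# deterministic fixed-point iteration: s <- m - tau * (s conv s)
s = m.copy()
for _ in range(100):
    s = m - tau * np.convolve(s, s)[:len(s)]
```

For small τ the map is a contraction, so the iteration converges rapidly to the pile-up-free spectrum.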

  8. Computation of a Canadian SCWR unit cell with deterministic and Monte Carlo codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrisson, G.; Marleau, G.

    2012-07-01

The Canadian SCWR has the potential to achieve the goals that generation IV nuclear reactors must meet. As part of the optimization process for this design concept, lattice cell calculations are routinely performed using deterministic codes. In this study, the first step (self-shielding treatment) of the computation scheme developed with the deterministic code DRAGON for the Canadian SCWR has been validated. Some options available in the module responsible for the resonance self-shielding calculation in DRAGON 3.06, together with different microscopic cross-section libraries based on the ENDF/B-VII.0 evaluated nuclear data file, have been tested and compared to a reference calculation performed with the Monte Carlo code SERPENT under the same conditions. Compared to SERPENT, DRAGON underestimates the infinite multiplication factor in all cases. In general, the original Stammler model with the Livolant-Jeanpierre approximations is the most appropriate self-shielding option for this case study. In addition, the 89-group WIMS-AECL library for slightly enriched uranium and the 172-group WLUP library for a mixture of plutonium and thorium give the results most consistent with those of SERPENT. (authors)

  9. Automated Flight Routing Using Stochastic Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Ng, Hok K.; Morando, Alex; Grabbe, Shon

    2010-01-01

Airspace capacity reduction due to convective weather impedes air traffic flows and causes traffic congestion. This study presents an algorithm, based on stochastic dynamic programming, that reroutes flights in the presence of winds, en-route convective weather, and congested airspace. A stochastic disturbance model incorporates capacity uncertainty into the reroute design process. A trajectory-based airspace demand model is employed for calculating current and future airspace demand. The optimal routes minimize the total expected travel time, weather incursion, and induced congestion costs. They are compared to weather-avoidance routes calculated using deterministic dynamic programming. The stochastic reroutes have a smaller deviation probability than their deterministic counterparts when both reroutes have similar total flight distance. The stochastic rerouting algorithm takes into account all convective weather fields at all severity levels, while the deterministic algorithm only accounts for convective weather systems exceeding a specified level of severity. When the stochastic reroutes are compared to the actual flight routes, they have similar total flight time, and both have about 1% of travel time crossing congested en-route sectors on average. The actual flight routes induce slightly less traffic congestion than the stochastic reroutes but intercept more severe convective weather.
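The core idea — a backward recursion over expected costs under uncertain cell states — can be sketched on a toy grid (the costs, probabilities, and move set below are invented for illustration; the paper's trajectory and weather models are far richer):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
p_bad = rng.uniform(0.0, 0.5, (n, n))       # prob. a cell is weather-impacted
c_clear, c_bad = 1.0, 5.0                   # traversal costs (hypothetical)
ec = (1 - p_bad) * c_clear + p_bad * c_bad  # expected cell cost

# backward value iteration: V[i, j] = expected cost-to-go to (n-1, n-1),
# moving only right or down
V = np.full((n, n), np.inf)
V[n - 1, n - 1] = ec[n - 1, n - 1]
for i in range(n - 1, -1, -1):
    for j in range(n - 1, -1, -1):
        if (i, j) == (n - 1, n - 1):
            continue
        nxt = min(V[i + 1, j] if i + 1 < n else np.inf,
                  V[i, j + 1] if j + 1 < n else np.inf)
        V[i, j] = ec[i, j] + nxt
```

Tracing the minimizing move from (0, 0) recovers the route with minimum expected cost; a deterministic variant would simply threshold `p_bad` instead of averaging over it.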

  10. PCEMCAN - Probabilistic Ceramic Matrix Composites Analyzer: User's Guide, Version 1.0

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Mital, Subodh K.; Murthy, Pappu L. N.

    1998-01-01

PCEMCAN (Probabilistic CEramic Matrix Composites ANalyzer) is an integrated computer code developed at NASA Lewis Research Center that simulates uncertainties associated with the constituent properties, manufacturing process, and geometric parameters of fiber-reinforced ceramic matrix composites and quantifies their random thermomechanical behavior. The PCEMCAN code can perform deterministic as well as probabilistic analyses to predict thermomechanical properties. This user's guide details the step-by-step procedure to create an input file and to update or modify the material properties database required to run the PCEMCAN computer code. An overview of the geometric conventions, micromechanical unit cell, nonlinear constitutive relationship and probabilistic simulation methodology is also provided in the manual. Fast probability integration as well as Monte Carlo simulation methods are available for the uncertainty simulation. The various options available in the code to simulate probabilistic material properties and to quantify the sensitivity of the primitive random variables are described, and deterministic as well as probabilistic results are illustrated using demonstration problems. For a detailed theoretical description of the deterministic and probabilistic analyses, the user is referred to the companion documents "Computational Simulation of Continuous Fiber-Reinforced Ceramic Matrix Composite Behavior," NASA TP-3602, 1996, and "Probabilistic Micromechanics and Macromechanics for Ceramic Matrix Composites," NASA TM-4766, June 1997.
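The Monte Carlo side of such an uncertainty simulation can be sketched with a rule-of-mixtures property; the constituent distributions below are invented for illustration and are not PCEMCAN's models or data:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
# hypothetical constituent scatter (mean, std), not PCEMCAN inputs
Ef = rng.normal(380e9, 20e9, N)    # fiber modulus (Pa)
Em = rng.normal(300e9, 30e9, N)    # matrix modulus (Pa)
Vf = rng.normal(0.40, 0.02, N)     # fiber volume fraction

# rule-of-mixtures longitudinal modulus, propagated sample-by-sample
E = Vf * Ef + (1.0 - Vf) * Em
mean, std = E.mean(), E.std()
```

Sampling the primitive random variables one at a time while holding the others fixed gives the kind of sensitivity ranking the code's options expose.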

  11. Accessing the dark exciton spin in deterministic quantum-dot microlenses

    NASA Astrophysics Data System (ADS)

    Heindel, Tobias; Thoma, Alexander; Schwartz, Ido; Schmidgall, Emma R.; Gantz, Liron; Cogan, Dan; Strauß, Max; Schnauber, Peter; Gschrey, Manuel; Schulze, Jan-Hindrik; Strittmatter, Andre; Rodt, Sven; Gershoni, David; Reitzenstein, Stephan

    2017-12-01

The dark exciton state in semiconductor quantum dots (QDs) constitutes a long-lived solid-state qubit which has the potential to play an important role in implementations of solid-state-based quantum information architectures. In this work, we exploit deterministically fabricated QD microlenses, which promise enhanced photon extraction, to optically prepare and read out the dark exciton spin and observe its coherent precession. The optical access to the dark exciton is provided via spin-blockaded metastable biexciton states acting as heralding states, which are identified by deploying polarization-sensitive spectroscopy as well as time-resolved photon cross-correlation experiments. Our experiments reveal a spin-precession period of the dark exciton of (0.82 ± 0.01) ns corresponding to a fine-structure splitting of (5.0 ± 0.7) μeV between its eigenstates |↑ ⇑ ±↓ ⇓ ⟩. By exploiting microlenses deterministically fabricated above pre-selected QDs, our work demonstrates the possibility to scale up implementations of quantum information processing schemes using the QD-confined dark exciton spin qubit, such as the generation of photonic cluster states or the realization of a solid-state-based quantum memory.
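The two quoted numbers are consistent with the elementary relation between a precession period T and an energy splitting, Δ = h/T; a quick check using only the stated values:

```python
# fine-structure splitting implied by the measured precession period
h = 4.135667696e-15          # Planck constant in eV*s
T = 0.82e-9                  # dark-exciton spin-precession period (s)
delta_ueV = h / T * 1e6      # splitting Delta = h/T, in micro-eV
# delta_ueV is about 5.04, matching the reported (5.0 +/- 0.7) microeV
```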

  12. The periodic structure of the natural record, and nonlinear dynamics.

    USGS Publications Warehouse

    Shaw, H.R.

    1987-01-01

This paper addresses how nonlinear dynamics can contribute to interpretations of the geologic record and evolutionary processes. Background is given to explain why nonlinear concepts are important. A résumé of personal research is offered to illustrate why I think nonlinear processes fit with observations on geological and cosmological time-series data. The fabric of universal periodicity arrays generated by nonlinear processes is illustrated by means of a simple computer model. I conclude with implications concerning patterns of evolution, stratigraphic boundary events, and close correlations of major geologically instantaneous events (such as impacts or massive volcanic episodes) with any sharply defined boundary in the geologic column. - from Author

  13. Geologic process studies using Synthetic Aperture Radar (SAR) data

    NASA Technical Reports Server (NTRS)

    Evans, Diane L.

    1992-01-01

    The use of SAR data to study geologic processes for better understanding of recent tectonic activity and climate change as well as the mitigation of geologic hazards and exploration for nonrenewable resources is discussed. The geologic processes that are particularly amenable to SAR-based data include volcanism; soil erosion, degradation, and redistribution; coastal erosion and inundation; glacier fluctuations; permafrost; and crustal motions. When SAR data are combined with data from other planned spaceborne sensors including ESA ERS, the Japanese Earth Resources Satellite, and the Canadian Radarsat, it will be possible to build a time-series view of temporal changes over many regions of earth.

  14. Dual Roles for Spike Signaling in Cortical Neural Populations

    PubMed Central

    Ballard, Dana H.; Jehee, Janneke F. M.

    2011-01-01

A prominent feature of signaling in cortical neurons is that of randomness in the action potential. The output of a typical pyramidal cell can be well fit with a Poisson model, and variations in the Poisson rate repeatedly have been shown to be correlated with stimuli. However, while the rate provides a very useful characterization of neural spike data, it may not be the most fundamental description of the signaling code. Recent data showing γ frequency range multi-cell action potential correlations, together with spike-timing-dependent plasticity, are spurring a re-examination of the classical model, since precise timing codes imply that the generation of spikes is essentially deterministic. Could the observed Poisson randomness and timing determinism reflect two separate modes of communication, or do they somehow derive from a single process? We investigate in a timing-based model whether the apparent incompatibility between these probabilistic and deterministic observations may be resolved by examining how spikes could be used in the underlying neural circuits. The crucial component of this model draws on dual roles for spike signaling. In learning receptive fields from ensembles of inputs, spikes need to behave probabilistically, whereas for fast signaling of individual stimuli, the spikes need to behave deterministically. Our simulations show that this combination is possible if deterministic signals using γ latency coding are probabilistically routed through different members of a cortical cell population at different times. This model exhibits standard features characteristic of Poisson models such as orientation tuning and exponential interval histograms. In addition, it makes testable predictions that follow from the γ latency coding. PMID:21687798
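The Poisson hallmarks the model reproduces — exponential interval histograms and a coefficient of variation near 1 — are easy to verify on a synthetic spike train (a generic check, not the paper's circuit model):

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 20.0                                 # mean firing rate (Hz)
isi = rng.exponential(1.0 / rate, 10_000)   # Poisson process => exponential ISIs
spike_times = np.cumsum(isi)

# hallmark of Poisson firing: coefficient of variation of ISIs ~ 1
cv = isi.std() / isi.mean()
```

A deterministic (e.g. latency-coded) train would instead show CV well below 1 at fixed input.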

  15. Field Geology/Processes

    NASA Technical Reports Server (NTRS)

    Allen, Carlton; Jakes, Petr; Jaumann, Ralf; Marshall, John; Moses, Stewart; Ryder, Graham; Saunders, Stephen; Singer, Robert

    1996-01-01

The field geology/process group examined the basic operations of a terrestrial field geologist and the manner in which these operations could be transferred to a planetary lander. Four basic requirements for robotic field geology were determined: geologic content; surface vision; mobility; and manipulation. Geologic content requires a combination of orbital and descent imaging. Surface vision requirements include range, resolution, stereo, and multispectral imaging. The minimum mobility for useful field geology depends on the scale of orbital imagery. Manipulation requirements include exposing unweathered surfaces, screening samples, and bringing samples in contact with analytical instruments. To support these requirements, several advanced capabilities for future development are recommended. Capabilities include near-infrared reflectance spectroscopy, hyper-spectral imaging, multispectral microscopy, artificial intelligence in support of imaging, X-ray diffraction, X-ray fluorescence, and rock chipping.

  16. Look before you build; geologic studies for safer land development in the San Francisco Bay area

    USGS Publications Warehouse

    Blair-Tyler, Martha

    1995-01-01

This Circular provides a general description of the types of geologic hazards that exist throughout the United States. In nontechnical language, it describes how geologic information can be incorporated into the land-use development process, and it discusses several examples, from the San Francisco Bay area and elsewhere in the United States, of how geologic information is already being used in the development process by some cities and counties.

  17. Developing Stochastic Models as Inputs for High-Frequency Ground Motion Simulations

    NASA Astrophysics Data System (ADS)

    Savran, William Harvey

High-frequency (10 Hz) deterministic ground motion simulations are challenged by our understanding of the small-scale structure of the earth's crust and of the rupture process during an earthquake. We will likely never obtain deterministic models that can accurately describe these processes down to the meter-scale lengths required for broadband wave propagation. Instead, we can attempt to explain the behavior, in a statistical sense, by including stochastic models defined by correlations observed in the natural earth and through physics-based simulations of the earthquake rupture process. Toward this goal, we develop stochastic models to address both of the primary considerations for deterministic ground motion simulations: namely, the description of the material properties in the crust, and broadband earthquake source descriptions. Using borehole sonic log data recorded in the Los Angeles basin, we estimate the spatial correlation structure of the small-scale fluctuations in P-wave velocities by determining the best-fitting parameters of a von Karman correlation function. We find that Hurst exponents (nu) between 0.0 and 0.2, vertical correlation lengths (az) of 15-150 m, and a standard deviation (sigma) of about 5% characterize the variability in the borehole data. Using these parameters, we generated a stochastic model of velocity and density perturbations, combined it with leading seismic velocity models, and performed a validation exercise for the 2008 Chino Hills, CA, earthquake using heterogeneous media. We find that models of velocity and density perturbations can have significant effects on the wavefield at frequencies as low as 0.3 Hz, with ensemble median values of various ground motion metrics varying up to +/-50% at certain stations compared to those computed solely from the CVM. Finally, we develop a kinematic rupture generator based on dynamic rupture simulations on geometrically complex faults.
We analyze 100 dynamic rupture simulations on strike-slip faults ranging from Mw 6.4-7.2. We find that our dynamic simulations follow empirical scaling relationships for inter-plate strike-slip events and provide source spectra comparable with an omega^-2 model. Our rupture generator reproduces GMPE medians and intra-event standard deviations of spectral accelerations for an ensemble of 10 Hz fully-deterministic ground motion simulations, as compared to NGA-West2 GMPE relationships down to periods of 0.2 seconds.
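Generating such von Karman perturbations can be sketched in one dimension by spectral filtering of white noise; the 1-D power-spectrum shape below is the standard von Karman form, while the normalization and parameter values are illustrative rather than the dissertation's:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dz = 4096, 1.0                 # samples and vertical spacing (m)
nu, az, sigma = 0.1, 50.0, 0.05   # Hurst exponent, corr. length (m), std dev

# 1-D von Karman power-spectrum shape: az / (1 + (k*az)^2)^(nu + 1/2)
k = 2 * np.pi * np.fft.rfftfreq(n, d=dz)
psd = az / (1.0 + (k * az) ** 2) ** (nu + 0.5)

# filter white noise in the wavenumber domain, then fix the std deviation
spec = np.fft.rfft(rng.standard_normal(n)) * np.sqrt(psd)
field = np.fft.irfft(spec, n)
field *= sigma / field.std()      # fractional velocity perturbations
```

Adding `field` (and a correlated density perturbation) to a background velocity model gives the kind of heterogeneous media used in the validation exercise.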

  18. Optimal Estimation with Two Process Models and No Measurements

    DTIC Science & Technology

    2015-08-01

models will be lost if either of the models includes deterministic modeling errors. ... independent process models when no measurements are present. The observer follows a derivation similar to that of the discrete-time Kalman filter. A simulation example is provided in which a process model based on the dynamics of a ballistic projectile is blended with an

  19. Deterministic Walks with Choice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beeler, Katy E.; Berenhaut, Kenneth S.; Cooper, Joshua N.

    2014-01-10

    This paper studies deterministic movement over toroidal grids, integrating local information, bounded memory and choice at individual nodes. The research is motivated by recent work on deterministic random walks, and applications in multi-agent systems. Several results regarding passing tokens through toroidal grids are discussed, as well as some open questions.
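A minimal instance of such deterministic token movement is the rotor-router ("deterministic random") walk on a torus, where each node forwards the token along its outgoing directions in a fixed cyclic order (a generic sketch; the paper's models add bounded memory and choice at nodes):

```python
# rotor-router walk on an n x n toroidal grid: each node cycles through
# its outgoing directions in a fixed order instead of choosing randomly
n, steps = 5, 1000
dirs = [(0, 1), (1, 0), (0, -1), (-1, 0)]
rotor = [[0] * n for _ in range(n)]     # next direction index at each node
visits = [[0] * n for _ in range(n)]
x = y = 0
for _ in range(steps):
    visits[x][y] += 1
    d = rotor[x][y]
    rotor[x][y] = (d + 1) % 4           # advance this node's rotor
    dx, dy = dirs[d]
    x, y = (x + dx) % n, (y + dy) % n
```

Unlike a random walk, repeated runs are identical, yet over long horizons the visit counts stay close to uniform.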

  20. Implicit Three-Dimensional Geo-Modelling Based on HRBF Surface

    NASA Astrophysics Data System (ADS)

    Gou, J.; Zhou, W.; Wu, L.

    2016-10-01

Three-dimensional (3D) geological models are important representations of the results of regional geological surveys. However, the process of constructing 3D geological models from two-dimensional (2D) geological elements remains difficult and time-consuming. This paper proposes a method of migrating from 2D elements to 3D models. First, the geological interfaces were constructed using the Hermite Radial Basis Function (HRBF) to interpolate the boundaries and attitude data. Then, the subsurface geological bodies were extracted from the spatial map area using the Boolean method between the HRBF surface and the fundamental body. Finally, the top surfaces of the geological bodies were constructed by coupling the geological boundaries to digital elevation models. Based on this workflow, a prototype system was developed, and typical geological structures (e.g., folds, faults, and strata) were simulated. Geological models were then constructed through this workflow from realistic regional geological survey data. For extended applications in 3D modelling of other kinds of geo-objects, mining ore-body models and urban geotechnical engineering stratum models were constructed by this method from drill-hole data. The model construction process was rapid, and the resulting models accorded with the constraints of the original data.
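The implicit-surface idea can be sketched with an ordinary RBF interpolant standing in for the HRBF: a true HRBF interpolates gradient (attitude) data directly, whereas the common workaround below mimics orientation with off-surface points placed along the normals, using a toy circular "boundary":

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# on-surface points of a toy geological boundary (a unit circle)
theta = np.linspace(0, 2 * np.pi, 20, endpoint=False)
on = np.c_[np.cos(theta), np.sin(theta)]
normals = on.copy()                        # outward unit normals of a circle

# off-surface points along the normals encode orientation (value = offset)
pts = np.vstack([on, on + 0.2 * normals, on - 0.2 * normals])
val = np.r_[np.zeros(20), 0.2 * np.ones(20), -0.2 * np.ones(20)]

f = RBFInterpolator(pts, val, kernel='cubic')
# the zero level set of f reconstructs the boundary; interior values are negative
```

Extracting the zero level set of `f` on a grid (e.g. with marching squares/cubes) yields the interpolated geological interface, which can then be combined with other bodies by Boolean operations.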

  1. Venus geology and tectonics - Hotspot and crustal spreading models and questions for the Magellan mission

    NASA Technical Reports Server (NTRS)

    Head, James W.; Crumpler, L. S.

    1990-01-01

    Spacecraft and ground-based observations of Venus have revealed a geologically young and active surface - with volcanoes, rift zones, orogenic belts and evidence for hotspots and crustal spreading - yet the processes responsible for these features cannot be identified from the available data. The Magellan spacecraft will acquire an unprecedented global data set which will provide a comprehensive and well resolved view of the planet. This will permit global geological mapping, an assessment of the style and relative importance of geological processes, and will help in the understanding of links between the surface geology and mantle dynamics of this earth-like planet.

  2. Reports of Planetary Geology Program, 1981

    NASA Technical Reports Server (NTRS)

    Holt, H. E. (Compiler)

    1981-01-01

Abstracts of 205 reports from principal investigators of NASA's Planetary Geology Program succinctly summarize work conducted and reflect the significant accomplishments. The entries are arranged under the following topics: (1) Saturnian satellites; (2) asteroids, comets and Galilean satellites; (3) cratering processes and landform development; (4) volcanic processes and landforms; (5) aeolian processes and landforms; (6) fluvial, periglacial, and other processes of landform development; (7) Mars polar deposits, volatiles, and climate; (8) structure, tectonics, and stratigraphy; (9) remote sensing and regolith chemistry; (10) cartography and geologic mapping; and (11) special programs.

  3. Tectonic and climatic considerations for deep geological disposal of radioactive waste: A UK perspective.

    PubMed

    McEvoy, F M; Schofield, D I; Shaw, R P; Norris, S

    2016-11-15

Identifying and evaluating the factors that might impact on the long-term integrity of a deep Geological Disposal Facility (GDF) and its surrounding geological and surface environment is central to developing a safety case for underground disposal of radioactive waste. The geological environment should be relatively stable and its behaviour adequately predictable so that scientifically sound evaluations of the long-term radiological safety of a GDF can be made. In considering this, it is necessary to take into account natural processes that could affect a GDF or modify its geological environment up to 1 million years into the future. Key processes considered in this paper include those which result from plate tectonics, such as seismicity and volcanism, as well as climate-related processes, such as erosion, uplift and the effects of glaciation. Understanding the inherent variability of process rates, critical thresholds and the likely influence of unpredictable perturbations represents a significant challenge in predicting the natural environment. From a plate-tectonic perspective, a one-million-year time frame represents a very short segment of geological time and is largely below the current resolution of observation of past processes. Similarly, predicting climate system evolution on such time-scales, particularly beyond 200 ka AP, is highly uncertain, relying on estimating the extremes within which climate and related processes may vary with reasonable confidence. The paper highlights some of the challenges facing a deep geological disposal program in the UK in reviewing understanding of the natural changes that may affect the siting and design of a GDF. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.

  4. Enhancing the photon-extraction efficiency of site-controlled quantum dots by deterministically fabricated microlenses

    NASA Astrophysics Data System (ADS)

    Kaganskiy, Arsenty; Fischbach, Sarah; Strittmatter, André; Rodt, Sven; Heindel, Tobias; Reitzenstein, Stephan

    2018-04-01

We report on the realization of scalable single-photon sources (SPSs) based on single site-controlled quantum dots (SCQDs) and deterministically fabricated microlenses. The fabrication process comprises the buried-stressor growth technique complemented with low-temperature in-situ electron-beam lithography for the integration of SCQDs into microlens structures with high yield and high alignment accuracy. The microlens approach leads to a broadband enhancement of the photon-extraction efficiency of up to (21 ± 2)% and a high suppression of multi-photon events with g (2)(τ = 0) < 0.06 without background subtraction. The demonstrated combination of site-controlled growth of QDs and in-situ electron-beam lithography is relevant for arrays of efficient SPSs, which can be applied in photonic quantum circuits and advanced quantum computation schemes.

  5. A Deterministic Interfacial Cyclic Oxidation Spalling Model. Part 1; Model Development and Parametric Response

    NASA Technical Reports Server (NTRS)

    Smialek, James L.

    2002-01-01

An equation has been developed to model the iterative scale growth and spalling process that occurs during cyclic oxidation of high-temperature materials. Parabolic scale growth and spalling of a constant surface-area fraction have been assumed. Interfacial spallation of only the thickest segments was also postulated. This simplicity allowed for representation by a simple deterministic summation series. Inputs are the parabolic growth rate constant, the spall area fraction, oxide stoichiometry, and cycle duration. Outputs include the net weight-change behavior, as well as the total amount of oxygen and metal consumed, the total amount of oxide spalled, and the mass fraction of oxide spalled. The outputs all follow typical well-behaved trends with the inputs and are in good agreement with previous interfacial models.
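The iterative bookkeeping can be sketched as follows; the parameter values and the Al2O3 oxygen mass fraction are illustrative, and the paper's calibrated summation-series model differs in detail. Each cycle, every surface segment grows parabolically (the specimen gains only the oxygen mass), and then the single thickest segment spalls to bare metal:

```python
import math

kp, dt = 0.01, 1.0     # parabolic rate constant, cycle duration (hypothetical)
Fa = 0.1               # interfacial spall area fraction per cycle
m = round(1 / Fa)      # model the surface as m equal-area segments
f_O = 48.0 / 102.0     # oxygen mass fraction of Al2O3 (stoichiometry input)

W = [0.0] * m          # oxide mass per unit area on each segment
net = spalled = 0.0    # net specimen weight change; cumulative spalled oxide
for cycle in range(200):
    for i in range(m):
        Wn = math.sqrt(W[i] ** 2 + kp * dt)   # parabolic growth this cycle
        net += (Wn - W[i]) * f_O / m          # specimen gains only the oxygen
        W[i] = Wn
    j = max(range(m), key=lambda i: W[i])     # only the thickest segment spalls
    net -= W[j] / m                           # whole oxide segment is lost
    spalled += W[j] / m
    W[j] = 0.0
```

The loop reproduces the characteristic shape of cyclic-oxidation curves: early net weight gain, a maximum, and an eventual steady decline once spall losses outpace oxygen uptake.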

  6. A stochastic tabu search algorithm to align physician schedule with patient flow.

    PubMed

    Niroumandrad, Nazgol; Lahrichi, Nadia

    2018-06-01

    In this study, we consider the pretreatment phase for cancer patients. This is defined as the period between the referral to a cancer center and the confirmation of the treatment plan. Physicians have been identified as bottlenecks in this process, and the goal is to determine a weekly cyclic schedule that improves the patient flow and shortens the pretreatment duration. High uncertainty is associated with the arrival day, profile and type of cancer of each patient. We also include physician satisfaction in the objective function. We present a MIP model for the problem and develop a tabu search algorithm, considering both deterministic and stochastic cases. Experiments show that our method compares very well to CPLEX under deterministic conditions. We describe the stochastic approach in detail and present a real application.
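The tabu-search skeleton underlying such methods can be sketched on a toy problem; the scheduling MIP, neighborhoods, and objective are replaced here by a trivial bit-flip example (maximize f(x) = -(sum(x) - k)^2):

```python
import random

# minimal tabu search: flip one bit per move; recently flipped bits are
# tabu for `tenure` iterations, forcing exploration past local optima
random.seed(0)
n, k, tenure = 12, 5, 4
x = [0] * n
def f(x): return -(sum(x) - k) ** 2

tabu = {}                      # bit index -> iteration when it stops being tabu
best, best_x = f(x), x[:]
for it in range(100):
    moves = [i for i in range(n) if tabu.get(i, -1) < it]
    i = max(moves, key=lambda i: f(x[:i] + [1 - x[i]] + x[i + 1:]))
    x[i] = 1 - x[i]            # apply the best non-tabu move (even if worsening)
    tabu[i] = it + tenure
    if f(x) > best:
        best, best_x = f(x), x[:]
```

In the paper's setting the moves would instead reassign patients or physician slots, with deterministic or sampled (stochastic) evaluation of the resulting patient flow.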

  7. Geology Field Trips as Performance Evaluations

    ERIC Educational Resources Information Center

    Bentley, Callan

    2009-01-01

    One of the most important goals the author has for students in his introductory-level physical geology course is to give them the conceptual skills for solving geologic problems on their own. He wants students to leave his course as individuals who can use their knowledge of geologic processes and logic to figure out the extended geologic history…

  8. Nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates

    DOEpatents

    Melechko, Anatoli V [Oak Ridge, TN; McKnight, Timothy E. , Guillorn, Michael A.; Ilic, Bojan [Ithaca, NY; Merkulov, Vladimir I [Knoxville, TN; Doktycz, Mitchel J [Knoxville, TN; Lowndes, Douglas H [Knoxville, TN; Simpson, Michael L [Knoxville, TN

    2011-05-17

    Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. A method includes depositing a catalyst particle on a surface of a substrate to define a deterministically located position; growing an aligned elongated nanostructure on the substrate, an end of the aligned elongated nanostructure coupled to the substrate at the deterministically located position; coating the aligned elongated nanostructure with a conduit material; removing a portion of the conduit material to expose the catalyst particle; removing the catalyst particle; and removing the elongated nanostructure to define a nanoconduit.

  9. A deterministic particle method for one-dimensional reaction-diffusion equations

    NASA Technical Reports Server (NTRS)

    Mascagni, Michael

    1995-01-01

We derive a deterministic particle method for the solution of nonlinear reaction-diffusion equations in one spatial dimension. This deterministic method is an analog of a Monte Carlo method for the solution of these problems that has previously been investigated by the author. The deterministic method leads to the consideration of a system of ordinary differential equations for the positions of suitably defined particles. We then consider time-explicit and time-implicit methods for this system of ordinary differential equations and study Picard and Newton iterations for the solution of the implicit system. Next, we solve this system numerically and study the discretization error both analytically and numerically. Numerical computation shows that this deterministic method is automatically adaptive to large gradients in the solution.

  10. Reports of Planetary Geology and Geophysics Program, 1984

    NASA Technical Reports Server (NTRS)

    Holt, H. E. (Compiler); Watters, T. R. (Compiler)

    1985-01-01

    Topics include outer planets and satellites; asteroids and comets; Venus; lunar origin and solar dynamics; cratering process; planetary interiors, petrology, and geochemistry; volcanic processes; aeolian processes and landforms; fluvial processes; geomorphology; periglacial and permafrost processes; remote sensing and regolith studies; structure, tectonics, and stratigraphy; geological mapping, cartography, and geodesy; and radar applications.

  11. Signal Processing Applications Of Wigner-Ville Analysis

    NASA Astrophysics Data System (ADS)

    Whitehouse, H. J.; Boashash, B.

    1986-04-01

    The Wigner-Ville distribution (WVD), a form of time-frequency analysis, is shown to be useful in the analysis of a variety of non-stationary signals both deterministic and stochastic. The properties of the WVD are reviewed and alternative methods of calculating the WVD are discussed. Applications are presented.
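A minimal discrete (pseudo-)WVD can be computed directly from its definition as the Fourier transform, over lag, of the instantaneous autocorrelation; this sketch is illustrative rather than an optimized implementation, and it exhibits the well-known frequency doubling of the discrete WVD:

```python
import numpy as np

def wvd(x):
    """Discrete pseudo Wigner-Ville distribution of a complex signal x.

    Column n is the FFT, over lag k, of the instantaneous
    autocorrelation x[n+k] * conj(x[n-k]).
    """
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        kmax = min(n, N - 1 - n)           # lags available at this time index
        r = np.zeros(N, dtype=complex)
        for k in range(-kmax, kmax + 1):
            r[k % N] = x[n + k] * np.conj(x[n - k])
        W[:, n] = np.real(np.fft.fft(r))
    return W
```

For a pure tone at normalized frequency f0, the instantaneous autocorrelation oscillates at 2*f0 in lag, so the energy of the discrete WVD concentrates at bin 2*f0*N.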

  12. INTEGRATED PROBABILISTIC AND DETERMINISTIC MODELING TECHNIQUES IN ESTIMATING EXPOSURE TO WATER-BORNE CONTAMINANTS: PART 1 EXPOSURE MODELING

    EPA Science Inventory

    Exposure to contaminants originating in the domestic water supply is influenced by a number of factors, including human activities, water use behavior, and physical and chemical processes. The key role of human activities is very apparent in exposure related to volatile water-...

  13. 76 FR 72220 - Incorporation of Risk Management Concepts in Regulatory Programs

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-22

    ... and support the adoption of improved designs or processes. \\1\\ A deterministic approach to regulation... longstanding goal to move toward more risk-informed, performance- based approaches in its regulatory programs... regulatory approach that would continue to ensure the safe and secure use of nuclear material. As part of...

  14. Determinism versus Creativity: Which Way for Social Work?

    ERIC Educational Resources Information Center

    Peile, Colin

    1993-01-01

    Contends that dominant cosmology within social work is determinism. Argues for creative cosmology that can synthesize deterministic and random processes. Sees this development made possible by reconceptualization of relative nature of time. Discussion is grounded in relation to small example of social work practice, and implications of creative…

  15. A General Cognitive Diagnosis Model for Expert-Defined Polytomous Attributes

    ERIC Educational Resources Information Center

    Chen, Jinsong; de la Torre, Jimmy

    2013-01-01

    Polytomous attributes, particularly those defined as part of the test development process, can provide additional diagnostic information. The present research proposes the polytomous generalized deterministic inputs, noisy, "and" gate (pG-DINA) model to accommodate such attributes. The pG-DINA model allows input from substantive experts…

  16. Identifiability Of Systems With Modeling Errors

    NASA Technical Reports Server (NTRS)

    Hadaegh, Yadolah "Fred"; Bekey, George A.

    1988-01-01

    Advances in theory of modeling errors reported. Recent paper on errors in mathematical models of deterministic linear or weakly nonlinear systems. Extends theoretical work described in NPO-16661 and NPO-16785. Presents concrete way of accounting for difference in structure between mathematical model and physical process or system that it represents.

  17. Determinants of Clay and Shale Microfabric Signatures: Processes and Mechanisms

    DTIC Science & Technology

    1991-01-01

    The interplay of geological, chemical, and biological processes and mechanisms during transport, deposition, and burial of clay and shale, across a range of micro- and macroenvironments, is antecedent to the resulting microfabric signatures.

  18. Utility indifference pricing of insurance catastrophe derivatives.

    PubMed

    Eichler, Andreas; Leobacher, Gunther; Szölgyenyi, Michaela

    2017-01-01

    We propose a model for an insurance loss index and the claims process of a single insurance company holding a fraction of the total number of contracts that captures both ordinary losses and losses due to catastrophes. In this model we price a catastrophe derivative by the method of utility indifference pricing. The associated stochastic optimization problem is treated by techniques for piecewise deterministic Markov processes. A numerical study illustrates our results.
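    The indifference-pricing idea can be illustrated in its simplest form (a one-period toy with exponential utility and an unhedgeable payout; the paper's piecewise deterministic Markov setting is far richer, so everything here is an assumption for illustration): the seller's indifference price reduces to the certainty equivalent p = (1/a) log E[exp(a C)].

```python
import math

# One-period indifference price under exponential utility u(x) = -exp(-a x):
# the premium p that makes an insurer indifferent to taking on a random
# claim C with no hedging opportunity satisfies
#   p = (1/a) * log E[exp(a * C)].

def indifference_price(payouts, probs, a):
    return math.log(sum(p * math.exp(a * c) for c, p in zip(payouts, probs))) / a

# Catastrophe-like claim: pay 1 unit with probability 5%, else nothing.
for a in (0.5, 2.0, 8.0):
    print(a, round(indifference_price([1.0, 0.0], [0.05, 0.95], a), 4))
```

    The price always exceeds the expected loss (0.05 here) and grows with the risk-aversion parameter a, which is the qualitative behaviour utility indifference pricing is designed to capture.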

  19. Changes of soil prokaryotic communities after clear-cutting in a karst forest: evidences for cutting-based disturbance promoting deterministic processes.

    PubMed

    Zhang, Xiao; Liu, Shirong; Li, Xiangzhen; Wang, Jingxin; Ding, Qiong; Wang, Hui; Tian, Chao; Yao, Minjie; An, Jiaxing; Huang, Yongtao

    2016-03-01

    To understand the temporal responses of soil prokaryotic communities to clear-cutting disturbance, we examined the changes in soil bacterial and archaeal community composition, structure and diversity along a chronosequence of forest successional restoration using high-throughput 16S rRNA gene sequencing. Our results demonstrated that clear-cutting significantly altered soil bacterial community structure, while no significant shifts of soil archaeal communities were observed. The hypothesis that soil bacterial communities would become similar to those of the surrounding intact primary forest with natural regeneration was supported by the shifts in bacterial community composition and structure. Bacterial community diversity patterns induced by clear-cutting were consistent with the intermediate disturbance hypothesis. Dynamics of bacterial communities were mostly driven by soil properties, which collectively explained more than 70% of the variation in bacterial community composition. Community assembly data revealed that clear-cutting promoted the importance of deterministic processes in shaping bacterial communities, coinciding with the resultant low-resource environments. Assembly processes in the secondary forest, however, returned to a level similar to that in the intact primary forest. These findings suggest that bacterial community dynamics may be predictable during the natural recovery process. © FEMS 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  20. Approximate reduction of linear population models governed by stochastic differential equations: application to multiregional models.

    PubMed

    Sanz, Luis; Alonso, Juan Antonio

    2017-12-01

    In this work we develop approximate aggregation techniques in the context of slow-fast linear population models governed by stochastic differential equations and apply the results to the treatment of populations with spatial heterogeneity. Approximate aggregation techniques allow one to transform a complex system, involving many coupled variables and processes with different time scales, into a simpler reduced model with fewer 'global' variables, in such a way that the dynamics of the former can be approximated by that of the latter. In our model we contemplate a linear fast deterministic process together with a linear slow process in which the parameters are affected by additive noise, and give conditions for the solutions corresponding to positive initial conditions to remain positive for all times. By letting the fast process reach equilibrium we build a reduced system with fewer variables, and provide results relating the asymptotic behaviour of the first- and second-order moments of the population vector for the original and the reduced system. The general technique is illustrated by analysing a multiregional stochastic system in which dispersal is deterministic and the growth rate of the populations in each patch is affected by additive noise.
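    The aggregation step can be illustrated with a deterministic caricature (the paper's slow dynamics are stochastic; the noise is dropped here, and all rates are invented for the example): two patches exchange individuals through fast dispersal, and the reduced model keeps a single global variable growing at the dispersal-equilibrium-weighted rate.

```python
import math

# Slow-fast two-patch model (toy, deterministic):
#   n1' = (-a*n1 + b*n2)/eps + r1*n1
#   n2' = ( a*n1 - b*n2)/eps + r2*n2
# Fast dispersal equilibrates patch fractions at v = (b, a)/(a+b); the
# reduced model is the scalar ODE N' = (v1*r1 + v2*r2) * N.

def simulate_full(n1, n2, a, b, r1, r2, eps, t_end, dt=1e-4):
    t = 0.0
    while t < t_end - 1e-12:
        d1 = (-a * n1 + b * n2) / eps + r1 * n1
        d2 = (a * n1 - b * n2) / eps + r2 * n2
        n1, n2 = n1 + dt * d1, n2 + dt * d2
        t += dt
    return n1 + n2

a, b, r1, r2 = 1.0, 2.0, 0.5, -0.1
v1, v2 = b / (a + b), a / (a + b)       # fast-equilibrium patch fractions
reduced_rate = v1 * r1 + v2 * r2        # reduced model growth rate (= 0.3)
full = simulate_full(1.0, 0.0, a, b, r1, r2, eps=0.01, t_end=1.0)
print(round(full, 3), round(math.exp(reduced_rate), 3))
```

    The total population of the full two-variable system tracks the one-variable reduced model up to an O(eps) error, which is the sense in which "the dynamics of the former can be approximated by that of the latter".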

  1. Ecological Drivers of Biogeographic Patterns of Soil Archaeal Community

    PubMed Central

    Zheng, Yuan-Ming; Cao, Peng; Fu, Bojie; Hughes, Jane M.; He, Ji-Zheng

    2013-01-01

    Knowledge about the biogeography of organisms has long been a focus in ecological research, including the mechanisms that generate and maintain diversity. In this study, we targeted a microbial group relatively underrepresented in the microbial biogeographic literature, the soil Archaea. We surveyed the archaeal abundance and community composition using real-time quantitative PCR and T-RFLP approaches for 105 soil samples from 2 habitat types to identify the archaeal distribution patterns and factors driving these patterns. Results showed that the soil archaeal community was affected by spatial and environmental variables, and 79% and 51% of the community variation was explained in the non-flooded soil (NS) and flooded soil (FS) habitat, respectively, showing its possible biogeographic distribution. The diversity patterns of soil Archaea across the landscape were influenced by a combination of stochastic and deterministic processes. The contribution from neutral processes was higher than that from deterministic processes associated with environmental variables. The variables pH, sample depth and longitude played key roles in determining the archaeal distribution in the NS habitat, while sampling depth, longitude and NH4 +-N were most important in the FS habitat. Overall, there might be similar ecological drivers in the soil archaeal community as in macroorganism communities. PMID:23717418

  2. Deterministic ripple-spreading model for complex networks.

    PubMed

    Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel

    2011-04-01

    This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes, and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. Differently, the proposed ripple-spreading model can uniquely determine the final network topology, and at the same time, the stochastic feature of complex networks is captured by randomly initializing ripple-spreading related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading related parameters to precisely describe a network topology, which is more memory efficient when compared with traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has a very good potential for both extensions and applications.
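    The key property claimed in point (ii) can be shown with a loose toy interpretation of ripple spreading (the published model has richer temporal dynamics; the energy, decay and threshold parameters below are invented for illustration): given fixed node coordinates and ripple parameters, a deterministic rule uniquely fixes the topology, while randomness enters only through the initial parameters.

```python
import math
import random

# Toy ripple-spreading network generator: node i's ripple starts with
# energy e0 and decays exponentially with radius; a link (i, j) forms
# when the ripple still carries enough energy on reaching j:
#   e0 * exp(-decay * d(i, j)) >= threshold.

def ripple_network(points, e0=1.0, decay=2.0, threshold=0.35):
    n = len(points)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(points[i], points[j])
            if e0 * math.exp(-decay * d) >= threshold:
                edges.add((i, j))
    return edges

random.seed(7)                       # stochastic input: node positions...
pts = [(random.random(), random.random()) for _ in range(30)]
g1 = ripple_network(pts)             # ...but a unique output topology
g2 = ripple_network(pts)
print(len(g1), g1 == g2)
```

    Re-running the generator on the same inputs reproduces exactly the same edge set, unlike stochastic generators such as Erdős-Rényi or preferential attachment, and the whole topology is encoded by the node coordinates plus three scalars.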

  3. Brine flow in heated geologic salt.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuhlman, Kristopher L.; Malama, Bwalya

    This report is a summary of the physical processes, primary governing equations, solution approaches, and historic testing related to brine migration in geologic salt. Although most information presented in this report is not new, we synthesize a large amount of material scattered across dozens of laboratory reports, journal papers, conference proceedings, and textbooks. We present a mathematical description of the governing brine flow mechanisms in geologic salt. We outline the general coupled thermal, multi-phase hydrologic, and mechanical processes. We derive the governing equations for these processes, which can be used to predict brine flow. These equations are valid under a wide variety of conditions applicable to radioactive waste disposal in rooms and boreholes excavated into geologic salt.

  4. On the effect of the 3-D regional geology on the seismic design of critical structures: the case of the Kashiwazaki-Kariwa Nuclear Power Plant

    NASA Astrophysics Data System (ADS)

    Gatti, F.; Lopez-Caballero, F.; Clouteau, D.; Paolucci, R.

    2018-05-01

    In this study, numerical investigation is performed on a realistic source-to-site earthquake scenario, with the aim of assessing the role of complex 3-D geological structures on the predicted wavefield. In this respect, the paper pointedly targets the seismic response of nuclear power plants in near-field conditions and the verification of some simplified assumptions commonly adopted for earthquake ground motion prediction and site effects analysis. To this purpose, the Kashiwazaki-Kariwa Nuclear Power Plant (Japan) is assumed as the reference case study. In 2007, the nuclear site and its surroundings were struck by the Niigata-Ken Chūetsu-Oki seismic sequence, which caused some of the peak ground motion design limits to be largely exceeded. The dense observation network deployed at the site recorded highly incoherent and impulsive earthquake ground motion. Many studies argued that the intricate syncline-anticline geology lying underneath the nuclear facility was largely responsible for the observed seismic response. Therefore, a physics-based numerical model of the epicentral area is built (≈60 km wide) and tested for small aftershocks, so as to discount the effect of an extended source on the synthetic site response. The numerical model (based on the Spectral Element Method) reproduces the source-to-site wave propagation by embracing the effects of the surface topography along with the presence of the Japan Sea (i.e. the bathymetry, the coastline and the fluid-solid interaction). Broad-band (0-5 Hz) synthetic waveforms are obtained for two different aftershocks, located on two opposite sides of the nuclear facility, aiming to assess the influence of the angle at which the radiated wavefield impinges on the folds beneath it. The effect of the folding is assessed by comparing the numerical outcome to that of a subhorizontally layered geology, and by highlighting the differences with respect to the observations.
    The presence of an intricate geology effectively unveils the reason behind the observed spatial variability of ground motion within a relatively small area, stressing its crucial role in properly reproducing the modifications the wavefield undergoes along its propagation path towards the surface. The accuracy of the numerical exercise is discussed along with its results, to show the high fidelity of these deterministic earthquake ground motion predictions.

  5. Taking a gamble or playing by the rules: Dissociable prefrontal systems implicated in probabilistic versus deterministic rule-based decisions

    PubMed Central

    Bhanji, Jamil P.; Beer, Jennifer S.; Bunge, Silvia A.

    2014-01-01

    A decision may be difficult because complex information processing is required to evaluate choices according to deterministic decision rules and/or because it is not certain which choice will lead to the best outcome in a probabilistic context. Factors that tax decision making such as decision rule complexity and low decision certainty should be disambiguated for a more complete understanding of the decision making process. Previous studies have examined the brain regions that are modulated by decision rule complexity or by decision certainty but have not examined these factors together in the context of a single task or study. In the present functional magnetic resonance imaging study, both decision rule complexity and decision certainty were varied in comparable decision tasks. Further, the level of certainty about which choice to make (choice certainty) was varied separately from certainty about the final outcome resulting from a choice (outcome certainty). Lateral prefrontal cortex, dorsal anterior cingulate cortex, and bilateral anterior insula were modulated by decision rule complexity. Anterior insula was engaged more strongly by low than high choice certainty decisions, whereas ventromedial prefrontal cortex showed the opposite pattern. These regions showed no effect of the independent manipulation of outcome certainty. The results disambiguate the influence of decision rule complexity, choice certainty, and outcome certainty on activity in diverse brain regions that have been implicated in decision making. Lateral prefrontal cortex plays a key role in implementing deterministic decision rules, ventromedial prefrontal cortex in probabilistic rules, and anterior insula in both. PMID:19781652

  6. Deterministic and Stochastic Analysis of a Prey-Dependent Predator-Prey System

    ERIC Educational Resources Information Center

    Maiti, Alakes; Samanta, G. P.

    2005-01-01

    This paper reports on studies of the deterministic and stochastic behaviours of a predator-prey system with prey-dependent response function. The first part of the paper deals with the deterministic analysis of uniform boundedness, permanence, stability and bifurcation. In the second part the reproductive and mortality factors of the prey and…

  7. Origins of Sinuous and Braided Channels on Ascraeus Mons, Mars — A Keck Geology Consortium Undergraduate Research Project

    NASA Astrophysics Data System (ADS)

    de Wet, A. P.; Bleacher, J. E.; Garry, W. B.

    2012-03-01

    This Keck Geology Consortium project, involving four undergrad geology students, mapped and analyzed sinuous channel features on Ascraeus Mons, Mars, to better understand the role of volcanic and fluvial processes in the geological evolution of Mars.

  8. ShinyGPAS: interactive genomic prediction accuracy simulator based on deterministic formulas.

    PubMed

    Morota, Gota

    2017-12-20

    Deterministic formulas for the accuracy of genomic predictions highlight the relationships among prediction accuracy and potential factors influencing prediction accuracy prior to performing computationally intensive cross-validation. Visualizing such deterministic formulas in an interactive manner may lead to a better understanding of how genetic factors control prediction accuracy. The software to simulate deterministic formulas for genomic prediction accuracy was implemented in R and encapsulated as a web-based Shiny application. Shiny genomic prediction accuracy simulator (ShinyGPAS) simulates various deterministic formulas and delivers dynamic scatter plots of prediction accuracy versus genetic factors impacting prediction accuracy, while requiring only mouse navigation in a web browser. ShinyGPAS is available at: https://chikudaisei.shinyapps.io/shinygpas/. ShinyGPAS is a Shiny-based interactive genomic prediction accuracy simulator using deterministic formulas. It can be used for interactively exploring potential factors that influence prediction accuracy in genome-enabled prediction, simulating achievable prediction accuracy prior to genotyping individuals, or supporting in-class teaching. ShinyGPAS is open source software and it is hosted online as a freely available web-based resource with an intuitive graphical user interface.
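    One representative deterministic formula of the kind such simulators visualise is the familiar Daetwyler-type expression r = sqrt(N h² / (N h² + Me)) (whether ShinyGPAS exposes this exact parameterisation is an assumption; the values below are illustrative):

```python
import math

# Daetwyler-type expected accuracy of genomic prediction:
#   r = sqrt(N * h2 / (N * h2 + Me))
# N:  training population size
# h2: heritability of the trait
# Me: effective number of independent chromosome segments

def expected_accuracy(n, h2, me):
    return math.sqrt(n * h2 / (n * h2 + me))

for n in (1000, 5000, 20000):
    print(n, round(expected_accuracy(n, h2=0.5, me=3000), 3))
```

    Sweeping N (or h², or Me) and plotting the resulting curve is exactly the kind of "accuracy versus genetic factor" scatter plot the abstract describes, only without the interactive front end.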

  9. From Random Walks to Brownian Motion, from Diffusion to Entropy: Statistical Principles in Introductory Physics

    NASA Astrophysics Data System (ADS)

    Reeves, Mark

    2014-03-01

    Entropy changes underlie the physics that dominates biological interactions. Indeed, introductory biology courses often begin with an exploration of the qualities of water that are important to living systems. However, one idea that is not explicitly addressed in most introductory physics or biology textbooks is the dominant contribution of entropy in driving important biological processes towards equilibrium. From diffusion to cell-membrane formation, to electrostatic binding in protein folding, to the functioning of nerve cells, entropic effects often act to counterbalance deterministic forces such as electrostatic attraction and in so doing, allow for effective molecular signaling. A small group of biology, biophysics and computer science faculty have worked together for the past five years to develop curricular modules (based on SCALE-UP pedagogy) that enable students to create models of stochastic and deterministic processes. Our students are first-year engineering and science students in the calculus-based physics course and they are not expected to know biology beyond the high-school level. In our class, they learn to reduce seemingly complex biological processes and structures to tractable models that include deterministic processes and simple probabilistic inference. The students test these models in simulations and in laboratory experiments that are biologically relevant. The students are challenged to bridge the gap between statistical parameterization of their data (mean and standard deviation) and simple model-building by inference. This allows the students to quantitatively describe realistic cellular processes such as diffusion, ionic transport, and ligand-receptor binding. Moreover, the students confront "random" forces and traditional forces in problems, simulations, and in laboratory exploration throughout the year-long course as they move from traditional kinematics through thermodynamics to electrostatic interactions.
This talk will present a number of these exercises, with particular focus on the hands-on experiments done by the students, and will give examples of the tangible material that our students work with throughout the two-semester sequence of their course on introductory physics with a bio focus. Supported by NSF DUE.
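    The kind of model the abstract describes can be sketched in a few lines (a generic classroom example, not one of the course's actual exercises): an unbiased 1-D random walk whose ensemble statistics reproduce the deterministic diffusion scaling, mean displacement 0 and spread sqrt(N) after N unit steps.

```python
import random
import statistics

# 1-D unbiased random walk: each step is +1 or -1 with equal probability.
# Over many trials, <x> -> 0 and std(x) -> sqrt(N), the bridge between
# stochastic steps and the deterministic diffusion law.

def walk(n_steps, rng):
    return sum(rng.choice((-1, 1)) for _ in range(n_steps))

rng = random.Random(0)               # seeded for reproducibility
N, trials = 400, 2000
finals = [walk(N, rng) for _ in range(trials)]
print(round(statistics.mean(finals), 1), round(statistics.pstdev(finals), 1))
```

    With N = 400, the predicted spread is sqrt(400) = 20; students can then compare the simulated histogram against the Gaussian solution of the diffusion equation.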

  10. Health benefits of geologic materials and geologic processes

    USGS Publications Warehouse

    Finkelman, R.B.

    2006-01-01

    The reemerging field of Medical Geology is concerned with the impacts of geologic materials and geologic processes on animal and human health. Most medical geology research has been focused on health problems caused by excess or deficiency of trace elements, exposure to ambient dust, and on other geologically related health problems or health problems for which geoscience tools, techniques, or databases could be applied. Little, if any, attention has been focused on the beneficial health effects of rocks, minerals, and geologic processes. These beneficial effects may have been recognized as long as two million years ago and include emotional, mental, and physical health benefits. Some of the earliest known medicines were derived from rocks and minerals. For thousands of years various clays have been used as an antidote for poisons. "Terra sigillata," still in use today, may have been the first patented medicine. Many trace elements, rocks, and minerals are used today in a wide variety of pharmaceuticals and health care products. There is also a segment of society that believes in the curative and preventative properties of crystals (talismans and amulets). Metals and trace elements are being used in some of today's most sophisticated medical applications. Other recent examples of beneficial effects of geologic materials and processes include epidemiological studies in Japan that have identified a wide range of health problems (such as muscle and joint pain, hemorrhoids, burns, gout, etc.) that may be treated by one or more of nine chemically distinct types of hot springs, and a study in China indicating that residential coal combustion may be mobilizing sufficient iodine to prevent iodine deficiency disease. © 2006 MDPI. All rights reserved.

  11. Environmental Filtering Process Has More Important Roles than Dispersal Limitation in Shaping Large-Scale Prokaryotic Beta Diversity Patterns of Grassland Soils.

    PubMed

    Cao, Peng; Wang, Jun-Tao; Hu, Hang-Wei; Zheng, Yuan-Ming; Ge, Yuan; Shen, Ju-Pei; He, Ji-Zheng

    2016-07-01

    Despite the utmost importance of microorganisms in maintaining ecosystem functioning and their ubiquitous distribution, our knowledge of the large-scale pattern of microbial diversity is limited, particularly in grassland soils. In this study, the microbial communities of 99 soil samples spanning over 3000 km across grassland ecosystems in northern China were investigated using high-throughput sequencing to analyze the beta diversity pattern and the underlying ecological processes. The microbial communities were dominated by Proteobacteria, Actinobacteria, Acidobacteria, Chloroflexi, and Planctomycetes across all the soil samples. Spearman's correlation analysis indicated that climatic factors and soil pH were significantly correlated with the dominant microbial taxa, while soil microbial richness was positively linked to annual precipitation. The environmental divergence-dissimilarity relationship was significantly positive, suggesting the importance of environmental filtering processes in shaping soil microbial communities. Structural equation modeling found that the deterministic process played a more important role than the stochastic process in shaping the pattern of soil microbial beta diversity, which supported the predictions of niche theory. Partial Mantel test analysis showed that independent environmental variables have a significant effect on beta diversity, while independent spatial distance does not, confirming that the deterministic process was dominant in structuring soil microbial communities. Overall, the environmental filtering process plays a more important role than dispersal limitation in shaping microbial beta diversity patterns in the grassland soils.

  12. Abstracts for the Planetary Geology Field Conference on Aeolian Processes

    NASA Technical Reports Server (NTRS)

    Greeley, R. (Editor); Black, D. (Editor)

    1978-01-01

    The Planetary Geology Field Conference on Aeolian Processes was organized at the request of the Planetary Geology Program office of the National Aeronautics and Space Administration to bring together geologists working on aeolian problems on earth and planetologists concerned with similar problems on the planets. Abstracts of papers presented at the conference are arranged herein by alphabetical order of the senior author. Papers fall into three broad categories: (1) Viking Orbiter and Viking Lander results on aeolian processes and/or landforms on Mars, (2) laboratory results on studies of aeolian processes, and (3) photogeology and field studies of aeolian processes on Earth.

  13. Geology of the Icy Galilean Satellites: Understanding Crustal Processes and Geologic Histories Through the JIMO Mission

    NASA Technical Reports Server (NTRS)

    Figueredo, P. H.; Tanaka, K.; Senske, D.; Greeley, R.

    2003-01-01

    Knowledge of the geology, style and time history of crustal processes on the icy Galilean satellites is necessary to understanding how these bodies formed and evolved. Data from the Galileo mission have provided a basis for detailed geologic and geophysical analysis. Due to constrained downlink, Galileo Solid State Imaging (SSI) data consisted of global coverage at a ~1 km/pixel ground sampling and representative, widely spaced regional maps at ~200 m/pixel. These two data sets provide a general means to extrapolate units identified at higher resolution to lower resolution data. A sampling of key sites at much higher resolution (10s of m/pixel) allows evaluation of processes on local scales. We are currently producing the first global geological map of Europa using Galileo global and regional-scale data. This work is demonstrating the necessity and utility of planet-wide contiguous image coverage at global, regional, and local scales.

  14. Planetary Geology: A Teacher's Guide with Activities in Physical and Earth Sciences.

    ERIC Educational Resources Information Center

    National Aeronautics and Space Administration, Washington, DC.

    This educator's guide discusses planetary geology. Exercises are grouped into five units: (1) introduction to geologic processes; (2) impact cratering activities; (3) planetary atmospheres; (4) planetary surfaces; and (5) geologic mapping. Suggested introductory exercises are noted at the beginning of each exercise. Each activity includes an…

  15. Measuring Student Knowledge of Landscapes and Their Formation Timespans

    ERIC Educational Resources Information Center

    Jolley, Alison; Jones, Francis; Harris, Sara

    2013-01-01

    Geologic time is a crucial component of any geoscientist's training. Essential knowledge of geologic time includes rates of geologic processes and the associated time it takes for geologic features to form, yet measuring conceptual thinking abilities in these domains is challenging. We describe development and initial application of the Landscape…

  16. Database for the geologic map of Upper Geyser Basin, Yellowstone National Park, Wyoming

    USGS Publications Warehouse

    Abendini, Atosa A.; Robinson, Joel E.; Muffler, L. J. Patrick; White, D. E.; Beeson, Melvin H.; Truesdell, A. H.

    2015-01-01

    This dataset contains contacts, geologic units, and map boundaries from Miscellaneous Investigations Series Map I-1371, "The geologic map of Upper Geyser Basin, Yellowstone National Park, Wyoming". This dataset was constructed to produce a digital geologic map as a basis for ongoing studies of hydrothermal processes.

  17. Fluvial processes in Puget Sound rivers and the Pacific Northwest [Chapter 3

    Treesearch

    John M. Buffington; Richard D. Woodsmith; Derek B. Booth; David R. Montgomery

    2003-01-01

    The variability of topography, geology, climate, vegetation, and land use in the Pacific Northwest creates considerable spatial and temporal variability of fluvial processes and reach-scale channel type. Here we identify process domains of typical Pacific Northwest watersheds and examine local physiographic and geologic controls on channel processes and response...

  18. Solar cosmic rays as a specific source of radiation risk during piloted space flight.

    PubMed

    Petrov, V M

    2004-01-01

    Solar cosmic rays present one of several radiation sources that are unique to space flight. Under ground conditions the exposure of individuals has a controlled form and radiation risk arises as stochastic radiobiological effects. The existence of solar cosmic rays in space leads to a stochastic radiation environment, as a result of which any radiobiological consequences of exposure to solar cosmic rays during the flight will be probabilistic values. In this case, the hazard of deterministic effects should also be expressed in radiation risk values. The main deterministic effect under space conditions is radiation sickness. The best dosimetric functional for its analysis is the blood-forming organs dose equivalent, not an effective dose. In addition, the repair processes in red bone marrow strongly affect the manifestation of this pathology and must be taken into account for radiation risk assessment. A method that takes the above-mentioned peculiarities into account for solar cosmic ray radiation risk assessment during interplanetary flights is given in the report. It is shown that the radiation risk of deterministic effects, defined as the death probability caused by radiation sickness due to acute solar cosmic ray exposure, can be comparable to the risk of stochastic effects. Its value decreases strongly because of the fractional mode of exposure during the orbital movement of the spacecraft. On the contrary, during an interplanetary flight, the radiation risk of deterministic effects increases significantly because of the residual component of the blood-forming organs dose from previous solar proton events. The noted quality of radiation responses must be taken into account for estimating radiation hazard in space. © 2004 COSPAR. Published by Elsevier Ltd. All rights reserved.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y M; Bush, K; Han, B

    Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4-based “localized Monte Carlo” (LMC) method that isolates MC dose calculations to only those volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence and transported deterministically. By matching boundary conditions at both interfaces, the deterministic dose calculation accounts for dose perturbations “downstream” of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%–15% could be observed in heterogeneous phantoms. The saving in computational time (a factor of ∼4–7 compared to a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region. 
Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of deterministic methods and the accuracy of MC, providing a practical tool for high-performance dose calculation in modern RT. The approach is generalizable to all modalities where heterogeneities play a large role, notably particle therapy.
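The boundary-matching idea in this abstract — deterministic transport up to a heterogeneous volume, discrete-history Monte Carlo inside it, and conversion back to fluence at the exit surface — can be illustrated with a deliberately simplified 1D attenuation sketch. All names and attenuation coefficients below are hypothetical; the actual platform couples Geant4 to a full deterministic solver.

```python
import math
import random

def transmit_deterministic(fluence, mu, length):
    """Beer-Lambert attenuation through a homogeneous region."""
    return fluence * math.exp(-mu * length)

def transmit_mc(fluence, mu, length, n_hist=200_000, seed=0):
    """Sample the incoming fluence into discrete histories, transport
    them by exponential free-path sampling, and convert the surviving
    histories back into an outgoing fluence."""
    rng = random.Random(seed)
    survived = sum(1 for _ in range(n_hist) if rng.expovariate(mu) > length)
    return fluence * survived / n_hist

# deterministic slab -> MC handles the heterogeneous insert -> deterministic slab
f = 1.0
f = transmit_deterministic(f, mu=0.2, length=2.0)
f = transmit_mc(f, mu=0.5, length=1.0)
f = transmit_deterministic(f, mu=0.2, length=2.0)
```

Because the MC step only has to cover the heterogeneous insert, its cost scales with that volume, mirroring the reported ∼4–7× speed-up over a full Monte Carlo run.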

  20. Geology

    NASA Technical Reports Server (NTRS)

    Stewart, R. K.; Sabins, F. F., Jr.; Rowan, L. C.; Short, N. M.

    1975-01-01

    Papers from private industry reporting applications of remote sensing to oil and gas exploration were presented. Digitally processed LANDSAT images were successfully employed in several geologic interpretations. A growing interest in digital image processing among the geologic user community was shown. The papers covered a wide geographic range and a wide technical and application range. Topics included: (1) oil and gas exploration, by use of radar and multisensor studies as well as by use of LANDSAT imagery or LANDSAT digital data, (2) mineral exploration, by mapping from LANDSAT and Skylab imagery and by LANDSAT digital processing, (3) geothermal energy studies with Skylab imagery, (4) environmental and engineering geology, by use of radar or LANDSAT and Skylab imagery, (5) regional mapping and interpretation, and digital and spectral methods.

  1. The past, present and future of cyber-physical systems: a focus on models.

    PubMed

    Lee, Edward A

    2015-02-26

    This paper is about better engineering of cyber-physical systems (CPSs) through better models. Deterministic models have historically proven extremely useful and arguably form the kingpin of the industrial revolution and the digital and information technology revolutions. Key deterministic models that have proven successful include differential equations, synchronous digital logic and single-threaded imperative programs. Cyber-physical systems, however, combine these models in such a way that determinism is not preserved. Two projects show that deterministic CPS models with faithful physical realizations are possible and practical. The first project is PRET, which shows that the timing precision of synchronous digital logic can be practically made available at the software level of abstraction. The second project is Ptides (programming temporally-integrated distributed embedded systems), which shows that deterministic models for distributed cyber-physical systems have practical faithful realizations. These projects are existence proofs that deterministic CPS models are possible and practical.

  2. The Past, Present and Future of Cyber-Physical Systems: A Focus on Models

    PubMed Central

    Lee, Edward A.

    2015-01-01

    This paper is about better engineering of cyber-physical systems (CPSs) through better models. Deterministic models have historically proven extremely useful and arguably form the kingpin of the industrial revolution and the digital and information technology revolutions. Key deterministic models that have proven successful include differential equations, synchronous digital logic and single-threaded imperative programs. Cyber-physical systems, however, combine these models in such a way that determinism is not preserved. Two projects show that deterministic CPS models with faithful physical realizations are possible and practical. The first project is PRET, which shows that the timing precision of synchronous digital logic can be practically made available at the software level of abstraction. The second project is Ptides (programming temporally-integrated distributed embedded systems), which shows that deterministic models for distributed cyber-physical systems have practical faithful realizations. These projects are existence proofs that deterministic CPS models are possible and practical. PMID:25730486

  3. Hyperbolic Cross Truncations for Stochastic Fourier Cosine Series

    PubMed Central

    Zhang, Zhihua

    2014-01-01

    Based on our decomposition of stochastic processes and our asymptotic representations of Fourier cosine coefficients, we deduce an asymptotic formula of approximation errors of hyperbolic cross truncations for bivariate stochastic Fourier cosine series. Moreover we propose a kind of Fourier cosine expansions with polynomials factors such that the corresponding Fourier cosine coefficients decay very fast. Although our research is in the setting of stochastic processes, our results are also new for deterministic functions. PMID:25147842

  4. A PC-based magnetometer-only attitude and rate determination system for gyroless spacecraft

    NASA Technical Reports Server (NTRS)

    Challa, M.; Natanson, G.; Deutschmann, J.; Galal, K.

    1995-01-01

    This paper describes a prototype PC-based system that uses measurements from a three-axis magnetometer (TAM) to estimate the state (three-axis attitude and rates) of a spacecraft given no a priori information other than the mass properties. The system uses two algorithms that estimate the spacecraft's state - a deterministic magnetic-field only algorithm and a Kalman filter for gyroless spacecraft. The algorithms are combined by invoking the deterministic algorithm to generate the spacecraft state at epoch using a small batch of data and then using this deterministic epoch solution as the initial condition for the Kalman filter during the production run. System input comprises processed data that includes TAM and reference magnetic field data. Additional information, such as control system data and measurements from line-of-sight sensors, can be input to the system if available. Test results are presented using in-flight data from two three-axis stabilized spacecraft: Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) (gyroless, Sun-pointing) and Earth Radiation Budget Satellite (ERBS) (gyro-based, Earth-pointing). The results show that, using as little as 700 s of data, the system is capable of accuracies of 1.5 deg in attitude and 0.01 deg/s in rates; i.e., within SAMPEX mission requirements.
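The two-stage scheme in this record — a deterministic batch solution at epoch seeding a Kalman filter for the production run — can be sketched in miniature with a toy 1D angle/rate problem. The real system estimates a full three-axis state from TAM residuals, so everything below is an illustrative stand-in, not the flight algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 1.0, 200
true_rate = 0.01
t = np.arange(n) * dt
z = true_rate * t + rng.normal(0.0, 0.05, n)   # noisy 1D "attitude" angles

# Stage 1: deterministic batch solution on a short initial segment.
# A least-squares line fit plays the role of the epoch state estimate.
k = 20
A = np.column_stack([np.ones(k), t[:k]])
epoch, *_ = np.linalg.lstsq(A, z[:k], rcond=None)   # [angle0, rate]

# Stage 2: Kalman filter production run, seeded with the epoch solution
# propagated to the start of the filtering interval.
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-rate dynamics
H = np.array([[1.0, 0.0]])              # only the angle is observed
Q, R = np.eye(2) * 1e-8, np.array([[0.05 ** 2]])
x = np.array([epoch[0] + epoch[1] * t[k - 1], epoch[1]])
P = np.eye(2) * 0.1
for i in range(k, n):
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z[i] - H @ x)                    # update
    P = (np.eye(2) - K @ H) @ P
```

The deterministic epoch solution removes the need for a priori state knowledge, exactly the role the magnetic-field-only algorithm plays for the gyroless filter.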

  5. Development of TIF based figuring algorithm for deterministic pitch tool polishing

    NASA Astrophysics Data System (ADS)

    Yi, Hyun-Su; Kim, Sug-Whan; Yang, Ho-Soon; Lee, Yun-Woo

    2007-12-01

    Pitch is perhaps the oldest material used for optical polishing; it leaves superior surface texture and has been used widely on the optics shop floor. However, because of the unpredictable controllability of its removal characteristics, pitch tool polishing has rarely been analysed quantitatively, and many optics shops rely heavily on the optician's "feel" even today. In order to bring a degree of process controllability to pitch tool polishing, we added motorized tool motions to a conventional Draper-type polishing machine and modelled the tool path in the absolute machine coordinate system. We then produced a number of Tool Influence Functions (TIFs), both from an analytical model and from a series of experimental polishing runs using the pitch tool. The theoretical TIFs agreed well with the experimental TIFs, matching their shape to a profile accuracy of 79%. A surface figuring algorithm was then developed in-house utilizing both theoretical and experimental TIFs. We are currently undertaking a series of trial figuring experiments to prove the performance of the polishing algorithm, and the early results indicate that highly deterministic material removal control with the pitch tool can be achieved to a certain level of form error. The machine renovation, TIF theory and experimental confirmation, and figuring simulation results are reported together with implications for deterministic polishing.
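Once a TIF is in hand, the figuring step reduces to choosing dwell times so that the TIF convolved with dwell matches the measured surface error. A minimal 1D sketch of that inversion, with a Gaussian stand-in for the measured TIF (the paper's algorithm and tool path are more involved; all parameters here are illustrative):

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 201)                 # tool positions, arbitrary units
error_map = 0.5 + 0.3 * np.cos(np.pi * x)       # desired removal depth

def tif(d, width=0.15):
    """Gaussian stand-in for the measured tool influence function:
    removal rate at lateral offset d from the tool center."""
    return np.exp(-0.5 * (d / width) ** 2)

# Removal is the TIF convolved with dwell time: r_i = sum_j TIF(x_i - x_j) t_j,
# so figuring is a linear inverse problem for the dwell-time vector.
A = tif(x[:, None] - x[None, :])
dwell, *_ = np.linalg.lstsq(A, error_map, rcond=None)

predicted = A @ dwell
residual = error_map - predicted
```

A production figuring code would additionally enforce non-negative dwell times (e.g. via NNLS) and regularize the ill-conditioned kernel; the plain least-squares solve above is only the core idea.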

  6. Deterministic photon-emitter coupling in chiral photonic circuits.

    PubMed

    Söllner, Immo; Mahmoodian, Sahand; Hansen, Sofie Lindskov; Midolo, Leonardo; Javadi, Alisa; Kiršanskė, Gabija; Pregnolato, Tommaso; El-Ella, Haitham; Lee, Eun Hye; Song, Jin Dong; Stobbe, Søren; Lodahl, Peter

    2015-09-01

    Engineering photon emission and scattering is central to modern photonics applications ranging from light harvesting to quantum-information processing. To this end, nanophotonic waveguides are well suited as they confine photons to a one-dimensional geometry and thereby increase the light-matter interaction. In a regular waveguide, a quantum emitter interacts equally with photons in either of the two propagation directions. This symmetry is violated in nanophotonic structures in which non-transversal local electric-field components imply that photon emission and scattering may become directional. Here we show that the helicity of the optical transition of a quantum emitter determines the direction of single-photon emission in a specially engineered photonic-crystal waveguide. We observe single-photon emission into the waveguide with a directionality that exceeds 90% under conditions in which practically all the emitted photons are coupled to the waveguide. The chiral light-matter interaction enables deterministic and highly directional photon emission for experimentally achievable on-chip non-reciprocal photonic elements. These may serve as key building blocks for single-photon optical diodes, transistors and deterministic quantum gates. Furthermore, chiral photonic circuits allow the dissipative preparation of entangled states of multiple emitters for experimentally achievable parameters, may lead to novel topological photon states and could be applied for directional steering of light.

  7. Deterministic photon-emitter coupling in chiral photonic circuits

    NASA Astrophysics Data System (ADS)

    Söllner, Immo; Mahmoodian, Sahand; Hansen, Sofie Lindskov; Midolo, Leonardo; Javadi, Alisa; Kiršanskė, Gabija; Pregnolato, Tommaso; El-Ella, Haitham; Lee, Eun Hye; Song, Jin Dong; Stobbe, Søren; Lodahl, Peter

    2015-09-01

    Engineering photon emission and scattering is central to modern photonics applications ranging from light harvesting to quantum-information processing. To this end, nanophotonic waveguides are well suited as they confine photons to a one-dimensional geometry and thereby increase the light-matter interaction. In a regular waveguide, a quantum emitter interacts equally with photons in either of the two propagation directions. This symmetry is violated in nanophotonic structures in which non-transversal local electric-field components imply that photon emission and scattering may become directional. Here we show that the helicity of the optical transition of a quantum emitter determines the direction of single-photon emission in a specially engineered photonic-crystal waveguide. We observe single-photon emission into the waveguide with a directionality that exceeds 90% under conditions in which practically all the emitted photons are coupled to the waveguide. The chiral light-matter interaction enables deterministic and highly directional photon emission for experimentally achievable on-chip non-reciprocal photonic elements. These may serve as key building blocks for single-photon optical diodes, transistors and deterministic quantum gates. Furthermore, chiral photonic circuits allow the dissipative preparation of entangled states of multiple emitters for experimentally achievable parameters, may lead to novel topological photon states and could be applied for directional steering of light.

  8. Current fluctuations in periodically driven systems

    NASA Astrophysics Data System (ADS)

    Barato, Andre C.; Chetrite, Raphael

    2018-05-01

    Small nonequilibrium systems driven by an external periodic protocol can be described by Markov processes with time-periodic transition rates. In general, current fluctuations in such small systems are large and may play a crucial role. We develop a theoretical formalism to evaluate the rate of such large deviations in periodically driven systems. We show that the scaled cumulant generating function that characterizes current fluctuations is given by a maximal Floquet exponent. Comparing deterministic protocols with stochastic protocols, we show that, with respect to large deviations, systems driven by a stochastic protocol with an infinitely large number of jumps are equivalent to systems driven by deterministic protocols. Our results are illustrated with three case studies: a two-state model for a heat engine, a three-state model for a molecular pump, and a biased random walk with a time-periodic affinity.
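For time-independent rates, the scaled cumulant generating function (SCGF) of an integrated current is the largest eigenvalue of a "tilted" generator; the result above replaces that eigenvalue by a maximal Floquet exponent when the rates are periodic. A sketch of the time-independent baseline for a two-state process (the rates are arbitrary illustrative values):

```python
import numpy as np

a, b = 2.0, 3.0   # jump rates 1->2 and 2->1 of a two-state Markov process

def scgf(s):
    """Largest real eigenvalue of the generator tilted by exp(s)
    on 1->2 jumps; this is the SCGF of the 1->2 jump count."""
    L = np.array([[-a,             b ],
                  [ a * np.exp(s), -b]])
    return np.max(np.linalg.eigvals(L).real)

# lambda(0) = 0 (conservation of probability), and lambda'(0) equals
# the steady-state mean current a*b/(a+b).
h = 1e-5
mean_current = (scgf(h) - scgf(-h)) / (2.0 * h)
```

For periodic rates one would instead propagate the tilted generator over one driving period and extract the leading Floquet exponent, which is the generalization the paper establishes.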

  9. Total systems design analysis of high performance structures

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1993-01-01

    Designer-controlled parameters were identified at interdiscipline interfaces to optimize structural system performance and downstream development and operations with reliability and least life-cycle cost. Interface tasks and iterations are tracked through a matrix of performance-discipline integration versus manufacturing, verification, and operations interactions for a total system design analysis. Performance integration tasks include shapes, sizes, environments, and materials. Integrity integration tasks are reliability and recurring structural costs. Significant interface designer-controlled parameters were noted as shapes, dimensions, probability range factors, and cost. A structural failure concept is presented, and first-order reliability and deterministic methods, their benefits, and their limitations are discussed. A deterministic reliability technique combining the benefits of both is proposed for static structures, which is also timely and economically verifiable. Though launch vehicle environments were primarily considered, the system design process is applicable to any surface system using its own unique field environments.

  10. Influence of the hypercycle on the error threshold: a stochastic approach.

    PubMed

    García-Tejedor, A; Sanz-Nuño, J C; Olarrea, J; Javier de la Rubia, F; Montero, F

    1988-10-21

    The role of fluctuations on the error threshold of the hypercycle has been studied by a stochastic approach on a very simplified model. For this model, the master equation was derived and its unique steady state calculated. This state implies the extinction of the system. But the actual time necessary to reach the steady state may be astronomically long whereas for times of experimental interest the system could be near some quasi-stationary states. In order to explore this possibility a Gillespie simulation of the stochastic process has been carried out. These quasi-stationary states correspond to the deterministic steady states of the system. The error threshold shifts towards higher values of the quality factor Q. Moreover, information about the fluctuations around the quasi-stationary states is obtained. The results are discussed in relation to the deterministic states.
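The Gillespie simulation mentioned above can be sketched for a simpler system with the same qualitative structure: a logistic birth-death process whose only true steady state is extinction, but whose quasi-stationary state sits near the deterministic fixed point. This is plain Python with illustrative rates, not the hypercycle model of the paper.

```python
import random

def gillespie(n0, b, d, K, t_max, rng):
    """Gillespie direct method for a logistic birth-death process:
    birth rate b*n, death rate d*n + (b-d)*n^2/K.  The deterministic
    limit dn/dt = (b-d)*n*(1 - n/K) has a stable fixed point at n = K,
    but the stochastic process is eventually absorbed at n = 0."""
    n, t = n0, 0.0
    while t < t_max and n > 0:
        birth = b * n
        death = d * n + (b - d) * n * n / K
        total = birth + death
        t += rng.expovariate(total)                 # time to next reaction
        n += 1 if rng.random() < birth / total else -1
    return n

rng = random.Random(42)
samples = [gillespie(30, b=2.0, d=1.0, K=100, t_max=20.0, rng=rng)
           for _ in range(100)]
quasi_stationary_mean = sum(samples) / len(samples)
```

For times of experimental interest the trajectories hover near the deterministic steady state, exactly the quasi-stationary behavior the abstract describes; the mean extinction time grows rapidly with system size.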

  11. Probabilistic Finite Element Analysis & Design Optimization for Structural Designs

    NASA Astrophysics Data System (ADS)

    Deivanayagam, Arumugam

    This study focuses on incorporating the probabilistic nature of material properties (Kevlar® 49) into the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering design through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis focusing on probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and a user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure patterns and exit velocities of the models, and the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation. The next part of the research is to implement the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions. However, in a deterministic optimization problem, even though the structures are cost-effective, they become highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. Reliable and optimal solutions can be obtained by performing reliability optimization along with deterministic optimization, which is RBDO. In the RBDO problem formulation, in addition to structural performance constraints, reliability constraints are also considered. 
This part of the research starts with an introduction to reliability analysis, covering first-order and second-order reliability analysis, followed by simulation techniques that are performed to obtain the probability of failure and reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation with sensitivity analysis, which is performed to remove the highly reliable constraints in the RBDO, thereby reducing computational time and function evaluations. Finally, implementations of the reliability analysis concepts and RBDO in finite element 2D truss problems and a planar beam problem are presented and discussed.
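The core of the MCS step — propagating random material properties through a model instead of using single nominal values — can be illustrated with a textbook limit-state example. The normal distributions and numbers below are hypothetical placeholders for the Kevlar® 49 data, chosen so the Monte Carlo estimate can be checked analytically:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
n = 1_000_000

# Limit state g = R - S: strength R and load effect S are sampled from
# distributions rather than fixed at nominal values as in a
# deterministic analysis.
R = rng.normal(500.0, 40.0, n)    # strength
S = rng.normal(350.0, 30.0, n)    # load effect
pf_mcs = np.mean((R - S) < 0.0)   # Monte Carlo probability of failure

# Analytic reference for normal R and S: reliability index
# beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2), pf = Phi(-beta).
beta = (500.0 - 350.0) / sqrt(40.0 ** 2 + 30.0 ** 2)
pf_exact = 0.5 * (1.0 - erf(beta / sqrt(2.0)))
```

RBDO then treats such reliability indices as constraints (beta at least some target value) alongside the usual performance constraints in the optimization.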

  12. Modeling Uncertainties in EEG Microstates: Analysis of Real and Imagined Motor Movements Using Probabilistic Clustering-Driven Training of Probabilistic Neural Networks.

    PubMed

    Dinov, Martin; Leech, Robert

    2017-01-01

    Part of the process of EEG microstate estimation involves clustering EEG channel data at the global field power (GFP) maxima, very commonly using a modified K-means (KM) approach. Clustering has also been done deterministically, despite there being uncertainties in multiple stages of the microstate analysis, including the GFP peak definition, the clustering itself, and the post-clustering assignment of microstates back onto the EEG timecourse of interest. We perform a fully probabilistic microstate clustering and labeling, to account for these sources of uncertainty using the closest probabilistic analog to KM, Fuzzy C-means (FCM). We train softmax multi-layer perceptrons (MLPs) using the KM and FCM-inferred cluster assignments as target labels, to then allow for probabilistic labeling of the full EEG data instead of the usual correlation-based deterministic microstate label assignment typically used. We assess the merits of the probabilistic analysis vs. the deterministic approaches in EEG data recorded while participants perform real or imagined motor movements from a publicly available data set of 109 subjects. Though FCM group template maps that are almost topographically identical to KM were found, there is considerable uncertainty in the subsequent assignment of microstate labels. In general, imagined motor movements are less predictable on a time point-by-time point basis, possibly reflecting the more exploratory nature of the brain state during imagined, compared to during real motor movements. We find that some relationships may be more evident using FCM than using KM and propose that future microstate analysis should preferably be performed probabilistically rather than deterministically, especially in situations such as with brain computer interfaces, where both training and applying models of microstates need to account for uncertainty. 
Probabilistic neural network-driven microstate assignment has a number of advantages that we have discussed, which are likely to be further developed and exploited in future studies. In conclusion, probabilistic clustering and a probabilistic neural network-driven approach to microstate analysis is likely to better model and reveal details and the variability hidden in current deterministic and binarized microstate assignment and analyses.
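The FCM update at the heart of the probabilistic clustering replaces K-means' hard assignments with memberships proportional to d^(-2/(m-1)). A compact NumPy sketch on synthetic 2D data (not EEG topographies; the initialization is deliberately deterministic so this toy example converges predictably):

```python
import numpy as np

def fuzzy_c_means(X, centers, m=2.0, n_iter=50):
    """Fuzzy C-means: alternate soft-membership and center updates.
    Returns centers and the membership matrix U (n_samples x c),
    each row summing to 1."""
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)                     # guard exact center hits
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)     # soft memberships
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
    return centers, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (100, 2)),      # blob A near (0, 0)
               rng.normal(5.0, 0.3, (100, 2))])     # blob B near (5, 5)
centers, U = fuzzy_c_means(X, centers=X[[0, 150]].copy())
```

Unlike K-means labels, each row of U quantifies assignment uncertainty, which is exactly what the probabilistic microstate labeling exploits.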

  13. Modeling Uncertainties in EEG Microstates: Analysis of Real and Imagined Motor Movements Using Probabilistic Clustering-Driven Training of Probabilistic Neural Networks

    PubMed Central

    Dinov, Martin; Leech, Robert

    2017-01-01

    Part of the process of EEG microstate estimation involves clustering EEG channel data at the global field power (GFP) maxima, very commonly using a modified K-means (KM) approach. Clustering has also been done deterministically, despite there being uncertainties in multiple stages of the microstate analysis, including the GFP peak definition, the clustering itself, and the post-clustering assignment of microstates back onto the EEG timecourse of interest. We perform a fully probabilistic microstate clustering and labeling, to account for these sources of uncertainty using the closest probabilistic analog to KM, Fuzzy C-means (FCM). We train softmax multi-layer perceptrons (MLPs) using the KM and FCM-inferred cluster assignments as target labels, to then allow for probabilistic labeling of the full EEG data instead of the usual correlation-based deterministic microstate label assignment typically used. We assess the merits of the probabilistic analysis vs. the deterministic approaches in EEG data recorded while participants perform real or imagined motor movements from a publicly available data set of 109 subjects. Though FCM group template maps that are almost topographically identical to KM were found, there is considerable uncertainty in the subsequent assignment of microstate labels. In general, imagined motor movements are less predictable on a time point-by-time point basis, possibly reflecting the more exploratory nature of the brain state during imagined, compared to during real motor movements. We find that some relationships may be more evident using FCM than using KM and propose that future microstate analysis should preferably be performed probabilistically rather than deterministically, especially in situations such as with brain computer interfaces, where both training and applying models of microstates need to account for uncertainty. 
Probabilistic neural network-driven microstate assignment has a number of advantages that we have discussed, which are likely to be further developed and exploited in future studies. In conclusion, probabilistic clustering and a probabilistic neural network-driven approach to microstate analysis is likely to better model and reveal details and the variability hidden in current deterministic and binarized microstate assignment and analyses. PMID:29163110

  14. Topology optimization under stochastic stiffness

    NASA Astrophysics Data System (ADS)

    Asadpoure, Alireza

    Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables discovery of new, high performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variabilities including (for example) variabilities in fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties. The use of perturbation transforms the problem of topology optimization under uncertainty to an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique. 
The resulting compact representations for the response quantities allow for efficient and accurate calculation of sensitivities of response statistics with respect to the design variables. The proposed methods are shown to be successful at generating robust optimal topologies. Examples from topology optimization in continuum and discrete domains (truss structures) under uncertainty are presented. It is also shown that the proposed methods lead to significant computational savings when compared to Monte Carlo-based optimization, which involves multiple formations and inversions of the global stiffness matrix, and that results obtained from the proposed method are in excellent agreement with those obtained from a Monte Carlo-based optimization algorithm.

  15. Stochastic and deterministic processes regulate spatio-temporal variation in seed bank diversity

    Treesearch

    Alejandro A. Royo; Todd E. Ristau

    2013-01-01

    Seed banks often serve as reservoirs of taxonomic and genetic diversity that buffer plant populations and influence post-disturbance vegetation trajectories; yet evaluating their importance requires understanding how their composition varies within and across spatial and temporal scales (α- and β-diversity). Shifts in seed bank diversity are strongly...

  16. Losers in the 'Rock-Paper-Scissors' game: The role of non-hierarchical competition and chaos as biodiversity sustaining agents in aquatic systems

    EPA Science Inventory

    Processes occurring within small areas (patch-scale) that influence species richness and spatial heterogeneity of larger areas (landscape-scale) have long been an interest of ecologists. This research focused on the role of patch-scale deterministic chaos arising in phytoplankton...

  17. PLANNING MODELS FOR URBAN WATER SUPPLY EXPANSION. VOLUME 1. PLANNING FOR THE EXPANSION OF REGIONAL WATER SUPPLY SYSTEMS

    EPA Science Inventory

    A three-volume report was developed relative to the modelling of investment strategies for regional water supply planning. Volume 1 is the study of capacity expansion over time. Models to aid decision making for the deterministic case are presented, and a planning process under u...

  18. Computational analysis of the roles of biochemical reactions in anomalous diffusion dynamics

    NASA Astrophysics Data System (ADS)

    Naruemon, Rueangkham; Charin, Modchang

    2016-04-01

    Biochemical processes in cells are usually modeled by reaction-diffusion (RD) equations. In these RD models, the diffusive process is assumed to be Gaussian. However, a growing number of studies have noted that intracellular diffusion is anomalous at some or all times, which may result from a crowded environment and chemical kinetics. This work aims to computationally study the effects of chemical reactions on the diffusive dynamics of RD systems by using both stochastic and deterministic algorithms. A numerical method to estimate the mean-square displacement (MSD) from a deterministic algorithm is also investigated. Our computational results show that anomalous diffusion can be solely due to chemical reactions: the chemical reactions alone can cause anomalous sub-diffusion in the RD system at some or all times. The time-dependent anomalous diffusion exponent is found to depend on many parameters, including chemical reaction rates, reaction orders, and chemical concentrations. Project supported by the Thailand Research Fund and Mahidol University (Grant No. TRG5880157), the Thailand Center of Excellence in Physics (ThEP), CHE, Thailand, and the Development Promotion of Science and Technology.
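The deterministic route to the MSD mentioned above can be sketched by integrating the pure-diffusion RD equation with finite differences and reading off MSD(t) as the second moment of the concentration profile. Without reactions the exponent is exactly 1 (normal diffusion, MSD = 2Dt in 1D), which is the baseline that the reaction terms in the paper then distort; grid and parameters below are illustrative.

```python
import numpy as np

D, dx, dt = 1.0, 0.1, 0.002          # dt < dx^2 / (2D) for explicit stability
x = np.linspace(-10.0, 10.0, 201)
c = np.zeros_like(x)
c[len(x) // 2] = 1.0 / dx            # delta-like initial concentration at x = 0

def msd(c):
    """Second moment of the (normalized) concentration profile."""
    return np.sum(x ** 2 * c) / np.sum(c)

steps = 1000
for _ in range(steps):
    lap = (np.roll(c, 1) - 2.0 * c + np.roll(c, -1)) / dx ** 2
    c = c + dt * D * lap             # explicit Euler diffusion step

t = steps * dt                        # elapsed time, t = 2.0
```

Adding reaction terms to the update and tracking how the effective exponent in MSD(t) departs from 1 is the deterministic analogue of the stochastic analysis in the paper.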

  19. Implementation and characterization of active feed-forward for deterministic linear optics quantum computing

    NASA Astrophysics Data System (ADS)

    Böhi, P.; Prevedel, R.; Jennewein, T.; Stefanov, A.; Tiefenbacher, F.; Zeilinger, A.

    2007-12-01

    In general, quantum computer architectures based on the dynamical evolution of quantum states also require the processing of classical information obtained by measurements of the actual qubits that make up the computer. This classical processing involves fast, active adaptation of subsequent measurements and real-time error correction (feed-forward), so that quantum gates and algorithms can be executed in a deterministic and hence error-free fashion. This is also true in the linear optical regime, where the quantum information is stored in the polarization state of photons. The adaptation of the photon's polarization can be achieved very quickly by employing electro-optical modulators (EOMs), which change the polarization of a photon passing through them upon application of a high voltage. In this paper we discuss techniques for implementing fast, active feed-forward at the single-photon level and present their application in the context of photonic quantum computing. This includes the working principles and characterization of the EOMs as well as a description of the switching logic, both of which allow quantum computation at unprecedented speed.

  20. Geologic Mapping of the NW Rim of Hellas Basin, Mars

    NASA Astrophysics Data System (ADS)

    Crown, D. A.; Bleamaster, L. F.; Mest, S. C.; Mustard, J. F.

    2009-03-01

    Geologic mapping of the NW rim of Hellas basin is providing new constraints on the magnitudes, extents, and history of volatile-driven processes as well as a geologic context for mineralogic identifications.

  1. Geology. Grade 6. Anchorage School District Elementary Science Program.

    ERIC Educational Resources Information Center

    Anchorage School District, AK.

    This resource book introduces sixth-grade children to the environment by studying rocks and other geological features. Nine lessons are provided on a variety of topics including: (1) geologic processes; (2) mountain building; (3) weathering; (4) geologic history and time; (5) plate tectonics; (6) rocks and minerals; (7) mineral properties; (8)…

  2. 30 CFR 580.41 - What types of geological data and information must I submit to BOEM?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (including geochemical) data and information describing each operation of analysis, processing, and... 30 Mineral Resources 2 2014-07-01 2014-07-01 false What types of geological data and information... CONTINENTAL SHELF Data Requirements Geological Data and Information § 580.41 What types of geological data and...

  3. 30 CFR 580.41 - What types of geological data and information must I submit to BOEM?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (including geochemical) data and information describing each operation of analysis, processing, and... 30 Mineral Resources 2 2013-07-01 2013-07-01 false What types of geological data and information... CONTINENTAL SHELF Data Requirements Geological Data and Information § 580.41 What types of geological data and...

  4. Loss estimation in southeast Korea from a scenario earthquake using the deterministic method in HAZUS

    NASA Astrophysics Data System (ADS)

    Kang, S.; Kim, K.; Suk, B.; Yoo, H.

    2007-12-01

    A strong ground motion attenuation relationship represents the comprehensive trend of ground shaking at sites as a function of distance from the source, geology, local soil conditions, and other factors. It is necessary to develop an attenuation relationship with careful consideration of the characteristics of the target area for reliable seismic hazard/risk assessments. In this study, observed ground motions from the January 2007 magnitude 4.9 Odaesan earthquake and from events occurring in the Gyeongsang provinces are compared with previously proposed ground attenuation relationships for the Korean Peninsula to select the most appropriate one. Meanwhile, several strong ground motion attenuation relationships designed for the Western United States and the Central and Eastern United States are available in HAZUS. The relationship selected for the Korean Peninsula has been compared with the attenuation relationships available in HAZUS; on this basis, the attenuation relation for the Western United States proposed by Sadigh et al. (1997) for Site Class B was selected for this study, since the reliability of the assessment is improved by using an appropriate attenuation relation. It has been used for earthquake loss estimation of the Gyeongju area in southeast Korea using the deterministic method in HAZUS with a scenario earthquake (M=6.7). Our preliminary estimates show 15.6% damage to houses, shelter needs for about three thousand residents, and 75 fatalities in the study area for a scenario event occurring at 2 A.M. Approximately 96% of hospitals will be in normal operation within 24 hours of the proposed event. Losses related to houses will be more than 114 million US dollars. Application of the improved methodology for loss estimation in Korea will help decision makers plan disaster responses and hazard mitigation.

  5. Status report on the geology of the Oak Ridge Reservation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hatcher, R.D. Jr.; Lemiszki, P.J.; Foreman, J.L.

    1992-10-01

    This report provides an introduction to the present state of knowledge of the geology of the Oak Ridge Reservation (ORR) and a cursory introduction to the hydrogeology. An important element of this work is the construction of a modern detailed geologic map of the ORR (Plate 1), which remains in progress. An understanding of the geologic framework of the ORR is essential to many current and proposed activities related to land-use planning, waste management, environmental restoration, and waste remediation. Therefore, this report is also intended to convey the present state of knowledge of the geologic and geohydrologic framework of the ORR and vicinity and to present some of the available data that provide the basic framework for additional geologic mapping, subsurface geologic, and geohydrologic studies. In addition, some recently completed, detailed work on soils and other surficial materials is included because of the close relationships to bedrock geology and the need to recognize the weathered products of bedrock units. Weathering processes also have some influence on hydrologic systems and processes at depth.

  6. Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate

    NASA Astrophysics Data System (ADS)

    Wang, Zhi-Gang; Gao, Rui-Mei; Fan, Xiao-Ming; Han, Qi-Xing

    2014-09-01

    We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ0, a key parameter in epidemiology, is a threshold that determines the persistence or extinction of the disease. Using Lyapunov function techniques, we show that if ℛ0 is greater than 1 and the deterministic model satisfies certain conditions, then the disease prevails: the infective class persists and the endemic state is asymptotically stable in a feasible region. If ℛ0 is less than or equal to 1, the infective class vanishes and the disease dies out. In addition, stochastic noise around the endemic equilibrium is added to the deterministic MSIR model, extending it to a system of stochastic ordinary differential equations, and we carry out a detailed analysis of the asymptotic behavior of the stochastic model. When the stochastic system satisfies certain conditions and ℛ0 is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the dynamics of the deterministic and stochastic models are illustrated through computer simulations.
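The ℛ0 threshold behavior described above can be reproduced with a few lines of numerical integration. A minimal single-group SIR skeleton of the model (maternal immunity and vaccination omitted for brevity; parameters are illustrative):

```python
def simulate_sir(beta, gamma, s0=0.99, i0=0.01, dt=0.01, steps=5000):
    """Forward-Euler integration of a single-group SIR skeleton of the
    MSIR model (M class omitted for brevity). Here R0 = beta / gamma."""
    s, i = s0, i0
    peak = i
    for _ in range(steps):
        new_inf = beta * s * i       # mass-action incidence
        s -= dt * new_inf
        i += dt * (new_inf - gamma * i)
        peak = max(peak, i)
    return i, peak

# R0 > 1: the epidemic takes off; R0 < 1: prevalence decays monotonically
_, peak_super = simulate_sir(beta=0.5, gamma=0.2)   # R0 = 2.5
_, peak_sub = simulate_sir(beta=0.1, gamma=0.2)     # R0 = 0.5
```

The multi-group version replaces `beta * s * i` with a sum over a contact matrix, and ℛ0 becomes the spectral radius of the next-generation matrix.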

  7. Aniakchak National Monument and Preserve: Geologic resources inventory report

    USGS Publications Warehouse

    Hults, Chad P.; Neal, Christina

    2015-01-01

    This GRI report is a companion document to previously completed GRI digital geologic map data. It was written for resource managers to support science-informed decision making. It may also be useful for interpretation. The report was prepared using available geologic information, and the NPS Geologic Resources Division conducted no new fieldwork in association with its preparation. Sections of the report discuss distinctive geologic features and processes within the park, highlight geologic issues facing resource managers, describe the geologic history leading to the present-day landscape, and provide information about the GRI geologic map data. A poster illustrates these data. The Map Unit Properties Table summarizes report content for each geologic map unit.

  8. Geologic processes influence the effects of mining on aquatic ecosystems

    USGS Publications Warehouse

    Schmidt, Travis S.; Clements, William H.; Wanty, Richard B.; Verplanck, Philip L.; Church, Stan E.; San Juan, Carma A.; Fey, David L.; Rockwell, Barnaby W.; DeWitt, Ed H.; Klein, Terry L.

    2012-01-01

    Geologic processes strongly influence water and sediment quality in aquatic ecosystems, but geologic principles are rarely incorporated into routine biomonitoring studies. We tested whether elevated concentrations of metals in water and sediment are restricted to streams downstream of mines or of areas that may discharge mine wastes. We surveyed 198 catchments, classified as “historically mined” or “unmined” based on mineral-deposit criteria, to determine whether water and sediment quality were influenced by naturally occurring mineralized rock, by historical mining, or by a combination of both. By accounting for different geologic sources of metals to the environment, we were able to distinguish aquatic ecosystems limited by metals derived from natural processes from those limited by mining. Elevated concentrations of metals in water and sediment were not restricted to mined catchments; depauperate aquatic communities were found in unmined catchments. The type and intensity of hydrothermal alteration and the mineral deposit type were important determinants of water and sediment quality, as well as of the aquatic community, in both mined and unmined catchments. This study distinguished the effects of different rock types and geologic sources of metals on ecosystems by incorporating basic geologic processes into reference and baseline site selection, resulting in a refined assessment. Our results indicate that biomonitoring studies should account for natural sources of metals in some geologic environments as contributors to the effect of mines on aquatic ecosystems, recognizing that mining-impacted drainages may have had high pre-mining background metal concentrations.

  9. Measuring novices' field mapping abilities using an in-class exercise based on expert task analysis

    NASA Astrophysics Data System (ADS)

    Caulkins, J. L.

    2010-12-01

    We are interested in developing a model of expert-like behavior for improving the teaching methods of undergraduate field geology. Our aim is to assist students in mastering the process of field mapping more efficiently and effectively and to improve their ability to think creatively in the field. To examine expert-mapping behavior, a cognitive task analysis was conducted with expert geologic mappers in an attempt to define the process of geologic mapping (i.e. to understand how experts carry out geological mapping). The task analysis indicates that expert mappers have a wealth of geologic scenarios at their disposal that they compare against examples seen in the field, experiences that most undergraduate mappers will not have had. While presenting students with many geological examples in class may increase their understanding of geologic processes, novices still struggle when presented with a novel field situation. Based on the task analysis, a short (45-minute) paper-map-based exercise was designed and tested with 14 pairs of 3rd year geology students. The exercise asks students to generate probable geologic models based on a series of four (4) data sets. Each data set represents a day’s worth of data; after the first “day,” new sheets simply include current and previously collected data (e.g. “Day 2” data set includes data from “Day 1” plus the new “Day 2” data). As the geologic complexity increases, students must adapt, reject or generate new geologic models in order to fit the growing data set. Preliminary results of the exercise indicate that students who produced more probable geologic models, and produced higher ratios of probable to improbable models, tended to go on to do better on the mapping exercises at the 3rd year field school. These results suggest that those students with more cognitively available geologic models may be more able to use these models in field settings than those who are unable to draw on these models for whatever reason. 
Giving students practice at generating geologic models to explain data may be useful in preparing our students for field mapping exercises.

  10. Constructing a Geology Ontology Using a Relational Database

    NASA Astrophysics Data System (ADS)

    Hou, W.; Yang, L.; Yin, S.; Ye, J.; Clarke, K.

    2013-12-01

    In the geology community, the creation of a common geology ontology has become a useful means of solving problems of data integration, knowledge transformation, and the interoperation of multi-source, heterogeneous, and multi-scale geological data. Currently, human-computer interaction methods and relational database-based methods are the primary ontology construction methods. Human-computer interaction methods such as the Geo-rule-based method, the ontology life-cycle method, and the module design method have been proposed for applied geological ontologies. Essentially, the relational database-based method is reverse engineering that abstracts semantic information from an existing database; the key is to construct rules for transforming database entities into the ontology. Relative to human-computer interaction methods, relational database-based methods can reuse existing resources and the semantic relationships already stated among geological entities. However, two problems challenge their development and application. One is the transformation of multiple inheritance and nested relationships and their representation in an ontology. The other is that most of these methods do not measure how well semantics are retained by the transformation process. In this study, we focused on constructing a rule set to convert the semantics in a geological database into a geological ontology. Starting from the relational schema of a geological database, a conversion approach is presented that converts a geological spatial database into an OWL-based geological ontology, based on identifying semantics such as entities, relationships, inheritance relationships, nested relationships, and cluster relationships. The semantic integrity of the transformation was verified using an inverse mapping process.
    In the geological ontology, inheritance and union operations between superclasses and subclasses were used to represent the nested relationships of a geochronology and multiple-inheritance relationships. Based on a Quaternary database of the downtown area of Foshan City, Guangdong Province, in southern China, a geological ontology was constructed using the proposed method. To measure how well semantics were maintained in the conversion process and its results, an inverse mapping from the ontology to a relational database was tested based on a proposed conversion rule. A comparison of schemas and entities, and of the reduction of tables, between the inverse-mapped database and the original database showed that the proposed method retains semantic information well during conversion. An application for abstracting sandstone information showed that semantic relationships among concepts in the geological database were successfully reorganized in the constructed ontology. Key words: geological ontology; geological spatial database; multiple inheritance; OWL. Acknowledgement: This research is jointly funded by the Specialized Research Fund for the Doctoral Program of Higher Education of China (RFDP) (20100171120001), NSFC (41102207), and the Fundamental Research Funds for the Central Universities (12lgpy19).
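The core conversion rule described above (database entities become ontology classes, and a hierarchy encoded via foreign keys becomes subclass inheritance) can be sketched as follows. The table layout and names are hypothetical, not the Foshan database schema:

```python
# Hypothetical miniature of a database-to-ontology conversion rule:
# each table row becomes an OWL class, and a self-referencing
# "parent" foreign key becomes an rdfs:subClassOf relationship.
rows = [
    {"id": 1, "name": "GeologicUnit", "parent": None},
    {"id": 2, "name": "SedimentaryUnit", "parent": 1},
    {"id": 3, "name": "QuaternaryDeposit", "parent": 2},
]

def rows_to_triples(rows):
    by_id = {r["id"]: r["name"] for r in rows}
    # Rule 1: every entity row is declared as a class
    triples = [(r["name"], "rdf:type", "owl:Class") for r in rows]
    # Rule 2: the parent foreign key maps to subclass inheritance
    for r in rows:
        if r["parent"] is not None:
            triples.append((r["name"], "rdfs:subClassOf", by_id[r["parent"]]))
    return triples

triples = rows_to_triples(rows)
```

The inverse mapping used to verify semantic integrity would apply these rules in reverse, reconstructing rows from class and subclass triples and comparing them to the original tables.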

  11. Calibration of an Unsteady Groundwater Flow Model for a Complex, Strongly Heterogeneous Aquifer

    NASA Astrophysics Data System (ADS)

    Curtis, Z. K.; Liao, H.; Li, S. G.; Phanikumar, M. S.; Lusch, D.

    2016-12-01

    Modeling of groundwater systems characterized by complex three-dimensional structure and heterogeneity remains a significant challenge. Most of today's groundwater models are developed from relatively simple conceptual representations in favor of model calibratability. As more complexities are modeled, e.g., by adding more layers and/or zones, or by introducing transient processes, more parameters must be estimated, and issues related to ill-posed groundwater problems and non-unique calibration arise. Here, we explore an alternative conceptual representation for groundwater modeling that is fully three-dimensional and can capture complex 3D heterogeneity (both systematic and "random") without over-parameterizing the aquifer system. In particular, we apply Transition Probability (TP) geostatistics to high-resolution borehole data from a water well database to characterize the complex 3D geology. Different aquifer material classes, e.g., `AQ' (aquifer material), `MAQ' (marginal aquifer material), `PCM' (partially confining material), and `CM' (confining material), are simulated, with the hydraulic properties of each material type as tuning parameters during calibration. The TP-based approach is applied to simulate unsteady groundwater flow in a large, complex, and strongly heterogeneous glacial aquifer system in Michigan across multiple spatial and temporal scales. The resulting model is calibrated to observed static water level data over a time span of 50 years. The results show that the TP-based conceptualization enables much more accurate and robust calibration/simulation than conventional deterministic layer/zone-based conceptual representations.
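In its simplest one-dimensional form, the TP-based conceptualization amounts to drawing a facies column from a Markov chain. A hedged sketch with an illustrative transition matrix (real applications fit these probabilities to borehole logs and work in 3D with directional transition rates):

```python
import random

# One-dimensional sketch of transition-probability (TP) simulation:
# a vertical facies column is drawn from a Markov chain whose
# transition matrix would, in practice, be fitted to borehole data.
FACIES = ["AQ", "MAQ", "PCM", "CM"]
T = {  # illustrative transition probabilities; each row sums to 1
    "AQ":  [0.70, 0.15, 0.10, 0.05],
    "MAQ": [0.20, 0.60, 0.15, 0.05],
    "PCM": [0.10, 0.15, 0.60, 0.15],
    "CM":  [0.05, 0.10, 0.15, 0.70],
}

def simulate_column(n_cells, start="AQ", seed=42):
    """Draw a facies sequence cell by cell from the transition matrix."""
    rng = random.Random(seed)
    col = [start]
    for _ in range(n_cells - 1):
        col.append(rng.choices(FACIES, weights=T[col[-1]])[0])
    return col

column = simulate_column(200)
```

Because the heterogeneity is carried by the simulated facies field, only the hydraulic properties of the four material classes remain as calibration parameters, which is the over-parameterization argument made above.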

  12. Persistent ecological shifts in marine molluscan assemblages across the end-Cretaceous mass extinction.

    PubMed

    Aberhan, Martin; Kiessling, Wolfgang

    2015-06-09

    Contemporary biodiversity loss and population declines threaten to push the biosphere toward a tipping point with irreversible effects on ecosystem composition and function. As a potential example of a global-scale regime shift in the geological past, we assessed ecological changes across the end-Cretaceous mass extinction based on molluscan assemblages at four well-studied sites. By contrasting preextinction and postextinction rank abundance and numerical abundance in 19 molluscan modes of life (each defined as a unique combination of mobility level, feeding mode, and position relative to the substrate), we find distinct shifts in ecospace utilization, which significantly exceed predictions from null models. The magnitude of change in functional traits relative to normal temporal fluctuations at far-flung sites indicates that molluscan assemblages shifted to differently structured systems and that the faunal response was global. The strengths of temporal ecological shifts, however, are mostly within the range of preextinction site-to-site variability, demonstrating that local ecological turnover was similar to geographic variation over a broad latitudinal range. In conjunction with varied site-specific temporal patterns of individual modes of life, these spatial and temporal heterogeneities argue against a concerted phase shift of molluscan assemblages from one well-defined regime to another. At a broader ecological level, by contrast, congruent tendencies emerge and suggest deterministic processes. These patterns comprise the well-known increase of deposit-feeding mollusks in postextinction assemblages and increases in predators and predator-resistant modes of life, i.e., those characterized by elevated mobility and infaunal life habits.
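The null-model comparison described above can be sketched as a label permutation test: pool the pre- and post-extinction individuals, shuffle their time labels, and ask how often chance alone produces as large a shift in a mode of life's relative abundance as observed. The counts below are invented for illustration and are not the paper's data:

```python
import random

def null_model_p(pre_counts, post_counts, mode, n_perm=2000, seed=1):
    """Two-sided permutation p-value for the change in a mode-of-life's
    relative abundance across the extinction boundary."""
    rng = random.Random(seed)
    pool = []
    for m, c in pre_counts.items():
        pool += [m] * c
    n_pre = len(pool)
    for m, c in post_counts.items():
        pool += [m] * c
    n_post = len(pool) - n_pre
    obs = (post_counts.get(mode, 0) / n_post
           - pre_counts.get(mode, 0) / n_pre)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pool)                       # randomize time labels
        pre, post = pool[:n_pre], pool[n_pre:]
        diff = post.count(mode) / n_post - pre.count(mode) / n_pre
        if abs(diff) >= abs(obs):
            hits += 1
    return hits / n_perm

# Hypothetical counts: deposit feeders jump from 10% to 40% of individuals
p = null_model_p({"deposit": 10, "suspension": 90},
                 {"deposit": 40, "suspension": 60}, "deposit")
```

A small p-value here corresponds to the abstract's "shifts ... which significantly exceed predictions from null models".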

  13. Quaternary geology and geomorphology of the lower Deschutes River Canyon, Oregon.

    Treesearch

    Jim E. O' Connor; Janet H. Curran; Robin A. Beebee; Gordon E. Grant; Andrei Sarna-Wojcicki

    2003-01-01

    The morphology of the Deschutes River canyon downstream of the Pelton-Round Butte dam complex is the product of the regional geologic history, the composition of the geologic units that compose the valley walls, and Quaternary processes and events. Geologic units within the valley walls and regional deformation patterns control overall valley morphology. Valley bottom...

  14. Development of Maximum Considered Earthquake Ground Motion Maps

    USGS Publications Warehouse

    Leyendecker, E.V.; Hunt, R.J.; Frankel, A.D.; Rukstales, K.S.

    2000-01-01

    The 1997 NEHRP Recommended Provisions for Seismic Regulations for New Buildings use a design procedure that is based on spectral response acceleration rather than the traditional peak ground acceleration, peak ground velocity, or zone factors. The spectral response accelerations are obtained from maps prepared following the recommendations of the Building Seismic Safety Council's (BSSC) Seismic Design Procedures Group (SDPG). The SDPG-recommended maps, the Maximum Considered Earthquake (MCE) Ground Motion Maps, are based on the U.S. Geological Survey (USGS) probabilistic hazard maps with additional modifications incorporating deterministic ground motions in selected areas and the application of engineering judgement. The MCE ground motion maps included with the 1997 NEHRP Provisions also serve as the basis for the ground motion maps used in the seismic design portions of the 2000 International Building Code and the 2000 International Residential Code. Additionally the design maps prepared for the 1997 NEHRP Provisions, combined with selected USGS probabilistic maps, are used with the 1997 NEHRP Guidelines for the Seismic Rehabilitation of Buildings.

  15. Analysis of the Geologic Structure and Compilation of the Geologic Map of the Northern Part of Planet Venus

    NASA Astrophysics Data System (ADS)

    Basilevsky, A. T.; Burba, G. A.; Ivanov, M. A.; Bobina, N. N.; Shashkina, V. P.; Head, J. W.

    Based on an analysis of the images of the Venusian surface obtained by the side-looking radar of the Magellan orbiter, a geologic map of the northern part of Venus (the region extending to the north of the 35°N latitude) at 1 : 10 000 000 scale is compiled. The map of this vast territory, comprising one-fifth of the planet surface, was compiled using only 12 geologic units, which implies a uniform character of terrains and landforms on the investigated territory and, therefore, the uniformity of geologic processes that occurred on this planet. These units are the products of four main groups of geologic processes that occurred on Venus during the last 0.51 Myr: (1) basaltic volcanism; (2) tectonic compression and tensile deformation; (3) impact cratering; and (4) wind-related mobilization, transportation, and deposition of loose fine-grained materials. Basaltic volcanism is the main process that supplies new material on the surface of Venus. Tectonic deformation structures, superposed on the material of different geologic units, determined the morphology of the units and formed the surfaces of unconformity between neighboring units. Ten of 12 geologic units form an age sequence that is virtually identical over the entire mapped territory of the planet. The possible inconsistency of this sequence caused by anomalous relations existing between smooth plains (Ps) in the southeastern part of Lakshmi Planum and wrinkle ridged plains (Pwr) in the northern part of Sedna Planitia does not destroy this sequence as a whole. The results of our mapping support the model of global stratigraphy of Venus proposed by Basilevsky and Head (1995, 1998) and provide evidence of the quasi-synchronous character of single-type geologic units on different areas of Venus rather than of the absence of synchronism.
An analysis of the distribution of impact craters on different geologic units has shown the proximity of mean absolute ages of the material of the surface of Pwr plains, of the entire studied territory, and of the entire Venusian surface. The results of our analysis suggest that, within the area under study, the intensity of the leading geologic processes at the beginning of the studied segment of the geologic history was relatively high but decreased dramatically later.

  16. Probabilistic Magnetotelluric Inversion with Adaptive Regularisation Using the No-U-Turns Sampler

    NASA Astrophysics Data System (ADS)

    Conway, Dennis; Simpson, Janelle; Didana, Yohannes; Rugari, Joseph; Heinson, Graham

    2018-04-01

    We present the first inversion of magnetotelluric (MT) data using a Hamiltonian Monte Carlo algorithm. The inversion of MT data is an underdetermined problem which leads to an ensemble of feasible models for a given dataset. A standard approach in MT inversion is to perform a deterministic search for the single solution which is maximally smooth for a given data-fit threshold. An alternative approach is to use Markov Chain Monte Carlo (MCMC) methods, which have been used in MT inversion to explore the entire solution space and produce a suite of likely models. This approach has the advantage of assigning confidence to resistivity models, leading to better geological interpretations. Recent advances in MCMC techniques include the No-U-Turns Sampler (NUTS), an efficient and rapidly converging method which is based on Hamiltonian Monte Carlo. We have implemented a 1D MT inversion which uses the NUTS algorithm. Our model includes a fixed number of layers of variable thickness and resistivity, as well as probabilistic smoothing constraints which allow sharp and smooth transitions. We present the results of a synthetic study and show the accuracy of the technique, as well as the fast convergence, independence of starting models, and sampling efficiency. Finally, we test our technique on MT data collected from a site in Boulia, Queensland, Australia to show its utility in geological interpretation and ability to provide probabilistic estimates of features such as depth to basement.
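The forward problem underlying a 1D MT inversion is compact enough to show in full. A sketch of the standard layered-Earth impedance recursion (a common textbook formulation; sign conventions vary between texts):

```python
import cmath
import math

MU0 = 4e-7 * math.pi  # magnetic permeability of free space

def mt_apparent_resistivity(freq, resistivities, thicknesses):
    """1D magnetotelluric forward model via the standard impedance
    recursion, propagated upward from the basal half-space.

    resistivities: one value per layer, the last being the half-space.
    thicknesses: one value per layer except the half-space."""
    omega = 2.0 * math.pi * freq
    # intrinsic impedance of the basal half-space
    k = cmath.sqrt(1j * omega * MU0 / resistivities[-1])
    z = 1j * omega * MU0 / k
    # recurse upward through the finite layers
    for rho, h in zip(reversed(resistivities[:-1]), reversed(thicknesses)):
        k = cmath.sqrt(1j * omega * MU0 / rho)
        zi = 1j * omega * MU0 / k
        t = cmath.tanh(k * h)
        z = zi * (z + zi * t) / (zi + z * t)
    return abs(z) ** 2 / (omega * MU0)  # apparent resistivity (ohm-m)

# Sanity check: a uniform Earth must return its true resistivity
rho_a = mt_apparent_resistivity(1.0, [100.0, 100.0], [500.0])
```

In an MCMC scheme such as NUTS, this forward model is evaluated inside the likelihood at every proposed set of layer resistivities and thicknesses, which is why a cheap 1D forward solver makes probabilistic inversion tractable.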

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    W. Lynn Watney; John H. Doveton

    GEMINI (Geo-Engineering Modeling through Internet Informatics) is a public-domain web application focused on analysis and modeling of petroleum reservoirs and plays (http://www.kgs.ukans.edu/Gemini/index.html). GEMINI creates a virtual project by ''on-the-fly'' assembly and analysis of on-line data either from the Kansas Geological Survey or uploaded from the user. GEMINI's suite of geological and engineering web applications for reservoir analysis includes: (1) petrofacies-based core and log modeling using an interactive relational rock catalog and log analysis modules; (2) a well profile module; (3) interactive cross sections to display ''marked'' wireline logs; (4) deterministic gridding and mapping of petrophysical data; (5) calculation and mapping of layer volumetrics; (6) material balance calculations; (7) PVT calculator; (8) DST analyst; (9) automated hydrocarbon association navigator (KHAN) for database mining; and (10) tutorial and help functions. The Kansas Hydrocarbon Association Navigator (KHAN) utilizes petrophysical databases to estimate hydrocarbon pay or other constituents at a play or field scale. Databases analyzed and displayed include digital logs, core analysis and photos, DST, and production data. GEMINI accommodates distant collaborations using secure password protection and authorized access. Assembled data, analyses, charts, and maps can readily be moved to other applications. GEMINI's target audience includes small independents and consultants seeking to find, quantitatively characterize, and develop subtle and bypassed pays by leveraging the growing base of digital data resources. Participating companies involved in the testing and evaluation of GEMINI included Anadarko, BP, Conoco-Phillips, Lario, Mull, Murfin, and Pioneer Resources.
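Of the modules listed, deterministic gridding is the easiest to illustrate. Inverse-distance weighting is a common deterministic interpolator for scattered petrophysical data; GEMINI's exact scheme is not documented here, so treat this as a generic sketch:

```python
def idw(points, query, power=2.0):
    """Inverse-distance-weighted estimate at a query location from
    scattered (x, y, value) data. A generic deterministic gridding
    method, not necessarily GEMINI's exact algorithm."""
    num = den = 0.0
    for (x, y, v) in points:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0.0:
            return v  # an exact hit honours the data point
        w = d2 ** (-power / 2.0)  # weight = 1 / distance**power
        num += w * v
        den += w
    return num / den

# Hypothetical well-top porosity values (x, y, percent)
wells = [(0, 0, 10.0), (10, 0, 20.0), (0, 10, 30.0)]
mid = idw(wells, (5, 5))
```

Evaluating `idw` over a regular grid of query points produces the kind of contourable surface the mapping module would display.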

  18. 30 CFR 551.11 - Submission, inspection, and selection of geological data and information collected under a permit...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., or interpretation of any geological data and information. Initial analysis and processing are the stages of analysis or processing where the data and information first become available for in-house... geochemical) data and information describing each operation of analysis, processing, and interpretation; (2...

  19. 30 CFR 580.40 - When do I notify BOEM that geological data and information are available for submission...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... initial analysis, processing, or interpretation of any geological data and information. Initial analysis and processing are the stages of analysis or processing where the data and information first become... information are available for submission, inspection, and selection? 580.40 Section 580.40 Mineral Resources...

  20. 30 CFR 551.11 - Submission, inspection, and selection of geological data and information collected under a permit...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., or interpretation of any geological data and information. Initial analysis and processing are the stages of analysis or processing where the data and information first become available for in-house... geochemical) data and information describing each operation of analysis, processing, and interpretation; (2...

  1. 30 CFR 580.40 - When do I notify BOEM that geological data and information are available for submission...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... initial analysis, processing, or interpretation of any geological data and information. Initial analysis and processing are the stages of analysis or processing where the data and information first become... information are available for submission, inspection, and selection? 580.40 Section 580.40 Mineral Resources...

  2. 30 CFR 551.11 - Submission, inspection, and selection of geological data and information collected under a permit...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., or interpretation of any geological data and information. Initial analysis and processing are the stages of analysis or processing where the data and information first become available for in-house... geochemical) data and information describing each operation of analysis, processing, and interpretation; (2...

  3. 30 CFR 580.40 - When do I notify BOEM that geological data and information are available for submission...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... initial analysis, processing, or interpretation of any geological data and information. Initial analysis and processing are the stages of analysis or processing where the data and information first become... information are available for submission, inspection, and selection? 580.40 Section 580.40 Mineral Resources...

  4. 30 CFR 280.40 - When do I notify MMS that geological data and information are available for submission...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... complete the initial analysis, processing, or interpretation of any geological data and information. Initial analysis and processing are the stages of analysis or processing where the data and information... information are available for submission, inspection, and selection? 280.40 Section 280.40 Mineral Resources...

  5. Relevance of deterministic chaos theory to studies in functioning of dynamical systems

    NASA Astrophysics Data System (ADS)

    Glagolev, S. N.; Bukhonova, S. M.; Chikina, E. D.

    2018-03-01

    The paper considers chaotic behavior of dynamical systems typical of social and economic processes. Approaches to the analysis and evaluation of system development processes are studied from the point of view of controllability and determinateness. Explanations are given for the necessity of applying non-standard mathematical tools, based on fractal theory, to describe the states of dynamical social and economic systems. Features of fractal structures, such as non-regularity, self-similarity, dimensionality, and fractionality, are considered.
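The hallmark of deterministic chaos invoked above is sensitivity to initial conditions, quantified by a positive Lyapunov exponent. A standard demonstration on the logistic map (the canonical textbook example, not a model from the paper):

```python
import math

def lyapunov_logistic(r, x0=0.4, n=20000, burn=1000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    averaged as log|f'(x)| along the orbit after a burn-in period.
    A positive value is the standard signature of deterministic chaos."""
    x, acc, count = x0, 0.0, 0
    for i in range(n):
        x = r * x * (1.0 - x)
        if i >= burn:
            acc += math.log(abs(r * (1.0 - 2.0 * x)))  # |f'(x)|
            count += 1
    return acc / count

chaotic = lyapunov_logistic(4.0)   # fully chaotic regime
regular = lyapunov_logistic(2.8)   # converges to a stable fixed point
```

The same orbit-averaging idea extends to empirical time series, which is how chaos diagnostics are applied to social and economic data in practice.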

  6. Dissipative production of a maximally entangled steady state of two quantum bits.

    PubMed

    Lin, Y; Gaebler, J P; Reiter, F; Tan, T R; Bowler, R; Sørensen, A S; Leibfried, D; Wineland, D J

    2013-12-19

    Entangled states are a key resource in fundamental quantum physics, quantum cryptography and quantum computation. Introduction of controlled unitary processes (quantum gates) to a quantum system has so far been the most widely used method to create entanglement deterministically. These processes require high-fidelity state preparation and minimization of the decoherence that inevitably arises from coupling between the system and the environment, and imperfect control of the system parameters. Here we combine unitary processes with engineered dissipation to deterministically produce and stabilize an approximate Bell state of two trapped-ion quantum bits (qubits), independent of their initial states. Compared with previous studies that involved dissipative entanglement of atomic ensembles or the application of sequences of multiple time-dependent gates to trapped ions, we implement our combined process using trapped-ion qubits in a continuous time-independent fashion (analogous to optical pumping of atomic states). By continuously driving the system towards the steady state, entanglement is stabilized even in the presence of experimental noise and decoherence. Our demonstration of an entangled steady state of two qubits represents a step towards dissipative state engineering, dissipative quantum computation and dissipative phase transitions. Following this approach, engineered coupling to the environment may be applied to a broad range of experimental systems to achieve desired quantum dynamics or steady states. Indeed, concurrently with this work, an entangled steady state of two superconducting qubits was demonstrated using dissipation.

  7. Allee effects and the spatial dynamics of a locally endangered butterfly, the high brown fritillary (Argynnis adippe).

    PubMed

    Bonsall, Michael B; Dooley, Claire A; Kasparson, Anna; Brereton, Tom; Roy, David B; Thomas, Jeremy A

    2014-01-01

    Conservation of endangered species necessitates a full appreciation of the ecological processes affecting the regulation, limitation, and persistence of populations. These processes are influenced by birth, death, and dispersal events, and characterizing them requires careful accounting of both the deterministic and stochastic processes operating at both local and regional population levels. We combined ecological theory and observations on Allee effects by linking mathematical analysis and the spatial and temporal population dynamics patterns of a highly endangered butterfly, the high brown fritillary, Argynnis adippe. Our theoretical analysis showed that the role of density-dependent feedbacks in the presence of local immigration can influence the strength of Allee effects. Linking this theory to the analysis of the population data revealed strong evidence for both negative density dependence and Allee effects at the landscape or regional scale. These regional dynamics are predicted to be highly influenced by immigration. Using a Bayesian state-space approach, we characterized the local-scale births, deaths, and dispersal effects together with measurement and process uncertainty in the metapopulation. Some form of an Allee effect influenced almost three-quarters of these local populations. Our joint analysis of the deterministic and stochastic dynamics suggests that a conservation priority for this species would be to increase resource availability in currently occupied and, more importantly, in unoccupied sites.
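A strong Allee effect of the kind analyzed above is often written as a cubic growth law with an extinction threshold below which growth turns negative. An illustrative deterministic form (not the paper's fitted model; parameters are invented):

```python
def allee_step(n, r=0.5, K=100.0, A=10.0, dt=0.01):
    """One Euler step of a strong-Allee-effect model:
    dN/dt = r*N*(1 - N/K)*(N/A - 1),
    with carrying capacity K and Allee threshold A. Growth is
    negative whenever the population falls below A."""
    return n + dt * r * n * (1.0 - n / K) * (n / A - 1.0)

def simulate(n0, steps=20000):
    n = n0
    for _ in range(steps):
        n = allee_step(n)
    return n

above = simulate(15.0)   # starts above the threshold: approaches K
below = simulate(5.0)    # starts below the threshold: declines to extinction
```

Immigration effectively adds a constant source term to the right-hand side, which can lift a small population over the threshold A, the mechanism by which local dispersal weakens Allee effects in the metapopulation analysis above.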

  8. Deterministic and stochastic CTMC models from Zika disease transmission

    NASA Astrophysics Data System (ADS)

    Zevika, Mona; Soewono, Edy

    2018-03-01

    Zika infection is one of the most important mosquito-borne diseases in the world. Zika virus (ZIKV) is transmitted by many Aedes-type mosquitoes including Aedes aegypti. Pregnant women with the Zika virus are at risk of having a fetus or infant with a congenital defect and suffering from microcephaly. Here, we formulate a Zika disease transmission model using two approaches, a deterministic model and a continuous-time Markov chain stochastic model. The basic reproduction ratio is constructed from a deterministic model. Meanwhile, the CTMC stochastic model yields an estimate of the probability of extinction and outbreaks of Zika disease. Dynamical simulations and analysis of the disease transmission are shown for the deterministic and stochastic models.
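The deterministic/CTMC contrast described above is commonly explored with Gillespie simulation, where the stochastic model can go extinct even when ℛ0 > 1. A bare-bones SIR skeleton (the mosquito-vector compartments of the Zika model are omitted for brevity; parameters are illustrative):

```python
import random

def gillespie_sir_extinct(beta, gamma, N=200, I0=2, seed=None, t_max=1000.0):
    """One continuous-time Markov chain (CTMC) run of a minimal SIR
    skeleton. Returns True if the infection dies out without a major
    outbreak (here defined as more than 10% of the population ever
    infected)."""
    rng = random.Random(seed)
    S, I, cum = N - I0, I0, I0
    t = 0.0
    while I > 0 and t < t_max:
        a_inf = beta * S * I / N   # infection event rate
        a_rec = gamma * I          # recovery event rate
        a0 = a_inf + a_rec
        t += rng.expovariate(a0)   # time to next event
        if rng.random() < a_inf / a0:
            S -= 1; I += 1; cum += 1
        else:
            I -= 1
    return cum <= N // 10

# Branching-process theory predicts P(extinction) near (gamma/beta)**I0
runs = 2000
p_ext = sum(gillespie_sir_extinct(0.6, 0.2, seed=s) for s in range(runs)) / runs
```

With beta = 0.6 and gamma = 0.2 (ℛ0 = 3) and two initial cases, roughly one run in nine fizzles out, exactly the extinction-versus-outbreak probability the CTMC formulation estimates and the deterministic model cannot.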

  9. Numerical simulations of piecewise deterministic Markov processes with an application to the stochastic Hodgkin-Huxley model.

    PubMed

    Ding, Shaojie; Qian, Min; Qian, Hong; Zhang, Xuejuan

    2016-12-28

    The stochastic Hodgkin-Huxley model is one of the best-known examples of piecewise deterministic Markov processes (PDMPs), in which the electrical potential across a cell membrane, V(t), is coupled with a mesoscopic Markov jump process representing the stochastic opening and closing of ion channels embedded in the membrane. The rates of the channel kinetics, in turn, are voltage-dependent. Due to this interdependence, an accurate and efficient sampling of the time evolution of such hybrid stochastic systems has been challenging. The current exact simulation methods require solving a voltage-dependent hitting time problem for multiple path-dependent intensity functions with random thresholds. This paper proposes a simulation algorithm that approximates an alternative representation of the exact solution by fitting the log-survival function of the inter-jump dwell time, H(t), with a piecewise linear one. The latter uses interpolation points chosen according to the time evolution of H(t), obtained as the numerical solution of the coupled ordinary differential equations for V(t) and H(t). This computational method can be applied to all PDMPs. Pathwise convergence of the approximated sample trajectories to the exact solution is proven, and error estimates are provided. Comparison with a previous algorithm based on piecewise constant approximation is also presented.
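
    The jump-time construction described above can be sketched in miniature: integrate the cumulative hazard H(t) along the deterministic flow of V(t) and fire a jump when H crosses an Exp(1) threshold, locating the crossing by linear interpolation between grid points. The drift and rate functions below are illustrative stand-ins, not the actual Hodgkin-Huxley kinetics:

```python
import math, random

def drift(v, state):
    # Toy membrane dynamics: leak plus a channel current when open (state 1).
    return -v + (2.0 if state == 1 else 0.0)

def rate(v, state):
    # Toy voltage-dependent switching intensity.
    return math.exp(0.5 * v) if state == 0 else 1.0

def next_jump(v, state, dt=1e-3, rng=random.random):
    """Advance (V, H) until the cumulative hazard hits -log(U); return (t, V)."""
    threshold = -math.log(rng())     # Exp(1) random threshold
    h_prev, t = 0.0, 0.0
    while True:
        lam = rate(v, state)
        h_next = h_prev + lam * dt   # Euler step of dH/dt = rate(V(t))
        if h_next >= threshold:
            # Piecewise-linear interpolation of H over the last step.
            frac = (threshold - h_prev) / (h_next - h_prev)
            return t + frac * dt, v
        v += drift(v, state) * dt    # Euler step of the deterministic flow
        t += dt
        h_prev = h_next

random.seed(0)
t_jump, v_at_jump = next_jump(v=0.0, state=0)
print(t_jump > 0)
```

The paper's algorithm additionally chooses interpolation points adaptively from the evolution of H(t); the fixed-step version above only conveys the hazard-threshold idea.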

  10. Patterns of taxonomic, phylogenetic diversity during a long-term succession of forest on the Loess Plateau, China: insights into assembly process

    PubMed Central

    Chai, Yongfu; Yue, Ming; Liu, Xiao; Guo, Yaoxin; Wang, Mao; Xu, Jinshi; Zhang, Chenguang; Chen, Yu; Zhang, Lixia; Zhang, Ruichang

    2016-01-01

    Quantifying the drivers underlying the distribution of biodiversity during succession is a critical issue in ecology and conservation, and can also provide insights into the mechanisms of community assembly. Ninety plots were established in the Loess Plateau region of northern Shaanxi in China. Taxonomic and phylogenetic (alpha and beta) diversity were quantified within six succession stages. Null models were used to test whether the observed phylogenetic distances differed from random expectations. Taxonomic beta diversity did not show a regular pattern, while phylogenetic beta diversity decreased throughout succession. The shrub stage occurred as a transition from phylogenetic overdispersion to clustering for either NRI (Net Relatedness Index) or betaNRI. The betaNTI (Nearest Taxon Index) values for early stages were on average phylogenetically random, but in the betaNRI analyses, these stages were phylogenetically overdispersed. Assembly of woody plants differed from that of herbaceous plants during late community succession. We suggest that deterministic and stochastic processes play roles in different aspects of community phylogenetic structure in the early succession stages, and that community composition in the late succession stage is governed by a deterministic process. In conclusion, long-lasting evolutionary imprints shape the present-day composition of communities arrayed along the succession gradient. PMID:27272407

  12. Namib Desert edaphic bacterial, fungal and archaeal communities assemble through deterministic processes but are influenced by different abiotic parameters.

    PubMed

    Johnson, Riegardt M; Ramond, Jean-Baptiste; Gunnigle, Eoin; Seely, Mary; Cowan, Don A

    2017-03-01

    The central Namib Desert is hyperarid, and limited plant growth ensures that biogeochemical processes are largely driven by microbial populations. Recent research has shown that niche partitioning is critically involved in the assembly of Namib Desert edaphic communities. However, these studies have mainly focussed on the Domain Bacteria. Using microbial community fingerprinting, we compared the assembly of the bacterial, fungal and archaeal populations of microbial communities across nine soil niches from four Namib Desert soil habitats (riverbed, dune, gravel plain and salt pan). Permutational multivariate analysis of variance indicated that the nine soil niches presented significantly different physicochemistries (R² = 0.8306, P ≤ 0.0001) and that bacterial, fungal and archaeal populations were soil niche specific (R² ≥ 0.64, P ≤ 0.001). However, the abiotic drivers of community structure were Domain-specific (P < 0.05), with P, clay and sand fraction, and NH₄ influencing bacterial, fungal and archaeal communities, respectively. Soil physicochemistry and soil niche explained over 50% of the variation in community structure, and communities displayed strong non-random patterns of co-occurrence. Taken together, these results demonstrate that the assembly of central Namib Desert soil microbial communities is principally driven by deterministic processes.

  13. Cognitive Factors Affecting Student Understanding of Geologic Time.

    ERIC Educational Resources Information Center

    Dodick, Jeff; Orion, Nir

    2003-01-01

    Presents a model that describes how students reconstruct geological transformations over time. Defines the critical factors influencing reconstructive thinking: (1) the transformation scheme, which influences the other diachronic schemes; (2) knowledge of geological processes; and (3) extracognitive factors. (Author/KHR)

  14. Statistics of Delta v magnitude for a trajectory correction maneuver containing deterministic and random components

    NASA Technical Reports Server (NTRS)

    Bollman, W. E.; Chadwick, C.

    1982-01-01

    A number of interplanetary missions now being planned involve placing deterministic maneuvers along the flight path to alter the trajectory. Lee and Boain (1973) examined the statistics of trajectory correction maneuver (TCM) magnitude with no deterministic ('bias') component. The Delta v vector magnitude statistics were generated for several values of the random Delta v standard deviation using expansions in terms of infinite hypergeometric series. The present investigation uses a different technique (Monte Carlo simulation) to generate Delta v magnitude statistics for a wider selection of random Delta v standard deviations and also extends the analysis to the case of a nonzero deterministic Delta v. These Delta v magnitude statistics are plotted parametrically. The plots are useful in assisting the analyst in quickly answering questions about the statistics of Delta v magnitude for single TCMs consisting of both a deterministic and a random component, and they provide quick insight into the nature of the Delta v magnitude distribution for the TCM.
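
    The Monte Carlo approach described above reduces to sampling the magnitude of a fixed bias vector plus an isotropic Gaussian execution error. The bias and sigma values below are illustrative, not taken from the report:

```python
import math, random

# Monte Carlo sketch of |deterministic + random| Delta-v magnitude statistics:
# a fixed bias vector plus an isotropic zero-mean Gaussian component per axis.
# Bias and sigma are illustrative values.

def dv_samples(bias=(1.0, 0.0, 0.0), sigma=0.2, n=100_000, seed=1):
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        v = [b + rng.gauss(0.0, sigma) for b in bias]   # execution error per axis
        out.append(math.sqrt(sum(c * c for c in v)))    # resulting |Delta v|
    return out

mags = dv_samples()
mean = sum(mags) / len(mags)
p99 = sorted(mags)[int(0.99 * len(mags))]
print(f"mean |dv| ~ {mean:.3f}, 99th percentile ~ {p99:.3f}")
```

Sweeping `sigma` and the bias magnitude over a grid reproduces the kind of parametric family the report plots; with zero bias the magnitude follows a Maxwell distribution, which is the special case treated by Lee and Boain.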

  15. A simulation of remote sensor systems and data processing algorithms for spectral feature classification

    NASA Technical Reports Server (NTRS)

    Arduini, R. F.; Aherron, R. M.; Samms, R. W.

    1984-01-01

    A computational model of the deterministic and stochastic processes involved in multispectral remote sensing was designed to evaluate the performance of sensor systems and data processing algorithms for spectral feature classification. Accuracy in distinguishing between categories of surfaces or between specific types is developed as a means to compare sensor systems and data processing algorithms. The model allows studies to be made of the effects of variability of the atmosphere and of surface reflectance, as well as the effects of channel selection and sensor noise. Examples of these effects are shown.

  16. POTENTIAL CLIMATE WARMING EFFECTS ON ICE COVERS OF SMALL LAKES IN THE CONTIGUOUS U.S. (R824801)

    EPA Science Inventory

    Abstract

    To simulate effects of projected climate change on ice covers of small lakes in the northern contiguous U.S., a process-based simulation model is applied. This winter ice/snow cover model is associated with a deterministic, one-dimensional year-round water tem...

  17. Task-Based Interaction and Incidental Vocabulary Learning: A Case Study.

    ERIC Educational Resources Information Center

    Newton, Jonathan

    1995-01-01

    This case study examined the vocabulary gains made by an adult learner of English as a Second Language as a result of performing four communication tasks. It found that explicit negotiation of word meaning appeared less deterministic of posttest improvements than use of words in the process of completing the task. (13 references) (MDM)

  18. Teaching and Learning the Interplay between Chance and Determinism in Nonlinear Systems

    ERIC Educational Resources Information Center

    Stavrou, Dimitrios; Duit, Reinders

    2014-01-01

    That the interplay of random and deterministic processes may result in both the limited predictability of nonlinear systems and the formation of structures seems to be a most valuable general insight into the nature of science. This study investigates the possibility of teaching and learning the interplay of chance and determinism in nonlinear…

  19. Contextuality, Nonlocality and Counterfactual Arguments

    NASA Astrophysics Data System (ADS)

    Ghirardi, Gian Carlo; Wienand, Karl

    2009-07-01

    In this paper, following an elementary line of thought which somewhat differs from the usual one, we prove once more that any deterministic theory predictively equivalent to quantum mechanics unavoidably exhibits a contextual character. The purpose of adopting this perspective is that of paving the way for a critical analysis of the use of counterfactual arguments when dealing with nonlocal physical processes.

  20. Creating a stage-based deterministic PVA model - the western prairie fringed orchid [Exercise 12]

    Treesearch

    Carolyn Hull Sieg; Rudy M. King; Fred Van Dyke

    2003-01-01

    Contemporary efforts to conserve populations and species often employ population viability analysis (PVA), a specific application of population modeling that estimates the effects of environmental and demographic processes on population growth rates. These models can also be used to estimate probabilities that a population will fall below a certain level. This...
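
    The core of a stage-based deterministic PVA is a stage projection matrix and its dominant eigenvalue, the asymptotic growth rate lambda. The three-stage structure and all vital rates below are made-up illustrative values, not the orchid data from the exercise:

```python
# Sketch of a stage-based deterministic PVA: a Lefkovitch projection matrix
# for a hypothetical three-stage plant (seedling, vegetative, flowering) and
# its asymptotic growth rate via power iteration. Vital rates are illustrative.

A = [
    [0.0, 0.0, 4.5],   # fecundity of flowering plants
    [0.3, 0.4, 0.0],   # seedling survival into, and stasis in, vegetative stage
    [0.0, 0.3, 0.8],   # transition to, and stasis in, flowering stage
]

def growth_rate(matrix, iters=500):
    """Dominant eigenvalue (lambda) of a nonnegative projection matrix."""
    n = [1.0] * len(matrix)
    lam = 1.0
    for _ in range(iters):
        n = [sum(row[j] * n[j] for j in range(len(n))) for row in matrix]
        lam = sum(n)                 # total after projecting a unit population
        n = [x / lam for x in n]     # renormalize to a stage distribution
    return lam

lam = growth_rate(A)
print(round(lam, 3))
```

lambda > 1 indicates a growing population; repeating the calculation with perturbed vital rates is the usual way such a model is used to estimate which stage transitions most affect viability.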

  1. The Random-Effect DINA Model

    ERIC Educational Resources Information Center

    Huang, Hung-Yu; Wang, Wen-Chung

    2014-01-01

    The DINA (deterministic inputs, noisy "and" gate) model has been widely used in cognitive diagnosis tests and in the process of test development. The outcomes known as slip and guess are included in the DINA model function representing the responses to the items. This study aimed to extend the DINA model by using the random-effect approach to allow…
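
    The standard (fixed-effect) DINA item response function that the study extends can be written in a few lines; the attribute vectors, Q-matrix row, and slip/guess values below are illustrative:

```python
# Minimal sketch of the DINA item response function: an examinee answers
# correctly with probability 1 - s_j if they master every attribute the item
# requires (eta = 1), and with guessing probability g_j otherwise.

def eta(alpha, q_row):
    """1 if the examinee's attribute profile covers the item's requirements."""
    return int(all(a >= q for a, q in zip(alpha, q_row)))

def p_correct(alpha, q_row, slip, guess):
    e = eta(alpha, q_row)
    return (1 - slip) ** e * guess ** (1 - e)

q_row = (1, 0, 1)        # item requires attributes 1 and 3
master = (1, 1, 1)       # masters everything
nonmaster = (1, 0, 0)    # lacks attribute 3

print(p_correct(master, q_row, slip=0.1, guess=0.2))     # -> 0.9
print(p_correct(nonmaster, q_row, slip=0.1, guess=0.2))  # -> 0.2
```

The random-effect extension in the paper would replace the fixed slip and guess constants with person- or item-level distributions; the deterministic gate eta itself is unchanged.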

  2. Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model

    PubMed Central

    Nené, Nuno R.; Dunham, Alistair S.; Illingworth, Christopher J. R.

    2018-01-01

    A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. PMID:29500183

  3. Students' Understanding of Large Numbers as a Key Factor in Their Understanding of Geologic Time

    ERIC Educational Resources Information Center

    Cheek, Kim A.

    2012-01-01

    An understanding of geologic time comprises two facets. Events in Earth's history can be placed in relative and absolute temporal succession on a vast timescale. Rates of geologic processes vary widely, and some occur over time periods well outside human experience. Several factors likely contribute to an understanding of geologic time, one of…

  4. Automatic mesh adaptivity for hybrid Monte Carlo/deterministic neutronics modeling of difficult shielding problems

    DOE PAGES

    Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; ...

    2015-06-30

    The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class supercomputer.

  5. 3D Geological Mapping - uncovering the subsurface to increase environmental understanding

    NASA Astrophysics Data System (ADS)

    Kessler, H.; Mathers, S.; Peach, D.

    2012-12-01

    Geological understanding is required by many disciplines studying natural processes, from hydrology to landscape evolution. The subsurface structure of rocks and soils and their properties occupies three-dimensional (3D) space, and geological processes operate in time. Traditionally, geologists have captured their spatial and temporal knowledge in two-dimensional maps and cross-sections and through narrative, because paper maps and, later, two-dimensional geographical information systems (GIS) were the only tools available to them. Another major constraint on using more explicit and numerical systems to express geological knowledge is the fact that a geologist only ever observes and measures a fraction of the system they study. Only on rare occasions does the geologist have access to enough real data to generate meaningful predictions of the subsurface without the input of conceptual understanding developed from knowledge of the geological processes responsible for the deposition, emplacement and diagenesis of the rocks. This in turn has led to geology becoming an increasingly marginalised science as other disciplines have embraced the digital world and have increasingly turned to implicit numerical modelling to understand environmental processes and interactions. Recent developments in geoscience methodology and technology have gone some way to overcoming these barriers, and geologists across the world are beginning to routinely capture their knowledge and combine it with all available subsurface data (of often highly varying spatial distribution and quality) to create regional and national three-dimensional geological maps. This is re-defining the way geologists interact with other science disciplines, as their concepts and knowledge are now expressed in an explicit form that can be used downstream to design the structure of process models.
For example, groundwater modellers can refine their understanding of groundwater flow in three dimensions or even directly parameterize their numerical models using outputs from 3D mapping. In some cases, model code is being re-designed in order to deal with the increasing geological complexity expressed by geologists. These 3D maps have inherent uncertainty, just as their predecessors, 2D geological maps, did, and there remains a significant body of work to quantify and effectively communicate this uncertainty. Here we present examples of regional and national 3D maps from Geological Survey Organisations worldwide and show how these are being used to better solve real-life environmental problems. The future challenge for geologists is to make these 3D maps easily available in an accessible and interoperable form so that the environmental science community can truly integrate the hidden subsurface into a common understanding of the whole geosphere.

  6. The identity of the North East of England has been shaped by the rocks beneath our feet

    NASA Astrophysics Data System (ADS)

    Shields, Deborah

    2017-04-01

    Geology and Geography students in England learn about the Earth's processes and human processes, but it is not always easy for them to see the links between the two, or to their own lives. Changes to the A-level Geography specification have placed an emphasis on how processes are linked to students' own lives and the local area. I am fortunate to teach both Geography and Geology, and I want my students who study both subjects to appreciate the links between them. I also want them to appreciate the local geology and see how it has shaped the North East of England. I have therefore created a series of lessons to help them explore the local geology and place identity of the North East of England, and to develop an understanding of how the local geology influences place identity. I have used an enquiry-based approach with a KWL chart and a concept map for students to demonstrate their understanding. These lessons are structured using the learning cycle and are differentiated through the use of cheat sheets, different levels of hand-outs and grouping of students. The learning objectives are: 1. Describe the geology of the North East of England. 2. Explain at least one process which has formed the local geology. 3. Define place identity. 4. Discuss the North East of England's identity. 5. Discuss how the local geology has influenced the North East of England's identity. The geology of the North East of England mainly consists of coal and limestone. The North East has a rich industrial heritage based around coal mining, which has had a great impact on the region's identity. There are also a number of SSSIs due to the Magnesian Limestone in the area, which has helped to shape the identity of the region, and a number of areas of outstanding natural beauty created by the local geology that have helped to give the North East of England a positive identity.

  7. Mesoscopic and continuum modelling of angiogenesis

    PubMed Central

    Spill, F.; Guerrero, P.; Alarcon, T.; Maini, P. K.; Byrne, H. M.

    2016-01-01

    Angiogenesis is the formation of new blood vessels from pre-existing ones in response to chemical signals secreted by, for example, a wound or a tumour. In this paper, we propose a mesoscopic lattice-based model of angiogenesis, in which processes that include proliferation and cell movement are considered as stochastic events. By studying the dependence of the model on the lattice spacing and the number of cells involved, we are able to derive the deterministic continuum limit of our equations and compare it to similar existing models of angiogenesis. We further identify conditions under which the use of continuum models is justified, and others for which stochastic or discrete effects dominate. We also compare different stochastic models for the movement of endothelial tip cells which have the same macroscopic, deterministic behaviour, but lead to markedly different behaviour in terms of production of new vessel cells. PMID:24615007
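
    The stochastic-versus-continuum comparison above can be illustrated with the simplest possible example: a logistic birth-death process on a finite number of lattice sites, whose large-N mean-field limit is the logistic ODE. The process, rates, and parameter values are illustrative stand-ins for the paper's angiogenesis model:

```python
import random

# Gillespie simulation of a crowding-limited birth-death process on n_sites
# lattice sites, compared against its deterministic continuum (mean-field)
# limit, the logistic ODE. Parameters are illustrative.

def gillespie_logistic(n_sites=1000, b=1.0, d=0.3, n0=10, t_end=20.0, seed=7):
    rng = random.Random(seed)
    n, t = n0, 0.0
    while t < t_end and n > 0:
        birth = b * n * (1 - n / n_sites)   # proliferation limited by crowding
        death = d * n
        total = birth + death
        t += rng.expovariate(total)          # exponential waiting time
        n += 1 if rng.random() < birth / total else -1
    return n / n_sites                       # occupied fraction

def ode_logistic(b=1.0, d=0.3, x0=0.01, t_end=20.0, dt=1e-3):
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * (b * x * (1 - x) - d * x)  # deterministic continuum limit
    return x

print(gillespie_logistic(), ode_logistic())
```

Both runs settle near the deterministic equilibrium fraction 1 - d/b = 0.7; shrinking `n_sites` (or starting from very few cells) makes the stochastic trajectory deviate visibly from the ODE, which is the regime the paper identifies as dominated by discrete effects.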

  8. Extreme current fluctuations in lattice gases: Beyond nonequilibrium steady states

    NASA Astrophysics Data System (ADS)

    Meerson, Baruch; Sasorov, Pavel V.

    2014-01-01

    We use the macroscopic fluctuation theory (MFT) to study large current fluctuations in nonstationary diffusive lattice gases. We identify two universality classes of these fluctuations, which we call elliptic and hyperbolic. They emerge in the limit when the deterministic mass flux is small compared to the mass flux due to the shot noise. The two classes are determined by the sign of compressibility of effective fluid, obtained by mapping the MFT into an inviscid hydrodynamics. An example of the elliptic class is the symmetric simple exclusion process, where, for some initial conditions, we can solve the effective hydrodynamics exactly. This leads to a super-Gaussian extreme current statistics conjectured by Derrida and Gerschenfeld [J. Stat. Phys. 137, 978 (2009), 10.1007/s10955-009-9830-1] and yields the optimal path of the system. For models of the hyperbolic class, the deterministic mass flux cannot be neglected, leading to a different extreme current statistics.

  9. Hyperchaotic Dynamics for Light Polarization in a Laser Diode

    NASA Astrophysics Data System (ADS)

    Bonatto, Cristian

    2018-04-01

    It is shown that a highly randomlike behavior of light polarization states in the output of a free-running laser diode, covering the whole Poincaré sphere, arises from a fully deterministic nonlinear process, characterized by hyperchaotic dynamics of two polarization modes nonlinearly coupled with a semiconductor medium inside the optical cavity. A number of statistical distributions were found to describe the deterministic data of the low-dimensional nonlinear flow: a lognormal distribution for the light intensity; Gaussian distributions for the electric field components and electron densities; and Rice and Rayleigh, and Weibull and negative exponential distributions for the modulus and intensity, respectively, of the orthogonal linear components of the electric field. The presented results could be relevant for the generation of compact light-source devices to be used in low-dimensional optical hyperchaos-based applications.

  10. Dynamic speckle - Interferometry of micro-displacements

    NASA Astrophysics Data System (ADS)

    Vladimirov, A. P.

    2012-06-01

    The problem of the dynamics of speckles in the image plane of an object, caused by random movements of scattering centers, is solved. We consider three cases: (1) during the observation the points move at random but constant velocities; (2) the relative displacement of any pair of points is a continuous random process; and (3) the motion of the centers is the sum of a deterministic movement and a random displacement. For cases (1) and (2), the characteristics of the temporal and spectral autocorrelation functions of the radiation intensity can be used to determine the individual and average relative displacements of the centers, their dispersion and the relaxation time. For case (3), it is shown that under certain conditions the optical signal contains a periodic component, with the number of periods proportional to the deterministic displacement. The results of experiments conducted to test and apply the theory are given.

  11. Deterministic versus stochastic model of reprogramming: new evidence from cellular barcoding technique

    PubMed Central

    Yunusova, Anastasia M.; Fishman, Veniamin S.; Vasiliev, Gennady V.

    2017-01-01

    Factor-mediated reprogramming of somatic cells towards pluripotency is a low-efficiency process during which only small subsets of cells are successfully reprogrammed. Previous analyses of the determinants of reprogramming potential are based on average measurements across a large population of cells or on monitoring a relatively small number of single cells with live imaging. Here, we applied lentiviral genetic barcoding, a powerful tool enabling the identification of familial relationships in thousands of cells. High-throughput sequencing of barcodes from successfully reprogrammed cells revealed a significant number of barcodes from related cells. We developed a computer model, according to which the probability of synchronous reprogramming of sister cells equals 10–30%. We conclude that reprogramming success is pre-established in some particular cells and, being a heritable trait, can be maintained through cell division. Thus, reprogramming progresses in a deterministic manner, at least at the level of cell lineages. PMID:28446707

  12. No-go theorem for passive single-rail linear optical quantum computing.

    PubMed

    Wu, Lian-Ao; Walther, Philip; Lidar, Daniel A

    2013-01-01

    Photonic quantum systems are among the most promising architectures for quantum computers. It is well known that for dual-rail photons effective non-linearities and near-deterministic non-trivial two-qubit gates can be achieved via the measurement process and by introducing ancillary photons. While in principle this opens a legitimate path to scalable linear optical quantum computing, the technical requirements are still very challenging and thus other optical encodings are being actively investigated. One of the alternatives is to use single-rail encoded photons, where entangled states can be deterministically generated. Here we prove that even for such systems universal optical quantum computing using only passive optical elements such as beam splitters and phase shifters is not possible. This no-go theorem proves that photon bunching cannot be passively suppressed even when extra ancilla modes and arbitrary number of photons are used. Our result provides useful guidance for the design of optical quantum computers.

  13. Amygdala and Ventral Striatum Make Distinct Contributions to Reinforcement Learning.

    PubMed

    Costa, Vincent D; Dal Monte, Olga; Lucas, Daniel R; Murray, Elisabeth A; Averbeck, Bruno B

    2016-10-19

    Reinforcement learning (RL) theories posit that dopaminergic signals are integrated within the striatum to associate choices with outcomes. Often overlooked is that the amygdala also receives dopaminergic input and is involved in Pavlovian processes that influence choice behavior. To determine the relative contributions of the ventral striatum (VS) and amygdala to appetitive RL, we tested rhesus macaques with VS or amygdala lesions on deterministic and stochastic versions of a two-arm bandit reversal learning task. When learning was characterized with an RL model relative to controls, amygdala lesions caused general decreases in learning from positive feedback and choice consistency. By comparison, VS lesions only affected learning in the stochastic task. Moreover, the VS lesions hastened the monkeys' choice reaction times, which emphasized a speed-accuracy trade-off that accounted for errors in deterministic learning. These results update standard accounts of RL by emphasizing distinct contributions of the amygdala and VS to RL. Published by Elsevier Inc.
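
    The kind of RL model used to characterize choices in a two-arm bandit reversal task can be sketched as a value-update rule with a prediction error plus a softmax choice rule. The learning rate, inverse temperature, reward probabilities, and reversal point below are illustrative, not the fitted parameters from the study:

```python
import math, random

# Sketch of a simple RL characterization of a two-arm bandit reversal task:
# delta-rule value updates and a softmax (inverse-temperature) choice rule.
# All parameters are illustrative.

def run_bandit(p_reward=(0.8, 0.2), alpha=0.2, beta=5.0,
               trials=200, reversal=100, seed=3):
    rng = random.Random(seed)
    q = [0.5, 0.5]                    # action values
    probs = list(p_reward)
    correct = 0
    for t in range(trials):
        if t == reversal:
            probs.reverse()           # contingency reversal mid-session
        p0 = 1 / (1 + math.exp(-beta * (q[0] - q[1])))   # softmax choice prob
        choice = 0 if rng.random() < p0 else 1
        reward = 1.0 if rng.random() < probs[choice] else 0.0
        q[choice] += alpha * (reward - q[choice])        # prediction-error update
        correct += int(probs[choice] == max(probs))      # chose the better arm?
    return correct / trials

print(run_bandit())
```

Fitting alpha and beta separately per animal (and separately for positive and negative feedback) is the kind of comparison that lets lesion effects be expressed as changes in specific model parameters rather than raw accuracy.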

  15. Deterministic secure quantum communication using a single d-level system.

    PubMed

    Jiang, Dong; Chen, Yuanyuan; Gu, Xuemei; Xie, Ling; Chen, Lijun

    2017-03-22

    Deterministic secure quantum communication (DSQC) can transmit secret messages between two parties without first generating a shared secret key. Compared with quantum key distribution (QKD), DSQC avoids the waste of qubits arising from basis reconciliation and thus reaches higher efficiency. In this paper, based on data block transmission and order rearrangement technologies, we propose a DSQC protocol. It utilizes a set of single d-level systems as message carriers, which are used to directly encode the secret message in one communication process. Theoretical analysis shows that these employed technologies guarantee the security, and the use of a higher dimensional quantum system makes our protocol achieve higher security and efficiency. Since only quantum memory is required for implementation, our protocol is feasible with current technologies. Furthermore, Trojan horse attack (THA) is taken into account in our protocol. We give a THA model and show that THA significantly increases the multi-photon rate and can thus be detected.

  16. Deterministic implementation of a bright, on-demand single photon source with near-unity indistinguishability via quantum dot imaging.

    PubMed

    He, Yu-Ming; Liu, Jin; Maier, Sebastian; Emmerling, Monika; Gerhardt, Stefan; Davanço, Marcelo; Srinivasan, Kartik; Schneider, Christian; Höfling, Sven

    2017-07-20

    Deterministic techniques enabling the implementation and engineering of bright and coherent solid-state quantum light sources are key for the reliable realization of a next generation of quantum devices. Such a technology, at best, should allow one to significantly scale up the number of implemented devices within a given processing time. In this work, we discuss a possible technology platform for such a scaling procedure, relying on the application of nanoscale quantum dot imaging to the pillar microcavity architecture, which promises to combine very high photon extraction efficiency and indistinguishability. We discuss the alignment technology in detail, and present the optical characterization of a selected device which features a strongly Purcell-enhanced emission output. This device, which yields an extraction efficiency of η = (49 ± 4) %, facilitates the emission of photons with (94 ± 2.7) % indistinguishability.

  17. Unifying Complexity and Information

    NASA Astrophysics Data System (ADS)

    Ke, Da-Guan

    2013-04-01

    Complex systems, arising in many contexts in the computer, life, social, and physical sciences, have not shared a generally accepted complexity measure playing a fundamental role analogous to that of the Shannon entropy H in statistical mechanics. Superficially conflicting criteria of complexity measurement, i.e. complexity-randomness (C-R) relations, have given rise to a special measure intrinsically adaptable to more than one criterion. However, the deep causes of the conflict and of the adaptability are not yet clear. Here I trace the root of each representative or adaptable measure to its particular universal data-generating or -regenerating model (UDGM or UDRM). A representative measure for deterministic dynamical systems is found as a counterpart of the H for random processes, clearly redefining the boundary between different criteria. And a specific UDRM achieving the intrinsic adaptability enables a general information measure that ultimately resolves all major disputes. This work encourages a single framework covering deterministic systems, statistical mechanics and real-world living organisms.

  18. Deterministic control of radiative processes by shaping the mode field

    NASA Astrophysics Data System (ADS)

    Pellegrino, D.; Pagliano, F.; Genco, A.; Petruzzella, M.; van Otten, F. W.; Fiore, A.

    2018-04-01

    Quantum dots (QDs) interacting with confined light fields in photonic crystal cavities represent a scalable light source for the generation of single photons and laser radiation in the solid-state platform. The complete control of light-matter interaction in these sources is needed to fully exploit their potential, but it has been challenging due to the small length scales involved. In this work, we experimentally demonstrate the control of the radiative interaction between InAs QDs and one mode of three coupled nanocavities. By non-locally moulding the mode field experienced by the QDs inside one of the cavities, we are able to deterministically tune, and even inhibit, the spontaneous emission into the mode. The presented method will enable the real-time switching of Rabi oscillations, the shaping of the temporal waveform of single photons, and the implementation of unexplored nanolaser modulation schemes.

  19. Expansion or extinction: deterministic and stochastic two-patch models with Allee effects.

    PubMed

    Kang, Yun; Lanchier, Nicolas

    2011-06-01

    We investigate the impact of the Allee effect and dispersal on the long-term evolution of a population in a patchy environment. Our main focus is on whether a population already established in one patch either successfully invades an adjacent empty patch or undergoes a global extinction. Our study is based on the combination of analytical and numerical results for both a deterministic two-patch model and a stochastic counterpart. The deterministic model has either two, three or four attractors. The existence of a regime with exactly three attractors only appears when patches have distinct Allee thresholds. In the presence of weak dispersal, the analysis of the deterministic model shows that a high-density and a low-density population can coexist at equilibrium in nearby patches, whereas the analysis of the stochastic model indicates that this equilibrium is metastable, thus leading after a large random time to either a global expansion or a global extinction. Up to some critical dispersal, increasing the intensity of the interactions leads to an increase of both the basin of attraction of the global extinction and the basin of attraction of the global expansion. Above this threshold, for both the deterministic and the stochastic models, the patches tend to synchronize as the intensity of the dispersal increases. This results in either a global expansion or a global extinction. For the deterministic model, there are only two attractors, while the stochastic model no longer exhibits a metastable behavior. In the presence of strong dispersal, the limiting behavior is entirely determined by the value of the Allee thresholds as the global population size in the deterministic and the stochastic models evolves as dictated by their single-patch counterparts. 
For all values of the dispersal parameter, Allee effects promote global extinction in terms of an expansion of the basin of attraction of the extinction equilibrium for the deterministic model and an increase of the probability of extinction for the stochastic model.
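
    A minimal sketch of a deterministic two-patch model of this kind, assuming a cubic growth law with Allee threshold theta and symmetric dispersal D (the functional form and all parameter values are illustrative, not the paper's exact equations):

```python
def two_patch(x0, y0, r=1.0, K=1.0, theta=0.3, D=0.0,
              dt=0.01, steps=20000):
    """Euler-integrate densities x, y of two coupled patches, each with
    growth r*u*(u/theta - 1)*(1 - u/K) (negative below the Allee
    threshold theta) plus symmetric dispersal D between the patches."""
    x, y = x0, y0
    for _ in range(steps):
        fx = r * x * (x / theta - 1.0) * (1.0 - x / K)
        fy = r * y * (y / theta - 1.0) * (1.0 - y / K)
        x, y = x + dt * (fx + D * (y - x)), y + dt * (fy + D * (x - y))
    return x, y
```

    With D = 0 an occupied patch persists at carrying capacity while the empty patch stays empty; weak dispersal can leave the patches at different densities, and strong dispersal synchronizes them, echoing the regimes described above.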

  20. Geological hazard monitoring system in Georgia

    NASA Astrophysics Data System (ADS)

    Gaprindashvili, George

    2017-04-01

    Georgia belongs to one of the world's most complex mountainous regions in terms of the scale and frequency of geological processes and the damage they cause to population, farmland, and infrastructure. Geological hazards (landslides, debris flows/mudflows, rockfalls, erosion, etc.) affect many populated areas, agricultural fields, roads, oil and gas pipelines, high-voltage electric power transmission towers, hydraulic structures, and tourist complexes. Landslides occur in almost all geomorphological zones, resulting in wide differentiation in failure types and mechanisms and in the size-frequency distribution. In Georgia, geological hazards are triggered by: 1. highly intense earthquakes; 2. meteorological events provoking disaster processes against the background of global climatic change; and 3. large-scale human impact on the environment. The prediction and monitoring of geological hazards is a very broad topic involving researchers from many fields, and monitoring is essential to prevent and mitigate these hazards. In recent years, several monitoring systems, such as ground-based geodetic techniques and a debris-flow Early Warning System (EWS), have been installed in Georgia on highly sensitive landslide and debris-flow areas. This work presents a description of the geological hazard monitoring system in Georgia.

  1. Radiometric Dating in Geology.

    ERIC Educational Resources Information Center

    Pankhurst, R. J.

    1980-01-01

    Described are several aspects and methods of quantitatively measuring geologic time using a constant-rate natural process of radioactive decay. Topics include half lives and decay constants, radiogenic growth, potassium-argon dating, rubidium-strontium dating, and the role of geochronology in support of geological exploration. (DS)

  2. Publications of the Western Earth Surface Processes Team 2006

    USGS Publications Warehouse

    Powell, Charles L.; Stone, Paul

    2007-01-01

    The Western Earth Surface Processes Team (WESPT) of the U.S. Geological Survey (USGS) conducts geologic mapping, earth-surface process investigations, and related topical earth science studies in the western United States. This work is focused on areas where modern geologic maps and associated earth-science data are needed to address key societal and environmental issues such as ground-water quality, landslides and other potential geologic hazards, and land-use decisions. Areas of primary emphasis in 2006 included southern California, the San Francisco Bay region, the Mojave Desert, the Colorado Plateau region of northern Arizona, and the Pacific Northwest. The team has its headquarters in Menlo Park, California, and maintains smaller field offices at several other locations in the western United States. This compilation gives the bibliographical citations for 123 new publications, most of which are available online using the hyperlinks provided.

  3. 30 CFR 280.40 - When do I notify MMS that geological data and information are available for submission...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., processing, or interpretation of any geological data and information. Initial analysis and processing are the stages of analysis or processing where the data and information first become available for in-house... information are available for submission, inspection, and selection? 280.40 Section 280.40 Mineral Resources...

  4. Integration of 3D photogrammetric outcrop models in the reservoir modelling workflow

    NASA Astrophysics Data System (ADS)

    Deschamps, Remy; Joseph, Philippe; Lerat, Olivier; Schmitz, Julien; Doligez, Brigitte; Jardin, Anne

    2014-05-01

    3D technologies are now widely used in geosciences to reconstruct outcrops in three dimensions. The technology used for 3D reconstruction is usually based on Lidar, which provides very precise models. Such datasets offer the possibility of building well-constrained outcrop analogue models for reservoir study purposes. Photogrammetry is an alternative methodology whose principles are based on determining the geometric properties of an object from photographs taken from different angles. Outcrop data acquisition is easy, and this methodology allows the construction of 3D outcrop models with many advantages: light and fast acquisition, moderate processing time (depending on the size of the area of interest), and integration of field data and 3D outcrops into reservoir modelling tools. Whatever the method, the advantages of digital outcrop models are numerous, as already highlighted by Hodgetts (2013), McCaffrey et al. (2005) and Pringle et al. (2006): collection of data from otherwise inaccessible areas, access to different angles of view, an increase in the possible measurements, attribute analysis, a fast rate of data collection, and of course training and communication. This paper proposes a workflow in which 3D geocellular models are built by integrating all sources of information from outcrops (surface picking, sedimentological sections, structural and sedimentary dips…). The reconstructed 3D geomodels can be used at the reservoir scale to compare the outcrop information with subsurface models: the detailed facies models of the outcrops are transferred into petrophysical and acoustic models, which are used to test different scenarios of seismic and fluid flow modelling. The detailed 3D models are also used to test new techniques of static reservoir modelling, based either on geostatistical approaches or on deterministic (process-based) simulation techniques. 
    A modelling workflow has been designed to model reservoir geometries and properties from 3D outcrop data, including geostatistical modelling and fluid flow simulations. The case study is a turbidite reservoir analogue in northern Spain (Ainsa). In this case study, we can compare reservoir models built with a conventional data set (1D pseudo-wells) and reservoir models built from 3D outcrop data used directly to constrain the reservoir architecture. This approach allows us to assess the benefits of integrating geotagged 3D outcrop data into reservoir models. References: Hodgetts, D. (2013): Laser scanning and digital outcrop geology in the petroleum industry: a review. Marine and Petroleum Geology, 46, 335-354. McCaffrey, K.J.W., Jones, R.R., Holdsworth, R.E., Wilson, R.W., Clegg, P., Imber, J., Holliman, N., Trinks, I. (2005): Unlocking the spatial dimension: digital technologies and the future of geoscience fieldwork. Journal of the Geological Society, 162, 927-938. Pringle, J.K., Howell, J.A., Hodgetts, D., Westerman, A.R., Hodgson, D.M. (2006): Virtual outcrop models of petroleum reservoir analogues: a review of the current state-of-the-art. First Break, 24, 33-42.

  5. Geological Survey research 1981

    USGS Publications Warehouse

    ,

    1982-01-01

    This U.S. Geological Survey activities report includes a summary of 1981 fiscal year scientific and economic results accompanied by a list of geologic, hydrologic, and cartographic investigations in progress. The summary of results includes: (1) Mineral, (2) Water resources, (3) Engineering geology and hydrology, (4) Regional geology, (5) Principles and processes, (6) Laboratory and field methods, (7) Topographic surveys and mapping, (8) Management of resources on public lands, (9) Land information and analysis, and (10) Investigations in other countries. Also included are lists of investigations in progress.

  6. Estimating the epidemic threshold on networks by deterministic connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Kezan, E-mail: lkzzr@sohu.com; Zhu, Guanghu; Fu, Xinchu

    2014-12-15

    For many epidemic networks some connections between nodes are treated as deterministic, while the remainder are random and have different connection probabilities. By applying spectral analysis to several constructed models, we find that one can estimate the epidemic thresholds of these networks by investigating information from only the deterministic connections. Nonetheless, in these models, generic nonuniform stochastic connections and heterogeneous community structure are also considered. The estimation of epidemic thresholds is achieved via inequalities with upper and lower bounds, which are found to be in very good agreement with numerical simulations. Since these deterministic connections are easier to detect than those stochastic connections, this work provides a feasible and effective method to estimate the epidemic thresholds in real epidemic networks.
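
    For the standard SIS model, the epidemic threshold of a network with symmetric adjacency A is tau_c = 1/lambda_max(A). The sketch below, with an invented four-node example, shows how bounds computed from the deterministic part alone bracket the threshold of the expected network; the bracketing uses entrywise monotonicity of the spectral radius and Weyl's inequality, and the specific inequalities of the paper may differ.

```python
import numpy as np

def spectral_threshold(adj):
    """SIS epidemic threshold estimate tau_c = 1 / lambda_max(adj)
    for a symmetric adjacency matrix (standard spectral result)."""
    return 1.0 / np.linalg.eigvalsh(adj)[-1]

# Deterministic backbone A_det, plus random edges with probability p on
# every pair that has no deterministic edge; the expected adjacency of
# the mixed network is then A_bar = A_det + P.
A_det = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
p = 0.1
P = p * (np.ones_like(A_det) - np.eye(4))
P[A_det > 0] = 0.0
A_bar = A_det + P

lam_det = np.linalg.eigvalsh(A_det)[-1]
lam_bar = np.linalg.eigvalsh(A_bar)[-1]
lam_P = np.linalg.eigvalsh(P)[-1]
# lam_det <= lam_bar (monotonicity for nonnegative matrices) and
# lam_bar <= lam_det + lam_P (Weyl), so the deterministic part plus a
# correction brackets the threshold of the expected network.
```

    Inverting the eigenvalue bounds flips the bracket for the threshold itself: the deterministic backbone alone gives an upper bound on tau_c.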

  7. Experimental demonstration on the deterministic quantum key distribution based on entangled photons.

    PubMed

    Chen, Hua; Zhou, Zhi-Yuan; Zangana, Alaa Jabbar Jumaah; Yin, Zhen-Qiang; Wu, Juan; Han, Yun-Guang; Wang, Shuang; Li, Hong-Wei; He, De-Yong; Tawfeeq, Shelan Khasro; Shi, Bao-Sen; Guo, Guang-Can; Chen, Wei; Han, Zheng-Fu

    2016-02-10

    As an important resource, entangled light sources have been used in developing quantum information technologies, such as quantum key distribution (QKD). Few experiments implement entanglement-based deterministic QKD protocols, since the security of existing protocols may be compromised in lossy channels. In this work, we report on a loss-tolerant deterministic QKD experiment which follows a modified "Ping-Pong" (PP) protocol. The experimental results demonstrate for the first time that a secure deterministic QKD session can be fulfilled in a channel with an optical loss of 9 dB, based on a telecom-band entangled photon source. This exhibits a conceivable prospect of utilizing entangled light sources in real-life fiber-based quantum communications.

  8. Experimental demonstration on the deterministic quantum key distribution based on entangled photons

    PubMed Central

    Chen, Hua; Zhou, Zhi-Yuan; Zangana, Alaa Jabbar Jumaah; Yin, Zhen-Qiang; Wu, Juan; Han, Yun-Guang; Wang, Shuang; Li, Hong-Wei; He, De-Yong; Tawfeeq, Shelan Khasro; Shi, Bao-Sen; Guo, Guang-Can; Chen, Wei; Han, Zheng-Fu

    2016-01-01

    As an important resource, entangled light sources have been used in developing quantum information technologies, such as quantum key distribution (QKD). Few experiments implement entanglement-based deterministic QKD protocols, since the security of existing protocols may be compromised in lossy channels. In this work, we report on a loss-tolerant deterministic QKD experiment which follows a modified “Ping-Pong” (PP) protocol. The experimental results demonstrate for the first time that a secure deterministic QKD session can be fulfilled in a channel with an optical loss of 9 dB, based on a telecom-band entangled photon source. This exhibits a conceivable prospect of utilizing entangled light sources in real-life fiber-based quantum communications. PMID:26860582

  9. Basic Research Needs for Geosciences: Facilitating 21st Century Energy Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DePaolo, D. J.; Orr, F. M.; Benson, S. M.

    2007-06-01

    To identify research areas in geosciences, such as behavior of multiphase fluid-solid systems on a variety of scales, chemical migration processes in geologic media, characterization of geologic systems, and modeling and simulation of geologic systems, needed for improved energy systems.

  10. Creation of the Teton landscape: A geologic chronicle of Jackson Hole and The Teton Range

    USGS Publications Warehouse

    Reed, John Calvin; Love, David; Pierce, Kenneth

    2003-01-01

    Geology is the science of the Earth-the study of the forces, processes, and past life that not only shape our land but influence our daily lives and our Nation's welfare. This booklet, prepared by two members of the U.S. Geological Survey, discusses how geologic phenomena are responsible for the magnificent scenery of the Teton region

  11. Geological Survey research 1976

    USGS Publications Warehouse

    ,

    1976-01-01

    This U.S. Geological Survey activities report includes a summary of recent (1976 fiscal year) scientific and economic results accompanied by a list of geologic and hydrologic investigations in progress and a report on the status of topographic mapping. The summary of results includes: (1) Mineral resources, Water resources, (2) Engineering geology and hydrology, (3) Regional geology, (4) Principles and processes, (5) Laboratory and field methods, (6) Topographic surveys and mapping, (7) Management of resources on public lands, (8) Land information and analysis, and (9) Investigations in other countries. Also included are lists of cooperating agencies and Geological Survey offices. (Woodard-USGS)

  12. Geological Survey research 1978

    USGS Publications Warehouse

    ,

    1978-01-01

    This U.S. Geological Survey activities report includes a summary of 1978 fiscal year scientific and economic results accompanied by a list of geologic and hydrologic investigations in progress and a report on the status of topographic mapping. The summary of results includes: (1) Mineral and water resources, (2) Engineering geology and hydrology, (3) Regional geology, (4) Principles and processes, (5) Laboratory and field methods, (6) Topographic surveys and mapping, (7) Management of resources on public lands, (8) Land information and analysis, and (9) Investigations in other countries. Also included are lists of cooperating agencies and Geological Survey offices. (Woodard-USGS)

  13. One application of mega-geomorphology in education

    NASA Technical Reports Server (NTRS)

    Blair, R. W., Jr.

    1985-01-01

    One advantage of a synoptic view displaying landform assemblages provided by imagery is that one can often identify the geomorphic processes that have shaped a region and that may affect the habitability of the area over a human lifetime. Considering the continued growth of the world population and the resultant pressure on and exploitation of land, usually without any consideration given to geologic processes, it is imperative that we attempt to educate as large a segment of the population as we can about geologic processes and how they influence land use. Space platform imagery which exhibits regional landscapes can be used: (1) to show students the impact of geologic processes over relatively short periods of time (e.g., the Mount St. Helens lateral blast); (2) to display the effects of poor planning resulting from a lack of knowledge of the local geologic processes (e.g., the 1973 image of the Mississippi River flood around St. Louis, MO); and (3) to show the association of certain types of landforms with building materials and other resources (e.g., drumlins and gravel deposits).

  14. Land surface hydrology parameterization for atmospheric general circulation models including subgrid scale spatial variability

    NASA Technical Reports Server (NTRS)

    Entekhabi, D.; Eagleson, P. S.

    1989-01-01

    Parameterizations are developed for the representation of subgrid hydrologic processes in atmospheric general circulation models. Reasonable a priori probability density functions of the spatial variability of soil moisture and of precipitation are introduced. These are used in conjunction with the deterministic equations describing basic soil moisture physics to derive expressions for the hydrologic processes that include subgrid scale variation in parameters. The major model sensitivities to soil type and to climatic forcing are explored.
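
    The core idea, integrating a nonlinear point process over an assumed subgrid probability density rather than evaluating it at the grid mean, can be sketched as follows. The clipped-normal moisture distribution and the threshold runoff rule are illustrative stand-ins, not the paper's parameterization:

```python
import numpy as np

def grid_mean_runoff(theta_mean, theta_std, n=100_000, seed=0):
    """Grid-cell mean of a nonlinear point runoff rule, integrated (by
    Monte Carlo) over an assumed subgrid soil-moisture distribution:
    a normal clipped to [0, 1] and a saturation-excess threshold at
    moisture 0.7 (both choices are illustrative)."""
    rng = np.random.default_rng(seed)
    theta = np.clip(rng.normal(theta_mean, theta_std, n), 0.0, 1.0)
    return np.maximum(theta - 0.7, 0.0).mean()   # runoff only where wet
```

    Evaluating the same rule at the grid-mean moisture (say 0.5) gives zero runoff, while the subgrid integral is positive: the wet tail of the cell produces runoff that a mean-value scheme misses entirely, which is why subgrid variability matters for these parameterizations.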

  15. A Resume of Stochastic, Time-Varying, Linear System Theory with Application to Active-Sonar Signal-Processing Problems

    DTIC Science & Technology

    1981-06-15

    SACLANTCEN SR-50: A Résumé of Stochastic, Time-Varying, Linear System Theory with Application to Active-Sonar Signal-Processing Problems. … the order in which systems are concatenated is unimportant. These results are exactly analogous to the results of time-invariant linear system theory in … References: 1. Meier, L. A résumé of deterministic time-varying linear system theory with application to active sonar signal processing problems, SACLANTCEN.

  16. Nuclear test ban treaty verification: Improving test ban monitoring with empirical and model-based signal processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, David B.; Gibbons, Steven J.; Rodgers, Arthur J.

    In this approach, small scale-length medium perturbations not modeled in the tomographic inversion might be described as random fields, characterized by particular distribution functions (e.g., normal with specified spatial covariance). Conceivably, random field parameters (scatterer density or scale length) might themselves be the targets of tomographic inversions of the scattered wave field. As a result, such augmented models may provide processing gain through the use of probabilistic signal subspaces rather than deterministic waveforms.

  17. Nuclear test ban treaty verification: Improving test ban monitoring with empirical and model-based signal processing

    DOE PAGES

    Harris, David B.; Gibbons, Steven J.; Rodgers, Arthur J.; ...

    2012-05-01

    In this approach, small scale-length medium perturbations not modeled in the tomographic inversion might be described as random fields, characterized by particular distribution functions (e.g., normal with specified spatial covariance). Conceivably, random field parameters (scatterer density or scale length) might themselves be the targets of tomographic inversions of the scattered wave field. As a result, such augmented models may provide processing gain through the use of probabilistic signal subspaces rather than deterministic waveforms.

  18. Chaos-order transition in foraging behavior of ants.

    PubMed

    Li, Lixiang; Peng, Haipeng; Kurths, Jürgen; Yang, Yixian; Schellnhuber, Hans Joachim

    2014-06-10

    The study of the foraging behavior of group animals (especially ants) is of practical ecological importance, but it also contributes to the development of widely applicable optimization problem-solving techniques. Biologists have discovered that single ants exhibit low-dimensional deterministic-chaotic activities. However, the influences of the nest, ants' physical abilities, and ants' knowledge (or experience) on foraging behavior have received relatively little attention in studies of the collective behavior of ants. This paper provides new insights into basic mechanisms of effective foraging for social insects or group animals that have a home. We propose that the whole foraging process of ants is controlled by three successive strategies: hunting, homing, and path building. A mathematical model is developed to study this complex scheme. We show that the transition from chaotic to periodic regimes observed in our model results from an optimization scheme for group animals with a home. According to our investigation, the behavior of such insects is not represented by random walks but rather by deterministic walks (as generated by deterministic dynamical systems, e.g., by maps) in a random environment: the animals use their intelligence and experience to guide them. The more knowledge an ant has, the higher its foraging efficiency is. When young insects join the collective to forage with old and middle-aged ants, it benefits the whole colony in the long run. The resulting strategy can even be optimal.

  19. Deterministic control of the emission from light sources in 1D nanoporous photonic crystals (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Galisteo-López, Juan F.

    2017-02-01

    Controlling the emission of a light source demands acting on its local photonic environment via the local density of states (LDOS). Approaches to exert such control on large-scale samples, commonly relying on self-assembly methods, usually lack precise positioning of the emitter within the material. Alternatively, expensive and time-consuming techniques can be used to produce samples of small dimensions in which deterministic control of the emitter position can be achieved. In this work we present a full solution-process approach to fabricating photonic architectures containing nano-emitters whose position can be controlled with nanometer precision over square-millimeter regions. By a combination of spin and dip coating we fabricate one-dimensional (1D) nanoporous photonic crystals, whose potential in different fields such as photovoltaics and sensing has been previously reported, containing monolayers of luminescent polymeric nanospheres. We demonstrate how, by modifying the position of the emitters within the photonic crystal, their emission properties (photoluminescence intensity and angular distribution) can be deterministically modified. Further, the nano-emitters can be used as a probe to study the LDOS distribution within these systems with a spatial resolution of 25 nm (provided by the probe size), carrying out macroscopic measurements over square-millimeter regions. Routes to enhance light-matter interaction in such systems by combining them with metallic surfaces are finally discussed.

  20. Immersion freezing of internally and externally mixed mineral dust species analyzed by stochastic and deterministic models

    NASA Astrophysics Data System (ADS)

    Wong, B.; Kilthau, W.; Knopf, D. A.

    2017-12-01

    Immersion freezing is recognized as the most important ice crystal formation process in mixed-phase cloud environments. It is well established that mineral dust species can act as efficient ice nucleating particles. Previous research has focused on determination of the ice nucleation propensity of individual mineral dust species. In this study, the focus is placed on how different mineral dust species such as illite, kaolinite and feldspar, initiate freezing of water droplets when present in internal and external mixtures. The frozen fraction data for single and multicomponent mineral dust droplet mixtures are recorded under identical cooling rates. Additionally, the time dependence of freezing is explored. Externally and internally mixed mineral dust droplet samples are exposed to constant temperatures (isothermal freezing experiments) and frozen fraction data is recorded based on time intervals. Analyses of single and multicomponent mineral dust droplet samples include different stochastic and deterministic models such as the derivation of the heterogeneous ice nucleation rate coefficient (J_het), the single contact angle (α) description, the α-PDF model, active sites representation, and the deterministic model. Parameter sets derived from freezing data of single component mineral dust samples are evaluated for prediction of cooling rate dependent and isothermal freezing of multicomponent externally or internally mixed mineral dust samples. The atmospheric implications of our findings are discussed.
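
    The two standard descriptions named above can be written compactly: in the stochastic picture the frozen fraction grows with time at constant temperature, while in the deterministic (singular) picture it depends on temperature only. A minimal sketch, with illustrative units and magnitudes:

```python
import numpy as np

def ff_stochastic(J_het, area, t):
    """Time-dependent (stochastic) frozen fraction: a constant
    heterogeneous nucleation rate coefficient J_het acting over the
    immersed particle surface area for time t."""
    return 1.0 - np.exp(-J_het * area * t)

def ff_singular(n_s, area):
    """Deterministic ('singular') frozen fraction: an ice-active site
    density n_s(T); freezing depends on temperature only, not on time."""
    return 1.0 - np.exp(-n_s * area)
```

    In an isothermal experiment the stochastic description predicts a frozen fraction that keeps rising with time, while the singular description predicts a plateau; that contrast is what isothermal frozen-fraction time series can discriminate.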

  1. Hybrid Forecasting of Daily River Discharges Considering Autoregressive Heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Szolgayová, Elena Peksová; Danačová, Michaela; Komorniková, Magda; Szolgay, Ján

    2017-06-01

    It is widely acknowledged that in the hydrological and meteorological communities, there is a continuing need to improve the quality of quantitative rainfall and river flow forecasts. A hybrid (combined deterministic-stochastic) modelling approach is proposed here that combines the advantages offered by modelling the system dynamics with a deterministic model and a deterministic forecasting error series with a data-driven model in parallel. Since the processes to be modelled are generally nonlinear and the model error series may exhibit nonstationarity and heteroscedasticity, GARCH-type nonlinear time series models are considered here. The fitting, forecasting and simulation performance of such models has to be explored on a case-by-case basis. The goal of this paper is to test and develop an appropriate methodology for model fitting and forecasting applicable for daily river discharge forecast error data from the GARCH family of time series models. We concentrated on verifying whether the use of a GARCH-type model is suitable for modelling and forecasting a hydrological model error time series on the Hron and Morava Rivers in Slovakia. For this purpose we verified the presence of heteroscedasticity in the simulation error series of the KLN multilinear flow routing model; then we fitted the GARCH-type models to the data and compared their fit with that of an ARMA-type model. We produced one-step-ahead forecasts from the fitted models and again provided comparisons of the models' performance.
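
    The GARCH(1,1) conditional-variance recursion at the heart of such models can be sketched directly. This is a minimal filter, not the fitted Hron/Morava models, and the parameter values below are invented:

```python
import numpy as np

def garch11_filter(eps, omega, alpha, beta):
    """Conditional variances of a GARCH(1,1) error model:
       sigma2[t] = omega + alpha * eps[t-1]**2 + beta * sigma2[t-1],
    initialized at the unconditional variance omega / (1 - alpha - beta).
    Returns the in-sample variances and the one-step-ahead forecast."""
    sigma2 = np.empty(len(eps))
    sigma2[0] = omega / (1.0 - alpha - beta)
    for t in range(1, len(eps)):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    forecast = omega + alpha * eps[-1] ** 2 + beta * sigma2[-1]
    return sigma2, forecast
```

    A calm stretch of forecast errors drives the conditional variance toward omega / (1 - beta), while a burst of large errors raises it; this clustering of variance is exactly the heteroscedasticity the paper tests for in the model error series.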

  2. Panel summary report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gutjahr, A.L.; Kincaid, C.T.; Mercer, J.W.

    1987-04-01

    The objective of this report is to summarize the various modeling approaches that were used to simulate solute transport in a variably saturated medium. In particular, the technical strengths and weaknesses of each approach are discussed, and conclusions and recommendations for future studies are made. Five models are considered: (1) one-dimensional analytical and semianalytical solutions of the classical deterministic convection-dispersion equation (van Genuchten, Parker, and Kool, this report); (2) one-dimensional simulation using a continuous-time Markov process (Knighton and Wagenet, this report); (3) one-dimensional simulation using the time domain method and the frequency domain method (Duffy and Al-Hassan, this report); (4) a one-dimensional numerical approach that combines a solution of the classical deterministic convection-dispersion equation with a chemical equilibrium speciation model (Cederberg, this report); and (5) a three-dimensional numerical solution of the classical deterministic convection-dispersion equation (Huyakorn, Jones, Parker, Wadsworth, and White, this report). As part of the discussion, the input data and modeling results are summarized. The models were used in a data analysis mode, as opposed to a predictive mode. Thus, the following discussion will concentrate on the data analysis aspects of model use. Also, all the approaches were similar in that they were based on a convection-dispersion model of solute transport. Each discussion addresses the modeling approaches in the order listed above.

  3. Computing rates of Markov models of voltage-gated ion channels by inverting partial differential equations governing the probability density functions of the conducting and non-conducting states.

    PubMed

    Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew

    2016-07-01

    Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
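    The inversion idea above can be sketched on the simplest possible case. For a hypothetical two-state (open/closed) channel, the deterministic master equation for the open probability has a closed-form solution, and an unknown rate can be recovered by minimising a cost function against pseudo-experimental data. All names and values here are illustrative, not the authors' PDE-based implementation.

```python
import numpy as np

def open_probability(k_on, k_off, t):
    """Closed-form solution of the deterministic master equation
    dP/dt = k_on*(1 - P) - k_off*P with P(0) = 0 (channel closed)."""
    p_inf = k_on / (k_on + k_off)
    return p_inf * (1.0 - np.exp(-(k_on + k_off) * t))

def cost(k_on, k_off, t, p_data):
    """Squared misfit between the model and (pseudo) experimental data."""
    return np.sum((open_probability(k_on, k_off, t) - p_data) ** 2)

# Pseudo-experimental curve generated with a "true" opening rate of 2.0
t = np.linspace(0.0, 5.0, 200)
k_off = 1.0
p_data = open_probability(2.0, k_off, t)
grid = np.linspace(0.5, 4.0, 351)
k_fit = grid[np.argmin([cost(k, k_off, t, p_data) for k in grid])]
print(k_fit)   # recovers the true opening rate
```

The grid search stands in for the paper's optimization of the cost function; the convexity (or flatness) of the cost surface is what determines identifiability of the rates.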

  4. Chaos–order transition in foraging behavior of ants

    PubMed Central

    Li, Lixiang; Peng, Haipeng; Kurths, Jürgen; Yang, Yixian; Schellnhuber, Hans Joachim

    2014-01-01

    The study of the foraging behavior of group animals (especially ants) is of practical ecological importance, but it also contributes to the development of widely applicable optimization problem-solving techniques. Biologists have discovered that single ants exhibit low-dimensional deterministic-chaotic activities. However, the influences of the nest, ants’ physical abilities, and ants’ knowledge (or experience) on foraging behavior have received relatively little attention in studies of the collective behavior of ants. This paper provides new insights into basic mechanisms of effective foraging for social insects or group animals that have a home. We propose that the whole foraging process of ants is controlled by three successive strategies: hunting, homing, and path building. A mathematical model is developed to study this complex scheme. We show that the transition from chaotic to periodic regimes observed in our model results from an optimization scheme for group animals with a home. According to our investigation, the behavior of such insects is represented not by random walks but by deterministic walks (as generated by deterministic dynamical systems, e.g., by maps) in a random environment: the animals use their intelligence and experience to guide them. The more knowledge an ant has, the higher its foraging efficiency is. When young insects join the collective to forage with old and middle-aged ants, it benefits the whole colony in the long run. The resulting strategy can even be optimal. PMID:24912159

  5. A Study of the Education of Geology

    NASA Astrophysics Data System (ADS)

    Berglin, R. S.; Baldridge, A. M.; Buxner, S.; Crown, D. A.

    2013-12-01

    An Evaluation and Assessment Method for Workshops in Science Education and Resources. While many professional development workshops train teachers with classroom activities for students, Workshops in Science Education and Resources (WISER): Planetary Perspectives is designed to give elementary and middle school teachers the deeper knowledge necessary to be confident teaching the earth and space science content in their classrooms. Two WISER workshops, Deserts of the Solar System and Volcanoes of the Solar System, place an emphasis on participants being able to use learned knowledge to describe or 'tell the story of' a given rock. In order to understand how participants' knowledge and ability to tell the story change with instruction, we are investigating new ways of probing the understanding of geologic processes. The study will include results from both college-level geology students and teachers, focusing on their understanding of geologic processes and the rock cycle. By studying how new students process geologic information, teachers may benefit by learning how to better teach similar information. This project will help to transfer geologic knowledge to new settings and assess education theories for how people learn. Participants in this study include teachers participating in the WISER program in Arizona and introductory-level college students at St. Mary's College of California. Participants will be videotaped drawing out their thought process on butcher paper as they describe a given rock. When they are done, they will be asked to describe what they have put on the paper and this interview will be recorded. These techniques will be initially performed with students at St. Mary's College of California to understand how to best gather information. Their prior knowledge and previous experience will be evaluated, and their thought processes will be coded.
The same students will complete a semester of an introductory college level Physical Geology course and then complete the assessment process, with the same rock again. Data will be compared to see how the thought process has changed. By studying the initial thought process, teachers can meet students at their level. At the end of the student research, this project will also be applied to elementary and middle school teachers in Tucson, Arizona at WISER workshops. This study will draw conclusions on how participants' thought processes change through WISER-type instruction.

  6. Characterization of normality of chaotic systems including prediction and detection of anomalies

    NASA Astrophysics Data System (ADS)

    Engler, Joseph John

    Accurate prediction and control pervade domains such as engineering, physics, chemistry, and biology. Often, it is discovered that the systems under consideration cannot be well represented by linear, periodic, or random data. It has been shown that these systems exhibit deterministic chaos behavior. Deterministic chaos describes systems which are governed by deterministic rules but whose data appear to be random or quasi-periodic distributions. Deterministically chaotic systems characteristically exhibit sensitive dependence upon initial conditions, manifested through rapid divergence of states initially close to one another. Due to this characterization, it has been deemed impossible to accurately predict future states of these systems for longer time scales. Fortunately, the deterministic nature of these systems allows for accurate short-term predictions, given that the dynamics of the system are well understood. This fact has been exploited in the research community and has resulted in various algorithms for short-term predictions. Detection of normality in deterministically chaotic systems is critical in understanding the system sufficiently to be able to predict future states. Due to the sensitivity to initial conditions, the detection of normal operational states for a deterministically chaotic system can be challenging. The addition of small perturbations to the system, which may result in bifurcation of the normal states, further complicates the problem. The detection of anomalies and prediction of future states of the chaotic system allows for greater understanding of these systems. The goal of this research is to produce methodologies for determining states of normality for deterministically chaotic systems, detection of anomalous behavior, and the more accurate prediction of future states of the system. Additionally, the ability to detect subtle system state changes is discussed.
The dissertation addresses these goals by proposing new representational techniques and novel prediction methodologies. The value and efficiency of these methods are explored in various case studies. Presented is an overview of chaotic systems with examples taken from the real world. A representation schema for rapid understanding of the various states of deterministically chaotic systems is presented. This schema is then used to detect anomalies and system state changes. Additionally, a novel prediction methodology which utilizes Lyapunov exponents to facilitate longer term prediction accuracy is presented and compared with other nonlinear prediction methodologies. These novel methodologies are then demonstrated on applications such as wind energy, cyber security and classification of social networks.
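    Short-term predictability and its limits are governed by the largest Lyapunov exponent invoked above. As a minimal, self-contained illustration (not the dissertation's methodology), the exponent of the logistic map can be estimated as the orbit average of ln|f'(x)|; for r = 4 the analytic value is ln 2.

```python
import numpy as np

def largest_lyapunov_logistic(r=4.0, x0=0.2, n=100000, burn=1000):
    """Estimate the largest Lyapunov exponent of the logistic map
    x -> r*x*(1-x) as the orbit average of ln|f'(x)| = ln|r*(1-2x)|."""
    x = x0
    for _ in range(burn):                      # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        # tiny additive constant guards against log(0) at x = 0.5
        acc += np.log(abs(r * (1.0 - 2.0 * x)) + 1e-300)
        x = r * x * (1.0 - x)
    return acc / n

lam = largest_lyapunov_logistic()
print(lam)   # analytic value for r = 4 is ln 2, about 0.693
```

A positive exponent like this one quantifies the exponential divergence of nearby states that makes long-horizon forecasts of such systems impossible while leaving short horizons tractable.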

  7. Geology Before Pluto: Pre-encounter Considerations

    NASA Astrophysics Data System (ADS)

    Moore, J. M.

    2014-12-01

    Pluto, its large satellite Charon, and its four small known satellites represent the first trans-Neptunian Kuiper Belt objects populating the outer-most solar system beyond the gas giant planets to be studied in detail from a spacecraft (New Horizons). A complete picture of the solar nebula and solar system formation cannot be confidently formulated until representatives of this group of bodies at the edge of solar space have been examined. The Pluto system is composed of unique, lunar- and intermediate-sized objects that can tell us much about how objects with volatile icy compositions evolve. Modeling of the interior suggests that geologic activity may have been significant to some degree, and observations of frost on the surface could imply the need for a geologic reservoir for the replenishment of these phases. However, these putative indicators of Pluto's geologic history are inconclusive and unspecific. Detailed examination of Pluto's geologic record is the only plausible means of bridging the gap between theory and observation. In this talk I will examine the potential importance of these tentative indications of geologic activity and how specific spacecraft observations have been designed and used to constrain the Pluto system's geologic history. The cameras of New Horizons will provide robust data sets that should be eminently amenable to geological analysis of the Pluto system's landscapes. In this talk, we begin with a brief discussion of the planned observations by the New Horizons cameras that will bear most directly on geological interpretability. Then I will broadly review major geological processes that could potentially operate on the surfaces of Pluto and its moons. I will first survey exogenic processes (i.e., those for which energy for surface modification is supplied externally to the planetary surface): impact cratering, sedimentary processes (including volatile migration), and the work of wind.
I will conclude with an assessment of the prospects for endogenic activity in the form of tectonics and cryovolcanism.

  8. Geology Before Pluto: Pre-encounter Considerations

    NASA Astrophysics Data System (ADS)

    Moore, Jeffrey

    2014-05-01

    Jeffrey M. Moore (NASA Ames) and the New Horizons Science Team Pluto, its large satellite Charon, and its four small known satellites represent the first trans-Neptunian Kuiper Belt objects populating the outer-most solar system beyond the gas giant planets to be studied in detail from a spacecraft (New Horizons). A complete picture of the solar nebula and solar system formation cannot be confidently formulated until representatives of this group of bodies at the edge of solar space have been examined. The Pluto system is composed of unique, lunar- and intermediate-sized objects that can tell us much about how objects with volatile icy compositions evolve. Modeling of the interior suggests that geologic activity may have been significant to some degree, and observations of frost on the surface could imply the need for a geologic reservoir for the replenishment of these phases. However, these putative indicators of Pluto's geologic history are inconclusive and unspecific. Detailed examination of Pluto's geologic record is the only plausible means of bridging the gap between theory and observation. In this talk I will examine the potential importance of these tentative indications of geologic activity and how specific spacecraft observations have been designed and used to constrain the Pluto system's geologic history. The cameras of New Horizons will provide robust data sets that should be eminently amenable to geological analysis of the Pluto system's landscapes. In this talk, we begin with a brief discussion of the planned observations by the New Horizons cameras that will bear most directly on geological interpretability. Then I will broadly review major geological processes that could potentially operate on the surfaces of Pluto and its moons. I will first survey exogenic processes (i.e.
those for which energy for surface modification is supplied externally to the planetary surface): impact cratering, sedimentary processes (including volatile migration), and the work of wind. I will conclude with an assessment of the prospects for endogenic activity in the form of tectonics and cryo-volcanism.

  9. Geology Before Pluto: Pre-Encounter Considerations

    NASA Technical Reports Server (NTRS)

    Moore, Jeffrey M.

    2014-01-01

    Pluto, its large satellite Charon, and its four small known satellites represent the first trans-Neptunian Kuiper Belt objects populating the outer-most solar system beyond the gas giant planets to be studied in detail from a spacecraft (New Horizons). A complete picture of the solar nebula and solar system formation cannot be confidently formulated until representatives of this group of bodies at the edge of solar space have been examined. The Pluto system is composed of unique lunar- and intermediate-sized objects that can tell us much about how objects with volatile icy compositions evolve. Modeling of the interior suggests that geologic activity may have been significant to some degree, and observations of frost on the surface could imply the need for a geologic reservoir for the replenishment of these phases. However, the putative indicators of Pluto's geologic history are inconclusive and unspecific. Detailed examination of Pluto's geologic record is the only plausible means of bridging the gap between theory and observations. In this talk I will examine the potential importance of these tentative indications of geologic activity and how specific spacecraft observations have been designed and used to constrain the Pluto system's geologic history. The cameras of New Horizons will provide robust data sets that should be eminently amenable to geological analysis of the Pluto system's landscapes. In this talk, we begin with a brief discussion of the planned observations by New Horizons' cameras that will bear most directly on geological interpretability. Then I will broadly review major geological processes that could potentially operate on the surfaces of Pluto and its moons. I will first survey exogenic processes (i.e., those for which energy for surface modification is supplied externally to the planetary surface): impact cratering, sedimentary processes (including volatile migration) and the work of wind.
I will conclude with an assessment of prospects for endogenic activity in the form of tectonics and cryo-volcanism.

  10. Geologic Mapping Results for Ceres from NASA's Dawn Mission

    NASA Astrophysics Data System (ADS)

    Williams, D. A.; Mest, S. C.; Buczkowski, D.; Scully, J. E. C.; Raymond, C. A.; Russell, C. T.

    2017-12-01

    NASA's Dawn Mission included a geologic mapping campaign during its nominal mission at dwarf planet Ceres, including production of a global geologic map and a series of 15 quadrangle maps to determine the variety of process-related geologic materials and the geologic history of Ceres. Our mapping demonstrates that all major planetary geologic processes (impact cratering, volcanism, tectonism, and gradation (weathering-erosion-deposition)) have occurred on Ceres. Ceres' crust, composed of altered and NH3-bearing silicates, carbonates, salts and 30-40% water ice, preserves impact craters at all sizes and degradation states, and may represent the remains of the bottom of an ancient ocean. Volcanism is manifested by cryovolcanic domes, such as Ahuna Mons and Cerealia Facula, and by explosive cryovolcanic plume deposits such as the Vinalia Faculae. Tectonism is represented by several catenae extending from Ceres' impact basins Urvara and Yalode, terracing in many larger craters, and many localized fractures around smaller craters. Gradation is manifested in a variety of flow-like features caused by mass wasting (landslides) and ground ice flows, as well as impact ejecta lobes and melts. We have constructed a chronostratigraphy and geologic timescale for Ceres that is centered on major impact events. Ceres' geologic periods include the Pre-Kerwanan, Kerwanan, Yalodean/Urvaran, and Azaccan (the time of rayed craters, similar to the lunar Copernican). The presence of geologically young cryovolcanic deposits on Ceres' surface suggests that there could be warm melt pockets within Ceres' shallow crust and that the dwarf planet remains geologically active.

  11. Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model.

    PubMed

    Nené, Nuno R; Dunham, Alistair S; Illingworth, Christopher J R

    2018-05-01

    A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. Copyright © 2018 Nené et al.
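    As a toy illustration of deterministic inference of selection (a sketch under simplified assumptions, not the authors' delay-deterministic model), the classical haploid selection recursion can be propagated forward and a selection coefficient recovered by grid search against an observed final frequency; all values are illustrative.

```python
import numpy as np

def propagate(x0, s, gens):
    """Deterministic haploid selection: x' = x*(1+s) / (1 + s*x),
    iterated for `gens` generations starting from frequency x0."""
    x = x0
    for _ in range(gens):
        x = x * (1.0 + s) / (1.0 + s * x)
    return x

def infer_s(x0, x_obs, gens):
    """Recover the selection coefficient by grid search on the misfit."""
    grid = np.linspace(-0.5, 0.5, 1001)
    errs = [(propagate(x0, s, gens) - x_obs) ** 2 for s in grid]
    return float(grid[int(np.argmin(errs))])

x_obs = propagate(0.05, 0.10, 25)   # trajectory generated with s = 0.10
s_hat = infer_s(0.05, x_obs, 25)
print(s_hat)
```

The paper's point is precisely that a purely deterministic recursion like this one can mislead when mutation in a finite population matters, which motivates the delayed variant.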

  12. Modeling potential future individual tree-species distributions in the eastern United States under a climate change scenario: a case study with Pinus virginiana

    Treesearch

    Louis R. Iverson; Anantha Prasad; Mark W. Schwartz; Mark W. Schwartz

    1999-01-01

    We are using a deterministic regression tree analysis model (DISTRIB) and a stochastic migration model (SHIFT) to examine potential distributions of ~66 individual species of eastern US trees under a 2 x CO2 climate change scenario. This process is demonstrated for Virginia pine (Pinus virginiana).

  13. Extinction risk of Ipomopsis sancti-spiritus in Holy Ghost Canyon with and without management intervention

    Treesearch

    Joyce Maschinski

    2001-01-01

    Small populations are threatened with deterministic and stochastic events that can drive the number of individuals below a critical threshold for survival. Long-term studies allow us to increase our understanding of processes required for their conservation. In the past 7 years, the population of the federally endangered Holy Ghost ipomopsis (Ipomopsis sancti-spiritus...

  14. Controllability of Deterministic Networks with the Identical Degree Sequence

    PubMed Central

    Ma, Xiujuan; Zhao, Haixing; Wang, Binghong

    2015-01-01

    Controlling complex networks is an essential problem in network science and engineering. Recent advances indicate that the controllability of a complex network depends on the network's topology. Liu, Barabási and co-workers speculated that the degree distribution was one of the most important factors affecting controllability for an arbitrary complex directed network with random link weights. In this paper, we analysed the effect of the degree distribution on the controllability of unweighted, undirected deterministic networks. We introduce a class of deterministic networks with identical degree sequence, called (x,y)-flowers. We analysed the controllability of two such deterministic networks (the (1, 3)-flower and the (2, 2)-flower) by exact controllability theory in detail and give exact results for the minimum number of driver nodes for the two networks. In simulation, we compare the controllability of (x,y)-flower networks. Our results show that the family of (x,y)-flower networks have the same degree sequence, but their controllability is totally different. So the degree distribution itself is not sufficient to characterize the controllability of unweighted, undirected deterministic networks. PMID:26020920
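    For undirected, unweighted networks, exact controllability theory reduces the minimum number of driver nodes to the maximum eigenvalue multiplicity of the adjacency matrix (for a symmetric matrix, geometric and algebraic multiplicities coincide). A minimal sketch, using an illustrative star graph rather than an (x,y)-flower:

```python
import numpy as np

def min_driver_nodes(A, tol=1e-6):
    """Minimum driver nodes of an undirected, unweighted network via
    exact controllability theory: the maximum multiplicity over the
    (grouped) eigenvalues of the symmetric adjacency matrix A."""
    eigs = np.linalg.eigvalsh(A)               # sorted, real
    best, i = 0, 0
    while i < len(eigs):
        j = i
        while j + 1 < len(eigs) and eigs[j + 1] - eigs[i] < tol:
            j += 1                             # group near-equal eigenvalues
        best = max(best, j - i + 1)
        i = j + 1
    return best

# Star graph K_{1,3}: eigenvalue 0 has multiplicity 2 -> 2 driver nodes
star = np.array([[0, 1, 1, 1],
                 [1, 0, 0, 0],
                 [1, 0, 0, 0],
                 [1, 0, 0, 0]], dtype=float)
print(min_driver_nodes(star))   # 2
```

Applying the same function to two graphs with identical degree sequences can return different driver-node counts, which is the paper's central observation.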

  15. Inverse kinematic problem for a random gradient medium in geometric optics approximation

    NASA Astrophysics Data System (ADS)

    Petersen, N. V.

    1990-03-01

    Scattering at random inhomogeneities in a gradient medium results in systematic deviations of the rays and travel times of refracted body waves from those corresponding to the deterministic velocity component. The character of the difference depends on the parameters of the deterministic and random velocity components. However, at great distances from the source, independently of the velocity parameters (weakly or strongly inhomogeneous medium), the most probable depth of the ray turning point is smaller than that corresponding to the deterministic velocity component, the most probable travel times also being lower. The relative uncertainty in the deterministic velocity component, derived from the mean travel times using methods developed for laterally homogeneous media (for instance, the Herglotz-Wiechert method), is systematic in character, but does not exceed the contrast of the velocity inhomogeneities in magnitude. The gradient of the deterministic velocity component has a significant effect on the travel-time fluctuations. The variance at great distances from the source is mainly controlled by shallow inhomogeneities. The travel-time fluctuations are studied only for weakly inhomogeneous media.

  16. Who killed Laius?: On Sophocles' enigmatic message.

    PubMed

    Priel, Beatriz

    2002-04-01

    Using Laplanche's basic conceptualisation of the role of the other in unconscious processes, the author proposes a reading of Sophocles' tragedy, Oedipus the King, according to basic principles of dream interpretation. This reading corroborates contemporary literary perspectives suggesting that Sophocles' tragedy may not only convey the myth but also provide a critical analysis of how myths work. Important textual inconsistencies and incoherence, which have been noted through the centuries, suggest the existence of another, repressed story. Moreover, the action of the play points to enigmatic parental messages of infanticide and the silencing of Oedipus's story, as well as their translation into primordial guilt, as the origins of the tragic denouement. Oedipus's self-condemnation of parricide follows these enigmatic codes and is unrelated to, and may even contradict, the evidence offered in the tragedy as to the identity of Laius's murderers. Moreover, Sophocles' text provides a complex intertwining of hermeneutic and deterministic perspectives. Through the use of the mythical deterministic content, the formal characteristics of Sophocles' text, mainly its complex time perspective and extensive use of double meaning, dramatise in the act of reading an acute awareness of interpretation. This reading underscores the fundamental role of the other in the constitution of unconscious processes.

  17. Phenotypic switching of populations of cells in a stochastic environment

    NASA Astrophysics Data System (ADS)

    Hufton, Peter G.; Lin, Yen Ting; Galla, Tobias

    2018-02-01

    In biology phenotypic switching is a common bet-hedging strategy in the face of uncertain environmental conditions. Existing mathematical models often focus on periodically changing environments to determine the optimal phenotypic response. We focus on the case in which the environment switches randomly between discrete states. Starting from an individual-based model we derive stochastic differential equations to describe the dynamics, and obtain analytical expressions for the mean instantaneous growth rates based on the theory of piecewise-deterministic Markov processes. We show that optimal phenotypic responses are non-trivial for slow and intermediate environmental processes, and systematically compare the cases of periodic and random environments. Heterogeneity is more likely to be the best response to random switching than to deterministic periodic environments, and net growth rates tend to be higher under stochastic environmental dynamics. The combined system of environment and population of cells can be interpreted as a host-pathogen interaction, in which the host tries to choose environmental switching so as to minimise growth of the pathogen, and in which the pathogen employs a phenotypic switching optimised to increase its growth rate. We discuss the existence of Nash-like mutual best-response scenarios for such host-pathogen games.
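    The random environmental switching described above is the defining structure of a piecewise-deterministic Markov process: deterministic growth between random environment flips. A minimal simulation sketch with illustrative rates (not the paper's model, which also includes phenotype dynamics):

```python
import numpy as np

def mean_growth_rate(g=(1.0, -0.5), k=2.0, t_end=2000.0, seed=1):
    """Piecewise-deterministic Markov process: the environment flips
    between states 0 and 1 at rate k; between flips the log-population
    grows deterministically at rate g[state]. Returns the time-averaged
    growth rate over [0, t_end]."""
    rng = np.random.default_rng(seed)
    t, state, logn = 0.0, 0, 0.0
    while t < t_end:
        dt = min(rng.exponential(1.0 / k), t_end - t)
        logn += g[state] * dt                  # deterministic growth phase
        t += dt
        state = 1 - state                      # random environment flip
    return logn / t_end

print(mean_growth_rate())   # approaches (1.0 + (-0.5)) / 2 = 0.25 for long t
```

With symmetric switching the environment spends half its time in each state, so the long-run growth rate converges to the average of the two deterministic rates; asymmetric switching rates weight this average accordingly.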

  18. Deterministic quantum nonlinear optics with single atoms and virtual photons

    NASA Astrophysics Data System (ADS)

    Kockum, Anton Frisk; Miranowicz, Adam; Macrì, Vincenzo; Savasta, Salvatore; Nori, Franco

    2017-06-01

    We show how analogs of a large number of well-known nonlinear-optics phenomena can be realized with one or more two-level atoms coupled to one or more resonator modes. Through higher-order processes, where virtual photons are created and annihilated, an effective deterministic coupling between two states of such a system can be created. In this way, analogs of three-wave mixing, four-wave mixing, higher-harmonic and -subharmonic generation (i.e., up- and down-conversion), multiphoton absorption, parametric amplification, Raman and hyper-Raman scattering, the Kerr effect, and other nonlinear processes can be realized. In contrast to most conventional implementations of nonlinear optics, these analogs can reach unit efficiency, only use a minimal number of photons (they do not require any strong external drive), and do not require more than two atomic levels. The strength of the effective coupling in our proposed setups becomes weaker the more intermediate transition steps are needed. However, given the recent experimental progress in ultrastrong light-matter coupling and improvement of coherence times for engineered quantum systems, especially in the field of circuit quantum electrodynamics, we estimate that many of these nonlinear-optics analogs can be realized with currently available technology.

  19. A deterministic mathematical model for bidirectional excluded flow with Langmuir kinetics.

    PubMed

    Zarai, Yoram; Margaliot, Michael; Tuller, Tamir

    2017-01-01

    In many important cellular processes, including mRNA translation, gene transcription, phosphotransfer, and intracellular transport, biological "particles" move along some kind of "tracks". The motion of these particles can be modeled as a one-dimensional movement along an ordered sequence of sites. The biological particles (e.g., ribosomes or RNAPs) have volume and cannot surpass one another. In some cases, there is a preferred direction of movement along the track, but in general the movement may be bidirectional, and furthermore the particles may attach or detach from various regions along the tracks. We derive a new deterministic mathematical model for such transport phenomena that may be interpreted as a dynamic mean-field approximation of an important model from statistical mechanics called the asymmetric simple exclusion process (ASEP) with Langmuir kinetics. Using tools from the theory of monotone dynamical systems and contraction theory we show that the model admits a unique steady state, and that every solution converges to this steady state. Furthermore, we show that the model entrains (or phase locks) to periodic excitations in any of its forward, backward, attachment, or detachment rates. We demonstrate an application of this phenomenological transport model for analyzing ribosome drop-off in mRNA translation.
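    The mean-field approximation of ASEP with Langmuir kinetics yields a system of ODEs for the site densities. A minimal Euler-integration sketch with illustrative rates and a unidirectional (TASEP-like) hopping term; the paper's bidirectional model and its analysis are richer than this:

```python
import numpy as np

def steady_state_density(n=20, alpha=0.3, beta=0.4, lam=1.0,
                         k_on=0.05, k_off=0.05, dt=0.01, steps=50000):
    """Mean-field ASEP with Langmuir kinetics: Euler-integrate the
    site densities rho_i under excluded-volume hopping plus
    attachment (k_on) and detachment (k_off) until steady state."""
    rho = np.full(n, 0.5)
    for _ in range(steps):
        J = lam * rho[:-1] * (1.0 - rho[1:])      # currents between sites
        drho = np.zeros(n)
        drho[0] = alpha * (1.0 - rho[0]) - J[0]   # particle entry at the left
        drho[1:-1] = J[:-1] - J[1:]
        drho[-1] = J[-1] - beta * rho[-1]         # particle exit at the right
        drho += k_on * (1.0 - rho) - k_off * rho  # Langmuir kinetics
        rho = rho + dt * drho
    return rho

rho = steady_state_density()
print(rho.round(3))
```

The integration settling to a single density profile regardless of run length is consistent with the paper's result that the steady state is unique and globally attracting.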

  20. Review of smoothing methods for enhancement of noisy data from heavy-duty LHD mining machines

    NASA Astrophysics Data System (ADS)

    Wodecki, Jacek; Michalak, Anna; Stefaniak, Paweł

    2018-01-01

    Appropriate analysis of data measured on heavy-duty mining machines is essential for process monitoring, management and optimization. Some particular classes of machines, for example LHD (load-haul-dump) machines, hauling trucks, drilling/bolting machines etc., are characterized by cyclicity of operations. In those cases, identification of cycles and their segments (in other words, data segmentation) is key to evaluating their performance, which may be very useful from the management point of view, for example by enabling optimization of the process. However, in many cases such raw signals are contaminated with various artifacts, and in general are expected to be very noisy, which makes the segmentation task very difficult or even impossible. To deal with that problem, there is a need for efficient smoothing methods that retain the informative trends in the signals while disregarding noise and other undesired non-deterministic components. In this paper the authors present a review of various approaches to diagnostic data smoothing. The described methods can be applied in a fast and efficient way, effectively cleaning the signals while preserving the informative deterministic behaviour that is crucial for precise segmentation and other approaches to industrial data analysis.
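    Among the simplest smoothers in this family is the centred moving average. A minimal sketch on a synthetic noisy cyclic signal (illustrative of, not taken from, LHD duty-cycle data):

```python
import numpy as np

def moving_average(x, window=25):
    """Centred moving-average smoother; reflects the signal at the
    edges so the output has the same length as the input."""
    pad = window // 2
    xp = np.pad(x, pad, mode="reflect")
    return np.convolve(xp, np.ones(window) / window, mode="valid")

# Synthetic noisy cyclic signal standing in for a machine duty-cycle channel
rng = np.random.default_rng(0)
t = np.linspace(0.0, 6.0 * np.pi, 1200)
clean = np.sin(t)
noisy = clean + 0.4 * rng.standard_normal(t.size)
smooth = moving_average(noisy)
print(np.mean((noisy - clean) ** 2), np.mean((smooth - clean) ** 2))
```

The window length trades noise suppression against smearing of the cycle boundaries that segmentation later has to detect, which is why the review compares several smoothers rather than prescribing one.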

  1. Teaching the geological subsurface with 3D models

    NASA Astrophysics Data System (ADS)

    Thorpe, Steve; Ward, Emma

    2014-05-01

    3D geological models have great potential as a teaching resource for geological concepts because they allow the student to visualise and interrogate UK geology. They are especially useful when dealing with the conversion of 2D field, map and GIS outputs into three-dimensional geological units, which is a common problem for many students. Today's earth science students use a variety of skills and processes during their learning experience, including spatial thinking, image construction, detecting patterns, making predictions and deducing their own orientation. 3D geological models can reinforce spatial thinking strategies and encourage students to think about processes and properties, in turn helping the student to recognise pre-learnt geological principles in the field and to convert what they see at the surface into a picture of what is going on at depth. The British Geological Survey (BGS) has been producing digital 3D geological models for over 10 years. The models produced are revolutionising the working practices, data standards and products of the BGS. Sharing our geoscience information with academia is highlighted throughout the BGS strategy, as is instilling practical skills, such as model building and interpretation, in future geoscience professionals. In 2009 a project was launched to investigate the potential of the models as a teaching resource. The study included justifying if and how the models help students to learn, how models have been used historically, and how other forms of modelling are being used today. BGS now produce 3D geological models for use by anyone teaching or learning geoscience. They incorporate educational strategies that will develop geospatial skills and alleviate potential problems that some students experience. They are contained within contemporary case studies and show standard geological concepts, structures, sedimentary rocks, cross sections and field techniques.
3D geological models of the Isle of Wight and Ingleborough along with accompanying education material and a video tutorial guide are currently available to the public on our website www.bgs.ac.uk. 2014 will see the launch of a further 5-6 models, each illustrating different geological locations, rock types and complexities. This poster aims to show the methodology and techniques for generating a 3D geological model. It will provide background information on the project and how these models can be used as a teaching resource, either in a formal classroom setting or as a distance learning tool. The model allows the student to take part in virtual fieldwork, by viewing the landscape in association with the geological structures and processes that have shaped it.

  2. Identification of different geologic units using fuzzy constrained resistivity tomography

    NASA Astrophysics Data System (ADS)

    Singh, Anand; Sharma, S. P.

    2018-01-01

Different geophysical inversion strategies are utilized as components of an interpretation process that tries to separate geologic units based on the resistivity distribution. In the present study, we present the results of separating different geologic units using fuzzy constrained resistivity tomography. This was accomplished using fuzzy c-means, a clustering procedure that improves the 2D resistivity image and the geologic separation within the iterative minimization of the inversion. First, we developed a Matlab-based inversion technique to obtain a reliable resistivity image using different geophysical data sets (electrical resistivity and electromagnetic data). Following this, the recovered resistivity model was converted into a fuzzy constrained resistivity model by assigning each model cell, during the iterative process, to the cluster for which it has the highest membership value under the fuzzy c-means clustering procedure. The efficacy of the algorithm is demonstrated using three synthetic plane-wave electromagnetic data sets and one electrical resistivity field data set. The presented approach improves on the conventional inversion approach in differentiating between geologic units, provided the correct number of geologic units is identified. Further, fuzzy constrained resistivity tomography was performed to examine the augmentation of uranium mineralization in the Beldih open cast mine as a case study. We also compared geologic units identified by fuzzy constrained resistivity tomography with geologic units interpreted from borehole information.
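The clustering step described in this abstract can be sketched in a few lines. The following is an illustrative pure-NumPy fuzzy c-means on synthetic 1-D log-resistivity values, not the authors' Matlab code; the two-unit data, cluster count, and fuzziness exponent `m` are hypothetical choices.

```python
import numpy as np

def fuzzy_c_means(x, n_clusters, m=2.0, n_iter=100, tol=1e-6, seed=0):
    """Fuzzy c-means on 1-D data (e.g. log-resistivity of model cells)."""
    rng = np.random.default_rng(seed)
    u = rng.random((x.size, n_clusters))
    u /= u.sum(axis=1, keepdims=True)        # memberships sum to 1 per cell
    for _ in range(n_iter):
        w = u ** m
        centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u_new = inv / inv.sum(axis=1, keepdims=True)  # standard FCM membership update
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return centers, u

# Synthetic log-resistivities from two hypothetical geologic units
rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(1.0, 0.1, 50), rng.normal(3.0, 0.1, 50)])
centers, u = fuzzy_c_means(x, n_clusters=2)
labels = u.argmax(axis=1)  # assign each cell to its highest-membership cluster
```

In the paper's workflow the resulting memberships constrain the next inversion iteration; here `labels` simply illustrates the hard assignment of each cell to its most probable unit.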

  3. 77 FR 11565 - Agency Information Collection: Comment Request AGENCY: United States Geological Survey (USGS...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-27

    ... CONTACT: Douglas A. Howard, Associate Program Coordinator NCGMP (STATEMAP and EDMAP), USGS Geological... colleges and universities in the United States and Puerto Rico through an annual competitive cooperative agreement process. Every federal dollar that is awarded is matched with university funds. Geology professors...

  4. A Writing Template for Probing Students' Geological Sense of Place

    ERIC Educational Resources Information Center

    Clary, Renee M.; Wandersee, James H.

    2006-01-01

    Because many incoming geoscience students did not acknowledge their previous personal encounters with the earth's geological processes or products, we developed the Geological Sense of Place (GSP) template as a convenient way to assess students' earth science backgrounds through short answer, mini-essay, and induced associative responses. The GSP…

  5. Effect of Uncertainty on Deterministic Runway Scheduling

    NASA Technical Reports Server (NTRS)

    Gupta, Gautam; Malik, Waqar; Jung, Yoon C.

    2012-01-01

Active runway scheduling involves scheduling departures for takeoff and arrivals for runway crossing subject to numerous constraints. This paper evaluates the effect of uncertainty on a deterministic runway scheduler. The evaluation is done against a first-come-first-served (FCFS) scheme. In particular, the sequence from the deterministic scheduler is frozen and the times adjusted to satisfy all separation criteria; this approach is tested against FCFS. The comparison covers both system performance (throughput and system delay) and predictability, and varying levels of congestion are considered. Uncertainty is modeled in two ways: as equal uncertainty in runway availability for all aircraft, and as increasing uncertainty for later aircraft. Results indicate that the deterministic approach consistently performs better than first-come-first-served in both system performance and predictability.
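The "freeze the sequence, then adjust the times" step can be sketched as a single forward pass over the frozen order. This is a minimal illustration, not the paper's scheduler; the aircraft names, times, and pairwise separation values are hypothetical.

```python
def schedule_frozen_sequence(sequence, earliest, separation):
    """Assign runway-use times to a frozen sequence: each aircraft gets the
    later of its earliest availability and its predecessor's assigned time
    plus the required pairwise separation."""
    times = {}
    prev = None
    for ac in sequence:
        t = earliest[ac]
        if prev is not None:
            t = max(t, times[prev] + separation[(prev, ac)])
        times[ac] = t
        prev = ac
    return times

# Hypothetical data: times in seconds, separations depend on the aircraft pair
earliest = {'A1': 0, 'A2': 10, 'A3': 12}
separation = {('A1', 'A2'): 60, ('A2', 'A3'): 90}
times = schedule_frozen_sequence(['A1', 'A2', 'A3'], earliest, separation)
# A2 waits for separation behind A1; A3 waits behind A2
```

Because the sequence is fixed, only adjacent-pair separations need checking, which is what makes the frozen-sequence adjustment cheap compared with re-sequencing.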

  6. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.

    PubMed

    Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M

    2016-12-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.

  7. Efficient Geological Modelling of Large AEM Surveys

    NASA Astrophysics Data System (ADS)

    Bach, Torben; Martlev Pallesen, Tom; Jørgensen, Flemming; Lundh Gulbrandsen, Mats; Mejer Hansen, Thomas

    2014-05-01

Combining geological expert knowledge with geophysical observations into a final 3D geological model is, in most cases, not a straightforward process. It typically involves many types of data and requires an understanding of both the data and the geological target. When dealing with very large areas, such as modelling of large AEM surveys, the manual task for the geologist of correctly evaluating and properly utilising all the data available in the survey area becomes overwhelming. In the ERGO project (Efficient High-Resolution Geological Modelling) we address these issues and propose a new modelling methodology enabling fast and consistent modelling of very large areas. The vision of the project is to build a user-friendly expert system that enables the combination of very large amounts of geological and geophysical data with geological expert knowledge. This is done in an "auto-pilot" type functionality, named Smart Interpretation, designed to aid the geologist in the interpretation process. The core of the expert system is a statistical model that describes the relation between data and the geological interpretations made by a geological expert. This facilitates fast and consistent modelling of very large areas. It will enable the construction of high-resolution models, as the system will "learn" the geology of an area directly from interpretations made by a geological expert and instantly apply it to all hard data in the survey area, ensuring the utilisation of all the data available in the geological model. Another feature is that the statistical model the system creates for one area can be used in another area with similar data and geology. This feature can aid a less experienced geologist in building a geological model, guided by the experienced geologist's way of interpretation as quantified by the expert system in the core statistical model.
In this project presentation we provide some examples of the problems we are aiming to address in the project, and show some preliminary results.

  8. Metamorphic geology: Why should we care?

    NASA Astrophysics Data System (ADS)

    Tajcmanova, Lucie; Moulas, Evangelos; Vrijmoed, Johannes

    2016-04-01

Estimation of pressure-temperature (P-T) conditions from petrographic observations in metamorphic rocks has become common practice in petrology studies during the last 50 years. These data then often serve as a key input in geodynamic reconstructions and thus directly influence our understanding of lithospheric processes. Such an approach might have led the metamorphic geology field to a certain level of quiescence. Obtaining high-quality analytical data from metamorphic rocks has become a standard part of geology studies. The numerical tools for geodynamic reconstructions have evolved to a great extent as well. Furthermore, the increasing demand on using the Earth's interior for sustainable energy or nuclear waste disposal requires a better understanding of the physical processes involved in fluid-rock interaction. However, nowadays, metamorphic data have apparently lost their importance in the "bigger picture" of the Earth sciences. Interestingly, the suppression of the metamorphic geology discipline limits the potential for understanding the aforementioned physical processes that could otherwise have been exploited. In fact, those phenomena must be considered in the development of new generations of fully coupled numerical codes that involve reacting materials with changing porosity while obeying conservation of mass, momentum and energy. In our contribution, we would like to discuss the current role of metamorphic geology. We will offer food for thought and specifically touch upon the following questions: How can we revitalize metamorphic geology? How can we increase its importance? How can metamorphic geology contribute to societal issues?

  9. Ways forward in quantifying data uncertainty in geological databases

    NASA Astrophysics Data System (ADS)

    Kint, Lars; Chademenos, Vasileios; De Mol, Robin; Kapel, Michel; Lagring, Ruth; Stafleu, Jan; van Heteren, Sytze; Van Lancker, Vera

    2017-04-01

Issues of compatibility of geological data resulting from the merging of many different data sources and time periods may jeopardize harmonization of data products. Important progress has been made due to increasing data standardization, e.g., at a European scale through the SeaDataNet and Geo-Seas data management infrastructures. Common geological data standards are unambiguously defined, avoiding semantic overlap in geological data and associated metadata. Quality flagging is also applied increasingly, though ways of further propagating this information into data products are still in their infancy. For the Belgian and southern Netherlands part of the North Sea, databases are now rigorously re-analyzed in view of quantifying quality flags in terms of uncertainty to be propagated through a 3D voxel model of the subsurface (https://odnature.naturalsciences.be/tiles/). An approach is worked out to consistently account for differences in positioning, sampling gear, analysis procedures and vintage. The flag scaling is used in the interpolation process of geological data, but will also be used when visualizing the suitability of geological resources in a decision support system. Expert knowledge is systematically revisited so as to avoid inappropriate use of the flag scaling process. The quality flagging is also important when communicating results to end-users. Therefore, an open data policy in combination with several processing tools will be at the heart of a new Belgian geological data portal as a platform for knowledge building (KB) and knowledge management (KM) serving the marine geoscience community, the policy community and the public at large.

  10. Geomorphic effectiveness of a long profile shape and the role of inherent geological controls in the Himalayan hinterland area of the Ganga River basin, India

    NASA Astrophysics Data System (ADS)

    Sonam; Jain, Vikrant

    2018-03-01

    Long profiles of rivers provide a platform to analyse interaction between geological and geomorphic processes operating at different time scales. Identification of an appropriate model for river long profile becomes important in order to establish a quantitative relationship between the profile shape, its geomorphic effectiveness, and inherent geological characteristics. This work highlights the variability in the long profile shape of the Ganga River and its major tributaries, its impact on stream power distribution pattern, and role of the geological controls on it. Long profile shapes are represented by the sum of two exponential functions through the curve fitting method. We have shown that coefficients of river long profile equations are governed by the geological characteristics of subbasins. These equations further define the spatial distribution pattern of stream power and help to understand stream power variability in different geological terrains. Spatial distribution of stream power in different geological terrains successfully explains spatial variability in geomorphic processes within the Himalayan hinterland area. In general, the stream power peaks of larger rivers lie in the Higher Himalaya, and rivers in the eastern hinterland area are characterised by the highest magnitude of stream power.

  11. Geoscientific Mapping of Vesta by the Dawn Mission

    NASA Technical Reports Server (NTRS)

    Jaumann, R.; Pieters, C. M.; Neukum, G.; Mottola, S.; DeSanctis, M. C.; Russell, C. T.; Raymond, C. A.; McSween, H. Y.; Roatsch, T.; Nathues, A.; hide

    2011-01-01

The geologic objectives of the Dawn Mission are to derive Vesta's shape, map the surface geology, understand the geological context and contribute to the determination of the asteroid's origin and evolution. Geomorphology and the distribution of surface features will provide evidence for impact cratering, tectonic activity, volcanism, and regolith processes. Spectral measurements of the surface will provide evidence of the compositional characteristics of geological units. Age information, as derived from crater size-frequency distributions, places the structural and compositional mapping results into a stratigraphic context, thus revealing the geologic history of Vesta.

  12. Taking geoscience to the IMAX: 3D and 4D insight into geological processes using micro-CT

    NASA Astrophysics Data System (ADS)

    Dobson, Katherine; Dingwell, Don; Hess, Kai-Uwe; Withers, Philip; Lee, Peter; Pistone, Mattia; Fife, Julie; Atwood, Robert

    2015-04-01

Geology is inherently dynamic, and full understanding of any geological system can only be achieved by considering the processes by which change occurs. Analytical limitations mean understanding has largely developed from ex situ analyses of the products of geological change, rather than of the processes themselves. Most methods essentially utilise "snapshot" sampling: from thin-section petrography to high-resolution crystal chemical stratigraphy and field volcanology, we capture an incomplete view of a spatially and temporally variable system. Even with detailed experimental work, we can usually only analyse samples before and after we perform an experiment, as routine analysis methods are destructive. Serial sectioning and quenched experiments stopped at different stages can give some insight into the third and fourth dimension, but the true scaling of the processes from the laboratory to the 4D (3D + time) geosphere is still poorly understood. Micro-computed tomography (XMT) can visualise the internal structures and spatial associations within geological samples non-destructively. With image resolutions between 200 microns and 50 nanometres, tomography has the ability to provide a detailed sample assessment in 3D, and quantification of mineral associations, porosity, grain orientations, fracture alignments and many other features. This allows better understanding of the role of the complex geometries and associations within the samples, but the challenge of capturing the processes that generate and modify these structures remains. To capture processes, recent work has focused on developing experimental capability for in situ experiments on geological materials. Data presented will showcase examples from recent experiments where high-speed synchrotron X-ray tomography has been used to acquire each 3D image in under 2 seconds. 
We present a suite of studies that showcase how it is now possible to take the quantification of many geological processes into 3D and 4D. This includes tracking the interactions between bubbles and crystals in a deforming magma, the dissolution of individual mineral grains from low-grade ores, and the quantification of three-phase flow in sediments and soils. Our aim is to demonstrate how XMT can provide new insight into dynamic processes in all geoscience disciplines, and to give some insight into where 4D geoscience could take us next.

  13. Efficient room-temperature source of polarized single photons

    DOEpatents

    Lukishova, Svetlana G.; Boyd, Robert W.; Stroud, Carlos R.

    2007-08-07

An efficient technique for producing deterministically polarized single photons uses liquid-crystal hosts of either monomeric or oligomeric/polymeric form to preferentially align the single emitters for maximum excitation efficiency. Deterministic molecular alignment also provides deterministically polarized output photons. Planar-aligned cholesteric liquid crystal hosts act as 1-D photonic-band-gap microcavities, tunable to the emitter fluorescence band, to increase source efficiency, and liquid crystal technology prevents emitter bleaching. Emitters comprise soluble dyes, inorganic nanocrystals or trivalent rare-earth chelates.

  14. Introduction of Virtualization Technology to Multi-Process Model Checking

    NASA Technical Reports Server (NTRS)

    Leungwattanakit, Watcharin; Artho, Cyrille; Hagiya, Masami; Tanabe, Yoshinori; Yamamoto, Mitsuharu

    2009-01-01

    Model checkers find failures in software by exploring every possible execution schedule. Java PathFinder (JPF), a Java model checker, has been extended recently to cover networked applications by caching data transferred in a communication channel. A target process is executed by JPF, whereas its peer process runs on a regular virtual machine outside. However, non-deterministic target programs may produce different output data in each schedule, causing the cache to restart the peer process to handle the different set of data. Virtualization tools could help us restore previous states of peers, eliminating peer restart. This paper proposes the application of virtualization technology to networked model checking, concentrating on JPF.

  15. Controlled deterministic implantation by nanostencil lithography at the limit of ion-aperture straggling

    NASA Astrophysics Data System (ADS)

    Alves, A. D. C.; Newnham, J.; van Donkelaar, J. A.; Rubanov, S.; McCallum, J. C.; Jamieson, D. N.

    2013-04-01

Solid state electronic devices fabricated in silicon employ many ion implantation steps in their fabrication. In nanoscale devices, deterministic implants of dopant atoms with high spatial precision will be needed to overcome problems with statistical variations in device characteristics and to open new functionalities based on controlled quantum states of single atoms. However, deterministically placing a dopant atom with the required precision is a significant technological challenge. Here we address this challenge with a strategy based on stepped nanostencil lithography for the construction of arrays of single implanted atoms. We address the limit on spatial precision imposed by ion straggling in the nanostencil, fabricated with the readily available focused ion beam milling technique followed by Pt deposition. Two nanostencils have been fabricated: a 60 nm wide aperture in a 3 μm thick Si cantilever and a 30 nm wide aperture in a 200 nm thick Si3N4 membrane. The 30 nm wide aperture demonstrates the fabrication process for sub-50 nm apertures, while the 60 nm aperture was characterized with 500 keV He+ ion forward scattering to measure the effect of ion straggling in the collimator and deduce a model for its internal structure using the GEANT4 ion transport code. This model is then applied to simulate collimation of a 14 keV P+ ion beam in a 200 nm thick Si3N4 membrane nanostencil suitable for the implantation of donors in silicon. We simulate collimating apertures with widths in the range of 10-50 nm because we expect the onset of J-coupling in a device with 30 nm donor spacing. We find that straggling in the nanostencil produces mis-located implanted ions with a probability between 0.001 and 0.08, depending on the internal collimator profile and the alignment with the beam direction. This result is favourable for the rapid prototyping of a proof-of-principle device containing multiple deterministically implanted dopants.

  16. δ-exceedance records and random adaptive walks

    NASA Astrophysics Data System (ADS)

    Park, Su-Chan; Krug, Joachim

    2016-08-01

We study a modified record process where the kth record in a series of independent and identically distributed random variables is defined recursively through the condition Y_k > Y_{k-1} - δ_{k-1}, with a deterministic sequence δ_k > 0 called the handicap. For constant δ_k ≡ δ and exponentially distributed random variables it has been shown in previous work that the process displays a phase transition as a function of δ between a normal phase, where the mean record value increases indefinitely, and a stationary phase, where the mean record value remains bounded and a finite fraction of all entries are records (Park et al 2015 Phys. Rev. E 91 042707). Here we explore the behavior for general probability distributions and for decreasing and increasing sequences δ_k, focusing in particular on the case when δ_k matches the typical spacing between subsequent records in the underlying simple record process without handicap. We find that a continuous phase transition occurs only in the exponential case, but a novel kind of first-order transition emerges when δ_k is increasing. The problem is partly motivated by the dynamics of evolutionary adaptation in biological fitness landscapes, where δ_k corresponds to the change of the deterministic fitness component after k mutational steps. The results for the record process are used to compute the mean number of steps that a population performs in such a landscape before being trapped at a local fitness maximum.
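The recursive record condition is easy to simulate directly. A minimal sketch with a constant handicap follows; the choice δ = 2 for unit-mean exponential variables is an illustrative parameter intended to land in the stationary phase (handicap larger than the typical record spacing), not a value from the paper.

```python
import numpy as np

def count_delta_records(xs, delta):
    """Count δ-exceedance records: entry k is a record if it exceeds the
    current record value minus the constant handicap delta."""
    n_rec, current = 1, xs[0]          # the first entry is a record by convention
    for x in xs[1:]:
        if x > current - delta:
            n_rec += 1
            current = x
    return n_rec

rng = np.random.default_rng(0)
xs = rng.exponential(1.0, 100_000)

n_classical = count_delta_records(xs, 0.0)        # ordinary records: only ~ln(n)
frac_handicap = count_delta_records(xs, 2.0) / xs.size  # a finite fraction of entries
```

With δ = 0 the process reduces to the classical record process, whose record count grows only logarithmically; a sufficiently large constant handicap instead keeps the record value bounded, so a finite fraction of all entries qualify as records.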

  17. Community assembly processes underlying phytoplankton and bacterioplankton across a hydrologic change in a human-impacted river.

    PubMed

    Isabwe, Alain; Yang, Jun R; Wang, Yongming; Liu, Lemian; Chen, Huihuang; Yang, Jun

    2018-07-15

Although the influence of microbial community assembly processes on aquatic ecosystem function and biodiversity is well known, the processes that govern planktonic communities in human-impacted rivers remain largely unstudied. Here, we used multivariate statistics and a null model approach to test the hypothesis that environmental conditions and obstructed dispersal opportunities dictate a deterministic community assembly for phytoplankton and bacterioplankton across contrasting hydrographic conditions in a subtropical mid-sized river (Jiulong River, southeast China). Variation partitioning analysis showed that the explanatory power of local environmental variables was larger than that of the spatial variables for both plankton communities during the dry season. During the wet season, phytoplankton community variation was mainly explained by local environmental variables, whereas the variance in bacterioplankton was explained by both environmental and spatial predictors. The null model based on Raup-Crick coefficients for both planktonic groups suggested little evidence of stochastic processes involving dispersal and random distribution. Our results showed that hydrological change and landscape structure act together to cause divergence in communities along the river channel, thereby dictating a deterministic assembly, and that selection exceeds dispersal limitation during the dry season. Therefore, to protect the ecological integrity of human-impacted rivers, watershed managers should consider not only local environmental conditions but also dispersal routes, to account for the effect of the regional species pool on local communities. Copyright © 2018 Elsevier B.V. All rights reserved.
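The Raup-Crick null model used here compares observed species overlap against randomized communities. The sketch below is one common randomization formulation (richness preserved, species drawn with occupancy-based weights), not the authors' exact implementation; the site vectors and the +1 weight smoothing are illustrative assumptions.

```python
import numpy as np

def raup_crick(a, b, n_rand=999, seed=0):
    """Randomized Raup-Crick metric for two presence/absence vectors: the
    fraction of null community pairs (same richness per site, species drawn
    with occupancy-based weights) sharing at least as many species as observed.
    Values near 1 mean the sites share fewer species than expected by chance."""
    rng = np.random.default_rng(seed)
    n_sp = a.size
    weights = a.astype(float) + b.astype(float) + 1.0  # +1 smoothing for absences
    p = weights / weights.sum()
    obs_shared = int(np.sum(a & b))
    hits = 0
    for _ in range(n_rand):
        ra = rng.choice(n_sp, size=int(a.sum()), replace=False, p=p)
        rb = rng.choice(n_sp, size=int(b.sum()), replace=False, p=p)
        if np.intersect1d(ra, rb).size >= obs_shared:
            hits += 1
    return hits / n_rand

a = np.zeros(50, dtype=bool); a[:10] = True     # site 1: species 0-9
b = np.zeros(50, dtype=bool); b[10:20] = True   # site 2: species 10-19, no overlap
rc_disjoint = raup_crick(a, b, n_rand=200)      # maximal: no null pair shares less
rc_identical = raup_crick(a, a.copy(), n_rand=200)  # near 0: more overlap than chance
```

Values clustered near 1 across site pairs, as in the deterministic-assembly interpretation above, indicate communities more divergent than the stochastic expectation.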

  18. Grinding aspheric and freeform micro-optical molds

    NASA Astrophysics Data System (ADS)

    Tohme, Yazid E.

    2007-02-01

Fueled by the need for better-performing optics, glass optics are now replacing plastic optics in many industrial and consumer electronic devices. One of these devices is the mobile phone camera. The optical sub-assembly in a mobile phone includes several micro lenses that are spherical and/or aspherical in shape and require form tolerances in the submicron range. These micro glass lenses are mass produced by a replication process known as glass press molding. The process entails the compression of a glass gob between two precise optical-quality molds at an elevated temperature, usually near the transition temperature of the glass material. The elevated forces and temperatures required in the glass molding process limit the molds to very tough materials such as tungsten carbide or silicon carbide, which can withstand large pressing forces at high temperatures without any significant deformation. These materials offer great mechanical properties for glass press molding, but they are also a challenge to machine to submicron accuracy. The work in this paper discusses a deterministic micro grinding manufacturing process referred to as wheel normal grinding, which is utilized to produce these optical-quality molds. Wheel normal grinding is more accurate and more deterministic than most other grinding techniques and can produce molds to the form and finish tolerances required for optical molding. This method relies on the ability to recognize and compensate for grinding wheel wear and repeatable machine errors. Results will be presented to illustrate the accuracy of this micro grinding technique.

  19. Chemical, thermal and impact processing of asteroids

    NASA Technical Reports Server (NTRS)

    Scott, E. R. D.; Taylor, G. J.; Newsom, H. E.; Herbert, F.; Zolensky, M.

    1989-01-01

The geological effects of impacts, heating, melting, core formation, and aqueous alteration on asteroids are reviewed. A review of possible heat sources appears to favor an important role for electrical induction heating. The effects of each geologic process, acting individually and in combination with others, are considered; it is concluded that there is much evidence for impacts during alteration, metamorphism and melting. These interactions vastly increased the geologic diversity of the asteroid belt. Subsequent impacts of cool asteroids did not reduce this diversity. Instead, new rock types were created by mixing, brecciation and minor melting.

  20. State geological surveys: Their growing national role in policy

    USGS Publications Warehouse

    Gerhard, L.C.

    2000-01-01

    State geological surveys vary in organizational structure, but are political powers in the field of geology by virtue of their intimate knowledge of and involvement in legislative and political processes. Origins of state geological surveys lie in the recognition of society that settlement and prosperity depended on access to a variety of natural resources, resources that are most familiar to geologists. As the surveys adapt to modern societal pressures, making geology serve the public has become the new mission for many state geological surveys. Geologic mapping was the foundation of most early surveys, and the state surveys have brought mapping back into the public realm to meet today's challenges of growing population density, living environment desires, and resource access.

  1. Corrections to chance fluctuations: quantum mind in biological evolution?

    PubMed

    Damiani, Giuseppe

    2009-01-01

    According to neo-Darwinian theory, biological evolution is produced by natural selection of random hereditary variations. This assumption stems from the idea of a mechanical and deterministic world based on the laws of classic physics. However, the increased knowledge of relationships between metabolism, epigenetic systems, and editing of nucleic acids suggests the existence of self-organized processes of adaptive evolution in response to environmental stresses. Living organisms are open thermodynamic systems which use entropic decay of external source of electromagnetic energy to increase their internal dynamic order and to generate new genetic and epigenetic information with a high degree of coherency and teleonomic creativity. Sensing, information processing, and decision making of biological systems might be mainly quantum phenomena. Amplification of microscopic quantum events using the long-range correlation of fractal structures, at the borderline between deterministic order and unpredictable chaos, may be used to direct a reproducible transition of the biological systems towards a defined macroscopic state. The discoveries of many natural genetic engineering systems, the ability to choose the most effective solutions, and the emergence of complex forms of consciousness at different levels confirm the importance of mind-action directed processes in biological evolution, as suggested by Alfred Russel Wallace. Although the main Darwinian principles will remain a crucial component of our understanding of evolution, a radical rethinking of the conceptual structure of the neo-Darwinian theory is needed.

  2. Basic Research Needs for Electrical Energy Storage. Report of the Basic Energy Sciences Workshop on Electrical Energy Storage, April 2-4, 2007

    DOE R&D Accomplishments Database

    Goodenough, J. B.; Abruna, H. D.; Buchanan, M. V.

    2007-04-04

    To identify research areas in geosciences, such as behavior of multiphase fluid-solid systems on a variety of scales, chemical migration processes in geologic media, characterization of geologic systems, and modeling and simulation of geologic systems, needed for improved energy systems.

  3. Sedimentary exhalative (sedex) zinc-lead-silver deposit model

    USGS Publications Warehouse

    Emsbo, Poul; Seal, Robert R.; Breit, George N.; Diehl, Sharon F.; Shah, Anjana K.

    2016-10-28

This report draws on previous syntheses and basic research studies of sedimentary exhalative (sedex) deposits to arrive at the defining criteria, both descriptive and genetic, for sedex-type deposits. Studies of the tectonic, sedimentary, and fluid evolution of modern and ancient sedimentary basins have also been used to select defining criteria. The focus here is on the geologic characteristics of sedex deposit-hosting basins that contain greater than 10 million metric tons of zinc and lead. The enormous size of sedex deposits strongly suggests that basin-scale geologic processes are involved in their formation. It follows that mass balance constraints of basinal processes can provide a conceptual underpinning for the evaluation of potential ore-forming mechanisms and the identification of geologic indicators for ore potential in specific sedimentary basins. Empirical data and a genetic understanding of the physicochemical, geologic, and mass balance conditions required for each of these elements are used to establish a hierarchy of quantifiable geologic criteria that can be used in U.S. Geological Survey national assessments. This report also provides a comprehensive evaluation of environmental considerations associated with the mining of sedex deposits.

  4. Nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates

    DOEpatents

    Melechko, Anatoli V [Oak Ridge, TN; McKnight, Timothy E [Greenback, TN; Guillorn, Michael A [Ithaca, NY; Ilic, Bojan [Ithaca, NY; Merkulov, Vladimir I [Knoxville, TN; Doktycz, Mitchel J [Knoxville, TN; Lowndes, Douglas H [Knoxville, TN; Simpson, Michael L [Knoxville, TN

    2011-08-23

    Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus, includes a substrate and a nanoreplicant structure coupled to a surface of the substrate.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, H.

    In this dissertation we study a procedure which restarts a Markov process when the process is killed by some arbitrary multiplicative functional. The regenerative nature of this revival procedure is characterized through a Markov renewal equation. An interesting duality between the revival procedure and the classical killing operation is found. Under the condition that the multiplicative functional possesses an intensity, the generators of the revival process can be written down explicitly. An intimate connection is also found between the perturbation of the sample path of a Markov process and the perturbation of a generator (in Kato's sense). The applications of the theory include the study of processes such as the piecewise-deterministic Markov process, the virtual waiting time process, and the first entrance decomposition (taboo probability).

  6. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology

    PubMed Central

    Gao, Fei; Li, Ye; Novak, Igor L.; Slepchenko, Boris M.

    2016-01-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium ‘sparks’ as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell. PMID:27959915
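
    The operator-splitting idea behind such hybrid solvers can be illustrated in a few lines. The sketch below is a toy, not the paper's Smoldyn/Virtual Cell coupling: an exactly integrated deterministic update of a continuous concentration alternates with an exponential-clock flip of a two-state stochastic channel, loosely in the spirit of a calcium-spark model; all rate constants are invented for illustration.

```python
import math
import random

def hybrid_step(c, state, dt, k_open=2.0, k_close=5.0, influx=10.0, decay=1.0):
    """One operator-split step: a deterministic update of the concentration c,
    then a stochastic flip of a two-state channel (0 = closed, 1 = open)."""
    # Deterministic part: dc/dt = influx*state - decay*c, solved exactly over dt.
    target = influx * state / decay
    c = target + (c - target) * math.exp(-decay * dt)
    # Stochastic part: the channel flips with a rate that depends on its state.
    rate = k_open if state == 0 else k_close
    if random.random() < 1.0 - math.exp(-rate * dt):
        state = 1 - state
    return c, state

random.seed(0)
c, state, dt = 0.0, 0, 0.001
trace = []
for _ in range(5000):
    c, state = hybrid_step(c, state, dt)
    trace.append(c)
print(f"mean concentration over 5 time units: {sum(trace) / len(trace):.3f}")
```

    A production hybrid solver replaces the scalar update with a PDE step on a spatial mesh and the channel flip with a particle-based stochastic simulator, but the alternation of the two sub-steps is the same.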

  7. Stochasticity and determinism in models of hematopoiesis.

    PubMed

    Kimmel, Marek

    2014-01-01

    This chapter represents a novel view of modeling in hematopoiesis, synthesizing both deterministic and stochastic approaches. Whereas the stochastic models work in situations where chance dominates, for example when the number of cells is small, or under random mutations, the deterministic models are more important for large-scale, normal hematopoiesis. New types of models are on the horizon. These models attempt to account for distributed environments such as hematopoietic niches and their impact on dynamics. Mixed effects of such structures and chance events are largely unknown and constitute both a challenge and promise for modeling. Our discussion is presented under the separate headings of deterministic and stochastic modeling; however, the connections between both are frequently mentioned. Four case studies are included to elucidate important examples. We also include a primer of deterministic and stochastic dynamics for the reader's use.
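
    The deterministic/stochastic distinction the chapter describes can be made concrete with a minimal pair of models, sketched below under invented parameters: a closed-form exponential-growth ODE next to a Gillespie birth-death simulation. For small initial cell counts the stochastic model exhibits extinction events that the deterministic model cannot represent.

```python
import math
import random

def deterministic(n0, b, d, t):
    """Deterministic model: dN/dt = (b - d) N, solved in closed form."""
    return n0 * math.exp((b - d) * t)

def gillespie(n0, b, d, t_end, rng):
    """Stochastic linear birth-death process via the Gillespie algorithm."""
    n, t = n0, 0.0
    while n > 0:
        total_rate = (b + d) * n
        t += rng.expovariate(total_rate)
        if t > t_end:
            break
        n += 1 if rng.random() < b / (b + d) else -1
    return n

rng = random.Random(42)
b, d, n0, t_end = 1.0, 0.9, 5, 5.0
runs = [gillespie(n0, b, d, t_end, rng) for _ in range(2000)]
extinct = sum(1 for n in runs if n == 0) / len(runs)
print(f"deterministic N(t) = {deterministic(n0, b, d, t_end):.1f}, "
      f"stochastic mean = {sum(runs) / len(runs):.1f}, "
      f"extinction fraction = {extinct:.2f}")
```

    The stochastic mean tracks the deterministic trajectory, while a substantial fraction of individual realizations go extinct, which is the regime where, as the chapter notes, stochastic models dominate.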

  8. The Importance of Mars Samples in Constraining the Geological and Geophysical Processes on Mars and the Nature of its Crust, Mantle, and Core

    NASA Astrophysics Data System (ADS)

    iMOST Team; Herd, C. D. K.; Ammannito, E.; Anand, M.; Debaille, V.; Hallis, L. J.; McCubbin, F. M.; Schmitz, N.; Usui, T.; Weiss, B. P.; Altieri, F.; Amelin, Y.; Beaty, D. W.; Benning, L. G.; Bishop, J. L.; Borg, L. E.; Boucher, D.; Brucato, J. R.; Busemann, H.; Campbell, K. A.; Carrier, B. L.; Czaja, A. D.; Des Marais, D. J.; Dixon, M.; Ehlmann, B. L.; Farmer, J. D.; Fernandez-Remolar, D. C.; Fogarty, J.; Glavin, D. P.; Goreva, Y. S.; Grady, M. M.; Harrington, A. D.; Hausrath, E. M.; Horgan, B.; Humayun, M.; Kleine, T.; Kleinhenz, J.; Mangold, N.; Mackelprang, R.; Mayhew, L. E.; McCoy, J. T.; McLennan, S. M.; McSween, H. Y.; Moser, D. E.; Moynier, F.; Mustard, J. F.; Niles, P. B.; Ori, G. G.; Raulin, F.; Rettberg, P.; Rucker, M. A.; Sefton-Nash, E.; Sephton, M. A.; Shaheen, R.; Shuster, D. L.; Siljestrom, S.; Smith, C. L.; Spry, J. A.; Steele, A.; Swindle, T. D.; ten Kate, I. L.; Tosca, N. J.; Van Kranendonk, M. J.; Wadhwa, M.; Werner, S. C.; Westall, F.; Wheeler, R. M.; Zipfel, J.; Zorzano, M. P.

    2018-04-01

    We present the main sample types from any potential Mars Sample Return landing site that would be required to constrain the geological and geophysical processes on Mars, including the origin and nature of its crust, mantle, and core.

  9. A Bayesian Framework of Uncertainties Integration in 3D Geological Model

    NASA Astrophysics Data System (ADS)

    Liang, D.; Liu, X.

    2017-12-01

    3D geological models can describe complicated geological phenomena in an intuitive way, but their application may be limited by uncertain factors. Great progress has been made over the years, yet many studies decompose the uncertainties of a geological model and analyze them separately, ignoring the combined impact of multi-source uncertainties. To evaluate this combined uncertainty, we choose probability distributions to quantify uncertainty and propose a Bayesian framework for uncertainty integration. With this framework, we integrate data errors, spatial randomness, and cognitive information into a posterior distribution to evaluate the overall uncertainty of the geological model. Uncertainties propagate and accumulate in the modeling process, so the gradual integration of multi-source uncertainty simulates this propagation. Bayesian inference accomplishes the uncertainty updating in the modeling process. The maximum entropy principle is effective for estimating the prior probability distribution, ensuring that the prior satisfies the constraints supplied by the given information with minimum prejudice. In the end, we obtain a posterior distribution that represents the combined impact of all uncertain factors on the spatial structure of the geological model. The framework provides a solution for evaluating the combined impact of multi-source uncertainties on a geological model and an approach to studying the uncertainty propagation mechanism in geological modeling.
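
    In one dimension, the kind of Bayesian updating described above reduces to a conjugate normal update. The sketch below uses invented depths and variances and is far simpler than the paper's multi-source framework; it shows how a prior estimate of a horizon depth is progressively sharpened by noisy borehole observations.

```python
def bayes_update_normal(prior_mu, prior_var, obs, obs_var):
    """Conjugate normal update: combine a prior depth estimate with one
    noisy observation; returns the posterior mean and variance."""
    w = prior_var / (prior_var + obs_var)          # weight given to the data
    post_mu = prior_mu + w * (obs - prior_mu)
    post_var = prior_var * obs_var / (prior_var + obs_var)
    return post_mu, post_var

# Prior belief about a horizon depth (m), then three borehole picks with error.
mu, var = 120.0, 25.0
for obs in (112.0, 115.0, 113.5):
    mu, var = bayes_update_normal(mu, var, obs, obs_var=4.0)
print(f"posterior depth = {mu:.1f} m, sd = {var ** 0.5:.2f} m")
```

    Each update shrinks the variance, which is the one-dimensional analogue of uncertainty "accumulating and updating" as data sources are integrated into the model.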

  10. Coupling geostatistics to detailed reservoir description allows better visualization and more accurate characterization/simulation of turbidite reservoirs: Elk Hills oil field, California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allan, M.E.; Wilson, M.L.; Wightman, J.

    1996-12-31

    The Elk Hills giant oilfield, located in the southern San Joaquin Valley of California, has produced 1.1 billion barrels of oil from Miocene and shallow Pliocene reservoirs. 65% of the current 64,000 BOPD production is from the pressure-supported, deeper Miocene turbidite sands. In the turbidite sands of the 31 S structure, large porosity & permeability variations in the Main Body B and Western 31 S sands cause problems with the efficiency of the waterflooding. These variations have now been quantified and visualized using geostatistics. The end result is a more detailed reservoir characterization for simulation. Traditional reservoir descriptions based on marker correlations, cross-sections and mapping do not provide enough detail to capture the short-scale stratigraphic heterogeneity needed for adequate reservoir simulation. These deterministic descriptions are inadequate to tie with production data as the thinly bedded sand/shale sequences blur into a falsely homogeneous picture. By studying the variability of the geologic & petrophysical data vertically within each wellbore and spatially from well to well, a geostatistical reservoir description has been developed. It captures the natural variability of the sands and shales that was lacking from earlier work. These geostatistical studies allow the geologic and petrophysical characteristics to be considered in a probabilistic model. The end-product is a reservoir description that captures the variability of the reservoir sequences and can be used as a more realistic starting point for history matching and reservoir simulation.
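
    The usual first step in such a geostatistical description is an empirical variogram, which quantifies how dissimilarity between samples grows with separation distance, vertically within a wellbore or laterally from well to well. The sketch below uses synthetic porosity values on a toy transect, not Elk Hills data.

```python
import math

def empirical_variogram(points, values, lag, tol):
    """Empirical semivariogram for one lag: half the average squared difference
    of values at point pairs separated by approximately `lag` (within +/- tol)."""
    num, count = 0.0, 0
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if abs(math.dist(points[i], points[j]) - lag) <= tol:
                num += (values[i] - values[j]) ** 2
                count += 1
    return num / (2 * count) if count else float("nan")

# Toy porosity samples along a transect: nearby wells are more alike than distant ones.
pts = [(float(x), 0.0) for x in range(10)]
por = [10.0, 10.5, 11.0, 12.0, 13.5, 15.0, 16.0, 18.0, 19.0, 21.0]
g1 = empirical_variogram(pts, por, lag=1.0, tol=0.1)
g5 = empirical_variogram(pts, por, lag=5.0, tol=0.1)
print(f"gamma(1) = {g1:.2f}, gamma(5) = {g5:.2f}")
```

    A variogram model fitted to such points is what drives the probabilistic (e.g., sequential simulation) step that replaces the "falsely homogeneous" deterministic description.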

  11. Coupling geostatistics to detailed reservoir description allows better visualization and more accurate characterization/simulation of turbidite reservoirs: Elk Hills oil field, California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allan, M.E.; Wilson, M.L.; Wightman, J.

    1996-01-01

    The Elk Hills giant oilfield, located in the southern San Joaquin Valley of California, has produced 1.1 billion barrels of oil from Miocene and shallow Pliocene reservoirs. 65% of the current 64,000 BOPD production is from the pressure-supported, deeper Miocene turbidite sands. In the turbidite sands of the 31 S structure, large porosity and permeability variations in the Main Body B and Western 31 S sands cause problems with the efficiency of the waterflooding. These variations have now been quantified and visualized using geostatistics. The end result is a more detailed reservoir characterization for simulation. Traditional reservoir descriptions based on marker correlations, cross-sections and mapping do not provide enough detail to capture the short-scale stratigraphic heterogeneity needed for adequate reservoir simulation. These deterministic descriptions are inadequate to tie with production data as the thinly bedded sand/shale sequences blur into a falsely homogeneous picture. By studying the variability of the geologic and petrophysical data vertically within each wellbore and spatially from well to well, a geostatistical reservoir description has been developed. It captures the natural variability of the sands and shales that was lacking from earlier work. These geostatistical studies allow the geologic and petrophysical characteristics to be considered in a probabilistic model. The end-product is a reservoir description that captures the variability of the reservoir sequences and can be used as a more realistic starting point for history matching and reservoir simulation.

  12. Failed rib region prediction in a human body model during crash events with precrash braking.

    PubMed

    Guleyupoglu, B; Koya, B; Barnard, R; Gayzik, F S

    2018-02-28

    The objective of this study is 2-fold. We used a validated human body finite element model to study the predicted chest injury (focusing on rib fracture as a function of element strain) based on varying levels of simulated precrash braking. Furthermore, we compare deterministic and probabilistic methods of rib injury prediction in the computational model. The Global Human Body Models Consortium (GHBMC) M50-O model was gravity settled in the driver position of a generic interior equipped with an advanced 3-point belt and airbag. Twelve cases were investigated with permutations for failure, precrash braking system, and crash severity. The severities used were median (17 kph), severe (34 kph), and New Car Assessment Program (NCAP; 56.4 kph). Cases with failure enabled removed rib cortical bone elements once 1.8% effective plastic strain was exceeded. Alternatively, a probabilistic framework found in the literature was used to predict rib failure. Both the probabilistic and deterministic methods take into consideration location (anterior, lateral, and posterior). The deterministic method is based on a rubric that defines failed rib regions dependent on a threshold for contiguous failed elements. The probabilistic method depends on age-based strain and failure functions. Kinematics between both methods were similar (peak max deviation: ΔX head = 17 mm; ΔZ head = 4 mm; ΔX thorax = 5 mm; ΔZ thorax = 1 mm). Seat belt forces at the time of probabilistic failed region initiation were lower than those at deterministic failed region initiation. The probabilistic method for rib fracture predicted more failed regions in the rib (an analog for fracture) than the deterministic method in all but 1 case where they were equal. 
The failed-region patterns between the two methods are similar; however, differences arise because element elimination redistributes stress, causing probabilistic failed regions to continue to accumulate after no further deterministic failed regions would be predicted. Both the probabilistic and deterministic methods indicate similar trends with regard to the effect of precrash braking; however, there are tradeoffs. The deterministic failed-region method is more spatially sensitive to failure and more sensitive to belt loads. The probabilistic failed-region method allows for increased postprocessing capability with respect to age, and it predicted more failed regions than the deterministic method due to differences in force distribution.
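
    The two failure rules compared in the study can be sketched side by side. The 1.8% effective-plastic-strain threshold below follows the deterministic rule quoted above; the logistic curve is only a hypothetical stand-in for the age-based probabilistic failure functions from the literature, which are not reproduced here, and its steepness parameter is invented.

```python
import math

def deterministic_failed(strains, threshold=0.018):
    """Deterministic rule: an element fails once effective plastic strain
    exceeds a fixed threshold (1.8% in the setup described above)."""
    return [s > threshold for s in strains]

def failure_probability(strain, mu=0.018, k=400.0):
    """Probabilistic rule (illustrative logistic curve, not the published
    age-based function): failure probability rises smoothly around mu."""
    return 1.0 / (1.0 + math.exp(-k * (strain - mu)))

strains = [0.005, 0.012, 0.017, 0.019, 0.025]
det = deterministic_failed(strains)
prob = [failure_probability(s) for s in strains]
for s, d, p in zip(strains, det, prob):
    print(f"strain={s:.3f}  deterministic={'FAIL' if d else 'ok  '}  P(fail)={p:.2f}")
```

    The qualitative point survives the simplification: the deterministic rule is all-or-nothing at the threshold, while the probabilistic rule assigns nonzero failure probability to near-threshold elements, which is one reason it tends to predict more failed regions.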

  13. Mississippi River delta plain, Louisiana coast, and inner shelf Holocene geologic framework, processes, and resources

    USGS Publications Warehouse

    Williams, S. Jeffress; Kulp, Mark; Penland, Shea; Kindinger, Jack L.; Flocks, James G.; Buster, Noreen A.; Holmes, Charles W.

    2009-01-01

    Extending nearly 400 km from Sabine Pass on the Texas-Louisiana border east to the Chandeleur Islands, the Louisiana coastal zone (Fig. 11.1) along the north-central Gulf of Mexico is the southern terminus of the largest drainage basin in North America (>3.3 million km2), which includes the Mississippi River delta plain where approximately 6.2 million kilograms per year of sediment is delivered to the Gulf of Mexico (Coleman 1988). The Mississippi River, active since at least Late Jurassic time (Mann and Thomas 1968), is the main distributary channel of this drainage system and during the Holocene has constructed one of the largest delta plains in the world, larger than 30,000 km2 (Coleman and Prior 1980; Coleman 1981; Coleman et al. 1998). The subsurface geology and geomorphology of the Louisiana coastal zone reflect a complex history of regional tectonic events and fluvial, deltaic, and marine sedimentary processes affected by large sea-level fluctuations. Despite the complex geology of the north-central Gulf basin, a long history of engineering studies and scientific research investigations (see table 11.1) has led to substantial knowledge of the geologic framework and evolution of the delta plain region (see also Bird et al., chapter 1 in this volume).

  14. Explanations for adaptations, just-so stories, and limitations on evidence in evolutionary biology.

    PubMed

    Smith, Richard J

    2016-11-01

    Explanations of the historical origin of specific individual traits are a key part of the research program in paleontology and evolutionary biology. Why did bipedalism evolve in the human lineage? Why did some dinosaurs and related species have head crests? Why did viviparity evolve in some reptiles? Why did the common ancestor of primates evolve stereoscopic vision, grasping hands and feet, nails instead of claws, and large brains? These are difficult questions. To varying degrees, an explanation must grapple with (1) judgments about changes in fitness that might follow from a change in morphology - without actually observing behavior or measuring reproductive success, (2) the relationship between genes and traits, (3) limitations on doing relevant experiments, (4) the interpretation of causes that are almost certainly contingent, multifactorial, interactive, hierarchical, nonlinear, emergent, and probabilistic rather than deterministic, (5) limited information about variation and ontogeny, (6) a dataset based on the random fortunes of the historical record, including only partial hard-tissue morphology and no soft-tissue morphology, (7) an equally partial and problematic (for example, time-averaged) record of the environment, (8) the compression of all data into a geological time scale that is likely to miss biologically important events or fluctuations, (9) dependence on a process that can only be inferred ("form and even behavior may leave fossil traces, but forces like natural selection do not", 1:130 ) and finally, (10) the assumption of the "adaptationist programme" 2 that the trait in question is in fact an adaptation rather than a consequence of genetic drift, correlated evolution, pleiotropy, exaptation, or other mechanisms. © 2016 Wiley Periodicals, Inc.

  15. End of inevitability: programming and reprogramming.

    PubMed

    Turksen, Kursad

    2013-08-01

    Stem cell commitment and differentiation leading to functional cell types and organs has generally been considered unidirectional and deterministic. Starting first with a landmark study 50 years ago, and now with more recent observations, this paradigm has been challenged, necessitating a rethink of what constitutes both programming and reprogramming processes, and how we can use this new understanding for new approaches to drug discovery and regenerative medicine.

  16. Enhancements and Algorithms for Avionic Information Processing System Design Methodology.

    DTIC Science & Technology

    1982-06-16

    programming algorithm is enhanced by incorporating task precedence constraints and hardware failures. Stochastic network methods are used to analyze...allocations in the presence of random fluctuations. Graph theoretic methods are used to analyze hardware designs, and new designs are constructed with...There, spatial dynamic programming (SDP) was used to solve a static, deterministic software allocation problem. Under the current contract the SDP

  17. Impact, and its implications for geology

    NASA Technical Reports Server (NTRS)

    Marvin, Ursula B.

    1988-01-01

    The publication of seminal texts on geology and on meteoritics in the 1790s laid the groundwork for the emergence of each discipline as a modern branch of science. Within the past three decades, impact cratering has become universally accepted as a process that sculpts the surfaces of planets and satellites throughout the solar system. Nevertheless, one finds in-depth discussions of impact processes mainly in books on the Moon or in surveys of the Solar System. The historical source of the separation between meteoritics and geology is easy to identify: it began with Hutton. Meteorite impact is an extraordinary event acting instantaneously from outside the Earth. It violates Hutton's principles, which were enlarged upon and firmly established as fundamental to the geological sciences by Lyell. The split between meteoritics and geology surely would have healed as early as 1892 if the investigations conducted by Gilbert (1843-1918) at the crater in northern Arizona had yielded convincing evidence of meteorite impact. The 1950s and 1960s saw a burgeoning of interest in impact processes. The same period witnessed the so-called revolution in the Earth sciences, when geologists yielded up the idea of fixed continents and began to view the Earth's lithosphere as a dynamic array of horizontally moving plates. Plate tectonics, however, is fully consistent with the geological concepts inherited from Hutton: the plates slowly split, slide, and suture, driven by forces intrinsic to the globe.

  18. The STRATAFORM Project: U.S. Geological Survey geotechnical studies

    USGS Publications Warehouse

    Minasian, Diane L.; Lee, Homa J.; Locat, Jaques; Orzech, Kevin M.; Martz, Gregory R.; Israel, Kenneth

    2001-01-01

    This report presents physical property logs of core samples from an offshore area near Eureka, CA. The cores were obtained as part of the STRATAFORM Program (Nittrouer and Kravitz, 1995, 1996), a study investigating how present sedimentation and sediment transport processes influence long-term stratigraphic sequences preserved in the geologic record. The core samples were collected during four separate research cruises to the northern California study area, and data shown in the logs of the cores were collected using a multi-sensor whole core logger. The physical properties collected are useful in identifying stratigraphic units, ground-truthing acoustic imagery and sub-bottom profiles, and in understanding mass movement processes. STRATAFORM (STRATA FORmation on Margins) was initiated in 1994 by the Office of Naval Research, Marine Geology and Geophysics Department as a coordinated multi-investigator study of continental-margin sediment transport processes and stratigraphy (Nittrouer and Kravitz, 1996). The program is investigating the stratigraphic signature of the shelf and slope parts of the continental margins, and is designed to provide a better understanding of the sedimentary record and a better prediction of strata. Specifically, the goals of the STRATAFORM Program are to (Nittrouer and Kravitz, 1995):
    - determine the geological relevance of short-term physical processes that erode, transport, and deposit particles, and of those processes that subsequently rework the seabed over longer time scales;
    - improve capabilities for identifying the processes that form the strata observed within the upper ~100 m of the seabed, commonly representing 10^4-10^6 years of sedimentation; and
    - synthesize this knowledge and bridge the gap between the time scales of sedimentary processes and those of sequence stratigraphy.
    The STRATAFORM Program is divided into studies of the continental shelf and the continental slope; the geotechnical group within the U.S. Geological Survey provides support to both parts of the project.

  19. Evaluation of SNS Beamline Shielding Configurations using MCNPX Accelerated by ADVANTG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Risner, Joel M; Johnson, Seth R.; Remec, Igor

    2015-01-01

    Shielding analyses for the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory pose significant computational challenges, including highly anisotropic high-energy sources, a combination of deep penetration shielding and an unshielded beamline, and a desire to obtain well-converged nearly global solutions for mapping of predicted radiation fields. The majority of these analyses have been performed using MCNPX with manually generated variance reduction parameters (source biasing and cell-based splitting and Russian roulette) that were largely based on the analyst's insight into the problem specifics. Development of the variance reduction parameters required extensive analyst time, and was often tailored to specific portions of the model phase space. We previously applied a developmental version of the ADVANTG code to an SNS beamline study to perform a hybrid deterministic/Monte Carlo analysis and showed that we could obtain nearly global Monte Carlo solutions with essentially uniform relative errors for mesh tallies that cover extensive portions of the model with typical voxel spacing of a few centimeters. The use of weight window maps and consistent biased sources produced using the FW-CADIS methodology in ADVANTG allowed us to obtain these solutions using substantially less computer time than the previous cell-based splitting approach. While those results were promising, the process of using the developmental version of ADVANTG was somewhat laborious, requiring user-developed Python scripts to drive much of the analysis sequence. In addition, limitations imposed by the size of weight-window files in MCNPX necessitated the use of relatively coarse spatial and energy discretization for the deterministic Denovo calculations that we used to generate the variance reduction parameters. We recently applied the production version of ADVANTG to this beamline analysis, which substantially streamlined the analysis process. 
We also tested importance function collapsing (in space and energy) capabilities in ADVANTG. These changes, along with the support for parallel Denovo calculations using the current version of ADVANTG, give us the capability to improve the fidelity of the deterministic portion of the hybrid analysis sequence, obtain improved weight-window maps, and reduce both the analyst and computational time required for the analysis process.
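
    The weight-window mechanics that such maps drive can be sketched generically (this is illustrative Python, not MCNPX or ADVANTG code): particles above the window are split, particles below it play Russian roulette, and the total statistical weight is preserved in expectation. The window bounds and survival weight here are arbitrary choices for the demonstration.

```python
import random

def apply_weight_window(weight, w_low, w_high, rng):
    """Weight-window check for one particle: split if the weight is above the
    window, Russian-roulette if below, pass through otherwise. Returns the list
    of surviving particle weights; the expected total weight is preserved."""
    if weight > w_high:
        n = min(int(weight / w_high) + 1, 10)   # split into n copies
        return [weight / n] * n
    if weight < w_low:
        w_survive = 0.5 * (w_low + w_high)      # survivor weight inside the window
        if rng.random() < weight / w_survive:
            return [w_survive]                  # survives roulette, weight raised
        return []                               # killed
    return [weight]

rng = random.Random(1)
total_in, total_out = 0.0, 0.0
for _ in range(20000):
    w = rng.uniform(0.001, 5.0)
    total_in += w
    total_out += sum(apply_weight_window(w, 0.25, 1.0, rng))
print(f"mean weight in = {total_in / 20000:.3f}, out = {total_out / 20000:.3f}")
```

    What FW-CADIS contributes is the choice of per-cell, per-energy-group window bounds from an adjoint deterministic (Denovo) solution; the per-particle mechanics at each bound are as above.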

  20. USGS Western Coastal and Marine Geology Team

    USGS Publications Warehouse

    Johnson, Sam; Gibbons, Helen

    2007-01-01

    The Western Coastal and Marine Geology Team of the U.S. Geological Survey (USGS) studies the coasts of the western United States, including Alaska and Hawai‘i. Team scientists conduct research, monitor processes, and develop information about coastal and marine geologic hazards, environmental conditions, habitats, and energy and mineral resources. This information helps managers at all levels of government and in the private sector make informed decisions about the use and protection of national coastal and marine resources.

  1. Distribution and interplay of geologic processes on Titan from Cassini radar data

    USGS Publications Warehouse

    Lopes, R.M.C.; Stofan, E.R.; Peckyno, R.; Radebaugh, J.; Mitchell, K.L.; Mitri, Giuseppe; Wood, C.A.; Kirk, R.L.; Wall, S.D.; Lunine, J.I.; Hayes, A.; Lorenz, R.; Farr, Tom; Wye, L.; Craig, J.; Ollerenshaw, R.J.; Janssen, M.; LeGall, A.; Paganelli, F.; West, R.; Stiles, B.; Callahan, P.; Anderson, Y.; Valora, P.; Soderblom, L.

    2010-01-01

    The Cassini Titan Radar Mapper is providing an unprecedented view of Titan's surface geology. Here we use Synthetic Aperture Radar (SAR) image swaths (Ta-T30) obtained from October 2004 to December 2007 to infer the geologic processes that have shaped Titan's surface. These SAR swaths cover about 20% of the surface, at a spatial resolution ranging from ~350 m to ~2 km. The SAR data are distributed over a wide latitudinal and longitudinal range, enabling some conclusions to be drawn about the global distribution of processes. They reveal a geologically complex surface that has been modified by all the major geologic processes seen on Earth - volcanism, tectonism, impact cratering, and erosion and deposition by fluvial and aeolian activity. In this paper, we map geomorphological units from SAR data and analyze their areal distribution and relative ages of modification in order to infer the geologic evolution of Titan's surface. We find that dunes and hummocky and mountainous terrains are more widespread than lakes, putative cryovolcanic features, mottled plains, and craters and crateriform structures that may be due to impact. Undifferentiated plains are the largest areal unit; their origin is uncertain. In terms of latitudinal distribution, dunes and hummocky and mountainous terrains are located mostly at low latitudes (less than 30°), with no dunes being present above 60°. Channels formed by fluvial activity are present at all latitudes, but lakes are at high latitudes only. Crateriform structures that may have been formed by impact appear to be uniformly distributed with latitude, but the well-preserved impact craters are all located at low latitudes, possibly indicating that more resurfacing has occurred at higher latitudes. Cryovolcanic features are not ubiquitous, and are mostly located between 30° and 60° north. 
We examine temporal relationships between units wherever possible, and conclude that aeolian and fluvial/pluvial/lacustrine processes are the most recent, while tectonic processes that led to the formation of mountains and Xanadu are likely the most ancient. © 2009 Elsevier Inc.

  2. Distribution and interplay of geologic processes on Titan from Cassini radar data

    USGS Publications Warehouse

    Lopes, R.M.C.; Stofan, E.R.; Peckyno, R.; Radebaugh, J.; Mitchell, K.L.; Mitri, Giuseppe; Wood, C.A.; Kirk, R.L.; Wall, S.D.; Lunine, J.I.; Hayes, A.; Lorenz, R.; Farr, Tom; Wye, L.; Craig, J.; Ollerenshaw, R.J.; Janssen, M.; LeGall, A.; Paganelli, F.; West, R.; Stiles, B.; Callahan, P.; Anderson, Y.; Valora, P.; Soderblom, L.

    2010-01-01

    The Cassini Titan Radar Mapper is providing an unprecedented view of Titan's surface geology. Here we use Synthetic Aperture Radar (SAR) image swaths (Ta-T30) obtained from October 2004 to December 2007 to infer the geologic processes that have shaped Titan's surface. These SAR swaths cover about 20% of the surface, at a spatial resolution ranging from ~350 m to ~2 km. The SAR data are distributed over a wide latitudinal and longitudinal range, enabling some conclusions to be drawn about the global distribution of processes. They reveal a geologically complex surface that has been modified by all the major geologic processes seen on Earth - volcanism, tectonism, impact cratering, and erosion and deposition by fluvial and aeolian activity. In this paper, we map geomorphological units from SAR data and analyze their areal distribution and relative ages of modification in order to infer the geologic evolution of Titan's surface. We find that dunes and hummocky and mountainous terrains are more widespread than lakes, putative cryovolcanic features, mottled plains, and craters and crateriform structures that may be due to impact. Undifferentiated plains are the largest areal unit; their origin is uncertain. In terms of latitudinal distribution, dunes and hummocky and mountainous terrains are located mostly at low latitudes (less than 30 degrees), with no dunes being present above 60 degrees. Channels formed by fluvial activity are present at all latitudes, but lakes are at high latitudes only. Crateriform structures that may have been formed by impact appear to be uniformly distributed with latitude, but the well-preserved impact craters are all located at low latitudes, possibly indicating that more resurfacing has occurred at higher latitudes. Cryovolcanic features are not ubiquitous, and are mostly located between 30 degrees and 60 degrees north. 
We examine temporal relationships between units wherever possible, and conclude that aeolian and fluvial/pluvial/lacustrine processes are the most recent, while tectonic processes that led to the formation of mountains and Xanadu are likely the most ancient.

  3. Pro Free Will Priming Enhances “Risk-Taking” Behavior in the Iowa Gambling Task, but Not in the Balloon Analogue Risk Task: Two Independent Priming Studies

    PubMed Central

    Schrag, Yann; Tremea, Alessandro; Lagger, Cyril; Ohana, Noé; Mohr, Christine

    2016-01-01

    Studies indicated that people behave less responsibly after exposure to information containing deterministic statements as compared to free will statements or neutral statements. Thus, deterministic primes should lead to enhanced risk-taking behavior. We tested this prediction in two studies with healthy participants. In experiment 1, we tested 144 students (24 men) in the laboratory using the Iowa Gambling Task. In experiment 2, we tested 274 participants (104 men) online using the Balloon Analogue Risk Task. In the Iowa Gambling Task, the free will priming condition resulted in more risky decisions than both the deterministic and neutral priming conditions. We observed no priming effects on risk-taking behavior in the Balloon Analogue Risk Task. To explain these unpredicted findings, we consider the somatic marker hypothesis, a gain frequency approach as well as attention to gains and / or inattention to losses. In addition, we highlight the necessity to consider both pro free will and deterministic priming conditions in future studies. Importantly, our and previous results indicate that the effects of pro free will and deterministic priming do not oppose each other on a frequently assumed continuum. PMID:27018854

  5. Conduct of Geologic Field Work During Planetary Exploration: Why Geology Matters

    NASA Technical Reports Server (NTRS)

    Eppler, Dean B.

    2010-01-01

The science of field geology is the investigative process of determining the distribution of rock units and structures on a planet's surface, and it is the first-order data set that informs all subsequent studies of a planet, such as geochemistry, geochronology, geophysics, or remote sensing. For future missions to the Moon and Mars, the surface systems deployed must support the conduct of field geology if these endeavors are to be scientifically useful. This lecture discussed what field geology is all about: why it is important, how it is done, how conducting field geology informs many other sciences, and how it affects the design of surface systems and the implementation of operations in the future.

  6. A bibliography of planetary geology principal investigators and their associates, 1981 - 1982

    NASA Technical Reports Server (NTRS)

    Plescia, J. B. (Compiler)

    1982-01-01

    Over 800 publications submitted by researchers supported through NASA's Planetary Geology Program are cited and an author/editor index is provided. Entries are listed under the following subjects: (1) general interest topics; (2) solar system, comets, asteroids, and small bodies; (3) geologic mapping, geomorphology, and stratigraphy; (4) structure, tectonics, geologic and geophysical evolution; (5) impact craters: morphology, density, and geologic studies; (6) volcanism; (7) fluvial, mass wasting, and periglacial processes; (8) Eolian studies; (9) regolith, volatile, atmosphere, and climate; (10) remote sensing, radar, and photometry; and (11) cartography, photogrammetry, geodesy, and altimetry.

  7. Blue Marble Matches: Using Earth for Planetary Comparisons

    NASA Technical Reports Server (NTRS)

    Graff, Paige Valderrama

    2009-01-01

    Goal: This activity is designed to introduce students to geologic processes on Earth and to model how scientists use Earth to gain a better understanding of other planetary bodies in the solar system. Objectives: Students will: 1. Identify common descriptor characteristics used by scientists to describe geologic features in images. 2. Identify geologic features and how they form on Earth. 3. Create a list of defining/distinguishing characteristics of geologic features. 4. Identify geologic features in images of other planetary bodies. 5. List observations and interpretations about planetary body comparisons. 6. Create summary statements about planetary body comparisons.

  8. An in-situ stimulation experiment in crystalline rock - assessment of induced seismicity levels during stimulation and related hazard for nearby infrastructure

    NASA Astrophysics Data System (ADS)

    Gischig, Valentin; Broccardo, Marco; Amann, Florian; Jalali, Mohammadreza; Esposito, Simona; Krietsch, Hannes; Doetsch, Joseph; Madonna, Claudio; Wiemer, Stefan; Loew, Simon; Giardini, Domenico

    2016-04-01

    A decameter in-situ stimulation experiment is currently being performed at the Grimsel Test Site in Switzerland by the Swiss Competence Center for Energy Research - Supply of Electricity (SCCER-SoE). The underground research laboratory lies in crystalline rock at a depth of 480 m and exhibits well-documented geology with some analogies to the crystalline basement targeted for the exploitation of deep geothermal energy resources in Switzerland. The goal is to perform a series of stimulation experiments spanning from hydraulic fracturing to controlled fault-slip experiments in an experimental volume approximately 30 m in diameter. The experiments will contribute to a better understanding of hydro-mechanical phenomena and induced seismicity associated with high-pressure fluid injections. Comprehensive monitoring during stimulation will include observation of injection rate and pressure, pressure propagation in the reservoir, permeability enhancement, 3D dislocation along the faults, rock mass deformation near the fault zone, as well as micro-seismicity. The experimental volume is surrounded by other in-situ experiments (at 50 to 500 m distance) and by infrastructure of the local hydropower company (at ~100 m to several kilometres distance). Although it is generally agreed among stakeholders that levels of induced seismicity may be low given the small total injection volumes of less than 1 m3, detailed analysis of the potential impact of the stimulation on other experiments and surrounding infrastructure is essential to ensure operational safety. In this contribution, we present a procedure for estimating induced seismic hazard in an experimental situation that is atypical for injection-induced seismicity in terms of injection volumes, injection depths, and proximity to affected objects.
Both deterministic and probabilistic methods are employed to estimate the maximum possible and the maximum expected induced earthquake magnitudes. Deterministic methods are based on McGarr's upper limit for the maximum induced seismic moment. Probabilistic methods rely on estimates of Shapiro's seismogenic index and on seismicity rates from past stimulation experiments scaled to the injection volumes of interest. Using rate-and-state frictional modelling coupled to a hydro-mechanical fracture flow model, we demonstrate that large uncontrolled rupture events are unlikely to occur and that the deterministic upper limits may be sufficiently conservative. The proposed workflow can be applied to similar injection experiments for which hazard to nearby infrastructure may limit experimental design.
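
    The two magnitude estimates named in this record can be sketched in a few lines. The shear modulus G, b-value, and seismogenic index Sigma below are illustrative assumptions, not values reported in the abstract; only the injected volume ("less than 1 m3") follows the record.

```python
import math

# Sketch of the two magnitude bounds named above.  G, b, and Sigma are
# illustrative assumptions, not values from the study; dV follows the
# abstract's "less than 1 m^3".
G = 30e9            # Pa, assumed shear modulus of crystalline rock
dV = 1.0            # m^3, injected volume

# Deterministic bound (McGarr): maximum seismic moment M0 <= G * dV
M0_max = G * dV                                       # N*m
Mw_max = (2.0 / 3.0) * (math.log10(M0_max) - 9.1)     # moment magnitude

# Probabilistic estimate (Shapiro): expected number of induced events
# with magnitude >= M is N(M) = dV * 10**(Sigma - b*M)
Sigma, b = -2.0, 1.0                                  # assumed
def expected_events(M, V=dV):
    return V * 10.0 ** (Sigma - b * M)

print(f"McGarr bound: Mw <= {Mw_max:.2f}")
print(f"Expected events with Mw >= 0: {expected_events(0.0):.3f}")
```

    With these assumed values the deterministic bound works out to roughly magnitude 1, consistent with the record's expectation that induced seismicity levels remain low for sub-cubic-meter injections.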

  9. Ion implantation for deterministic single atom devices

    NASA Astrophysics Data System (ADS)

    Pacheco, J. L.; Singh, M.; Perry, D. L.; Wendt, J. R.; Ten Eyck, G.; Manginell, R. P.; Pluym, T.; Luhman, D. R.; Lilly, M. P.; Carroll, M. S.; Bielejec, E.

    2017-12-01

    We demonstrate a capability of deterministic doping at the single atom level using a combination of direct write focused ion beam and solid-state ion detectors. The focused ion beam system can position a single ion to within 35 nm of a targeted location and the detection system is sensitive to single low energy heavy ions. This platform can be used to deterministically fabricate single atom devices in materials where the nanostructure and ion detectors can be integrated, including donor-based qubits in Si and color centers in diamond.

  10. Counterfactual Quantum Deterministic Key Distribution

    NASA Astrophysics Data System (ADS)

    Zhang, Sheng; Wang, Jian; Tang, Chao-Jing

    2013-01-01

    We propose a new counterfactual quantum cryptography protocol concerned with distributing a deterministic key. By adding a controlled blocking operation module to the original protocol [T.G. Noh, Phys. Rev. Lett. 103 (2009) 230501], the correlation between the polarizations of the two parties, Alice and Bob, is extended; therefore, one can distribute both deterministic keys and random ones using our protocol. We also give a simple proof of the security of our protocol using the technique we previously applied to the original protocol. Most importantly, our analysis produces a bound tighter than the existing ones.

  12. Geology of the Woods Hole area, Massachusetts; the story behind the landscape

    USGS Publications Warehouse

    Hutchinson, D.R.; Schwarzman, Beth

    2001-01-01

    The geologic story of the Woods Hole area, Cape Cod, Mass., was written by glacial ice during the last ice age and edited by the ocean waves. If you learn to read today's landscape, you can see the fascinating history it records. The features of Cape Cod, from the ponds and cranberry bogs to the gently sloping sandy uplands and rocky, irregular hills to the beaches, result from the glacial processes that built the cape and the marine processes that still shape it. Many geologists since the late 19th century have contributed to telling this story. The U.S. Geological Survey has studied the geology of Cape Cod in order to provide people with objective scientific data that can be applied to wise stewardship of the land and coasts.

  13. NONFUEL MINERAL RESOURCES OF THE PACIFIC EXCLUSIVE ECONOMIC ZONE.

    USGS Publications Warehouse

    Clague, David; Bischoff, James; Howell, David

    1984-01-01

    The Pacific Exclusive Economic Zone contains a variety of hard mineral resources. Sand and gravel and their associated placer deposits of heavy minerals are the most likely to be developed in the near future, but offshore and deep water deposits of phosphorite, abyssal manganese nodules, ferromanganese crusts enriched in cobalt, and massive sulfide deposits all represent future resources. The distribution, extent, and formation of these deposits are poorly understood and will be clarified only with additional exploration, framework geologic mapping, and study of the processes by which these resources form. It is pointed out that the initial discovery of most hard-mineral resources in the EEZ was made during routine scientific marine-geologic surveys aimed at understanding the framework geology and geologic processes of an offshore region.

  14. Proceedings of the Seventh International Conference on Mars

    NASA Technical Reports Server (NTRS)

    2007-01-01

    The oral and poster sessions of the SEVENTH INTERNATIONAL CONFERENCE ON MARS included; The Distribution and Context of Water-related Minerals on Mars; Poster Session: Mars Geology; Geology of the Martian Surface: Lithologic Variation, Composition, and Structure; Water Through Mars' Geologic History; Poster Session: Mars Water and the Martian Interior; Volatiles and Interior Evolution; The Martian Climate and Atmosphere: Variations in Time and Space; Poster Session: The Martian Climate and Current Processes; Modern Mars: Weather, Atmospheric Chemistry, Geologic Processes, and Water Cycle; Public Lecture: Mars Reconnaissance Orbiter's New View of the Red Planet; The North and South Polar Layered Deposits, Circumpolar Regions, and Changes with Time; Poster Session: Mars Polar Science, Astrobiology, Future Missions/Instruments, and Other Mars Science; Mars Astrobiology and Upcoming Missions; and Martian Stratigraphy and Sedimentology: Reading the Sedimentary Record.

  15. Front propagation and clustering in the stochastic nonlocal Fisher equation

    NASA Astrophysics Data System (ADS)

    Ganan, Yehuda A.; Kessler, David A.

    2018-04-01

    In this work, we study the problem of front propagation and pattern formation in the stochastic nonlocal Fisher equation. We find a crossover between two regimes: a steadily propagating regime for not too large interaction range and a stochastic punctuated spreading regime for larger ranges. We show that the former regime is well described by the heuristic approximation of the system by a deterministic system where the linear growth term is cut off below some critical density. This deterministic system is seen not only to give the right front velocity, but also predicts the onset of clustering for interaction kernels which give rise to stable uniform states, such as the Gaussian kernel, for sufficiently large cutoff. Above the critical cutoff, distinct clusters emerge behind the front. These same features are present in the stochastic model for sufficiently small carrying capacity. In the latter, punctuated spreading, regime, the population is concentrated on clusters, as in the infinite range case, which divide and separate as a result of the stochastic noise. Due to the finite interaction range, if a fragment at the edge of the population separates sufficiently far, it stabilizes as a new cluster, and the processes begins anew. The deterministic cutoff model does not have this spreading for large interaction ranges, attesting to its purely stochastic origins. We show that this mode of spreading has an exponentially small mean spreading velocity, decaying with the range of the interaction kernel.
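
    The deterministic cutoff approximation described in this record can be sketched numerically. The grid, Gaussian kernel width, and cutoff density below are arbitrary illustrative choices, not parameters from the paper.

```python
import numpy as np

# Sketch of the deterministic cutoff approximation to the nonlocal Fisher
# equation u_t = D u_xx + u (1 - K*u), with the growth term switched off
# wherever u falls below a cutoff density u_c.  All parameters are
# illustrative choices, not values from the paper.
L, N = 200.0, 800
dx = L / N
x = np.arange(N) * dx
D, uc, sigma = 1.0, 1e-4, 2.0          # diffusion, cutoff, kernel width

# Gaussian interaction kernel on the periodic domain, normalized,
# applied by circular convolution via the FFT
k = np.exp(-0.5 * (np.minimum(x, L - x) / sigma) ** 2)
k /= k.sum()
k_hat = np.fft.rfft(k)

u = np.where(np.abs(x - 100.0) < 5.0, 1.0, 0.0)   # initial plateau
dt = 0.2 * dx * dx / D                            # explicit-Euler stability

def step(u):
    conv = np.fft.irfft(np.fft.rfft(u) * k_hat, n=N)   # nonlocal competition
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx ** 2
    growth = np.where(u > uc, u * (1.0 - conv), 0.0)   # cutoff below u_c
    return np.clip(u + dt * (D * lap + growth), 0.0, None)

def front(u):
    idx = np.nonzero(u > 0.5)[0]        # rightmost half-density point
    return x[idx[-1]] if idx.size else 0.0

f0 = front(u)
for _ in range(2000):
    u = step(u)
print(front(u) - f0)   # distance the rightmost front has advanced
```

    Tracking the rightmost half-density point shows a steadily advancing front, the "steadily propagating regime" of the abstract; making the kernel range larger relative to the domain is where the stochastic model, unlike this deterministic cutoff version, switches to punctuated spreading.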

  17. Nonlinear unitary quantum collapse model with self-generated noise

    NASA Astrophysics Data System (ADS)

    Geszti, Tamás

    2018-04-01

    Collapse models including some external noise of unknown origin are routinely used to describe phenomena on the quantum-classical border; in particular, quantum measurement. Although containing nonlinear dynamics and thereby exposed to the possibility of superluminal signaling in individual events, such models are widely accepted on the basis of fully reproducing the non-signaling statistical predictions of quantum mechanics. Here we present a deterministic nonlinear model without any external noise, in which randomness—instead of being universally present—emerges in the measurement process, from deterministic irregular dynamics of the detectors. The treatment is based on a minimally nonlinear von Neumann equation for a Stern–Gerlach or Bell-type measuring setup, containing coordinate and momentum operators in a self-adjoint skew-symmetric, split scalar product structure over the configuration space. The microscopic states of the detectors act as a nonlocal set of hidden parameters, controlling individual outcomes. The model is shown to display pumping of weights between setup-defined basis states, with a single winner randomly selected and the rest collapsing to zero. Environmental decoherence has no role in the scenario. Through stochastic modelling, based on Pearle’s ‘gambler’s ruin’ scheme, outcome probabilities are shown to obey Born’s rule under a no-drift or ‘fair-game’ condition. This fully reproduces quantum statistical predictions, implying that the proposed nonlinear deterministic model satisfies the non-signaling requirement. Our treatment is still vulnerable to hidden signaling in individual events, which remains to be handled by future research.

  18. A Deep Penetration Problem Calculation Using AETIUS:An Easy Modeling Discrete Ordinates Transport Code UsIng Unstructured Tetrahedral Mesh, Shared Memory Parallel

    NASA Astrophysics Data System (ADS)

    KIM, Jong Woon; LEE, Young-Ouk

    2017-09-01

    As computing power improves, computer codes that use a deterministic method can seem less useful than those using the Monte Carlo method. In addition, users do not like to think about the space, angle, and energy discretization required by deterministic codes. However, a deterministic method is still powerful in that it yields a solution for the flux throughout the problem domain, particularly when particles can barely penetrate, such as in a deep penetration problem with small detection volumes. Recently, a state-of-the-art discrete-ordinates code, ATTILA, was developed and has been widely used in several applications. ATTILA provides the capability to solve geometrically complex 3-D transport problems by using an unstructured tetrahedral mesh. Since 2009, we have been developing our own code by benchmarking ATTILA. AETIUS is a discrete ordinates code that uses an unstructured tetrahedral mesh, like ATTILA. For pre- and post-processing, Gmsh is used to generate an unstructured tetrahedral mesh by importing a CAD file (*.step) and to visualize the calculation results of AETIUS. Using a CAD tool, the geometry can be modeled very easily. In this paper, we give a brief overview of AETIUS and provide numerical results from both AETIUS and a Monte Carlo code, MCNP5, for a deep penetration problem with small detection volumes. The results demonstrate the effectiveness and efficiency of AETIUS for such calculations.
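
    The motivation in this record — that a deterministic solution is available everywhere while analog Monte Carlo starves for particles at depth — can be illustrated with a toy one-dimensional pure-absorber slab. The cross section and slab thickness below are made-up illustrative values, unrelated to the AETIUS benchmark.

```python
import numpy as np

# Toy deep-penetration comparison: deterministic attenuation versus an
# analog Monte Carlo transmission estimate through a pure absorber.
# Cross section and thickness are made-up illustrative values.
sigma_t, thickness = 1.0, 20.0          # cm^-1, slab of 20 mean free paths

det = np.exp(-sigma_t * thickness)      # deterministic: exp(-Sigma_t * x)

rng = np.random.default_rng(0)
n = 100_000
flights = rng.exponential(1.0 / sigma_t, n)      # analog free-flight lengths
mc = np.count_nonzero(flights > thickness) / n   # transmission estimate

print(f"deterministic {det:.2e}, Monte Carlo {mc:.2e}")
```

    The deterministic answer is about 2e-9, while with 10^5 histories the analog Monte Carlo tally almost certainly scores zero transmissions: exactly the regime where a discrete-ordinates code retains its advantage.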

  19. Expectancy Learning from Probabilistic Input by Infants

    PubMed Central

    Romberg, Alexa R.; Saffran, Jenny R.

    2013-01-01

    Across the first few years of life, infants readily extract many kinds of regularities from their environment, and this ability is thought to be central to development in a number of domains. Numerous studies have documented infants’ ability to recognize deterministic sequential patterns. However, little is known about the processes infants use to build and update representations of structure in time, and how infants represent patterns that are not completely predictable. The present study investigated how infants’ expectations for a simple structure develop over time, and how infants update their representations with new information. We measured 12-month-old infants’ anticipatory eye movements to targets that appeared in one of two possible locations. During the initial phase of the experiment, infants either saw targets that appeared consistently in the same location (Deterministic condition) or probabilistically in either location, with one side more frequent than the other (Probabilistic condition). After this initial divergent experience, both groups saw the same sequence of trials for the rest of the experiment. The results show that infants readily learn from both deterministic and probabilistic input, with infants in both conditions reliably predicting the most likely target location by the end of the experiment. Local context had a large influence on behavior: infants adjusted their predictions to reflect changes in the target location on the previous trial. This flexibility was particularly evident in infants with more variable prior experience (the Probabilistic condition). The results provide some of the first data showing how infants learn in real time. PMID:23439947

  20. Hydrogeology and simulation of source areas of water to production wells in a colluvium-mantled carbonate-bedrock aquifer near Shippensburg, Cumberland and Franklin Counties, Pennsylvania

    USGS Publications Warehouse

    Lindsey, Bruce D.

    2005-01-01

    This report presents the results of a study by the U.S. Geological Survey in cooperation with the Shippensburg Borough Authority to evaluate the source areas of water to production wells in a colluvium-mantled carbonate-bedrock aquifer in Cumberland and Franklin Counties, Pa. The areal extent of the zone of contribution was simulated for three production wells near Shippensburg, Pa., by use of a ground-water-flow model. A 111-square-mile area was selected as the model area and includes areas of the South Mountain Section and the Great Valley Section of the Valley and Ridge Physiographic Province. Within the model area, the geologic units in the South Mountain area are predominantly metamorphic rocks and the geologic units in the Great Valley are predominantly carbonate rocks. Hydrologic and geologic information was compiled to establish a conceptual model of ground-water flow. Characteristics of aquifer materials were determined, and streamflow and water levels were measured. Streamflow measurements in November 2003 showed all streams lost water as they flowed from South Mountain over the colluvium-mantled carbonate aquifer into the Great Valley. Some streams lost more than 1 cubic foot per second to the aquifer in this area. The Shippensburg Borough Authority owns three production wells in the model area. Two wells, Cu 969 and Fr 823, are currently (2004) used as production wells and produce 500,000 and 800,000 gallons per day, respectively. Well Cu 970 is intended to be brought on line as a production well in the future. Water levels were measured in 43 wells for use in model calibration. Water-level fluctuations and geophysical logs indicated confined conditions in well Cu 970. Ground-water flow was simulated with a model that consisted of two vertical layers, with five zones in each layer. The units were hydrostratigraphic units that initially were based on geologic formations, but boundaries were adjusted during model calibration. 
Model calibration resulted in a root mean square error of 9.8 feet. A parameter-estimation package was used during model calibration to estimate three parameters. The parameter estimation resulted in a value of 233 feet per day for horizontal hydraulic conductivity of the highly fractured carbonate rocks and sandy colluvium in layer 1; 3.97 feet per day for horizontal hydraulic conductivity of the ridge-forming unit in layer 1; and a value of 1.73 for horizontal anisotropy in both layers. The calibrated model was used to delineate the areal extent of the zone of contribution for wells Cu 969 and Fr 823. Although well Cu 970 is not currently (2004) being used, the areal extent of its zone of contribution also was simulated without additional model calibration. The shape of the areal extent of the zone of contribution was similar for each well and included an area that extended from the well southwest along the Tomstown Formation, and then extended southeast into the metamorphic rocks of South Mountain. The contributing areas from the watersheds of losing streams were also delineated because losing stream reaches bisect the areal extent of the zones of contribution. Spatial uncertainty of the areal extent of the zone of contribution was illustrated using a Monte-Carlo analysis. The model was run 1,000 times using randomly generated parameter sets that were normally distributed within the confidence interval around the optimal values for the three estimated parameters. The model converged and had a reasonable water budget for 980 of the model runs. For each of those 980 model runs, the recharge area was determined, and the results for all runs were compiled and contoured. The results of the Monte-Carlo analysis were compared to the results of the deterministic model, illustrating that the deterministic model has the greatest certainty in the area closest to each well in the Tomstown Formation. 
The areas farther from the well, upgradient, and in the metamorphic rocks have a higher degree
