Science.gov

Sample records for earth fault experiments

  1. Spacecraft fault tolerance: The Magellan experience

    NASA Technical Reports Server (NTRS)

    Kasuda, Rick; Packard, Donna Sexton

    1993-01-01

    Interplanetary and earth orbiting missions are now imposing unique fault tolerant requirements upon spacecraft design. Mission success is the prime motivator for building spacecraft with fault tolerant systems. The Magellan spacecraft had many such requirements imposed upon its design. Magellan met these requirements by building redundancy into all the major subsystem components and designing the onboard hardware and software with the capability to detect a fault, isolate it to a component, and issue commands to achieve a back-up configuration. This discussion is limited to fault protection, which is the autonomous capability to respond to a fault. The Magellan fault protection design is discussed, as well as the developmental and flight experiences and a summary of the lessons learned.

  2. Attitude control fault protection - The Voyager experience

    NASA Technical Reports Server (NTRS)

    Litty, E. C.

    1980-01-01

    The length of the Voyager mission and the communication delay caused by the distances involved made fault protection a necessary part of the Voyager Attitude and Articulation Control Subsystem (AACS) design. An overview of the Voyager attitude control fault protection is given and flight experiences relating to fault protection are provided.

  3. Fault Current Distribution and Pole Earth Potential Rise (EPR) Under Substation Fault

    NASA Astrophysics Data System (ADS)

    Nassereddine, M.; Rizk, J.; Hellany, A.; Nagrial, M.

    2013-09-01

    New high-voltage (HV) substations are fed by transmission lines. The position of these lines necessitates earthing design to ensure safety compliance of the system. Conductive structures such as steel or concrete poles are widely used in HV transmission mains. The earth potential rise (EPR) generated by a fault at the substation could result in an unsafe condition. This article discusses EPR due to a substation fault. Pole EPR under a substation fault is assessed with and without consideration of mutual impedance. Split factor determination with and without the mutual impedance of the line is also discussed. Furthermore, a simplified formula to compute the pole grid current under a substation fault is included, along with the introduction of the n factor, which determines the number of poles that require earthing assessment under a substation fault. A case study is presented.

  4. Earthquake Nucleation and Fault Slip: Possible Experiments on a Natural Fault

    NASA Astrophysics Data System (ADS)

    Germanovich, L. N.; Murdoch, L. C.; Garagash, D.; Reches, Z.; Martel, S. J.; Johnston, M. J.; Ebenhack, J.; Gwaba, D.

    2011-12-01

    High-resolution deformation and seismic observations are usually made only near the Earth's surface, kilometers away from where earthquakes nucleate on active faults, and are limited by inverse-cube-distance attenuation and ground noise. We have developed an experimental approach that aims at reactivating faults in-situ using thermal techniques and fluid injection, which modify in-situ stresses and the fault strength until the fault slips. Mines where in-situ stresses are sufficient to drive faulting present an opportunity to conduct such experiments. The former Homestake gold mine in South Dakota is a good example. During our recent field work in the Homestake mine, we found a large fault that intersects multiple mine levels. The size and distinct structure of this fault make it a promising target for in-situ reactivation, which would likely be localized on a crack-like patch. Slow patch propagation, moderated by the injection rate and the rate of change of the background stresses, may become unstable, leading to the nucleation of a dynamic earthquake rupture. Our analyses for the Homestake fault conditions indicate that this transition occurs for a patch size of ~1 m. This represents a fundamental limitation for laboratory experiments and necessitates larger-scale field tests at ~10-100 m. Observing earthquake nucleation on the Homestake fault is feasible because slip could be initiated at a pre-defined location and time, with instrumentation placed as close as a few meters from the nucleation site. Designing the experiment requires a detailed assessment of the state of stress in the vicinity of the fault. This is being conducted by simulating changes in pore pressure and effective stresses accompanying dewatering of the mine, and by evaluating in-situ stress measurements in light of a regional stress field modified by local perturbations caused by the mine workings.

  5. Experiments in fault tolerant software reliability

    NASA Technical Reports Server (NTRS)

    Mcallister, David F.; Vouk, Mladen A.

    1989-01-01

    Twenty functionally equivalent programs were built and tested in a multiversion software experiment. Following unit testing, all programs were subjected to an extensive system test. In the process, sixty-one distinct faults were identified among the versions. Less than 12 percent of the faults exhibited varying degrees of positive correlation. The common-cause (or similar) faults spanned as many as 14 components. However, a majority of these faults were trivial and easily detected by proper unit and/or system testing. Only two of the seven similar faults were difficult faults, and both were caused by specification ambiguities. One of these faults exhibited a variable identical-and-wrong response span, i.e., a response span which varied with the testing conditions and input data. Techniques that could have been used to avoid the faults are discussed. For example, it was determined that back-to-back testing of 2-tuples could have been used to eliminate about 90 percent of the faults. In addition, four of the seven similar faults could have been detected by using back-to-back testing of 5-tuples. It is believed that most, if not all, similar faults could have been avoided had the specifications been written using more formal notation, had the unit testing phase been subject to more stringent standards and controls, and had better tools for measuring the quality and adequacy of the test data (e.g., coverage) been used.
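
    A minimal sketch of the back-to-back comparison idea mentioned above (in Python; the version functions, inputs, and tolerance are illustrative assumptions, not the experiment's actual programs or test harness): pairs of functionally equivalent versions are run on the same test data, and any disagreement flags a suspected fault for inspection.

      # Back-to-back testing of 2-tuples: compare every pair of versions on
      # every test input; a disagreement means at least one version is faulty.
      from itertools import combinations

      def back_to_back(versions, test_inputs, tol=1e-9):
          """Return (input, i, j, out_i, out_j) for every pairwise disagreement."""
          discrepancies = []
          for x in test_inputs:
              outputs = [v(x) for v in versions]
              for (i, a), (j, b) in combinations(enumerate(outputs), 2):
                  if abs(a - b) > tol:          # outputs differ -> suspected fault
                      discrepancies.append((x, i, j, a, b))
          return discrepancies

      # Three hypothetical versions of the same numeric routine, one seeded with a fault.
      v1 = lambda x: x * x
      v2 = lambda x: x ** 2
      v3 = lambda x: x * x + (1 if x > 100 else 0)   # seeded fault for x > 100

      print(back_to_back([v1, v2, v3], range(95, 106)))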

  6. A wideband magnetoresistive sensor for monitoring dynamic fault slip in laboratory fault friction experiments

    Kilgore, Brian D.

    2017-01-01

    A non-contact, wideband method of sensing dynamic fault slip in laboratory geophysical experiments employs an inexpensive magnetoresistive sensor, a small neodymium rare earth magnet, and user-built, application-specific wideband signal conditioning. The magnetoresistive sensor generates a voltage proportional to the changing angles of magnetic flux lines, generated by differential motion or rotation of the nearby magnet, through the sensor. The performance of an array of these sensors compares favorably to other conventional position sensing methods employed at multiple locations along a 2 m long × 0.4 m deep laboratory strike-slip fault. For these magnetoresistive sensors, the lack of resonance signals commonly encountered with cantilever-type position sensor mounting, the wideband response (DC to ≈ 100 kHz) that exceeds the capabilities of many traditional position sensors, and the small space required on the sample make them attractive options for capturing high speed fault slip measurements in these laboratory experiments. An unanticipated observation of this study is the apparent sensitivity of this sensor to high frequency electromagnetic signals associated with fault rupture and (or) rupture propagation, which may offer new insights into the physics of earthquake faulting.

  7. Geomorphic expression of strike-slip faults: field observations vs. analog experiments: preliminary results

    NASA Astrophysics Data System (ADS)

    Hsieh, S. Y.; Neubauer, F.; Genser, J.

    2012-04-01

    The aim of this project is to study the surface expression of strike-slip faults, with the main aim of finding rules for how these structures can be extrapolated to depth. In the first step, several basic properties of the fault architecture are in focus: (1) Is it possible to define the fault architecture by studying surface structures of the damage zone vs. the fault core, particularly the width of the damage zone? (2) Which second-order structures define the damage zone of strike-slip faults, and how do these relate to those reported in basement strike-slip fault analog experiments? (3) Besides classical fault bend structures, is there a systematic along-strike variation of the damage zone width, and to which properties does this variation relate? We study the above-mentioned properties on the dextral Altyn fault, which is one of the largest strike-slip faults on Earth, with the advantage of having developed in a fully arid climate. The Altyn fault includes a ca. 250 to 600 m wide fault valley, usually with the trace of the active fault in its center. The fault valley is confined by basement highs, from which alluvial fans develop towards the center of the fault valley. The active fault trace is marked by small-scale pressure ridges and offsets of alluvial fans. The basement highs confining the fault valley are several kilometers long and ca. 0.5 to 1 km wide, confined by rotated dextral anti-Riedel faults and internally structured by a regular fracture pattern. Dextral anti-Riedel faults are often cut by Riedel faults. Consequently, the Altyn fault comprises a several-km-wide damage zone. The fault core zone is a barrier to fluid flow, and the few springs of the region are located on the margin of the fault valley, implying the fractured basement highs as the reservoir. Consequently, the southern Silk Road used the Altyn fault valley. The preliminary data show that two or more orders of structures exist. Small-scale structures develop during a single earthquake. These finally

  8. A simulation of the San Andreas fault experiment

    NASA Technical Reports Server (NTRS)

    Agreen, R. W.; Smith, D. E.

    1973-01-01

    The San Andreas Fault Experiment, which employs two laser tracking systems for measuring the relative motion of two points on opposite sides of the fault, was simulated for an eight year observation period. The two tracking stations are located near San Diego on the western side of the fault and near Quincy on the eastern side; they are roughly 900 kilometers apart. Both will simultaneously track laser reflector equipped satellites as they pass near the stations. Tracking of the Beacon Explorer C Spacecraft was simulated for these two stations during August and September for eight consecutive years. An error analysis of the recovery of the relative location of Quincy from the data was made, allowing for model errors in the mass of the earth, the gravity field, solar radiation pressure, atmospheric drag, errors in the position of the San Diego site, and laser systems range biases and noise. The results of this simulation indicate that the distance of Quincy from San Diego will be determined each year with a precision of about 10 centimeters. This figure is based on the accuracy of earth models and other parameters available in 1972.

  9. Fault-tolerant software - Experiment with the SIFT operating system. [Software Implemented Fault Tolerance computer]

    NASA Technical Reports Server (NTRS)

    Brunelle, J. E.; Eckhardt, D. E., Jr.

    1985-01-01

    Results are presented of an experiment conducted in the NASA Avionics Integrated Research Laboratory (AIRLAB) to investigate the implementation of fault-tolerant software techniques on fault-tolerant computer architectures, in particular the Software Implemented Fault Tolerance (SIFT) computer. The N-version programming and recovery block techniques were implemented on a portion of the SIFT operating system. The results indicate that effective implementation of fault-tolerant software design techniques impacts system requirements, and they suggest that retrofitting fault-tolerant software onto existing designs will be inefficient and may require system modification.

  10. A fault injection experiment using the AIRLAB Diagnostic Emulation Facility

    NASA Technical Reports Server (NTRS)

    Baker, Robert; Mangum, Scott; Scheper, Charlotte

    1988-01-01

    The preparation for, conduct of, and results of a simulation-based fault injection experiment conducted using the AIRLAB Diagnostic Emulation facilities are described. An objective of this experiment was to determine the effectiveness of the diagnostic self-test sequences used to uncover latent faults in a logic network providing the key fault tolerance features for a flight control computer. Another objective was to develop methods, tools, and techniques for conducting the experiment. More than 1600 faults were injected into a logic gate level model of the Data Communicator/Interstage (C/I). For each fault injected, diagnostic self-test sequences consisting of over 300 test vectors were supplied to the C/I model as inputs. For each test vector within a test sequence, the outputs from the C/I model were compared to the outputs of a fault-free C/I. If the outputs differed, the fault was considered detectable for the given test vector. These results were then analyzed to determine the effectiveness of some test sequences. The results established the coverage of the self-test diagnostics, identified areas in the C/I logic where the tests did not locate faults, and suggested opportunities for reducing fault latency.
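
    The detection criterion described above can be illustrated with a short Python sketch (the toy gate-level model, net names, and fault list below are invented for illustration and are not the actual C/I model or AIRLAB tooling): a fault counts as detectable for a test vector whenever the faulted model's outputs differ from the fault-free model's outputs.

      # Toy stand-in for a gate-level model with an optional injected stuck-at fault.
      def model(vector, stuck_at=None):
          a, b, c = vector
          n1 = a & b
          if stuck_at and stuck_at[0] == "n1":
              n1 = stuck_at[1]                  # force internal net n1 to 0 or 1
          n2 = n1 | c
          return (n2, n1 ^ c)

      def coverage(fault_list, test_vectors):
          """Fraction of injected faults detected by at least one test vector."""
          detected = sum(
              any(model(v, f) != model(v) for v in test_vectors)
              for f in fault_list
          )
          return detected / len(fault_list)

      vectors = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
      faults = [("n1", 0), ("n1", 1)]            # stuck-at-0 and stuck-at-1 on net n1
      print(f"self-test coverage: {coverage(faults, vectors):.0%}")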

  11. Experiments in fault tolerant software reliability

    NASA Technical Reports Server (NTRS)

    Mcallister, David F.; Tai, K. C.; Vouk, Mladen A.

    1987-01-01

    The reliability of voting was evaluated in a fault-tolerant software system for small output spaces. The effectiveness of the back-to-back testing process was investigated. Version 3.0 of the RSDIMU-ATS, a semi-automated test bed for certification testing of RSDIMU software, was prepared and distributed. Software reliability estimation methods based on non-random sampling are being studied. The investigation of existing fault-tolerance models was continued and formulation of new models was initiated.

  12. A Controllable Earthquake Rupture Experiment on the Homestake Fault

    NASA Astrophysics Data System (ADS)

    Germanovich, L. N.; Murdoch, L. C.; Garagash, D.; Reches, Z.; Martel, S. J.; Gwaba, D.; Elsworth, D.; Lowell, R. P.; Onstott, T. C.

    2010-12-01

    Fault slip is typically simulated in the laboratory at the cm-to-dm scale. Laboratory results are then up-scaled by orders of magnitude to understand faulting and earthquake processes. We suggest an experimental approach to reactivate faults in-situ at scales of ~10-100 m, using thermal techniques and fluid injection to modify in-situ stresses and the fault strength to the point where the rock fails. Mines where the modified in-situ stresses are sufficient to drive faulting present an opportunity to conduct such experiments. During our recent field work in the former Homestake gold mine in the northern Black Hills, South Dakota, we found a large fault present on multiple mine levels. The fault is subparallel to the local foliation in the Poorman formation, a Proterozoic metamorphic rock deformed into regional-scale folds with axes plunging ~40° to the SSE. The fault extends at least 1.5 km along strike and dip, with a center ~1.5 km deep. It strikes ~320-340° N, dips ~45-70° NE, and is recognized by a distinct, ~0.3-0.5 m thick gouge that contains crushed host rock and black material that appears to be graphite. Although we could not find clear evidence for fault displacement, secondary features suggest that it is a normal fault. The size and distinct structure of this fault make it a promising target for in-situ experimentation on fault strength, hydrological properties, and slip nucleation processes. Most earthquakes are thought to be the result of unstable slip on existing faults. Activation of the Homestake fault in response to controlled fluid injection and thermally changing background stresses is likely to be localized on a crack-like patch. Slow patch propagation, moderated by the injection rate and the rate of change of the background stresses, may become unstable, leading to the nucleation of a small (dynamic) earthquake rupture. This controlled instability is intimately related to the dependence of the fault strength on the slip process and has been

  13. Fault healing promotes high-frequency earthquakes in laboratory experiments and on natural faults

    McLaskey, Gregory C.; Thomas, Amanda M.; Glaser, Steven D.; Nadeau, Robert M.

    2012-01-01

    Faults strengthen or heal with time in stationary contact, and this healing may be an essential ingredient for the generation of earthquakes. In the laboratory, healing is thought to be the result of thermally activated mechanisms that weld together micrometre-sized asperity contacts on the fault surface, but the relationship between laboratory measures of fault healing and the seismically observable properties of earthquakes is at present not well defined. Here we report on laboratory experiments and seismological observations that show how the spectral properties of earthquakes vary as a function of fault healing time. In the laboratory, we find that increased healing causes a disproportionately large amount of high-frequency seismic radiation to be produced during fault rupture. We observe a similar connection between earthquake spectra and recurrence time for repeating earthquake sequences on natural faults. Healing rates depend on pressure, temperature and mineralogy, so the connection between seismicity and healing may help to explain recent observations of large megathrust earthquakes which indicate that energetic, high-frequency seismic radiation originates from locations that are distinct from the geodetically inferred locations of large-amplitude fault slip.

  14. "An Earth-Shaking Experience"

    NASA Astrophysics Data System (ADS)

    Achenbach, Joel

    2005-03-01

    Last month's annual meeting of the American Geophysical Union in San Francisco drew an estimated 11,000 scientists, teachers, journalists and geophysics groupies. The schedule of talks could be found in a bound volume as thick as a phone book. You never see a geophysicist in ordinary life, but apparently the world is crawling with them. They came to talk about everything from the ozone layer to the big wad of iron at the center of the Earth. Also about other planets. And magnetic fields. Solar wind. Water on Mars. To be at this convention was to be immersed to the eyebrows in scientific knowledge. It is intellectually fashionable to fetishize the unknown, but at AGU, a person will get the opposite feeling-that science is a voracious, relentless and tireless enterprise, and that soon there may not remain on this Earth an unturned stone.

  15. Earth radiation budget experiment software development

    NASA Technical Reports Server (NTRS)

    Edmonds, W. L.

    1985-01-01

    Computer programming and analysis efforts were carried out in support of the Earth Radiation Budget Experiment (ERBE) at NASA/Langley. The Earth Radiation Budget Experiment is described as well as data acquisition, analysis and modeling support for the testing of ERBE instruments. Also included are descriptions of the programs developed to analyze, format and display data collected during testing of the various ERBE instruments. Listings of the major programs developed under this contract are located in an appendix.

  16. The San Andreas fault experiment. [gross tectonic plates relative velocity]

    NASA Technical Reports Server (NTRS)

    Smith, D. E.; Vonbun, F. O.

    1973-01-01

    A plan was developed during 1971 to determine gross tectonic plate motions along the San Andreas Fault System in California. Knowledge of the gross motion along the total fault system is an essential component in the construction of realistic deformation models of fault regions. Such mathematical models will be used in the future for studies which will eventually lead to prediction of major earthquakes. The main purpose of the experiment described is the determination of the relative velocity of the North American and Pacific Plates. This motion, being extremely small, cannot be measured directly, but it can be deduced from distance measurements between points on opposite sides of the plate boundary taken over a number of years.

  17. Second generation experiments in fault tolerant software

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1987-01-01

    The purpose of the Multi-Version Software (MVS) experiment is to obtain empirical measurements of the performance of multi-version systems. Twenty versions of a program were prepared under reasonably realistic development conditions from the same specifications. The overall structure of the testing environment for the MVS experiment and its status are described. A preliminary version of the control system implemented for the MVS experiment, which allows the experimenter to control the details of the testing, is described. The results of an empirical study of error detection using self-checks are also presented. The analysis of the checks revealed that there are great differences in the ability of individual programmers to design effective checks.
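
    As a small illustration of the kind of self-check studied (this example is assumed for illustration, not taken from the experiment), an effective check typically validates a result against an independent property of the specification rather than recomputing the result itself:

      import math

      def integer_sqrt(n):
          """Return floor(sqrt(n)) for n >= 0."""
          r = math.isqrt(n)
          # Self-check (executable assertion): r must satisfy r^2 <= n < (r+1)^2.
          assert r * r <= n < (r + 1) * (r + 1), f"self-check failed for n={n}"
          return r

      print(integer_sqrt(10**12 + 7))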

  18. High-Intensity Radiated Field Fault-Injection Experiment for a Fault-Tolerant Distributed Communication System

    NASA Technical Reports Server (NTRS)

    Yates, Amy M.; Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Gonzalez, Oscar R.; Gray, W. Steven

    2010-01-01

    Safety-critical distributed flight control systems require robustness in the presence of faults. In general, these systems consist of a number of input/output (I/O) and computation nodes interacting through a fault-tolerant data communication system. The communication system transfers sensor data and control commands and can handle most faults under typical operating conditions. However, the performance of the closed-loop system can be adversely affected as a result of operating in harsh environments. In particular, High-Intensity Radiated Field (HIRF) environments have the potential to cause random fault manifestations in individual avionic components and to generate simultaneous system-wide communication faults that overwhelm existing fault management mechanisms. This paper presents the design of an experiment conducted at the NASA Langley Research Center's HIRF Laboratory to statistically characterize the faults that a HIRF environment can trigger on a single node of a distributed flight control system.

  19. Absence of earthquake correlation with Earth tides: An indication of high preseismic fault stress rate

    Vidale, J.E.; Agnew, D.C.; Johnston, M.J.S.; Oppenheimer, D.H.

    1998-01-01

    Because the rate of stress change from the Earth tides exceeds that from tectonic stress accumulation, tidal triggering of earthquakes would be expected if the final hours of loading of the fault were at the tectonic rate and if rupture began soon after the achievement of a critical stress level. We analyze the tidal stresses and stress rates on the fault planes and at the times of 13,042 earthquakes which are so close to the San Andreas and Calaveras faults in California that we may take the fault plane to be known. We find that the stresses and stress rates from Earth tides at the times of earthquakes are distributed in the same way as tidal stresses and stress rates at random times. While the rate of earthquakes when the tidal stress promotes failure is 2% higher than when the stress does not, this difference in rate is not statistically significant. This lack of tidal triggering implies that preseismic stress rates in the nucleation zones of earthquakes are at least 0.15 bar/h just preceding seismic failure, much above the long-term tectonic stress rate of 10⁻⁴ bar/h.
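
    A back-of-the-envelope check of why a 2% rate difference is not significant for a catalog of this size can be sketched as follows (a simplified normal-approximation calculation under assumed conditions, not the authors' actual analysis; it treats the "promoting" and "non-promoting" regimes as covering equal time):

      import math

      n_events = 13042
      # With a 2% rate excess in the promoting half, the expected fraction of
      # events occurring there is 1.02 / 2.02.
      observed = round(n_events * 1.02 / 2.02)
      expected = n_events * 0.5
      sigma = math.sqrt(n_events * 0.25)         # binomial std. dev. under the null
      z = (observed - expected) / sigma
      p = 0.5 * math.erfc(z / math.sqrt(2))      # one-sided p-value
      print(f"excess = {observed - expected:.0f} events, z = {z:.2f}, p ≈ {p:.2f}")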

  20. Response of faults to climate-driven changes in ice and water volumes on Earth's surface.

    PubMed

    Hampel, Andrea; Hetzel, Ralf; Maniatis, Georgios

    2010-05-28

    Numerical models including one or more faults in a rheologically stratified lithosphere show that climate-induced variations in ice and water volumes on Earth's surface considerably affect the slip evolution of both thrust and normal faults. In general, the slip rate and hence the seismicity of a fault decrease during loading and increase during unloading. Here, we present several case studies to show that a postglacial slip rate increase occurred on faults worldwide in regions where ice caps and lakes decayed at the end of the last glaciation. Of note is that the postglacial amplification of seismicity was not restricted to the areas beneath the large Laurentide and Fennoscandian ice sheets but also occurred in regions affected by smaller ice caps or lakes, e.g. the Basin-and-Range Province. Our results have important consequences not only for the interpretation of palaeoseismological records from faults in these regions but also for the evaluation of future seismicity in regions currently affected by deglaciation, such as Greenland and Antarctica: shrinkage of the modern ice sheets owing to global warming may ultimately lead to an increase in earthquake frequency in these regions.

  1. The Earth isn't flat: The (large) influence of topography on geodetic fault slip imaging.

    NASA Astrophysics Data System (ADS)

    Thompson, T. B.; Meade, B. J.

    2017-12-01

    While earthquakes both occur near and generate steep topography, most geodetic slip inversions assume that the Earth's surface is flat. We have developed a new boundary element tool, Tectosaur, with the capability to study fault and earthquake problems including complex fault system geometries, topography, material property contrasts, and millions of elements. Using Tectosaur, we study the model error induced by neglecting topography in both idealized synthetic fault models and for the cases of the Mw=7.3 Landers and Mw=8.0 Wenchuan earthquakes. Near the steepest topography, we find that the use of flat-Earth dislocation models may induce errors of more than 100% in the inferred slip magnitude and rake. In particular, neglecting topographic effects leads to an inferred shallow slip deficit. Thus, we propose that the shallow slip deficit observed in several earthquakes may be an artefact resulting from the systematic use of elastic dislocation models assuming a flat Earth. Finally, using this study as an example, we emphasize the dangerous potential for forward model errors to be amplified by an order of magnitude in inverse problems.

  2. A second generation experiment in fault-tolerant software

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1986-01-01

    Information was collected on the efficacy of fault-tolerant software by conducting two large-scale controlled experiments. In the first, an empirical study of multi-version software (MVS) was conducted. The second experiment is an empirical evaluation of self testing as a method of error detection (STED). The purpose of the MVS experiment was to obtain empirical measurement of the performance of multi-version systems. Twenty versions of a program were prepared at four different sites under reasonably realistic development conditions from the same specifications. The purpose of the STED experiment was to obtain empirical measurements of the performance of assertions in error detection. Eight versions of a program were modified to include assertions at two different sites under controlled conditions. The overall structure of the testing environment for the MVS experiment and its status are described. Work to date in the STED experiment is also presented.

  3. Dynamic earthquake rupture simulation on nonplanar faults embedded in 3D geometrically complex, heterogeneous Earth models

    NASA Astrophysics Data System (ADS)

    Duru, K.; Dunham, E. M.; Bydlon, S. A.; Radhakrishnan, H.

    2014-12-01

    Dynamic propagation of shear ruptures on a frictional interface is a useful idealization of a natural earthquake. The conditions relating slip rate and fault shear strength are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated, far away from fault zones, to seismic stations and remote areas. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a numerical method for: (a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; (b) dynamic propagation of earthquake ruptures along rough faults; (c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts finite differences in space. The finite difference stencils are 6th order accurate in the interior and 3rd order accurate close to the boundaries. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates, we prove numerical stability. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme. We have performed extensive numerical experiments using a slip-weakening friction law on non-planar faults, including recent SCEC benchmark problems. We also show simulations on fractal faults revealing the complexity of rupture dynamics on rough faults. We are presently extending our method to rate-and-state friction laws and off-fault plasticity.
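
    A minimal sketch of a linear slip-weakening friction law of the kind used in the simulations described above (the parameter values here are illustrative assumptions, not those of the SCEC benchmarks): fault strength drops linearly from a static to a dynamic level over a critical slip distance and stays at the dynamic level thereafter.

      def slip_weakening_strength(slip, sigma_n, mu_s=0.6, mu_d=0.3, d_c=0.4):
          """Fault shear strength (Pa) for accumulated slip (m) under effective
          normal stress sigma_n (Pa), with static/dynamic friction coefficients
          mu_s/mu_d and critical slip-weakening distance d_c (m)."""
          mu = mu_s - (mu_s - mu_d) * min(slip, d_c) / d_c
          return mu * sigma_n

      for s in (0.0, 0.1, 0.2, 0.4, 1.0):
          tau = slip_weakening_strength(s, sigma_n=50e6)
          print(f"slip = {s:3.1f} m  strength = {tau / 1e6:5.1f} MPa")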

  4. Earth Observations: Experiences from Various Communication Strategies

    NASA Astrophysics Data System (ADS)

    Lilja Bye, Bente

    2015-04-01

    With Earth observations and the Group on Earth Observations as the common thread, a variety of communication strategies have been applied to showcase the use of Earth observations in geosciences such as climate change, natural hazards, hydrology and more. Based on the experiences from these communication strategies, using communication channels ranging from popular articles in established media, video production, and event-based material to social media, lessons have been learned with respect to the need for capacity, skills, networks, and resources. In general, it is not difficult to mobilize geoscientists willing to spend some time on outreach activities. Time for preparation and training is, however, scarce among scientists. In addition, resources to cover the various aspects of professional science outreach are far from abundant. Among the challenges is the connection between the scientific networks and media channels. Social media competence and capacity are also issues that need to be addressed more explicitly and efficiently. An overview of the experiences from several types of outreach activities will be given, along with some input on possible steps towards improved communication strategies. Steady development of science communication strategies, continuously integrating training of scientists in the use of new outreach tools such as web technology and social innovations for more efficient use of limited resources, will remain an issue for the scientific community.

  5. Transformation of fault slip modes in laboratory experiments

    NASA Astrophysics Data System (ADS)

    Martynov, Vasilii; Ostapchuk, Alexey; Markov, Vadim

    2017-04-01

    The slip mode of a crustal fault can vary for many reasons. It is well known that fault structure, fault gouge material, pore fluid, and other factors in many ways determine slip modes, from creep and slow slip events to mega-earthquakes [1-3]. Therefore, the possibility of transforming fault slip by external action is an urgent question. A popular and developing approach is the injection of fluid into the central part of a fault. The phenomenon of earthquakes induced by the pumping of water has been investigated on small and large scales [4, 5]. In this work, laboratory experiments were conducted to study the evolution of experimental fault slip when the properties of the interstitial fluid are changed. The scheme of the experiments is the classical slider-model set-up, in which a block under shear force slips along an interface. In our experiments a plexiglas block 8 x 8 x 3 cm in size was placed on a plexiglas base. The contact between the blocks was filled with a thin layer (about 3 mm thick) of a granular material. The normal load varied from 31 to 156 kPa. The shear load was applied through a spring with stiffness 60 kN/m, and the rate of spring deformation was 20 or 5 μm/s. Two parameters were recorded during the experiments: the shear force acting on the upper block (with an accuracy of 1 N) and its displacement relative to the base (with an accuracy of 0.1 μm). The gouge was composed of quartz sand (97.5%) and clay (2.5%). Different fluids with viscosities varying from 1 to 10³ mPa·s were used as the moistening agent. Different slip modes were simulated during the slider experiments. In our experiments a slip event is an act of instability manifested in an increase of slip velocity and a drop of the shear stress acting on the movable block. The amplitude of the shear stress drop and the peak velocity of the upper block were chosen as the characteristics of the slip mode. In the laboratory experiments, slip events of one type can be achieved either as regularly recurring (regular mode) or as random

  6. Anisotropy of Earth's D'' layer and stacking faults in the MgSiO3 post-perovskite phase.

    PubMed

    Oganov, Artem R; Martonák, Roman; Laio, Alessandro; Raiteri, Paolo; Parrinello, Michele

    2005-12-22

    The post-perovskite phase of (Mg,Fe)SiO3 is believed to be the main mineral phase of the Earth's lowermost mantle (the D'' layer). Its properties explain numerous geophysical observations associated with this layer, for example the D'' discontinuity, its topography and seismic anisotropy within the layer. Here we use a novel simulation technique, first-principles metadynamics, to identify a family of low-energy polytypic stacking-fault structures intermediate between the perovskite and post-perovskite phases. Metadynamics trajectories identify plane sliding involving the formation of stacking faults as the most favourable pathway for the phase transition, and as a likely mechanism for plastic deformation of perovskite and post-perovskite. In particular, the predicted slip planes are {010} for perovskite (consistent with experiment) and {110} for post-perovskite (in contrast to the previously expected {010} slip planes). Dominant slip planes define the lattice preferred orientation and elastic anisotropy of the texture. The {110} slip planes in post-perovskite require a much smaller degree of lattice preferred orientation to explain geophysical observations of shear-wave anisotropy in the D'' layer.

  7. Earthquake Prediction in Large-scale Faulting Experiments

    NASA Astrophysics Data System (ADS)

    Junger, J.; Kilgore, B.; Beeler, N.; Dieterich, J.

    2004-12-01

    nucleation in these experiments is consistent with observations and theory of Dieterich and Kilgore (1996). Precursory strains can be detected typically after 50% of the total loading time. The Dieterich and Kilgore approach implies an alternative method of earthquake prediction based on comparing real-time strain monitoring with previous precursory strain records or with physically-based models of accelerating slip. Near failure, time to failure t is approximately inversely proportional to precursory slip rate V. Based on a least squares fit to accelerating slip velocity from ten or more events, the standard deviation of the residual between predicted and observed log t is typically 0.14. Scaling these results to natural recurrence suggests that a year prior to an earthquake, failure time can be predicted from measured fault slip rate with a typical error of 140 days, and a day prior to the earthquake with a typical error of 9 hours. However, such predictions require detecting aseismic nucleating strains, which have not yet been found in the field, and on distinguishing earthquake precursors from other strain transients. There is some field evidence of precursory seismic strain for large earthquakes (Bufe and Varnes, 1993) which may be related to our observations. In instances where precursory activity is spatially variable during the interseismic period, as in our experiments, distinguishing precursory activity might be best accomplished with deep arrays of near fault instruments and pattern recognition algorithms such as principal component analysis (Rundle et al., 2000).
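
    The prediction scheme described above can be sketched in a few lines of Python (the slip-rate and failure-time values below are synthetic placeholders, not the experimental data): if time to failure t is roughly inversely proportional to precursory slip rate V, a least-squares fit of log t against log V from past events maps a currently measured slip rate to an estimated failure time, and the scatter of the residuals gives the kind of prediction error quoted above.

      import numpy as np

      # Hypothetical precursory records from earlier stick-slip events:
      V = np.array([0.02, 0.05, 0.1, 0.3, 1.0, 3.0])    # slip rate (micron/s)
      t = np.array([55.0, 21.0, 11.0, 3.4, 1.1, 0.35])  # time to failure (s)

      # Fit log10(t) = a + b*log10(V); b near -1 corresponds to t proportional to 1/V.
      b, a = np.polyfit(np.log10(V), np.log10(t), 1)
      resid = np.log10(t) - (a + b * np.log10(V))
      print(f"log10(t) = {a:.2f} + {b:.2f}*log10(V), residual std = {resid.std(ddof=2):.2f}")

      # Predict failure time from a currently measured precursory slip rate.
      V_now = 0.5
      print(f"predicted time to failure at V = {V_now}: {10 ** (a + b * np.log10(V_now)):.1f} s")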

  8. Earth Radiation Budget Experiment (ERBE) validation

    NASA Technical Reports Server (NTRS)

    Barkstrom, Bruce R.; Harrison, Edwin F.; Smith, G. Louis; Green, Richard N.; Kibler, James F.; Cess, Robert D.

    1990-01-01

    During the past 4 years, data from the Earth Radiation Budget Experiment (ERBE) have been undergoing detailed examination. There is no direct source of ground truth for the radiation budget. Thus, this validation effort has had to rely heavily upon intercomparisons between different types of measurements. The ERBE Science Team chose 10 measures of agreement as validation criteria. Late in August 1988, the team agreed that the data met these conditions. As a result, the final, monthly averaged data products are being archived. These products, their validation, and some results for January 1986 are described. Information is provided on obtaining the data from the archive.

  9. Initiation process of a thrust fault revealed by analog experiments

    NASA Astrophysics Data System (ADS)

    Yamada, Yasuhiro; Dotare, Tatsuya; Adam, Juergen; Hori, Takane; Sakaguchi, Hide

    2016-04-01

    We conducted 2D (cross-sectional) analog experiments with dry sand, using a high-resolution digital image correlation (DIC) technique to reveal the initiation process of a thrust fault in detail, and identified a number of "weak shear bands" and minor uplift prior to thrust initiation. The observations suggest that the process can be divided into three stages. Stage 1 is characterized by a series of abrupt and short-lived weak shear bands at the location where the thrust will be generated later. Before initiation of the fault, the area that will become the hanging wall starts to uplift. Stage 2 is defined by the generation of the new thrust and its active displacement. The location of the new thrust seems to be constrained by its associated back-thrust, produced at the foot of the surface slope (by the previous thrust). The activity of the previous thrust turns to zero once the new thrust is generated, but the timing of these two events is not the same. Stage 3 is characterized by a constant displacement along the (new) thrust. Similar minor shear bands can be seen in the toe area of the Nankai accretionary prism, SW Japan, and we can correlate the along-strike variations in seismic profiles to the model results that show the characteristic features of each thrust development stage.

  10. Semi-brittle flow of granitoid fault rocks in experiments

    NASA Astrophysics Data System (ADS)

    Pec, Matej; Stünitz, Holger; Heilbronner, Renée; Drury, Martyn

    2016-03-01

    Field studies and seismic data show that semi-brittle flow of fault rocks probably is the dominant deformation mechanism at the base of the seismogenic zone at the so-called frictional-viscous transition. To understand the physical and chemical processes accommodating semi-brittle flow, we have performed an experimental study on synthetic granitoid fault rocks exploring a broad parameter space (temperature, T = 300, 400, 500, and 600°C; confining pressure, Pc ≈ 300, 500, 1000, and 1500 MPa; shear strain rate, γ̇ ≈ 10⁻³, 10⁻⁴, 10⁻⁵, and 10⁻⁶ s⁻¹; finite shear strains, γ = 0-5). The experiments have been carried out using a granular material with grain size smaller than 200 µm with a little H2O added (0.2 wt %). Only two experiments (performed at the fastest strain rates and lowest temperatures) failed abruptly right after reaching peak strength (τ ~ 1400 MPa). All other samples reach high shear stresses (τ ~ 570-1600 MPa), then weaken slightly (by Δτ ~ 10-190 MPa) and continue to deform at a more or less steady state stress level. A clear temperature dependence and a weak strain rate dependence of the peak as well as steady state stress levels are observed. In order to express this relationship, the strain rate-stress sensitivity has been fit with a stress exponent, assuming γ̇ ∝ τⁿ, which yields high stress exponents (n ≈ 10-140) that decrease with increasing temperature. The microstructures show widespread comminution, strain partitioning, and localization into slip zones. The slip zones contain at first nanocrystalline and partly amorphous material. Later, during continued deformation, fully amorphous material develops in some of the slip zones. Despite the mechanical steady state conditions, the fabrics in the slip zones and outside continue to evolve and do not reach a steady state microstructure below γ = 5. Within the slip zones, the fault rock material progressively transforms from a crystalline solid to an amorphous material. We
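
    A hedged sketch of the stress-exponent fit mentioned above (the stress and strain-rate values are invented for illustration, not the measured data): assuming a power law of the form strain rate proportional to stress raised to the power n at fixed temperature, n is the slope of log(strain rate) versus log(steady-state stress).

      import numpy as np

      # Hypothetical steady-state shear stresses (MPa) at four imposed shear strain rates (1/s):
      strain_rate = np.array([1e-6, 1e-5, 1e-4, 1e-3])
      stress = np.array([900.0, 970.0, 1040.0, 1120.0])

      # Slope of log(strain rate) vs log(stress) gives the apparent stress exponent n.
      n, _ = np.polyfit(np.log10(stress), np.log10(strain_rate), 1)
      print(f"apparent stress exponent n ≈ {n:.0f}")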

  11. Initiation of a thrust fault revealed by analog experiments

    NASA Astrophysics Data System (ADS)

    Dotare, Tatsuya; Yamada, Yasuhiro; Adam, Juergen; Hori, Takane; Sakaguchi, Hide

    2016-08-01

    To reveal in detail the process of initiation of a thrust fault, we conducted analog experiments with dry quartz sand using a high-resolution digital image correlation technique to identify minor shear-strain patterns for every 27 μm of shortening (with an absolute displacement accuracy of 0.5 μm). The experimental results identified a number of "weak shear bands" and minor uplift prior to the initiation of a thrust in cross-section view. The observations suggest that the process is closely linked to the activity of an adjacent existing thrust, and can be divided into three stages. Stage 1 is characterized by a series of abrupt and short-lived weak shear bands at the location where the thrust will subsequently be generated. The area that will eventually be the hanging wall starts to uplift before the fault forms. The shear strain along the existing thrust decreases linearly during this stage. Stage 2 is defined by the generation of the new thrust and active displacements along it, identified by the shear strain along the thrust. The location of the new thrust may be constrained by its back-thrust, generally produced at the foot of the surface slope. The activity of the existing thrust falls to zero once the new thrust is generated, although these two events are not synchronous. Stage 3 of the thrust is characterized by a constant displacement that corresponds to the shortening applied to the model. Similar minor shear bands have been reported in the toe area of the Nankai accretionary prism, SW Japan. By comparing several transects across this subduction margin, we can classify the lateral variations in the structural geometry into the same stages of deformation identified in our experiments. Our findings may also be applied to the evaluation of fracture distributions in thrust belts during unconventional hydrocarbon exploration and production.

  12. Earth to Orbit Beamed Energy Experiment

    NASA Technical Reports Server (NTRS)

    Johnson, Les; Montgomery, Edward E.

    2017-01-01

    As a means of primary propulsion, beamed energy propulsion offers the benefit of offloading much of the propulsion system mass from the vehicle, increasing its potential performance and freeing it from the constraints of the rocket equation. For interstellar missions, beamed energy propulsion is arguably the most viable in the near- to mid-term. A near-term demonstration showing the feasibility of beamed energy propulsion is necessary and, fortunately, feasible using existing technologies. Key enabling technologies are large area, low mass spacecraft and efficient and safe high power laser systems capable of long distance propagation. NASA is currently developing the spacecraft technology through the Near Earth Asteroid Scout solar sail mission and has signed agreements with the Planetary Society to study the feasibility of precursor laser propulsion experiments using their LightSail-2 solar sail spacecraft. The capabilities of Space Situational Awareness assets and the advanced analytical tools available for fine resolution orbit determination now make it possible to investigate the practicalities of an Earth-to-orbit Beamed Energy eXperiment (EBEX) - a demonstration at delivered power levels that only illuminate a spacecraft without causing damage to it. The degree to which this can be expected to produce a measurable change in the orbit of a low ballistic coefficient spacecraft is investigated. Key system characteristics and estimated performance are derived for a near-term mission opportunity involving the LightSail-2 spacecraft and laser power levels that are modest in comparison to those proposed previously. While the technology demonstrated by such an experiment is not sufficient to enable an interstellar precursor mission, it would, if approved, be the next step toward that goal.

  13. Sorption of the Rare Earth Elements and Yttrium (REE-Y) in calcite: the mechanism of a new effective tool in identifying paleoearthquakes on carbonate faults

    NASA Astrophysics Data System (ADS)

    Moraetis, Daniel; Mouslopoulou, Vasiliki; Pratikakis, Alexandros

    2015-04-01

    A new tool for identifying paleoearthquakes on carbonate faults has been successfully tested on two carbonate faults in southern Europe (the Magnola Fault in Italy and the Spili Fault in Greece): the Rare Earth Element and Yttrium (REE-Y) method (Manighetti et al., 2010; Mouslopoulou et al., 2011). The method is based on the property of the calcite in limestone scarps to absorb the REE and Y from the soil during its residence beneath the ground surface (e.g. before its exhumation due to earthquakes). Although the method is established, the details of the enrichment mechanism are poorly investigated. Here we use published data together with new information from pot experiments to shed light on the sorption mechanism and the time effectiveness of the REE-Y method. Data from the Magnola and Spili faults show that the average chemical enrichment in REE-Y is ~45%, while the denudation rate of the enriched zones is ~1% higher every 400 years due to exposure of the fault scarp to weathering. They also show that the chemical enrichment is significant even for short periods of residence time (e.g., ~100 years). To better understand the enrichment mechanism, we performed a series of pot experiments in which carbonate tiles extracted from the Spili Fault were buried in soil collected from the hanging wall of the same fault. We irrigated the pots with artificial rain equal to 5 years of rainfall in Crete, at temperatures of 15°C and 25°C. Following this, we performed sorption isotherm, kinetic and pH-edge tests for the europium (Eu), cerium (Ce) and ytterbium (Yb) that occur in the calcite minerals. The processes of adsorption and precipitation in the batch experiments are simulated with the Mineql software. The pot experiments indicate incorporation of the REE and Y into the surface of the carbonate tile that is in contact with the soil. The pH of the leached solution during the rain application ranged from 7.6 to 8.3. Nutrient release like Ca is higher in the leached

  14. A second generation experiment in fault-tolerant software

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1986-01-01

    The primary goal was to determine whether the application of fault tolerance to software increases its reliability if the cost of production is the same as for an equivalent non-fault-tolerant version derived from the same requirements specification. Software development protocols are discussed. The feasibility of adapting the technique of N-fold Modular Redundancy with majority voting to software design fault tolerance was studied.

  15. Work Optimization Predicts Accretionary Faulting: An Integration of Physical and Numerical Experiments

    NASA Astrophysics Data System (ADS)

    McBeck, Jessica A.; Cooke, Michele L.; Herbert, Justin W.; Maillot, Bertrand; Souloumiac, Pauline

    2017-09-01

    We employ work optimization to predict the geometry of frontal thrusts at two stages of an evolving physical accretion experiment. Faults that produce the largest gains in efficiency, or change in external work per new fault area, ΔWext/ΔA, are considered most likely to develop. The predicted thrust geometry matches within 1 mm of the observed position and within a few degrees of the observed fault dip, for both the first forethrust and backthrust when the observed forethrust is active. The positions of the second backthrust and forethrust that produce >90% of the maximum ΔWext/ΔA also overlap the observed thrusts. The work optimal fault dips are within a few degrees of the fault dips that maximize the average Coulomb stress. Slip gradients along the detachment produce local elevated shear stresses and high strain energy density regions that promote thrust initiation near the detachment. The mechanical efficiency (Wext) of the system decreases at each of the two simulated stages of faulting and resembles the evolution of experimental force. The higher ΔWext/ΔA due to the development of the first pair relative to the second pair indicates that the development of new thrusts may lead to diminishing efficiency gains as the wedge evolves. The numerical estimates of work consumed by fault propagation overlap the range calculated from experimental force data and crustal faults. The integration of numerical and physical experiments provides a powerful approach that demonstrates the utility of work optimization to predict the development of faults.
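
    The selection criterion described above can be written down compactly (a schematic sketch; the external-work values below are toy stand-ins, since in the study they come from mechanical models of each candidate wedge geometry): among candidate fault geometries, the one that maximizes the efficiency gain, i.e. the change in external work per unit of new fault area, is predicted to develop.

      # Pick the candidate thrust that maximizes delta_Wext / delta_A.
      def efficiency_gain(w_ext_without, w_ext_with, new_fault_area):
          """Change in external work per unit of new fault area."""
          return (w_ext_without - w_ext_with) / new_fault_area

      # Toy candidate list: (label, Wext without fault, Wext with fault, new fault area).
      # In the study these numbers would come from boundary element models of the wedge.
      candidates = [
          ("dip 20 deg, x = 1.0 cm", 100.0, 96.0, 0.020),
          ("dip 26 deg, x = 1.2 cm", 100.0, 95.2, 0.021),
          ("dip 32 deg, x = 1.4 cm", 100.0, 95.8, 0.023),
      ]

      best = max(candidates, key=lambda c: efficiency_gain(c[1], c[2], c[3]))
      print("most efficient candidate:", best[0])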

  16. From experiment to design -- Fault characterization and detection in parallel computer systems using computational accelerators

    NASA Astrophysics Data System (ADS)

    Yim, Keun Soo

    This dissertation summarizes experimental validation and co-design studies conducted to optimize the fault detection capabilities and overheads in hybrid computer systems (e.g., using CPUs and Graphics Processing Units, or GPUs), and consequently to improve the scalability of parallel computer systems using computational accelerators. The experimental validation studies were conducted to help us understand the failure characteristics of CPU-GPU hybrid computer systems under various types of hardware faults. The main characterization targets were faults that are difficult to detect and/or recover from, e.g., faults that cause long latency failures (Ch. 3), faults in dynamically allocated resources (Ch. 4), faults in GPUs (Ch. 5), faults in MPI programs (Ch. 6), and microarchitecture-level faults with specific timing features (Ch. 7). The co-design studies were based on the characterization results. One of the co-designed systems has a set of source-to-source translators that customize and strategically place error detectors in the source code of target GPU programs (Ch. 5). Another co-designed system uses an extension card to learn the normal behavioral and semantic execution patterns of message-passing processes executing on CPUs, and to detect abnormal behaviors of those parallel processes (Ch. 6). The third co-designed system is a co-processor that has a set of new instructions in order to support software-implemented fault detection techniques (Ch. 7). The work described in this dissertation gains more importance because heterogeneous processors have become an essential component of state-of-the-art supercomputers. GPUs were used in three of the five fastest supercomputers that were operating in 2011. Our work included comprehensive fault characterization studies in CPU-GPU hybrid computers. In CPUs, we monitored the target systems for a long period of time after injecting faults (a temporally comprehensive experiment), and injected faults into various types of

  17. Patterns in Crew-Initiated Photography of Earth from ISS - Is Earth Observation a Salutogenic Experience?

    NASA Technical Reports Server (NTRS)

    Robinson, Julie A.; Slack, Kelley; Olson, V.; Trenchard, M.; Willis, K.; Baskin, P.

    2006-01-01

    This viewgraph presentation asks the question "Is the observation of Earth from the ISS a positive (salutogenic) experience for crew members?" All images are distributed to the public via the "Gateway to Astronaut Photography of Earth" at http://eol.jsc.nasa.gov. The objectives of the study are to (1) mine the dataset of Earth observation photography: what can it tell us about the importance of viewing the Earth as a positive experience for the crewmembers? (2) quantify the extent to which photography was self-initiated (not requested by scientists); and (3) identify patterns in photography activities versus scientifically requested photography.

  18. Nonlinear waves in earth crust faults: application to regular and slow earthquakes

    NASA Astrophysics Data System (ADS)

    Gershenzon, Naum; Bambakidis, Gust

    2015-04-01

    The genesis, development and cessation of regular earthquakes continue to be major problems of modern geophysics. How are earthquakes initiated? What factors determine the rupture velocity, slip velocity, rise time and geometry of rupture? How do accumulated stresses relax after the main shock? These and other questions still need to be answered. In addition, slow slip events have attracted much attention as an additional source for monitoring fault dynamics. Recently discovered phenomena such as deep non-volcanic tremor (NVT), low frequency earthquakes (LFE), very low frequency earthquakes (VLF), and episodic tremor and slip (ETS) have enhanced and complemented our knowledge of fault dynamics. At the same time, these phenomena give rise to new questions about their genesis, properties and relation to regular earthquakes. We have developed a model of macroscopic dry friction which efficiently describes laboratory frictional experiments [1], basic properties of regular earthquakes including post-seismic stress relaxation [3], the occurrence of ambient and triggered NVT [4], and ETS events [5, 6]. Here we will discuss the basics of the model and its geophysical applications. References [1] Gershenzon N.I. & G. Bambakidis (2013) Tribology International, 61, 11-18, http://dx.doi.org/10.1016/j.triboint.2012.11.025 [2] Gershenzon, N.I., G. Bambakidis and T. Skinner (2014) Lubricants 2014, 2, 1-x manuscripts; doi:10.3390/lubricants20x000x; arXiv:1411.1030v2 [3] Gershenzon N.I., Bykov V. G. and Bambakidis G., (2009) Physical Review E 79, 056601 [4] Gershenzon, N. I, G. Bambakidis, (2014a), Bull. Seismol. Soc. Am., 104, 4, doi: 10.1785/0120130234 [5] Gershenzon, N. I., G. Bambakidis, E. Hauser, A. Ghosh, and K. C. Creager (2011), Geophys. Res. Lett., 38, L01309, doi:10.1029/2010GL045225. [6] Gershenzon, N.I. and G. Bambakidis (2014) Bull. Seismol. Soc. Am., (in press); arXiv:1411.1020

  19. Skylab experiments. Volume 2: Remote sensing of earth resources

    NASA Technical Reports Server (NTRS)

    1973-01-01

    This volume covers the broad area of earth resources in which Skylab experiments will be performed. A brief description of the Skylab program, its objectives, and vehicles is included. Section 1 introduces the concept and historical significance of remote sensing, and discusses the major scientific considerations involved in remotely sensing the earth's resources. Sections 2 through 6 provide a description of the individual earth resource sensors and experiments to be performed. Each description includes a discussion of the experiment background and scientific objectives, the equipment involved, and a discussion of significant experiment performance areas.

  20. Frictional melting experiments investigate coseismic behaviour of pseudotachylyte-bearing faults in the Outer Hebrides Fault Zone, UK.

    NASA Astrophysics Data System (ADS)

    Campbell, L.; De Paola, N.; Nielsen, S. B.; Holdsworth, R.; Lloyd, G. E. E.; Phillips, R. J.; Walcott, R.

    2015-12-01

    Recent experimental studies, performed at seismic slip rates (≥ 1 m/s), suggest that the friction coefficient of seismic faults is significantly lower than at sub-seismic (< 1 mm/s) speeds. Microstructural observations, integrated with theoretical studies, suggest that the weakening of seismic faults could be due to a range of thermally-activated mechanisms (e.g. gel, nanopowder and melt lubrication, thermal pressurization, viscous flow), triggered by frictional heating in the slip zone. The presence of pseudotachylyte within both exhumed fault zones and experimental slip zones in crystalline rocks suggests that melt lubrication plays a key role in controlling dynamic weakening during rupture propagation. The Outer Hebrides Fault Zone (OHFZ), UK, contains abundant pseudotachylyte along faults cutting varying gneissic lithologies. Our field observations suggest that the mineralogy of the protolith determines the volume, composition and viscosity of the frictional melt, which then affects the coseismic weakening behaviour of the fault and has important implications for the magnitudes and distribution of stress drops during slip episodes. High velocity friction experiments at 18 MPa axial load, 1.3 m s-1 and up to 10 m slip were run on quartzo-feldspathic, metabasic and mylonitic samples taken from the OHFZ in an attempt to replicate its coseismic frictional behaviour. These were configured in cores of a single lithology, or in mixed cores with two rock types juxtaposed. All lithologies produce a general trend of frictional evolution, where an initial peak followed by transient weakening precedes a second peak which then decays to a steady state. Metabasic and felsic single-lithology samples both produce sharper frictional peaks, at values of μ = 0.19 and μ = 0.37 respectively, than the broader and smaller (μ = 0.15) peak produced by a mixed basic-felsic sample. In addition, both single-lithology peaks occur within 0.2 m slip, whereas the combined-lithology sample displays a

  1. Celebrating the Earth: Stories, Experiences, and Activities.

    ERIC Educational Resources Information Center

    Livo, Norma J.

    Young learners are invited to learn about the natural world through engaging activities that encourage the observation, exploration, and appreciation of nature. Weaving together a stimulating tapestry of folktales, personal narratives, and hands-on activities, this book teaches children about the earth and all of its creatures--birds, plants,…

  2. Apollo experience report: Earth landing system

    NASA Technical Reports Server (NTRS)

    West, R. B.

    1973-01-01

    A brief discussion of the development of the Apollo earth landing system and a functional description of the system are presented in this report. The more significant problems that were encountered during the program, the solutions, and, in general, the knowledge that was gained are discussed in detail. Two appendixes presenting a detailed description of the various system components and a summary of the development and the qualification test programs are included.

  3. Skylab earth resources experiment package /EREP/ - Sea surface topography experiment

    NASA Technical Reports Server (NTRS)

    Vonbun, F. O.; Marsh, J. G.; Mcgoogan, J. T.; Leitao, C. D.; Vincent, S.; Wells, W. T.

    1976-01-01

    The S-193 Skylab radar altimeter was operated in a round-the-world pass on Jan. 31, 1974. The main purpose of this experiment was to test and 'measure' the variation of the sea surface topography using the Goddard Space Flight Center (GSFC) geoid model as a reference. This model is based upon 430,000 satellite and 25,000 ground gravity observations. Variations of the sea surface on the order of -40 to +60 m were observed along this pass. The 'computed' and 'measured' sea surfaces have an rms agreement on the order of 7 m. This is quite satisfactory, considering that this was the first time the sea surface has been observed directly over a distance of nearly 35,000 km and compared to a computed model. The Skylab orbit for this global pass was computed using the Goddard Earth Model (GEM 6) and S-band radar tracking data, resulting in an orbital height uncertainty of better than 5 m over one orbital period.

  4. Paper 58714 - Exploring activated fault hydromechanical processes from semi-controlled field injection experiments

    NASA Astrophysics Data System (ADS)

    Guglielmi, Y.; Cappa, F.; Nussbaum, C.

    2015-12-01

    The appreciation of the sensitivity of fractures and fault zones to fluid-induced deformations in the subsurface is a key question in predicting the reservoir/caprock system integrity around fluid manipulations, with applications to reservoir leakage and induced seismicity. It is also a question of interest in understanding earthquake sources and, recently, the hydraulic behavior of clay faults under potential reactivation around underground nuclear repository sites. Fault and fracture dynamics studies face two key problems: (1) the up-scaling of laboratory-determined properties and constitutive laws to the reservoir scale, which is not straightforward when considering fault and fracture heterogeneities, and (2) the difficulty of controlling both the induced seismicity and the stimulated zone geometry when a fault is reactivated. Using instruments dedicated to measuring coupled pore pressures and deformations downhole, we conducted academic field experiments to characterize fracture and fault zone hydromechanical properties as a function of their multi-scale architecture, and to monitor their dynamic behavior during the earthquake nucleation process. We show experiments on reservoir or cover rock analogues in underground research laboratories where experimental conditions can be optimized. A key result of these experiments is to highlight how important aseismic fault activation is compared to the induced seismicity. We show that about 80% of the fault kinematic moment is aseismic and discuss the complex associated variations of the fault friction coefficient. We identify that the slip stability and the slip velocity are mainly controlled by the rate of the permeability/porosity increase, and discuss the conditions for slip nucleation leading to seismic instability.

  5. Designing Fault-Injection Experiments for the Reliability of Embedded Systems

    NASA Technical Reports Server (NTRS)

    White, Allan L.

    2012-01-01

    This paper considers the long-standing problem of conducting fault-injection experiments to establish the ultra-reliability of embedded systems. There have been extensive efforts in fault injection, and this paper offers a partial summary of the efforts, but these previous efforts have focused on realism and efficiency. Fault injections have been used to examine diagnostics and to test algorithms, but the literature does not contain any framework that says how to conduct fault-injection experiments to establish ultra-reliability. A solution to this problem integrates field-data, arguments-from-design, and fault-injection into a seamless whole. The solution in this paper is to derive a model reduction theorem for a class of semi-Markov models suitable for describing ultra-reliable embedded systems. The derivation shows that a tight upper bound on the probability of system failure can be obtained using only the means of system-recovery times, thus reducing the experimental effort to estimating a reasonable number of easily-observed parameters. The paper includes an example of a system subject to both permanent and transient faults. There is a discussion of integrating fault-injection with field-data and arguments-from-design.
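
    The central quantitative point above is that a failure-probability bound needs only the mean recovery times. As an illustration only (not the paper's semi-Markov model reduction theorem), the sketch below Monte Carlo-estimates the probability that a new fault arrives while recovery from the previous one is still in progress; the fault rate, mean recovery time, and mission length are hypothetical and deliberately exaggerated so the simulation resolves a nonzero probability.

```python
import random

def p_coincident_fault(fault_rate, mean_recovery, mission_time,
                       trials=200_000, seed=1):
    """Monte Carlo estimate of the probability that, during the mission, a new
    fault arrives while the system is still recovering from the previous one
    (taken here as the 'system failure' event). Exponential arrivals/recoveries."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        t = rng.expovariate(fault_rate)                  # time of the first fault
        while t < mission_time:
            recovery = rng.expovariate(1.0 / mean_recovery)
            gap = rng.expovariate(fault_rate)            # wait until the next fault
            if gap < recovery:                           # overlap -> system failure
                failures += 1
                break
            t += gap                                     # move on to the next fault
    return failures / trials

# Hypothetical, pessimistic numbers (rates per hour, times in hours):
p = p_coincident_fault(fault_rate=0.05, mean_recovery=0.2, mission_time=100.0)
print(f"estimated failure probability: {p:.4f}")
# Rough analytic check: 1 - exp(-rate*T * rate/(rate + 1/mean_recovery)) ~ 0.048
```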

  6. The Earth is flat when personally significant experiences with the sphericity of the Earth are absent.

    PubMed

    Carbon, Claus-Christian

    2010-07-01

    Participants with personal and without personal experiences with the Earth as a sphere estimated large-scale distances between six cities located on different continents. Cognitive distances were submitted to a specific multidimensional scaling algorithm in the 3D Euclidean space with the constraint that all cities had to lie on the same sphere. A simulation was run that calculated respective 3D configurations of the city positions for a wide range of radii of the proposed sphere. People who had personally experienced the Earth as a sphere, at least once in their lifetime, showed a clear optimal solution of the multidimensional scaling (MDS) routine with a mean radius deviating only 8% from the actual radius of the Earth. In contrast, the calculated configurations for people without any personal experience with the Earth as a sphere were compatible with a cognitive concept of a flat Earth. 2010 Elsevier B.V. All rights reserved.
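
    The constrained-MDS step can be sketched as a search over candidate sphere radii: for each radius, city positions on the sphere are optimized so that great-circle distances best match the cognitive distances, and the radius with the lowest residual stress wins. The code below is a minimal illustration with a small, made-up distance matrix (the study used six real cities); it is not the specific MDS routine used by the author.

```python
import numpy as np
from scipy.optimize import minimize

def sphere_mds_stress(dist, radius, n_restarts=3, seed=0):
    """Residual stress of fitting the distance matrix with points constrained
    to a sphere of the given radius (great-circle distances)."""
    n = dist.shape[0]
    rng = np.random.default_rng(seed)

    def stress(params):
        lat, lon = params[:n], params[n:]
        total = 0.0
        for i in range(n):
            for j in range(i + 1, n):
                c = (np.sin(lat[i]) * np.sin(lat[j])
                     + np.cos(lat[i]) * np.cos(lat[j]) * np.cos(lon[i] - lon[j]))
                d_ij = radius * np.arccos(np.clip(c, -1.0, 1.0))
                total += (d_ij - dist[i, j]) ** 2
        return total

    best = np.inf
    for _ in range(n_restarts):
        x0 = rng.uniform(-1.0, 1.0, size=2 * n)    # latitudes/longitudes in radians
        res = minimize(stress, x0, method="Nelder-Mead",
                       options={"maxiter": 5000, "fatol": 1e-6})
        best = min(best, res.fun)
    return best

# Hypothetical cognitive distances (km) between four cities (symmetric matrix).
D = np.array([[0, 6000, 9000, 11000],
              [6000, 0, 7000, 10000],
              [9000, 7000, 0, 5000],
              [11000, 10000, 5000, 0]], dtype=float)

radii = np.linspace(3000, 10000, 15)               # candidate Earth radii (km)
stresses = [sphere_mds_stress(D, r) for r in radii]
print("best-fitting radius:", radii[int(np.argmin(stresses))], "km")
```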

  7. Magneto-Optical Experiments on Rare Earth Garnet Films.

    ERIC Educational Resources Information Center

    Tanner, B. K.

    1980-01-01

    Describes experiments in which inexpensive or standard laboratory equipment is used to measure several macroscopic magnetic properties of thin rare earth garnet films used in the manufacture of magnetic bubble devices. (Author/CS)

  8. Lessons Learned in the Livingstone 2 on Earth Observing One Flight Experiment

    NASA Technical Reports Server (NTRS)

    Hayden, Sandra C.; Sweet, Adam J.; Shulman, Seth

    2005-01-01

    The Livingstone 2 (L2) model-based diagnosis software is a reusable diagnostic tool for monitoring complex systems. In 2004, L2 was integrated with the JPL Autonomous Sciencecraft Experiment (ASE) and deployed on-board Goddard's Earth Observing One (EO-1) remote sensing satellite, to monitor and diagnose the EO-1 space science instruments and imaging sequence. This paper reports on lessons learned from this flight experiment. The goals for this experiment, including validation of minimum success criteria and of a series of diagnostic scenarios, have all been successfully met. Long-term operations in space are on-going, as a test of the maturity of the system, with L2 performance remaining flawless. L2 has demonstrated the ability to track the state of the system during nominal operations, detect simulated abnormalities in operations and isolate failures to their root cause fault. Specific advances demonstrated include diagnosis of ambiguity groups rather than a single fault candidate; hypothesis revision given new sensor evidence about the state of the system; and the capability to check for faults in a dynamic system without having to wait until the system is quiescent. The major benefits of this advanced health management technology are to increase mission duration and reliability through intelligent fault protection, and robust autonomous operations with reduced dependency on supervisory operations from Earth. The workload for operators will be reduced by telemetry of processed state-of-health information rather than raw data. The long-term vision is that of making diagnosis available to the onboard planner or executive, allowing autonomy software to re-plan in order to work around known component failures. For a system that is expected to evolve substantially over its lifetime, as for the International Space Station, the model-based approach has definite advantages over rule-based expert systems and limit-checking fault protection systems, as these do not

  9. Pore Fluid Pressure Development in Compacting Fault Gouge in Theory, Experiments, and Nature

    NASA Astrophysics Data System (ADS)

    Faulkner, D. R.; Sanchez-Roa, C.; Boulton, C.; den Hartog, S. A. M.

    2018-01-01

    The strength of fault zones is strongly dependent on pore fluid pressures within them. Moreover, transient changes in pore fluid pressure can lead to a variety of slip behavior from creep to unstable slip manifested as earthquakes or slow slip events. The frictional properties of low-permeability fault gouge in nature and experiment can be affected by pore fluid pressure development through compaction within the gouge layer, even when the boundaries are drained. Here the conditions under which significant pore fluid pressures develop are analyzed analytically, numerically, and experimentally. Friction experiments on low-permeability fault gouge at different sliding velocities show progressive weakening as slip rate is increased, indicating that faster experiments are incapable of draining the pore fluid pressure produced by compaction. Experiments are used to constrain the evolution of the permeability and pore volume needed for numerical modeling of pore fluid pressure build up. The numerical results are in good agreement with the experiments, indicating that the principal physical processes have been considered. The model is used to analyze the effect of pore fluid pressure transients on the determination of the frictional properties, illustrating that intrinsic velocity-strengthening behavior can appear velocity weakening if pore fluid pressure is not given sufficient time to equilibrate. The results illustrate that care must be taken when measuring experimentally the frictional characteristics of low-permeability fault gouge. The contribution of compaction-induced pore fluid pressurization leading to weakening of natural faults is considered. Cyclic pressurization of pore fluid within fault gouge during successive earthquakes on larger faults may reset porosity and hence the capacity for compaction weakening.
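
    The competition described above, between compaction acting as a pore-pressure source and drainage to the layer boundaries, can be caricatured with a one-dimensional diffusion calculation. This is a schematic explicit finite-difference sketch with made-up parameter values, not the numerical model constrained by the experiments.

```python
import numpy as np

# Schematic 1-D model of a gouge layer of half-thickness L drained at both
# boundaries: dp/dt = c * d2p/dx2 + s, where c is hydraulic diffusivity and
# s is a pressurization rate due to shear-enhanced compaction.
L = 1.0e-3            # layer half-thickness (m), hypothetical
c = 1.0e-8            # hydraulic diffusivity (m^2/s), hypothetical
s = 50.0e3            # compaction pressurization rate (Pa/s), hypothetical
nx = 101
dx = 2 * L / (nx - 1)
dt = 0.4 * dx * dx / c            # explicit stability limit
p = np.zeros(nx)                  # excess pore pressure (Pa)

t, t_end = 0.0, 60.0
while t < t_end:
    lap = (p[:-2] - 2 * p[1:-1] + p[2:]) / dx**2
    p[1:-1] += dt * (c * lap + s)
    p[0] = p[-1] = 0.0            # drained boundaries
    t += dt

print(f"peak excess pore pressure after {t_end:.0f} s: {p.max()/1e3:.1f} kPa")
# Faster compaction (larger s) or lower diffusivity (smaller c) raises the
# undrained core pressure, mimicking the apparent weakening at high slip rates.
```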

  10. Minimizing student’s faults in determining the design of experiment through inquiry-based learning

    NASA Astrophysics Data System (ADS)

    Nilakusmawati, D. P. E.; Susilawati, M.

    2017-10-01

    The purposes of this study were to describe the use of the inquiry method in an effort to minimize students' faults in designing an experiment, and to determine the effectiveness of the implementation of the inquiry method in minimizing students' faults in designing experiments in the Experimental Design course. This research is participatory action research, following an action research design. The data sources were students of the fifth semester who took the Experimental Design course at the Mathematics Department, Faculty of Mathematics and Natural Sciences, Udayana University. Data were collected through tests, interviews, and observations. The hypothesis was tested by t-test. The results showed that, with the implementation of the inquiry method to minimize students' faults in designing experiments, analyzing experimental data, and interpreting them, students in Cycle 1 reduced faults by an average of 10.5%, and in Cycle 2 by an average of 8.78%. Based on the t-test results it can be concluded that the inquiry method is effective in minimizing students' faults in designing experiments, analyzing experimental data, and interpreting them. The nature of the teaching materials in the Experimental Design course, which demand the ability of students to think systematically, logically, and critically in analyzing data and interpreting test cases, makes inquiry a proper method here. In addition, the use of learning tools, in this case the teaching materials and the student worksheet, is one of the factors that makes this inquiry method effective in minimizing students' faults when designing experiments.

  11. Observations of fault zone heterogeneity effects on stress alteration and slip nucleation during a fault reactivation experiment in the Mont Terri rock laboratory, Switzerland

    NASA Astrophysics Data System (ADS)

    Nussbaum, C.; Guglielmi, Y.

    2016-12-01

    The FS experiment at the Mont Terri underground research laboratory consists of a series of controlled field stimulation tests conducted in a fault zone intersecting a shale formation. The Main Fault is a second-order reverse fault that formed during the creation of the Jura fold-and-thrust belt, associated with a large décollement. The fault zone is up to 6 m wide, with micron-thick shear zones, calcite veins, scaly clay and clay gouge. We conducted fluid injection tests in 4 packed-off borehole intervals across the Main Fault using mHPP probes that allow 3D displacement between two points anchored to the borehole walls to be monitored at the same time as fluid pressure and flow rate. While pressurizing the intervals above injection pressures of 3.9 to 5.3 MPa, there is an irreversible change in displacement magnitude and orientation associated with the hydraulic opening of natural shear planes oriented N59 to N69 and dipping 39 to 58°. Displacements of 0.01 mm to larger than 0.1 mm were captured, the highest value being observed at the interface between the low-permeability fault core and the damage zone. Contrasting fault movements were observed: mainly dilatant in the fault core, highly dilatant with normal slip at the fault core-damage zone interface, and weakly dilatant with strike-slip to reverse motion in the damage-to-intact zones. First, using a slip-tendency approach based on the Coulomb reactivation potential of fault planes, we computed a stress tensor orientation for each test. The input parameters are the measured displacement vectors above the hydraulic opening pressure and the detailed fault geometry of each interval. All measurements from the damage zone can be explained by a stress tensor in a strike-slip regime. Fault movements measured at the core-damage zone interface and within the fault core are consistent with the same stress orientations but with a change to a normal faulting regime, explaining the significant dilatant movements. We then conducted dynamic hydromechanical simulations
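
    The slip-tendency step referred to above amounts to resolving the traction of a stress tensor onto each fault plane and taking the ratio of shear to normal stress. The sketch below uses the shear-plane orientations quoted in the abstract but an entirely hypothetical strike-slip stress tensor; it is illustrative only, not the inversion performed in the experiment.

```python
import numpy as np

def plane_normal(strike_deg, dip_deg):
    """Upward unit normal of a plane (right-hand-rule strike/dip, ENU axes)."""
    s, d = np.radians(strike_deg), np.radians(dip_deg)
    return np.array([np.sin(d) * np.cos(s), -np.sin(d) * np.sin(s), np.cos(d)])

def slip_tendency(stress, strike_deg, dip_deg):
    """Ratio of resolved shear to normal stress on the plane (compression +)."""
    n = plane_normal(strike_deg, dip_deg)
    t = stress @ n                                       # traction vector on the plane
    sigma_n = float(n @ t)                               # normal stress
    tau = float(np.sqrt(max(t @ t - sigma_n**2, 0.0)))   # shear stress
    return tau / sigma_n

# Hypothetical strike-slip stress tensor in ENU axes (MPa, compression positive):
# sigma_H = 6 MPa oriented N-S, sigma_v = 4 MPa, sigma_h = 2 MPa oriented E-W.
S = np.diag([2.0, 6.0, 4.0])

for strike, dip in [(59, 39), (69, 58), (30, 90)]:
    print(f"strike N{strike}, dip {dip}: slip tendency = "
          f"{slip_tendency(S, strike, dip):.2f}")
```

    A plane whose slip tendency approaches the friction coefficient is the one closest to Coulomb reactivation under the assumed stress field.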

  12. Real-time antenna fault diagnosis experiments at DSS 13

    NASA Technical Reports Server (NTRS)

    Mellstrom, J.; Pierson, C.; Smyth, P.

    1992-01-01

    Experimental results obtained when a previously described fault diagnosis system was run online in real time at the 34-m beam waveguide antenna at Deep Space Station (DSS) 13 are described. Experimental conditions and the quality of results are described. A neural network model and a maximum-likelihood Gaussian classifier are compared with and without a Markov component to model temporal context. At the rate of a state update every 6.4 seconds, over a period of roughly 1 hour, the neural-Markov system had zero errors (incorrect state estimates) while monitoring both faulty and normal operations. The overall results indicate that the neural-Markov combination is the most accurate model and has significant practical potential.
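
    The value of the Markov component lies in smoothing per-frame classifier outputs with a model of how the antenna state evolves between 6.4-second updates. Below is a generic forward-filtering sketch with hypothetical class probabilities and a hypothetical transition matrix, not the DSS 13 monitoring system itself.

```python
import numpy as np

def markov_filter(frame_probs, transition, prior):
    """Combine per-frame classifier probabilities with a Markov state model.
    frame_probs: (T, K) classifier evidence for each state at each frame.
    transition:  (K, K) P(state_t = j | state_{t-1} = i).
    prior:       (K,)   initial state distribution.
    Returns filtered state probabilities, shape (T, K)."""
    T, K = frame_probs.shape
    filtered = np.zeros((T, K))
    belief = prior
    for t in range(T):
        predicted = belief @ transition          # propagate the state one step
        belief = predicted * frame_probs[t]      # fuse with classifier evidence
        belief /= belief.sum()
        filtered[t] = belief
    return filtered

# Two states: 0 = normal, 1 = faulty. Sticky transitions suppress single-frame
# classifier glitches (hypothetical numbers).
A = np.array([[0.98, 0.02],
              [0.05, 0.95]])
obs = np.array([[0.9, 0.1], [0.2, 0.8], [0.85, 0.15], [0.1, 0.9], [0.15, 0.85]])
print(markov_filter(obs, A, prior=np.array([0.99, 0.01])).round(3))
```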

  13. Earth Radiation Budget Experiment (ERBE) scanner instrument anomaly investigation

    NASA Technical Reports Server (NTRS)

    Watson, N. D.; Miller, J. B.; Taylor, L. V.; Lovell, J. B.; Cox, J. W.; Fedors, J. C.; Kopia, L. P.; Holloway, R. M.; Bradley, O. H.

    1985-01-01

    The results of an ad-hoc committee investigation of operational anomalies noted in Earth orbit on two identical Earth Radiation Budget Experiment (ERBE) scanner instruments on two different spacecraft buses are presented. The anomalies are attributed to the bearings and the lubrication scheme for the bearings. A detailed discussion of the pertinent instrument operations, the approach of the investigation team, and the current status of the instruments now in Earth orbit is included. The team considered operational changes for these instruments, rework possibilities for the one instrument which is waiting to be launched, and preferable lubrication considerations for specific space operational requirements similar to those for the ERBE scanner bearings.

  14. Earth Sciences Requirements for the Information Sciences Experiment System

    NASA Technical Reports Server (NTRS)

    Bowker, David E. (Editor); Katzberg, Steve J. (Editor); Wilson, R. Gale (Editor)

    1990-01-01

    The purpose of the workshop was to further explore and define the earth sciences requirements for the Information Sciences Experiment System (ISES), a proposed onboard data processor with real-time communications capability intended to support the Earth Observing System (Eos). A review of representative Eos instrument types is given and a preliminary set of real-time data needs has been established. An executive summary is included.

  15. The Capacity for Compaction Weakening in Fault Gouge in Nature and Experiment

    NASA Astrophysics Data System (ADS)

    Faulkner, D.; Boulton, C. J.; Sanchez Roa, C.; Den Hartog, S. A. M.; Bedford, J. D.

    2017-12-01

    As faults form in low permeability rocks, the compaction of fault gouge can lead to significant pore-fluid pressure increases. The pore pressure increase results from the collapse of the porosity through shear-enhanced compaction and the low hydraulic diffusivity of the gouge that inhibits fluid flow. In experiments, the frictional properties of clay-bearing fault gouges are significantly affected by the development of locally high pore-fluid pressures when compaction rates are high due to fast displacement rates or slip in underconsolidated materials. We show how the coefficient of friction of fault gouges sheared at different slip velocities can be explained with a numerical model that is constrained by laboratory measurements of contemporaneous changes in permeability and porosity. In nature, for compaction weakening to play an important role in earthquake nucleation (and rupture propagation), a mechanism is required to reset the porosity, i.e., maintain underconsolidated gouge along the fault plane. We use the observations of structures within the principal slip zone of the Alpine Fault in New Zealand to suggest that cyclic fluidization of the gouge occurs during coseismic slip, thereby resetting the gouge porosity prior to the next seismic event. Results from confined laboratory rotary shear measurements at elevated slip rates appear to support the hypothesis that fluidization leads to underconsolidation and, thus, to potential weakening by shear-enhanced compaction-induced pore-fluid pressurization.

  16. Thrust-wrench fault interference in a brittle medium: new insights from analogue modelling experiments

    NASA Astrophysics Data System (ADS)

    Rosas, Filipe; Duarte, Joao; Schellart, Wouter; Tomas, Ricardo; Grigorova, Vili; Terrinha, Pedro

    2015-04-01

    We present analogue modelling experimental results concerning thrust-wrench fault interference in a brittle medium, to evaluate the influence exerted by different prescribed interference angles on the formation of morpho-structural interference fault patterns. All the experiments were conceived to simulate simultaneous reactivation of confining strike-slip and thrust faults defining a (corner) zone of interference, contrasting with previously reported discrete (time and space) superposition of alternating thrust and strike-slip events. Different interference angles of 60°, 90° and 120° were experimentally investigated by comparing the specific structural configurations obtained in each case. Results show that a deltoid-shaped morpho-structural pattern is consistently formed in the fault interference (corner) zone, exhibiting a specific geometry that is fundamentally determined by the prescribed fault interference angle. This angle determines the orientation of the shear component of the displacement vector along the main frontal thrust direction, determining different fault confinement conditions in each case, and imposing a complying geometry and kinematics on the interference deltoid structure. Model comparison with natural examples worldwide shows good geometric and kinematic similarity, pointing to the existence of matching underlying dynamic processes. Acknowledgments This work was sponsored by the Fundação para a Ciência e a Tecnologia (FCT) through project MODELINK EXPL/GEO-GEO/0714/2013.

  17. An experience of science theatre: Earth Science for children

    NASA Astrophysics Data System (ADS)

    Musacchio, Gemma; Lanza, Tiziana; D'Addezio, Giuliana

    2015-04-01

    The present paper describes an experience of science theatre addressed to children of primary and secondary school, with the main purpose of explaining the Earth's interior while raising awareness about natural hazards. We conducted the experience with the help of a theatrical company specialized in shows for children. Several performances have been repeated in different contexts, giving us the opportunity of conducting a preliminary survey with audiences of different ages, even though the show was conceived for children. Results suggest that science theatre, while relying on creativity and emotional learning to transmit knowledge about the Earth and its hazards, has the potential to induce in children a positive attitude towards the risks

  18. Plate-rate laboratory friction experiments reveal potential slip instability on weak faults

    NASA Astrophysics Data System (ADS)

    Ikari, M.; Kopf, A.

    2016-12-01

    In earthquake science, it is commonly assumed that earthquakes nucleate on strong patches or "asperities", and data from laboratory friction experiments indicate a tendency for unstable slip (exhibited as velocity-weakening frictional behavior) in strong geologic materials. However, an overwhelming majority of these experiments were conducted at driving velocities ranging from 0.1 µm/s to over 1 m/s. Fewer data exist for shearing experiments driven at slow velocities on the order of cm/yr (nm/s), approximating plate tectonic rates which represent the natural driving condition on plate boundary faults. Recent laboratory work using samples recovered from the Tohoku region at the Japan Trench, within the high coseismic slip region of the 2011 M9 Tohoku earthquake, showed that the fault is extremely weak with a friction coefficient < 0.2. At sliding velocities of at least 0.1 µm/s mostly velocity-strengthening friction is observed, which is favorable for stable creep, consistent with earlier work. However, shearing at an imposed rate of 8.5 cm/yr produced both velocity-weakening friction and discrete slow slip events, which are likely instances of frictional instabilities or quasi-instabilities. Here, we expand on the Tohoku experiment by conducting cm/yr friction experiments on natural gouges obtained from a variety of other major fault zones sampled by scientific drilling; these include the San Andreas Fault, Costa Rica subduction zone, Nankai Trough (Japan), Barbados subduction zone, Alpine Fault (New Zealand), southern Cascadia, and Woodlark Basin (Papua New Guinea). We focus here on weak fault materials having a friction coefficient of < 0.5. At conventional laboratory driving rates of 0.1-30 µm/s, velocity strengthening is common. However, at cm/yr driving rates we commonly observe velocity-weakening friction and slow slip events, with most samples exhibiting both behaviors. These results demonstrate when fault samples are sheared at plate tectonic rates in the
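
    The velocity-strengthening versus velocity-weakening distinction above is conventionally quantified by the rate dependence of steady-state friction. A minimal sketch in the standard rate-and-state framework (not the authors' specific analysis), with hypothetical parameter values, is:

```python
import numpy as np

def mu_steady_state(v, mu0=0.4, a=0.008, b=0.012, v0=1.0e-6):
    """Steady-state friction vs slip rate: mu_ss = mu0 + (a - b) * ln(v / v0).
    (a - b) < 0 means velocity weakening, a prerequisite for unstable slip."""
    return mu0 + (a - b) * np.log(v / v0)

cm_per_yr = 0.01 / (365.25 * 24 * 3600)     # plate-rate driving velocity, m/s
for v in [cm_per_yr, 1.0e-7, 1.0e-6, 1.0e-5]:
    print(f"v = {v:.2e} m/s  ->  mu_ss = {mu_steady_state(v):.3f}")
# The experiments suggest (a - b) can itself change sign between plate rates
# (nm/s) and conventional laboratory rates (um/s and faster).
```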

  19. The Earth Is Flat when Personally Significant Experiences with the Sphericity of the Earth Are Absent

    ERIC Educational Resources Information Center

    Carbon, Claus-Christian

    2010-01-01

    Participants with personal and without personal experiences with the Earth as a sphere estimated large-scale distances between six cities located on different continents. Cognitive distances were submitted to a specific multidimensional scaling algorithm in the 3D Euclidean space with the constraint that all cities had to lie on the same sphere. A…

  20. Skylab Experiments, Volume 2, Remote Sensing of Earth Resources.

    ERIC Educational Resources Information Center

    National Aeronautics and Space Administration, Washington, DC.

    Up-to-date knowledge about Skylab experiments is presented for the purpose of informing high school teachers about scientific research performed in orbit and enabling them to broaden their scope of material selection. The second volume emphasizes the sensing of earth resources. The content includes an introduction to the concept and historical…

  1. NASA's Earth Science Data Systems Standards Process Experiences

    NASA Technical Reports Server (NTRS)

    Ullman, Richard E.; Enloe, Yonsook

    2007-01-01

    NASA has impaneled several internal working groups to provide recommendations to NASA management on ways to evolve and improve Earth Science Data Systems. One of these working groups is the Standards Process Group (SPG). The SPG is drawn from NASA-funded Earth Science Data Systems stakeholders, and it directs a process of community review and evaluation of proposed NASA standards. The working group's goal is to promote interoperability and interuse of NASA Earth Science data through broader use of standards that have proven implementation and operational benefit to NASA Earth science, by facilitating NASA management endorsement of proposed standards. The SPG now has two years of experience with this approach to identification of standards. We will discuss real examples of the different types of candidate standards that have been proposed to NASA's Standards Process Group, such as OPeNDAP's Data Access Protocol, the Hierarchical Data Format, and the Open Geospatial Consortium's Web Map Server. Each of the three types of proposals requires a different sort of criteria for understanding the broad concepts of "proven implementation" and "operational benefit" in the context of NASA Earth Science data systems. We will discuss how our Standards Process has evolved with our experiences with the three candidate standards.

  2. Fault tolerant features and experiments of ANTS distributed real-time system

    NASA Astrophysics Data System (ADS)

    Dominic-Savio, Patrick; Lo, Jien-Chung; Tufts, Donald W.

    1995-01-01

    The ANTS project at the University of Rhode Island introduces the concept of Active Nodal Task Seeking (ANTS) as a way to efficiently design and implement dependable, high-performance, distributed computing. This paper presents the fault tolerant design features that have been incorporated in the ANTS experimental system implementation. The results of performance evaluations and fault injection experiments are reported. The fault-tolerant version of ANTS categorizes all computing nodes into three groups. They are: the up-and-running green group, the self-diagnosing yellow group and the failed red group. Each available computing node will be placed in the yellow group periodically for a routine diagnosis. In addition, for long-life missions, ANTS uses a monitoring scheme to identify faulty computing nodes. In this monitoring scheme, the communication pattern of each computing node is monitored by two other nodes.

  3. Performance of the cometary experiment MUPUS on the body Earth

    NASA Astrophysics Data System (ADS)

    Marczewski, W.; Usowicz, B.; Schröer, K.; Seiferlin, K.; Spohn, T.

    2003-04-01

    The thermal experiment MUPUS for the Rosetta mission was extensively exercised under field and laboratory conditions to predict its performance from physical processes observable on Earth. The goal was not to guess at a cometary material in the ground, but to characterize the behavior of thermal sensor responses that monitor mass and energy transfer. The processes expected on a comet differ in composition and environment from those met on Earth but are basically similar in physics, and the energy powering them is essentially the same: solar radiation. Several simple laboratory experiments involving freezing and thawing of water ice, mixtures of water and oil, and water layers strongly stratified by salinity demonstrated the capability to recognize the layered structure of the medium under test. Moreover, effects of slow convection and latent heat associated with the layers are also well observed. The cometary environment, lacking an atmosphere, makes sublimation the dominant process; open-air conditions on Earth also offer changes of state in matter, but between different phases. Monitoring temperature gradients in thawing snow layers shows that effects driven by daily cycling can be detected thermally, and results from snow investigations on Spitzbergen are good proof of the capability of the method. The relevance of thermal effects to heat-powered mass transport in the ground is meaningful both for the cometary MUPUS experiment and for Earth sciences concerned with water, gas, and solid matter transport in the terrestrial ground. Results bearing on energy balance studies at the Earth's surface may also be of interest for the experiment on the comet and are discussed.

  4. Constraints on Lithospheric Rheology From Fault Displacement Rate Histories and Numerical Experiments

    NASA Astrophysics Data System (ADS)

    Lavier, L. L.; Bennett, R. A.; Anderson, M. L.; Matti, J. C.

    2005-05-01

    Recent displacement rate and geodetic data on the San Andreas, San Jacinto and eastern California shear zone suggest that changes in the geometry and/or the magnitude of the applied forces on the crust (e.g., a general or local change in fault strike relative to plate motion) can generate strain repartitioning within the crust on time scales of millions to thousands of years. The rates over which this repartitioning takes place in response to changing forces are controlled by the rheological evolution of the lithosphere. We investigate the implications of observed fault displacement histories for the rheology of the lithosphere using 2.5-D numerical experiments of deformation in an analogue system. The numerical technique used allows for the spontaneous formation of elastoplastic shear zones and flow in a Maxwell viscoelastic lower crust. The results show that when a strike-slip fault is rotated to strike obliquely to the direction of relative plate motion, it causes changes in bending and frictional stresses due to the formation of topography. To accommodate these changes, a conjugate system of obliquely striking strike-slip faults develops. The total displacement is then slowly distributed over the new fault system on the time scale of mountain building (i.e., millions of years). The rate of change is dependent on the strength of the lithosphere as well as the amount of obliquity applied on the initial strike-slip fault. In other numerical experiments we show that in a system of multiple strike-slip fault zones, displacement rate changes can occur over a time scale of about 100 kyr. This time scale corresponds to the Maxwell time at the brittle-ductile transition (BDT). In such a system the lithospheric displacement is alternately distributed (over 100 kyr) in clusters localized in lower crustal channels and over strike-slip fault zones. We show that the clustering time scale is controlled by the ratio of upper to lower crustal strength. This incomplete exercise
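
    The link made above between the ~100 kyr clustering time scale and the Maxwell time at the brittle-ductile transition is simply tau_M = eta / G; with plausible (assumed) values of viscosity and shear modulus near the BDT, the arithmetic gives roughly that number.

```python
# Maxwell relaxation time tau_M = eta / G (hypothetical values near the BDT)
eta = 1.0e23      # viscosity, Pa s
G = 3.0e10        # shear modulus, Pa
seconds_per_kyr = 1.0e3 * 365.25 * 24 * 3600
tau_maxwell = eta / G
print(f"Maxwell time: {tau_maxwell / seconds_per_kyr:.0f} kyr")   # ~100 kyr
```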

  5. Grid systems for Earth radiation budget experiment applications

    NASA Technical Reports Server (NTRS)

    Brooks, D. R.

    1981-01-01

    Spatial coordinate transformations are developed for several global grid systems of interest to the Earth Radiation Budget Experiment. The grid boxes are defined in terms of a regional identifier and longitude-latitude indexes. The transformations associate a longitude-latitude location with a particular grid box. The reverse transformations identify the center location of a given grid box. Transformations are given to relate the rotating (Earth-based) grid systems to solar position expressed in an inertial (nonrotating) coordinate system. The FORTRAN implementations of the transformations are given, along with sample input and output.
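
    The forward and inverse transformations have the flavor of the sketch below for a simple equal-angle grid; the 2.5° spacing and the indexing convention are assumptions for illustration, not the ERBE regional grid definitions or their FORTRAN implementations.

```python
DLAT = DLON = 2.5            # assumed grid spacing in degrees (illustration only)
NROWS, NCOLS = int(180 / DLAT), int(360 / DLON)

def grid_indexes(lat, lon):
    """Longitude-latitude (degrees, lon in [0, 360)) -> (row, col) grid box."""
    row = min(int((lat + 90.0) // DLAT), NROWS - 1)   # row 0 at the south pole
    col = int((lon % 360.0) // DLON)                  # col 0 at the Greenwich meridian
    return row, col

def grid_center(row, col):
    """(row, col) grid box -> latitude-longitude of the box center (degrees)."""
    return -90.0 + (row + 0.5) * DLAT, (col + 0.5) * DLON

row, col = grid_indexes(37.1, 283.4)
print(row, col, grid_center(row, col))   # -> 50 113 (36.25, 283.75)
```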

  6. Preliminary Results from the North Anatolian Fault Passive Seismic Experiment: Seismicity and Anisotropy

    NASA Astrophysics Data System (ADS)

    Biryol, C. B.; Ozacar, A.; Beck, S. L.; Zandt, G.

    2006-12-01

    The North Anatolian Fault (NAF) is one of the world's largest continental strike-slip faults. Despite much geological work at the surface, the deep structure of the NAF is relatively unknown. The North Anatolian Fault Passive Seismic Experiment is mainly focused on the lithospheric structure of this newly coalescing continental transform plate boundary. In the summer of 2005, we deployed 5 broadband seismic stations near the fault to gain more insight on the background seismicity, and in June 2006 we deployed 34 additional broadband stations along multiple transects crossing the main strand of the NAF and its splays. In the region, local seismicity is not limited to a narrow band near the NAF but distributed widely suggesting widespread continental deformation especially in the southern block. We relocated two of the largest events (M>4) that occurred close to our stations. Both events are 40-50km south of the NAF in the upper crust (6-9 km) along a normal fault with a strike-slip component that previously ruptured during the June 6, 2000 Orta-Cankiri earthquake (M=6.0). Preliminary analysis of SKS splitting for 4 stations deployed in 2005 indicates seismic anisotropy with delay times exceeding 1 sec. The fast polarization directions for these stations are primarily in NE-SW orientation, which remains uniform across the NAF. This direction is at a high angle to the surface trace of the fault and crustal velocity field, suggesting decoupling of lithosphere and mantle flow. Our SKS splitting observations are also similar to that observed from GSN station ANTO in central Turkey and stations across the Anatolian Plateau in eastern Turkey indicating relatively uniform mantle anisotropy throughout the region.

  7. Fault-free behavior of reliable multiprocessor systems: FTMP experiments in AIRLAB

    NASA Technical Reports Server (NTRS)

    Clune, E.; Segall, Z.; Siewiorek, D.

    1985-01-01

    This report describes a set of experiments which were implemented on the Fault-Tolerant Multi-Processor (FTMP) at NASA/Langley's AIRLAB facility. These experiments are part of an effort to formulate and evaluate validation methodologies for fault-tolerant computers. This report deals with the measurement of single parameters (baselines) of a fault-free system. The initial set of baseline experiments led to the following conclusions: (1) the system clock is constant and independent of workload in the tested cases; (2) the instruction execution times are constant; (3) the R4 frame size is 40 ms with some variation; (4) the frame stretching mechanism has some flaws in its implementation that allow the possibility of infinite stretching of the frame duration. Future experiments are planned. Some will broaden the results of these initial experiments. Others will measure the system more dynamically. The implementation of a synthetic workload generation mechanism for FTMP is planned to enhance the experimental environment of the system.

  8. Implementation Of The Configurable Fault Tolerant System Experiment On NPSAT 1

    DTIC Science & Technology

    2016-03-01

    Master's thesis describing the implementation of the Configurable Fault Tolerant System experiment on NPSAT1. The design is built around an open-source microprocessor without interlocked pipeline stages (MIPS) based processor softcore, a cached memory structure capable of accessing double data rate type three and secure digital card memories, an interface to the main satellite bus, and XILINX's soft error mitigation softcore.

  9. Geophysical character of the intraplate Wabash Fault System from the Wabash EarthScope FlexArray

    NASA Astrophysics Data System (ADS)

    Conder, J. A.; Zhu, L.; Wood, J. D.

    2017-12-01

    The Wabash Seismic Array was an EarthScope-funded FlexArray deployment across the Wabash Fault System. The Wabash system has long been known for oil and gas production. The fault system is often characterized as an intraplate seismic zone as it has produced several earthquakes above M4 in the last 50 years and potentially several above M7 in the Holocene. While earthquakes are far less numerous in the Wabash system than in the nearby New Madrid seismic zone, the seismic moment is nearly twice that of New Madrid over the past 50 years. The array consisted of 45 broadband instruments deployed across the axis to study the larger structure and 3 smaller phased arrays of 9 short-period instruments each to get a better sense of the local seismic output of smaller events. First results from the northern phased array indicate that seismicity in the Wabash behaves markedly differently than in New Madrid, with a low b-value around 0.7. Receiver functions show a 50 km thick crust beneath the system, thickening somewhat to the west. A variable-depth, positive-amplitude conversion in the deep crust gives evidence for a rift pillow at the base of the system within a dense lowermost crustal layer. Low Vs and a moderate negative-amplitude conversion in the mid crust suggest a possible weak zone that could localize deformation. Shear wave splitting shows fast directions consistent with absolute plate motion across the system. Split times drop in magnitude to 0.5-0.7 seconds within the valley while in the 1-1.5 second range outside the valley. This magnitude decrease suggests a change in mantle signature beneath the fault system, possibly resulting from a small degree of local flow in the asthenosphere either along axis (as may occur with a thinned lithosphere) or by vertical flow (e.g., from delamination or dripping). We are building a 2D tomographic model across the region, relying primarily on teleseismic body waves. The tomography will undoubtedly show variations in crustal structure
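
    The b-value quoted above is conventionally estimated with the Aki/Utsu maximum-likelihood formula, b = log10(e) / (mean(M) - Mc), for events at or above the completeness magnitude Mc (the small correction for binned magnitudes is omitted here). The sketch below checks the estimator on a synthetic Gutenberg-Richter catalog with a true b of 0.7; the catalog is made up and is not the Wabash data.

```python
import numpy as np

def b_value_mle(mags, mc):
    """Aki/Utsu maximum-likelihood b-value for magnitudes at or above Mc."""
    m = np.asarray(mags)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - mc)

# Synthetic Gutenberg-Richter catalog with a true b-value of 0.7 (illustration only)
rng = np.random.default_rng(0)
mc, b_true = 1.0, 0.7
mags = mc + rng.exponential(scale=np.log10(np.e) / b_true, size=2000)
print(f"recovered b-value: {b_value_mle(mags, mc):.2f}")   # close to 0.7
```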

  10. Ring-fault activity at subsiding calderas studied from analogue experiments and numerical modeling

    NASA Astrophysics Data System (ADS)

    Liu, Y. K.; Ruch, J.; Vasyura-Bathke, H.; Jonsson, S.

    2017-12-01

    Several subsiding calderas, such as the ones in the Galápagos archipelago and the Axial seamount in the Pacific Ocean, have shown a complex but similar ground deformation pattern, composed of a broad deflation signal affecting the entire volcanic edifice and of a localized subsidence signal focused within the caldera. However, it is still debated how deep processes at subsiding calderas, including magmatic pressure changes, source locations and ring-faulting, relate to this observed surface deformation pattern. We combine analogue sandbox experiments with numerical modeling to study the processes involved from initial subsidence to later collapse of calderas. The sandbox apparatus is composed of a motor-driven subsiding half-piston connected to the bottom of a glass box. During the experiments, observations are made by five digital cameras photographing from various perspectives. We use Photoscan, a photogrammetry software package, and PIVLab, a time-resolved digital image correlation tool, to retrieve time series of digital elevation models and velocity fields from the acquired photographs. This setup allows tracking of the processes acting both at depth and at the surface, and assessment of their relative importance as the subsidence evolves to a collapse. We also use the Boundary Element Method to build a numerical model of the experiment setup, which comprises a contracting sill-like source in interaction with a ring fault in an elastic half-space. We then compare our results from these two approaches with the examples observed in nature. Our preliminary experimental and numerical results show that at the initial stage of magmatic withdrawal, when the ring fault is not yet well formed, broad and smooth deflation dominates at the surface. As the withdrawal increases, a narrower subsidence bowl develops, accompanied by the upward propagation of the ring-faulting. This indicates that the broad deflation, affecting the entire volcano edifice, is primarily driven by the contraction of the

  11. Contemporaneous ring fault activity and surface deformation at subsiding calderas studied using analogue experiments

    NASA Astrophysics Data System (ADS)

    Liu, Yuan-Kai; Ruch, Joël; Vasyura-Bathke, Hannes; Jónsson, Sigurjón

    2017-04-01

    Ground deformation analyses of several subsiding calderas have shown complex and overlapping deformation signals, with a broad deflation signal that affects the entire volcanic edifice and localized subsidence focused within the caldera. However, the relation between deep processes at subsiding calderas, including magmatic sources and faulting, and the observed surface deformation is still debated. Several recent examples of subsiding calderas in the Galápagos archipelago and at the Axial seamount in the Pacific Ocean indicate that ring fault activity plays an important role not only during caldera collapse, but also during the initial stages of caldera subsidence. Nevertheless, ring fault activity has rarely been integrated into numerical models of subsiding calderas. Here we report on sandbox analogue experiments that we use to study the processes involved from an initial subsidence to a later collapse of calderas. The apparatus is composed of a subsiding half-piston section connected to the bottom of a glass box and driven by a motor to control its subsidence. During the subsidence we simultaneously analyze the 3D displacement at the model surface with a laser scanner and the 2D ring fault evolution on the side of the model (in cross-section) with a side-view digital camera. We further use PIVLab, a time-resolved digital image correlation software tool, to extract strain and velocity fields at both the surface and in cross-section. This setup allows us to track the processes acting at depth and assess their relative importance as the collapse evolves. We further compare our results with the examples observed in nature as well as with numerical models that integrate ring faults.

  12. Frictional `non-aging' of fault mirror surfaces?: Insight from friction experiments on Carrara marble

    NASA Astrophysics Data System (ADS)

    Park, Y.; Ree, J. H.; Hirose, T.

    2016-12-01

    Mirror-like fault surfaces (or fault mirror: FM) have recently been suggested as a precursor of unstable slip (thus indicative of seismic slip). Frictional aging of fault surfaces (increase in static friction during the interseismic period) is a common phenomenon of fault surfaces, resulting from an increase in contact area or in bond strength between asperities with time. Despite the importance of FM in earthquake faulting, the frictional-aging behavior of FM has never been studied. To understand the frictional-aging behavior of FM, slide-hold-slide friction experiments were done on carbonate FM and powdered gouge of former carbonate FM (PG hereafter) using a low-to-high-velocity rotary-shear apparatus, at a slip rate of 1 μm s-1, a normal stress of 1.5 MPa, and room temperature and room humidity conditions. The sheared PG specimens showed a logarithmic positive relationship between static friction and holding time, consistent with Dieterich-type healing behavior. In contrast, the sheared FM specimens showed little effect of holding time on static friction. The slip surface of FM specimens consists of densely packed and sintered nano-particles while that of PG specimens is composed of loose nano-particles. It is known that the yield strength of a material increases dramatically as grain size decreases toward the nanoparticle scale. Since FM is a layer of densely packed and sintered nanoparticles, the enhanced strength of FM may inhibit growth of the real contact area of fault surfaces during hold time. Furthermore, sintered particles composing FM have less pore space than a loose gouge layer, and thus there would be less chance of strengthening by pore space reduction, inter-particle meniscus formation or water adsorption onto the particle surfaces in the FM layer. Our preliminary result suggests that carbonate FMs may impede the recovery of fault strength during the interseismic period, resulting in a lower possibility of earthquake nucleation. Reduced frictional healing may be a common
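
    Dieterich-type healing referred to above is usually quantified by fitting the static friction peaks against the logarithm of hold time, Δμ = β log10(t_hold / t_ref). A minimal fitting sketch with made-up data for the two behaviors (log-linear healing for the gouge, near-zero slope for the fault mirror):

```python
import numpy as np

def healing_rate(hold_times, mu_static):
    """Least-squares slope beta of static friction vs log10(hold time)."""
    beta, _intercept = np.polyfit(np.log10(hold_times), mu_static, 1)
    return beta

holds = np.array([10.0, 30.0, 100.0, 300.0, 1000.0])          # hold times, seconds
mu_gouge = 0.60 + 0.008 * np.log10(holds / 10.0)              # hypothetical PG data
mu_mirror = 0.55 + 0.0005 * np.log10(holds / 10.0)            # hypothetical FM data
print(f"gouge healing rate  beta = {healing_rate(holds, mu_gouge):.4f} per decade")
print(f"mirror healing rate beta = {healing_rate(holds, mu_mirror):.4f} per decade")
```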

  13. Norfolk State University Research Experience in Earth System Science

    NASA Technical Reports Server (NTRS)

    Chaudhury, Raj

    2002-01-01

    The truly interdisciplinary nature of Earth System Science lends itself to the creation of research teams comprised of people with different scientific and technical backgrounds. In the annals of Earth System Science (ESS) education, the lack of an academic major in the discipline might be seen as a barrier to the involvement of undergraduates in the overall ESS-enterprise. This issue is further compounded at minority-serving institutions by the rarity of departments dedicated to Atmospheric Science, Oceanography or even the geosciences. At Norfolk State University, a Historically Black College, a six week, NASA-supported, summer undergraduate research program (REESS - Research Experience in Earth System Science) is creating a model that involves students with majors in diverse scientific disciplines in authentic ESS research coupled with a structured education program. The project is part of a wider effort at the University to enhance undergraduate education by identifying specific areas of student weaknesses regarding the content and process of science. A pre- and post-assessment test, which is focused on some fundamental topics in global climate change, is given to all participants as part of the evaluation of the program. Student attitudes towards the subject and the program's approach are also surveyed at the end of the research experience. In 2002, 11 undergraduates participated in REESS and were educated in the informed use of some of the vast remote sensing resources available through NASA's Earth Science Enterprise (ESE). The program ran from June 3rd through July 12, 2002. This was the final year of the project.

  14. Faulting of rocks in three-dimensional strain fields I. Failure of rocks in polyaxial, servo-control experiments

    Reches, Z.; Dieterich, J.H.

    1983-01-01

    The dependence of the number of sets of faults and their orientation on the intermediate strain axis is investigated through polyaxial tests, reported here, and theoretical analysis, reported in an accompanying paper. In the experiments, cubic samples of Berea sandstone, Sierra-White and Westerly granites, and Candoro and Solnhofen limestones were loaded on their three pairs of faces by three independent, mutually perpendicular presses at room temperature. Two of the presses were servo-controlled and applied constant displacement rates throughout the experiment. Most samples display three or four sets of faults in orthorhombic symmetry. These faults form in several yielding events that follow a stage of elastic deformation. In many experiments, the maximum and the intermediate compressive stresses interchange orientations during the yielding events, where the corresponding strains are constant. The final stage of most experiments is characterized by slip along the faults. © 1983.

  15. Faulting of rocks in three-dimensional strain fields I. Failure of rocks in polyaxial, servo-control experiments

    NASA Astrophysics Data System (ADS)

    Reches, Ze'ev; Dieterich, James H.

    1983-05-01

    The dependence of the number of sets of faults and their orientation on the intermediate strain axis is investigated through polyaxial tests, reported here, and theoretical analysis, reported in an accompanying paper. In the experiments, cubic samples of Berea sandstone, Sierra-White and Westerly granites, and Candoro and Solnhofen limestones were loaded on their three pairs of faces by three independent, mutually perpendicular presses at room temperature. Two of the presses were servo-controlled and applied constant displacement rates throughout the experiment. Most samples display three or four sets of faults in orthorhombic symmetry. These faults form in several yielding events that follow a stage of elastic deformation. In many experiments, the maximum and the intermediate compressive stresses interchange orientations during the yielding events, where the corresponding strains are constant. The final stage of most experiments is characterized by slip along the faults.

  16. Software fault-tolerance by design diversity DEDIX: A tool for experiments

    NASA Technical Reports Server (NTRS)

    Avizienis, A.; Gunningberg, P.; Kelly, J. P. J.; Lyu, R. T.; Strigini, L.; Traverse, P. J.; Tso, K. S.; Voges, U.

    1986-01-01

    The use of multiple versions of a computer program, independently designed from a common specification, to reduce the effects of an error is discussed. If these versions are designed by independent programming teams, it is expected that a fault in one version will not have the same behavior as any fault in the other versions. Since the errors in the output of the versions are different and uncorrelated, it is possible to run the versions concurrently, cross-check their results at prespecified points, and mask errors. A DEsign DIversity eXperiments (DEDIX) testbed was implemented to study the influence of common mode errors which can result in a failure of the entire system. The layered design of DEDIX and its decision algorithm are described.
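
    The cross-check-and-mask idea above reduces, at each prespecified comparison point, to a vote over the outputs of the independently designed versions. A minimal sketch of a generic majority voter (not the DEDIX decision algorithm itself):

```python
from collections import Counter
from typing import Callable, List, Sequence

def vote(values: Sequence) -> object:
    """Return the majority value across versions, or raise if no majority exists."""
    value, count = Counter(values).most_common(1)[0]
    if count <= len(values) // 2:
        raise RuntimeError("no majority: too many versions disagree to mask the error")
    return value

def run_n_versions(versions: List[Callable[[float], float]], x: float) -> float:
    """Run all versions on the same input and mask a minority of faulty outputs."""
    return vote([round(v(x), 6) for v in versions])   # rounding tolerates FP noise

# Three 'independently written' versions of the same specification; the third
# contains a seeded fault that the voter masks.
versions = [lambda x: x * x, lambda x: x ** 2, lambda x: x * x + 1.0]
print(run_n_versions(versions, 3.0))   # -> 9.0, the faulty version is outvoted
```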

  17. Delivery of extraterrestrial amino acids to the primitive Earth. Exposure experiments in Earth orbit.

    PubMed

    Barbier, B; Bertrand, M; Boillot, F; Chabin, A; Chaput, D; Henin, O; Brack, A

    1998-06-01

    A large collection of micrometeorites has been recently extracted from Antarctic old blue ice. In the 50 to 100 micrometers size range, the carbonaceous micrometeorites represent 80% of the samples and contain 2% of carbon. They might have brought more carbon to the surface of the primitive Earth than that involved in the present surficial biomass. Amino acids such as α-aminoisobutyric acid have been identified in these Antarctic micrometeorites. Enantiomeric excesses of L-amino acids have been detected in the Murchison meteorite. A large fraction of homochiral amino acids might have been delivered to the primitive Earth via meteorites and micrometeorites. Space technology in Earth orbit offers a unique opportunity to study the behaviour of amino acids required for the development of primitive life when they are exposed to space conditions, either free or associated with tiny mineral grains mimicking the micrometeorites. Our objectives are to demonstrate that porous mineral material protects amino acids in space from photolysis and racemization (the conversion of L-amino acids into a mixture of L- and D-molecules) and to test whether photosensitive amino acid derivatives can polymerize in mineral grains under space conditions. The results obtained in the BIOPAN-1 and BIOPAN-2 exposure experiments on board the unmanned satellite FOTON are presented.

  18. Kinematics of fault-related folding derived from a sandbox experiment

    NASA Astrophysics Data System (ADS)

    Bernard, Sylvain; Avouac, Jean-Philippe; Dominguez, Stéphane; Simoes, Martine

    2007-03-01

    We analyze the kinematics of fault tip folding at the front of a fold-and-thrust wedge using a sandbox experiment. The analog model consists of sand layers intercalated with low-friction glass bead layers, deposited in a glass-sided experimental device and with a total thickness h = 4.8 cm. A computerized mobile backstop induces progressive horizontal shortening of the sand layers and therefore thrust fault propagation. Active deformation at the tip of the forward propagating basal décollement is monitored along the cross section with a high-resolution CCD camera, and the displacement field between pairs of images is measured from the optical flow technique. In the early stage, when cumulative shortening is less than about h/10, slip along the décollement tapers gradually to zero and the displacement gradient is absorbed by distributed deformation of the overlying medium. In this stage of detachment tip folding, horizontal displacements decrease linearly with distance toward the foreland. Vertical displacements reflect a nearly symmetrical mode of folding, with displacements varying linearly between relatively well defined axial surfaces. When the cumulative slip on the décollement exceeds about h/10, deformation tends to localize on a few discrete shear bands at the front of the system, until shortening exceeds h/8 and deformation gets fully localized on a single emergent frontal ramp. The fault geometry subsequently evolves to a sigmoid shape and the hanging wall deforms by simple shear as it overthrusts the flat ramp system. As long as strain localization is not fully established, the sand layers experience a combination of thickening and horizontal shortening, which induces gradual limb rotation. The observed kinematics can be reduced to simple analytical expressions that can be used to restore fault tip folds, relate finite deformation to incremental folding, and derive shortening rates from deformed geomorphic markers or growth strata.
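
    The image-pair displacement measurement can be illustrated with a phase-correlation estimate of the shift of one image window relative to another. This is a generic sketch with a synthetic test image, not the particular optical-flow implementation applied to the CCD frames.

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Integer-pixel shift (dy, dx) such that img_b ~ img_a displaced by (dy, dx)."""
    F_a = np.fft.fft2(img_a)
    F_b = np.fft.fft2(img_b)
    cross = np.conj(F_a) * F_b
    cross /= np.abs(cross) + 1e-12                    # keep phase information only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = img_a.shape
    if dy > ny // 2:
        dy -= ny                                      # wrap to signed shifts
    if dx > nx // 2:
        dx -= nx
    return int(dy), int(dx)

# Synthetic check: displace a random 'sand texture' by a known amount.
rng = np.random.default_rng(42)
window = rng.random((64, 64))
shifted = np.roll(window, shift=(3, -5), axis=(0, 1))
print(phase_correlation_shift(window, shifted))       # -> (3, -5)
```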

  19. Flagstaff, Arizona seen in Earth Resources Experiments package

    1974-02-01

    SL4-93-067 (16 Nov. 1973-8 Feb. 1974) --- A spectacular winter view of the Flagstaff, Arizona area is seen in this Skylab 4 Earth Resources Experiments package S190-B (five-inch Earth terrain camera) infrared photograph taken from the Skylab space station in Earth orbit. Included in the scene are the San Francisco Mountains, Oak Creek Canyon, Painted Desert and Meteor Crater. The infrared picture depicts living vegetation in red, snow in white, and water in bright blue. Major features identified in this photograph are Humphreys Peak (top center), Flagstaff at the foot of the peak, the Sunset Crater volcanic field with numerous vents and craters right of Flagstaff, and Meteor Crater (right center). Within the mountainous areas, several generally rectangular clear areas are visible; these represent areas where lumbering has removed the forest. The thin white line extending from the left corner to the Sunset Crater field is the cleared corridor of a power transmission line. Roads are subdued and not easily visible. Photo credit: NASA

  20. Earth Radiation Budget Experiment scanner radiometric calibration results

    NASA Technical Reports Server (NTRS)

    Lee, Robert B., III; Gibson, M. A.; Thomas, Susan; Meekins, Jeffrey L.; Mahan, J. R.

    1990-01-01

    The Earth Radiation Budget Experiment (ERBE) scanning radiometers are producing measurements of the incoming solar, earth/atmosphere-reflected solar, and earth/atmosphere-emitted radiation fields with measurement precisions and absolute accuracies approaching 1 percent. ERBE uses thermistor bolometers as the detection elements in the narrow-field-of-view scanning radiometers. The scanning radiometers can sense radiation in the shortwave, longwave, and total broadband spectral regions of 0.2 to 5.0, 5.0 to 50.0, and 0.2 to 50.0 micrometers, respectively. Detailed models of the radiometers' response functions were developed in order to design the most suitable calibration techniques. These models guided the design of in-flight calibration procedures as well as the development and characterization of a vacuum-calibration chamber and the blackbody source which provided the absolute basis upon which the total and longwave radiometers were characterized. The flight calibration instrumentation for the narrow-field-of-view scanning radiometers is presented and evaluated.

  1. Strain rate effect on fault slip and rupture evolution: Insight from meter-scale rock friction experiments

    NASA Astrophysics Data System (ADS)

    Xu, Shiqing; Fukuyama, Eiichi; Yamashita, Futoshi; Mizoguchi, Kazuo; Takizawa, Shigeru; Kawakata, Hironori

    2018-05-01

    We conduct meter-scale rock friction experiments to study strain rate effect on fault slip and rupture evolution. Two rock samples made of Indian metagabbro, with a nominal contact dimension of 1.5 m long and 0.1 m wide, are juxtaposed and loaded in a direct shear configuration to simulate the fault motion. A series of experimental tests, under constant loading rates ranging from 0.01 mm/s to 1 mm/s and under a fixed normal stress of 6.7 MPa, are performed to simulate conditions with changing strain rates. Load cells and displacement transducers are utilized to examine the macroscopic fault behavior, while high-density arrays of strain gauges close to the fault are used to investigate the local fault behavior. The observations show that the macroscopic peak strength, strength drop, and the rate of strength drop can increase with increasing loading rate. At the local scale, the observations reveal that slow loading rates favor generation of characteristic ruptures that always nucleate in the form of slow slip at about the same location. In contrast, fast loading rates can promote very abrupt rupture nucleation and along-strike scatter of hypocenter locations. At a given propagation distance, rupture speed tends to increase with increasing loading rate. We propose that a strain-rate-dependent fault fragmentation process can enhance the efficiency of fault healing during the stick period, which together with healing time controls the recovery of fault strength. In addition, a strain-rate-dependent weakening mechanism can be activated during the slip period, which together with strain energy selects the modes of fault slip and rupture propagation. The results help to understand the spectrum of fault slip and rock deformation modes in nature, and emphasize the role of heterogeneity in tuning fault behavior under different strain rates.

  2. Results of the Compensated Earth-Moon-Earth Retroreflector Laser Link (CEMERLL) Experiment

    NASA Technical Reports Server (NTRS)

    Wilson, K. E.; Leatherman, P. R.; Cleis, R.; Spinhirne, J.; Fugate, R. Q.

    1997-01-01

    Adaptive optics techniques can be used to realize a robust low bit-error-rate link by mitigating the atmosphere-induced signal fades in optical communications links between ground-based transmitters and deep-space probes. Phase I of the Compensated Earth-Moon-Earth Retroreflector Laser Link (CEMERLL) experiment demonstrated the first propagation of an atmosphere-compensated laser beam to the lunar retroreflectors. A 1.06-micron Nd:YAG laser beam was propagated through the full aperture of the 1.5-m telescope at the Starfire Optical Range (SOR), Kirtland Air Force Base, New Mexico, to the Apollo 15 retroreflector array at Hadley Rille. Laser guide-star adaptive optics were used to compensate turbulence-induced aberrations across the transmitter's 1.5-m aperture. A 3.5-m telescope, also located at the SOR, was used as a receiver for detecting the return signals. JPL-supplied Chebyshev polynomials of the retroreflector locations were used to develop tracking algorithms for the telescopes. At times we observed in excess of 100 photons returned from a single pulse when the outgoing beam from the 1.5-m telescope was corrected by the adaptive optics system. No returns were detected when the outgoing beam was uncompensated. The experiment was conducted from March through September 1994, during the first or last quarter of the Moon.

  3. Frictional and hydrologic behavior of the San Andreas Fault: Insights from laboratory experiments on SAFOD cuttings and core

    NASA Astrophysics Data System (ADS)

    Carpenter, B. M.; Marone, C.; Saffer, D. M.

    2010-12-01

    The debate concerning the apparent low strength of tectonic faults, including the San Andreas Fault (SAF), continues to focus on: 1) low intrinsic friction resulting from mineralogy and/or fabric, and 2) decreased effective normal stress due to elevated pore pressure. Here we inform this debate with laboratory measurements of the frictional behavior and permeability of cuttings and core returned from the SAF at a vertical depth of 2.7 km. We conducted experiments on cuttings and core recovered during SAFOD Phase III drilling. All samples in this study are adjacent to and within the active fault zone penetrated at 10814.5 ft (3296 m) measured depth in the SAFOD borehole. We sheared gouge samples composed of drilling cuttings in a double-direct shear configuration subject to true-triaxial loading under constant effective normal stress, confining pressure, and pore pressure. Intact wafers of material were sheared in a single-direct shear configuration under similar conditions of effective stress, confining pressure, and pore pressure. We also report on permeability measurements on intact wafers of wall rock and fault gouge prior to shearing. Initial results from experiments on cuttings show: 1) a weak fault (µ=~0.21) compared to the surrounding wall rock (µ=~0.35), 2) velocity-strengthening behavior (a-b > 0), consistent with aseismic slip, and 3) near-zero healing rates in material from the active fault. XRD analysis on cuttings indicates that the main mineralogical difference between fault rock and wall rock is the presence of significant amounts of smectite within the fault rock. Taken together, the measured frictional behavior and clay mineral content suggest that the clay composition exerts a first-order control on fault behavior. Our results document the first direct evidence of weak material from an active fault at seismogenic depths. In addition, our results could explain why the SAF in central California fails aseismically and hosts only small earthquakes.
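
    As context for the velocity-strengthening result, the parameters a and b come from the standard rate-and-state friction framework; the form below is the common Dieterich-Ruina (aging) law, given here for reference rather than as the exact formulation used in the study:

    ```latex
    % Dieterich-Ruina rate-and-state friction (aging law), standard form
    \[
      \mu = \mu_0 + a \ln\!\frac{V}{V_0} + b \ln\!\frac{V_0\,\theta}{D_c},
      \qquad
      \frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c},
      \qquad
      \frac{\partial \mu_{ss}}{\partial \ln V} = a - b .
    \]
    % a - b > 0 (velocity strengthening) favors stable, aseismic creep,
    % consistent with the behavior reported for the SAFOD gouge.
    ```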

  4. Controls on mid-ocean ridge segmentation and transform fault formation from laboratory experiments using fluids of complex rheology.

    NASA Astrophysics Data System (ADS)

    Sibrant, A.; Mittelstaedt, E. L.; Davaille, A.

    2017-12-01

    Mid-ocean ridges are tectonically segmented at scales of 10s to 100s of kilometers by several types of offsets including transform faults (TF), overlapping spreading centers (OSC), and slow-spreading non-transform offsets (NTO). Differences in segmentation along axis have been attributed to changes in numerous processes including magma supply from the upwelling mantle, viscous flow in the asthenosphere, ridge migration, and plate spreading direction. The wide variety of proposed mechanisms demonstrates that the origin of tectonic offsets and their relationship to segment-scale magmatic processes remain actively debated; each of the above processes, however, invokes combinations of tectonic and magmatic processes to explain changes in segmentation. To address the role of tectonic deformation and magmatic accretion on the development of ridge offsets, we present a series of analogue experiments using colloidal silica dispersions as an Earth analogue. Saline water solutions placed in contact with these fluids cause the formation of a skin through salt diffusion, whose rheology evolves from purely viscous to elastic and brittle with increasing salinity. Experiments are performed in a Plexiglas tank with two Plexiglas plates suspended above the base of the tank. The tank is filled with the colloidal fluid to just above the suspended plates, a thin layer of saline water is spread across the surface, and spreading is initiated by moving the suspended Plexiglas plates apart at a fixed rate. Results show formation of OSCs, NTOs, and TFs. For parameters corresponding to the Earth, TF offsets are < 5 mm and form at all spreading velocities, corresponding to transform offsets of < 100 km on Earth. Measured TF offset size and ridge segment lengths exhibit a Poisson-type distribution with no apparent dependence on spreading rate. Observations of TF offset size on Earth show a similar distribution for TFs <100 km long and support the hypothesis that TFs form spontaneously through a

  5. Earth-to-Orbit Beamed Energy eXperiment (EBEX)

    NASA Technical Reports Server (NTRS)

    Johnson, Les; Montgomery, Edward E.

    2017-01-01

    As a means of primary propulsion, beamed energy propulsion offers the benefit of offloading much of the propulsion system mass from the vehicle, increasing its potential performance and freeing it from the constraints of the rocket equation. For interstellar missions, beamed energy propulsion is arguably the most viable in the near- to mid-term. A near-term demonstration showing the feasibility of beamed energy propulsion is necessary and, fortunately, feasible using existing technologies. Key enabling technologies are 1) large area, low mass spacecraft and 2) efficient and safe high power laser systems capable of long distance propagation. NASA is currently developing the spacecraft technology through the Near Earth Asteroid Scout solar sail mission and has signed agreements with the Planetary Society to study the feasibility of precursor laser propulsion experiments using their LightSail-2 solar sail spacecraft. The capabilities of Space Situational Awareness assets and the advanced analytical tools available for fine resolution orbit determination now make it possible to investigate the practicalities of an Earth-to-orbit Beamed Energy eXperiment (EBEX) - a demonstration at delivered power levels that only illuminate a spacecraft without causing damage to it. The degree to which this can be expected to produce a measurable change in the orbit of a low ballistic coefficient spacecraft is investigated. Key system characteristics and estimated performance are derived for a near term mission opportunity involving the LightSail-2 spacecraft and laser power levels modest in comparison to those proposed previously. A more detailed investigation of accessing LightSail-2 from Santa Rosa Island on Eglin Air Force Base on the United States coast of the Gulf of Mexico is provided to show expected results in a specific case. While the technology demonstrated by such an experiment is not sufficient to enable an interstellar precursor mission, it is a first step toward that
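
    As a rough sense of scale for the kind of illumination described above, the radiation-pressure force on a sail is set by the optical power it actually intercepts. The sketch below uses purely illustrative numbers; the delivered power, reflectivity, sail area and spacecraft mass are assumptions, not values from this study:

    ```python
    # Hedged order-of-magnitude sketch of radiation pressure on a solar sail.
    C = 299_792_458.0  # speed of light, m/s

    def sail_force(intercepted_power_w, reflectivity=0.9):
        """Radiation-pressure force (N) on a flat sail at normal incidence."""
        # F = (1 + R) * P / c : absorbed photons push once, reflected photons twice.
        return (1.0 + reflectivity) * intercepted_power_w / C

    # Illustrative numbers only: 10 kW intercepted by a ~32 m^2, ~5 kg
    # LightSail-2-class spacecraft (the delivered power is purely hypothetical).
    force = sail_force(10.0e3)
    accel = force / 5.0
    print(f"force ~ {force:.2e} N, acceleration ~ {accel:.2e} m/s^2")
    ```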

  6. Coseismic Damage Generation in Fault Zones by Successive High Strain Rate Loading Experiments

    NASA Astrophysics Data System (ADS)

    Aben, F. M.; Doan, M. L.; Renard, F.; Toussaint, R.; Reuschlé, T.; Gratier, J. P.

    2014-12-01

    Damage zones of active faults control both resistance to rupture and transport properties of the fault. Hence, knowing the origin of the rock damage is important to constrain its properties. Here we study experimentally the damage generated by a succession of dynamic loadings, a process mimicking the stress history of a rock sample located next to an active fault. A propagating rupture generates high frequency stress perturbations next to its tip. This dynamic loading creates pervasive damage (pulverization), as multiple fractures initiate and grow simultaneously. Previous single loading experiments have shown a strain rate threshold for pulverization. Here, we focus on conditions below this threshold and the dynamic peak stress to constrain: 1) whether there is dynamic fracturing at these conditions and 2) whether successive loadings (cumulative seismic events) result in pervasive fracturing, effectively reducing the pulverization threshold to milder conditions. Monzonite samples were dynamically loaded (strain rate > 50 s-1) several times below the dynamic peak strength, using a Split Hopkinson Pressure Bar apparatus. Several quasi-static experiments were conducted as well (strain rate < 10-5 s-1). Samples loaded up to stresses above the quasi-static uniaxial compressive strength (qsUCS) systematically fragmented or pulverized after four successive loadings. We measured several damage proxies (P-wave velocity, porosity), which show a systematic increase in damage with each load. In addition, micro-computed tomography acquisition on several damaged samples revealed the growth of a pervasive fracture network between successive loadings. Samples loaded dynamically below the qsUCS failed along one fracture after a variable number of loadings, and the damage proxies do not show any systematic trend. Our conclusion is that milder dynamic loading conditions, below the dynamic peak strength, result in pervasive dynamic fracturing. Also, successive loadings effectively lower the pulverization

  7. Reply to comments by Ahmad et al. on: Shah, A. A., 2013. Earthquake geology of Kashmir Basin and its implications for future large earthquakes International Journal of Earth Sciences DOI:10.1007/s00531-013-0874-8 and on Shah, A. A., 2015. Kashmir Basin Fault and its tectonic significance in NW Himalaya, Jammu and Kashmir, India, International Journal of Earth Sciences DOI:10.1007/s00531-015-1183-1

    NASA Astrophysics Data System (ADS)

    Shah, A. A.

    2016-03-01

    Shah (Int J Earth Sci 102:1957-1966, 2013) mapped major previously unknown faults and fault segments in the Kashmir basin using geomorphological techniques. The major out-of-sequence thrust fault was named the Kashmir basin fault (KBF) because it runs through the middle of the Kashmir basin, and active movement on it has backtilted and uplifted most of the basin. Ahmad et al. (Int J Earth Sci, 2015) have disputed the existence of the KBF and maintained that the faults identified by Shah (Int J Earth Sci 102:1957-1966, 2013) were already mapped as inferred faults by earlier workers. The earlier works, however, show a major normal fault, or a minor out-of-sequence reverse fault, and none shows a major thrust fault.

  8. Development of the self-learning machine for creating models of microprocessor of single-phase earth fault protection devices in networks with isolated neutral voltage above 1000 V

    NASA Astrophysics Data System (ADS)

    Utegulov, B. B.; Utegulov, A. B.; Meiramova, S.

    2018-02-01

    The paper proposes the development of a self-learning machine for creating models of microprocessor-based single-phase earth fault protection devices in networks with an isolated neutral voltage above 1000 V. Such a machine makes it possible to implement effectively the mathematical models for automatic adjustment of the settings of single-phase earth fault protection devices.

  9. Fault recovery characteristics of the fault tolerant multi-processor

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1990-01-01

    The fault handling performance of the fault tolerant multiprocessor (FTMP) was investigated. Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once every 500 faults, on average. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine, or lying, faults. It is pointed out that these weak areas in the FTMP's design increase the probability that, for any hardware fault, a good LRU (line replaceable unit) is mistakenly disabled by the fault management software. It is concluded that fault injection can help detect and analyze the behavior of a system in the ultra-reliable regime. Although fault injection testing cannot be exhaustive, it has been demonstrated that it provides a unique capability to unmask problems and to characterize the behavior of a fault-tolerant system.
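
    As a minimal sketch of how such an error rate is typically quantified from fault-injection campaigns, the snippet below computes a point estimate and a rough binomial confidence interval; the counts are hypothetical, chosen only to mirror the reported "1 in 500" figure, and are not taken from the report:

    ```python
    # Hedged sketch: quantifying a fault-handling error rate from injection counts.
    import math

    def error_rate_ci(n_errors, n_faults, z=1.96):
        """Point estimate and ~95% normal-approximation CI for an error rate."""
        p = n_errors / n_faults
        half = z * math.sqrt(p * (1.0 - p) / n_faults)
        return p, max(0.0, p - half), p + half

    # Hypothetical counts (4 mishandled faults in 2000 injections ~ 1 in 500).
    p, low, high = error_rate_ci(n_errors=4, n_faults=2000)
    print(f"observed rate ~ {p:.4f} (about 1 in {1/p:.0f}), "
          f"95% CI ~ [{low:.4f}, {high:.4f}]")
    ```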

  10. The Surface faulting produced by the 30 October 2016 Mw 6.5 Central Italy earthquake: the Open EMERGEO Working Group experience

    NASA Astrophysics Data System (ADS)

    Pantosti, Daniela

    2017-04-01

    The October 30, 2016 (06:40 UTC) Mw 6.5 earthquake occurred about 28 km NW of Amatrice village as the result of upper crust normal faulting on a nearly 30 km-long, NW-SE oriented, SW dipping fault system in the Central Apennines. This earthquake is the strongest Italian seismic event since the 1980 Mw 6.9 Irpinia earthquake. The Mw 6.5 event was the largest shock of a seismic sequence, which began on August 24 with a Mw 6.0 earthquake and also included a Mw 5.9 earthquake on October 26, about 9 and 35 km NW of Amatrice village, respectively. Field surveys of coseismic geological effects at the surface started within hours of the mainshock and were carried out by several national and international teams of earth scientists (about 120 people) from different research institutions and universities coordinated by the EMERGEO Working Group of the Istituto Nazionale di Geofisica e Vulcanologia. This collaborative effort was focused on the detailed recognition and mapping of: 1) the total extent of the October 30 coseismic surface ruptures, 2) their geometric and kinematic characteristics, 3) the coseismic displacement distribution along the activated fault system, including subsidiary and antithetic ruptures. The huge amount of collected data (more than 8000 observation points of several types of coseismic effects at the surface) was stored, managed and shared using a specifically designed spreadsheet to populate a georeferenced database. More comprehensive mapping of the details and extent of surface rupture was facilitated by Structure-from-Motion photogrammetry surveys by means of several helicopter flights. An almost continuous alignment of ruptures about 30 km long, N150/160 striking, mainly SW side down was observed along the already known active Mt. Vettore - Mt. Bove fault system. The mapped ruptures occasionally overlapped those of the August 24 Mw 6.0 and October 26 Mw 5.9 shocks. The coincidence between the observed surface ruptures and the trace of active

  11. The earth radiation budget experiment: Early validation results

    NASA Astrophysics Data System (ADS)

    Smith, G. Louis; Barkstrom, Bruce R.; Harrison, Edwin F.

    The Earth Radiation Budget Experiment (ERBE) consists of radiometers on a dedicated spacecraft in a 57° inclination orbit, which has a precessional period of 2 months, and on two NOAA operational meteorological spacecraft in near polar orbits. The radiometers include scanning narrow field-of-view (FOV) and nadir-looking wide and medium FOV radiometers covering the ranges 0.2 to 5 μm and 5 to 50 μm and a solar monitoring channel. This paper describes the validation procedures and preliminary results. Each of the radiometer channels underwent extensive ground calibration, and the instrument packages include in-flight calibration facilities which, to date, show negligible changes of the instruments in orbit, except for gradual degradation of the suprasil dome of the shortwave wide FOV (about 4% per year). Measurements of the solar constant by the solar monitors, wide FOV, and medium FOV radiometers of two spacecraft agree to a fraction of a percent. Intercomparisons of the wide and medium FOV radiometers with the scanning radiometers show agreement of 1 to 4%. The multiple ERBE satellites are acquiring the first global measurements of regional scale diurnal variations in the Earth's radiation budget. These diurnal variations are verified by comparison with high temporal resolution geostationary satellite data. Other principal investigators of the ERBE Science Team are: R. Cess, SUNY, Stony Brook; J. Coakley, NCAR; C. Duncan, M. King and A. Mecherikunnel, Goddard Space Flight Center, NASA; A. Gruber and A. J. Miller, NOAA; D. Hartmann, U. Washington; F. B. House, Drexel U.; F. O. Huck, Langley Research Center, NASA; G. Hunt, Imperial College, London U.; R. Kandel and A. Berroir, Laboratory of Dynamic Meteorology, Ecole Polytechnique; V. Ramanathan, U. Chicago; E. Raschke, U. of Cologne; W. L. Smith, U. of Wisconsin and T. H. Vonder Haar, Colorado State U.

  12. Experiment study on an inductive superconducting fault current limiter using no-insulation coils

    NASA Astrophysics Data System (ADS)

    Qiu, D.; Li, Z. Y.; Gu, F.; Huang, Z.; Zhao, A.; Hu, D.; Wei, B. G.; Huang, H.; Hong, Z.; Ryu, K.; Jin, Z.

    2018-03-01

    No-insulation (NI) coils made of 2G high temperature superconducting (HTS) tapes have been widely used in DC magnets due to their excellent engineering current density, thermal stability and mechanical strength. However, few AC power devices currently use NI coils. In this paper, an NI coil is applied for the first time to an inductive superconducting fault current limiter (iSFCL). A two-winding air-core iSFCL prototype was fabricated, composed of a primary copper winding and a secondary no-insulation winding using 2G HTS coated conductors. First, in order to verify the feasibility of using an NI coil as the secondary winding, the impedance variation of the prototype at different currents and different cycles was tested. The results show that the impedance increases rapidly as the current rises. The iSFCL prototype was then tested on a 40 V rms / 3.3 kA peak short-circuit experiment platform, and both the fault-current-limiting and recovery properties of the iSFCL are discussed.

  13. Unified law of evolution of experimental gouge-filled fault for fast and slow slip events at slider frictional experiments

    NASA Astrophysics Data System (ADS)

    Ostapchuk, Alexey; Saltykov, Nikolay

    2017-04-01

    Excessive tectonic stresses accumulated in the area of a rock discontinuity are released through slip along preexisting faults. The spectrum of slip modes includes not only creep and regular earthquakes but also transitional regimes - slow-slip events, low-frequency and very low-frequency earthquakes. However, there is still no agreement in the geophysics community on whether such fast and slow events have a common nature [Peng, Gomberg, 2010] or represent different physical phenomena [Ide et al., 2007]. Models of the nucleation and evolution of fault slip events can be informed by laboratory experiments in which the shear deformation of a gouge-filled fault is investigated. In this work we studied the deformation of an experimental fault in slider frictional experiments in order to develop a unified law of fault evolution and to reveal the parameters responsible for the realization of each deformation mode. The experiments were conducted as classic slider-model experiments, in which a block under normal and shear stresses moves along an interface. The volume between the two rough surfaces was filled with a thin layer of granular matter. Shear force was applied through a spring deformed at a constant rate. In such experiments elastic energy is accumulated in the spring, and the manner of its release is determined by the frictional behaviour of the experimental fault. A full spectrum of slip modes was simulated in the laboratory experiments. Slight changes in gouge characteristics (granule shape, clay content), interstitial fluid viscosity and normal stress level make it possible to obtain a gradual transformation of the slip mode from steady sliding and slow slip to regular stick-slip, with varying amplitudes of 'coseismic' displacement. Using the method of asymptotic analogies, we show that different slip modes can be described within a single formalism and that the preparation of different slip modes follows a uniform evolution law. It is shown

  14. Mission Preparation Program for Exobiological Experiments in Earth Orbit

    NASA Astrophysics Data System (ADS)

    Panitz, Corinna; Reitz, Guenther; Horneck, Gerda; Rabbow, Elke; Rettberg, Petra

    The ESA facilities EXPOSE-R and EXPOSE-E on board the International Space Station (ISS) provide the technology for exposing chemical and biological samples in a controlled manner to outer space parameters, such as high vacuum, intense radiation of galactic and solar origin, and microgravity. EXPOSE-E has been attached to the outer balcony of the European Columbus module of the ISS in February 2008 and will stay for about 1 year in space; EXPOSE-R will be attached to the Russian Zvezda module of the ISS in fall 2008. The EXPOSE facilities are a further step in the study of the Responses of Organisms to Space Environment (ROSE consortium). The results from the EXPOSE missions will give new insights into the survivability of terrestrial organisms in space and will contribute to the understanding of organic chemistry processes in space, biological adaptation strategies to extreme conditions, e.g. on early Earth and Mars, and the distribution of life beyond its planet of origin. To test the compatibility of the different biological and chemical systems and their adaptation to the opportunities and constraints of space conditions, an extensive ground support program has been developed. It resulted in several experiment verification tests (EVTs) and an experiment sequence test (EST) that were conducted in the carefully equipped and monitored planetary and space simulation facilities (PSI) of the Institute of Aerospace Medicine at DLR in Cologne, Germany. These ground-based pre-flight studies allow the investigation of a much wider variety of samples and the selection of the most promising organisms for the flight experiment. The procedure and results of these EVTs and the EST will be presented. These results are an essential prerequisite for the success of the EXPOSE missions and have been obtained in parallel with the development and construction of the final hardware design of the facility. The results gained during the simulation experiments demonstrated mission

  15. Addressing Earth Science Data Access Challenges through User Experience Research

    NASA Astrophysics Data System (ADS)

    Hemmings, S. N.; Banks, B.; Kendall, J.; Lee, C. M.; Irwin, D.; Toll, D. L.; Searby, N. D.

    2013-12-01

    The NASA Capacity Building Program (Earth Science Division, Applied Sciences Program) works to enhance end-user capabilities to employ Earth observation and Earth science (EO/ES) data in decision-making. Open data access and user-tailored data delivery strategies are critical elements towards this end. User Experience (UX) and User Interface (UI) research methods can offer important contributions towards addressing data access challenges, particularly at the interface of science application/product development and product transition to end-users. This presentation focuses on developing nation contexts and describes methods, results, and lessons learned from two recent UX/UI efforts conducted in collaboration with NASA: the SERVIRglobal.net redesign project and the U.S. Water Partnership (USWP) Portal development effort. SERVIR, a collaborative venture among NASA, USAID, and global partners, seeks to improve environmental management and climate change response by helping governments and other stakeholders integrate EO and geospatial technologies into decision-making. The USWP, a collaboration among U.S. public and private sectors, harnesses U.S.-based resources and expertise to address water challenges in developing nations. SERVIR's study, conducted from 2010-2012, assessed and tested user needs, preferences, and online experiences to generate a more user-friendly online data portal at SERVIRglobal.net. The portal provides a central access interface to data and products from SERVIR's network of hubs in East Africa, the Hindu Kush Himalayas, and Mesoamerica. The second study, conducted by the USWP Secretariat and funded by the U.S. Department of State, seeks to match U.S.-based water information resources with developing nation stakeholder needs. The USWP study utilizes a multi-pronged approach to identify key design requirements and to understand the existing water data portal landscape. Adopting UX methods allows data distributors to design customized UIs that

  16. The Ural-Herirud transcontinental postcollisional strike-slip fault and its role in the formation of the Earth's crust

    NASA Astrophysics Data System (ADS)

    Leonov, Yu. G.; Volozh, Yu. A.; Antipov, M. P.; Kheraskova, T. N.

    2015-11-01

    The paper considers the morphology, deep structure, and geodynamic features of the Ural-Herirud postorogenic strike-slip fault (UH fault), along which the Moho (the "M" discontinuity) is offset along the entire axial zone of the Ural Orogen and then farther to the south across the Scythian-Turan Plate to the sublatitudinal Herirud fault in Afghanistan. The postcollisional character of the dextral displacements along the Ural-Herirud fault and their Triassic-Jurassic age are proven. We have estimated the scale of the displacements and attempted a paleoreconstruction illustrating the relationship between the Variscides of the Urals and the Tien Shan before the tectonic displacements. The analysis of new data includes the latest generation of 1:200,000 geological maps and the regional seismic profiling data obtained in the most elevated part of the Urals (from the Middle Urals seismic profile in the north to the Uralseis seismic profile in the south), as well as within the sedimentary cover of the Turan Plate, from Mugodzhary to the southern boundaries of the former Aral Sea water area. General typomorphic features of transcontinental strike-slip fault systems are considered and a structural model of the Ural-Herirud postcollisional strike-slip fault is presented.

  17. Seismic and aseismic fault slip in response to fluid injection observed during field experiments at meter scale

    NASA Astrophysics Data System (ADS)

    Cappa, F.; Guglielmi, Y.; De Barros, L.; Wynants-Morel, N.; Duboeuf, L.

    2017-12-01

    During fluid injection, the observation of an enlarging cloud of seismicity is generally explained as a direct response to pore pressure diffusion in a permeable fractured rock. However, fluid injection can also induce large aseismic deformations, which provide an alternative mechanism for triggering and driving seismicity. Despite the importance of these two mechanisms during fluid injection, there are few studies on the effects of fluid pressure on the partitioning between seismic and aseismic motions under controlled field experiments. Here, we describe in-situ meter-scale experiments measuring synchronously the fluid pressure, the fault motions and the seismicity directly in a fault zone stimulated by controlled fluid injection at 280 m depth in carbonate rocks. The experiments were conducted in a gallery of an underground laboratory in the south of France (LSBB, http://lsbb.eu). Thanks to proximal monitoring at high frequency, our data show that the fluid overpressure mainly induces dilatant aseismic slip (several tens of microns up to a millimeter) at the injection. Sparse seismicity (-4 < Mw < -3) is observed several meters away from the injection, in a part of the fault zone where the fluid overpressure is null or very low. Using hydromechanical modeling with friction laws, we simulated an experiment and investigated the relative contributions of fluid pressure diffusion and stress transfer to the seismic and aseismic fault behavior. The model reproduces the hydromechanical data measured at the injection, and shows that the aseismic slip induced by fluid injection propagates outside the pressurized zone, where accumulated shear stress develops, and potentially triggers seismicity. Our models also show that the permeability enhancement and friction evolution are essential to explain the fault slip behavior. Our experimental results are consistent with large-scale observations of fault motions at geothermal sites (Wei et al., 2015; Cornet, 2016), and

  18. Development of N-version software samples for an experiment in software fault tolerance

    NASA Technical Reports Server (NTRS)

    Lauterbach, L.

    1987-01-01

    The report documents the task planning and software development phases of an effort to obtain twenty versions of code independently designed and developed from a common specification. These versions were created for use in future experiments in software fault tolerance, in continuation of the experimental series underway at the Systems Validation Methods Branch (SVMB) at NASA Langley Research Center. The 20 versions were developed under controlled conditions at four U.S. universities, by 20 teams of two researchers each. The versions process raw data from a modified Redundant Strapped Down Inertial Measurement Unit (RSDIMU). The specifications, and over 200 questions submitted by the developers concerning the specifications, are included as appendices to this report. Design documents, and design and code walkthrough reports for each version, were also obtained in this task for use in future studies.
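
    The end use of these twenty versions is classic N-version fault tolerance: run the independently developed versions on the same input and accept an output only when a majority agrees. The sketch below is a generic illustration of that voting idea; the helper and its tolerance handling are hypothetical, not the RSDIMU code or the experiment's actual voter:

    ```python
    # Hedged, generic sketch of N-version majority voting (illustrative only).
    from collections import Counter

    def majority_vote(outputs, tol=1e-6):
        """Return the value agreed on by a strict majority of versions, or None."""
        # Bin numeric outputs to a tolerance so benign rounding differences
        # between independently developed versions do not defeat the vote.
        binned = Counter(round(x / tol) for x in outputs)
        value, count = binned.most_common(1)[0]
        if count > len(outputs) // 2:
            return value * tol
        return None  # no majority: treat as a detected, uncorrectable fault

    # Example: 20 versions, one of which produces a divergent answer.
    results = [1.2345678] * 19 + [9.9]
    print(majority_vote(results))  # ~1.234568 (the majority value, to within tol)
    ```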

  19. Science support for the Earth radiation budget experiment

    NASA Technical Reports Server (NTRS)

    Coakley, James A., Jr.

    1994-01-01

    The work undertaken as part of the Earth Radiation Budget Experiment (ERBE) included the following major components: The development and application of a new cloud retrieval scheme to assess errors in the radiative fluxes arising from errors in the ERBE identification of cloud conditions. The comparison of the anisotropy of reflected sunlight and emitted thermal radiation with the anisotropy predicted by the Angular Dependence Models (ADM's) used to obtain the radiative fluxes. Additional studies included the comparison of calculated longwave cloud-free radiances with those observed by the ERBE scanner and the use of ERBE scanner data to track the calibration of the shortwave channels of the Advanced Very High Resolution Radiometer (AVHRR). Major findings included: the misidentification of cloud conditions by the ERBE scene identification algorithm could cause 15 percent errors in the shortwave flux reflected by certain scene types. For regions containing mixtures of scene types, the errors were typically less than 5 percent, and the anisotropies of the shortwave and longwave radiances exhibited a spatial scale dependence which, because of the growth of the scanner field of view from nadir to limb, gave rise to a view zenith angle dependent bias in the radiative fluxes.

  20. Closed-Loop HIRF Experiments Performed on a Fault Tolerant Flight Control Computer

    NASA Technical Reports Server (NTRS)

    Belcastro, Celeste M.

    1997-01-01

    Closed-loop HIRF experiments were performed on a fault tolerant flight control computer (FCC) at the NASA Langley Research Center. The FCC used in the experiments was a quad-redundant flight control computer executing B737 Autoland control laws. The FCC was placed in one of the mode-stirred reverberation chambers in the HIRF Laboratory and interfaced to a computer simulation of the B737 flight dynamics, engines, sensors, actuators, and atmosphere in the Closed-Loop Systems Laboratory. Disturbances to the aircraft associated with wind gusts and turbulence were simulated during tests. Electrical isolation between the FCC under test and the simulation computer was achieved via a fiber optic interface for the analog and discrete signals. Closed-loop operation of the FCC enabled flight dynamics and atmospheric disturbances affecting the aircraft to be represented during tests. Upset was induced in the FCC as a result of exposure to HIRF, and the effect of upset on the simulated flight of the aircraft was observed and recorded. This paper presents a description of these closed-loop HIRF experiments, upset data obtained from the FCC during these experiments, and closed-loop effects on the simulated flight of the aircraft.

  1. A New Kinematic Model for Polymodal Faulting: Implications for Fault Connectivity

    NASA Astrophysics Data System (ADS)

    Healy, D.; Rizzo, R. E.

    2015-12-01

    Conjugate, or bimodal, fault patterns dominate the geological literature on shear failure. Based on Anderson's (1905) application of the Mohr-Coulomb failure criterion, these patterns have been interpreted from all tectonic regimes, including normal, strike-slip and thrust (reverse) faulting. However, a fundamental limitation of the Mohr-Coulomb failure criterion - and others that assume faults form parallel to the intermediate principal stress - is that only plane strain can result from slip on the conjugate faults. Deformation in the Earth, by contrast, is widely accepted as being three-dimensional, with truly triaxial stresses and strains. Polymodal faulting, with three or more sets of faults forming and slipping simultaneously, can generate three-dimensional strains from truly triaxial stresses. Laboratory experiments and outcrop studies have verified the occurrence of polymodal fault patterns in nature. The connectivity of polymodal fault networks differs significantly from that of conjugate fault networks, and this presents challenges to our understanding of faulting and an opportunity to improve our understanding of seismic hazards and fluid flow. Polymodal fault patterns will, in general, have more connected nodes in 2D (and more branch lines in 3D) than comparable conjugate (bimodal) patterns. The anisotropy of permeability is therefore expected to be very different in rocks with polymodal fault patterns in comparison to conjugate fault patterns, and this has implications for the development of hydrocarbon reservoirs, the genesis of ore deposits and the management of aquifers. In this contribution, I assess the published evidence and models for polymodal faulting before presenting a novel kinematic model for general triaxial strain in the brittle field.
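
    For reference, the plane-strain limitation follows from the standard Coulomb criterion and Anderson's geometric prediction for the resulting fault pair; this is the textbook statement, included here for context rather than taken from the abstract:

    ```latex
    % Coulomb failure criterion and the Andersonian conjugate-fault geometry
    \[
      \tau = C + \mu\,\sigma_n ,
      \qquad
      \theta = \pm\left(45^{\circ} - \tfrac{1}{2}\arctan\mu\right)\ \text{from}\ \sigma_1 .
    \]
    % Both predicted planes contain \sigma_2, so slip on them yields plane strain
    % (no stretch along \sigma_2). Example: \mu = 0.6 gives \theta \approx \pm 29.5^{\circ}.
    ```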

  2. Fluid pressure and fault strength: insights from load-controlled experiments on carbonate-bearing rocks

    NASA Astrophysics Data System (ADS)

    Spagnuolo, E.; Violay, M.; Nielsen, S. B.; Di Toro, G.

    2013-12-01

    Fluid pressure Pf has been indicated as a major factor controlling natural (e.g., L'Aquila, Italy, 2009 Mw 6.3) and induced seismicity (e.g., Wilzetta, Oklahoma, 2011 Mw 5.7). Terzaghi's principle states that the effective normal stress σeff = σn - α Pf, with α the Biot coefficient and σn the normal stress, is reduced in proportion to Pf. A value of α=1 is often used by default; however, within a complex fault core of inhomogeneous permeability, α may vary in a yet poorly understood way. To shed light on this problem, we conducted experiments on carbonate-bearing rock samples (Carrara marble) in room-humidity conditions and in the presence of pore fluids (drained conditions), where a pre-cut fault is loaded by shear stress τ in a rotary apparatus (SHIVA) under constant σn=15 MPa. Two types of tests were performed with fluids: (1) the fluid pressure was kept constant at Pf=5 MPa (close to hydrostatic conditions at a depth of 0.5 km) and the fault was driven to failure instability by gradually increasing τ; (2) the fluid pressure was kept at Pf=5 MPa and τ was increased until close to instability (τ = 7 MPa): at this point Pf was raised by 0.5 MPa every 10 s up to Pf=10 MPa to induce a main (failure) instability. Assuming α=1 and an effective peak strength (τp)eff=μp σeff at failure, the experiments reveal that: 1) (τp)eff is sensitive to the shear loading rate: fast loading rates (0.5 MPa every 20 s) induce higher peak shear-stress values than slow loading rates (0.5 MPa every 40 s); such an effect is minor or absent in the absence of pore fluids. 2) Under fast loading rates the (τp)eff may exceed that measured in the absence of pore fluids under identical effective normal stress σeff. 3) An increase of Pf does not necessarily induce the main instability (within the time intervals studied here, i.e. up to ~10 s) even if the effective strength threshold is largely surpassed (e.g., (τp)eff=1.3 μp σeff). We interpret these
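
    Restating Terzaghi's principle with the abstract's own numbers (taking α = 1, as the authors assume):

    ```latex
    % Effective normal stress for the reported conditions (assuming \alpha = 1)
    \[
      \sigma_{\mathrm{eff}} = \sigma_n - \alpha P_f
      \;\Rightarrow\;
      \sigma_{\mathrm{eff}} = 15 - 5 = 10~\mathrm{MPa}\quad (P_f = 5~\mathrm{MPa}),
      \qquad
      \sigma_{\mathrm{eff}} = 15 - 10 = 5~\mathrm{MPa}\quad (P_f = 10~\mathrm{MPa}).
    \]
    ```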

  3. Magnetic behaviors of cataclasites within Wenchuan earthquake fault zone in heating experiments

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Li, H.; Sun, Z.; Chou, Y. M.; Cao, Y., Jr.; Huan, W.; Ye, X.; He, X.

    2017-12-01

    Rock magnetism of fault rocks has previously been used to trace frictional heating temperatures; however, few studies have focused on the effect of different temperatures on rock magnetic properties. To investigate the rock magnetic response to different temperatures, we conducted heating experiments on cataclasites from the Wenchuan earthquake Fault Scientific Drilling borehole 2 (WFSD-2) cores. Samples of cataclasites were obtained from 580.65 m depth using an electric drill with a 1 cm-diameter drill pipe. Experiments were performed with a thermo-optical measurement system under an argon atmosphere at elevated temperatures. Both microstructural observations and powder X-ray diffraction analyses show that feldspar and quartz start to melt at 1100 °C and 1300 °C, respectively. Magnetic susceptibility values of the samples after heating are higher than before heating. Samples heated at 700 and 1750 °C have the highest magnetic susceptibility values. Rock magnetic measurements show that the main ferromagnetic mineral within samples heated at or below 1100 °C (400, 700, 900 and 1100 °C) is magnetite, newly formed by transformation of paramagnetic minerals. The χferri results show that more magnetite forms in the sample heated at 700 °C than in those heated at 400, 900 and 1100 °C. Based on the FORC diagrams, we consider that magnetite grains become finer from 400 to 900 °C and coarser when heated from 900 to 1100 °C. SEM-EDX results indicate that pure iron forms at higher temperatures (1300, 1500 and 1750 °C), presenting as framboids <10 μm in size. Rock magnetic measurements imply that pure iron is the main ferromagnetic material in these heated samples. The amount and size of the iron framboids increase with increasing temperature. Therefore, we conclude that the paramagnetic minerals are decomposed into fine magnetite, then into coarse-grained magnetite, and finally into pure iron at very high temperature. Newly formed magnetite contributes to

  4. The Crew Earth Observations Experiment: Earth System Science from the ISS

    NASA Technical Reports Server (NTRS)

    Stefanov, William L.; Evans, Cynthia A.; Robinson, Julie A.; Wilkinson, M. Justin

    2007-01-01

    This viewgraph presentation reviews the use of Astronaut Photography (AP) taken from the International Space Station (ISS) in Earth System Science (ESS). Included are slides showing basic remote sensing theory, data characteristics of astronaut photography, astronaut training and operations, the Crew Earth Observations group, site targeting and acquisition, cataloging and the database, and analysis and applications for ESS, with image analysis of areas of particular interest: urban areas, megafans, deltas, and coral reefs. Examples of the photographs and the analysis are included.

  5. Laboratory experiments on liquid fragmentation during Earth's core formation

    NASA Astrophysics Data System (ADS)

    Landeau, M.; Deguen, R.; Olson, P.

    2013-12-01

    Buoyancy-driven fragmentation of one liquid in another immiscible liquid likely occurred on a massive scale during the formation of the Earth, when dense liquid metal blobs were released within deep molten silicate magma oceans. Another example of this phenomenon is the sudden release of petroleum into the ocean during the Deepwater Horizon disaster (Gulf of Mexico, 2010). We present experiments on the instability and fragmentation of blobs of a heavy liquid released into a lighter immiscible liquid. During the fragmentation process, we observe deformation of the released fluid, formation of filamentary structures, capillary instability, and eventually drop formation. We find that, at low and intermediate Weber numbers (which measure the importance of inertia versus surface tension), the fragmentation regime mainly results from the competition between a Rayleigh-Taylor instability and the roll-up of a vortex ring. At sufficiently high Weber numbers (the relevant regime for core formation), the fragmentation process becomes turbulent. The large-scale flow then behaves as a turbulent vortex ring or a turbulent thermal: it forms a coherent structure whose shape remains self-similar during the fall and which grows by turbulent entrainment of ambient fluid. An integral model based on the entrainment assumption, and adapted to buoyant vortex rings with initial momentum, is consistent with our experimental data. This indicates that the concept of turbulent entrainment is valid for non-dispersed immiscible fluids at large Weber and Reynolds numbers. (Figure: series of photographs of the turbulent fragmentation regime at time intervals of about 0.2 s; magnified portions shown in red boxes at right.)
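
    For reference, the Weber and Reynolds numbers invoked above are the standard dimensionless groups below; the exact choices of velocity and length scales used in the study may differ:

    ```latex
    % Standard definitions: U, L = characteristic velocity and blob size;
    % \rho = density, \sigma = interfacial tension, \eta = dynamic viscosity
    \[
      \mathrm{We} = \frac{\rho\,U^{2} L}{\sigma}
      \quad\text{(inertia vs. surface tension)},
      \qquad
      \mathrm{Re} = \frac{\rho\,U L}{\eta}
      \quad\text{(inertia vs. viscosity)}.
    \]
    ```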

  6. Land Use Planning Experiment for Introductory Earth Science Courses

    ERIC Educational Resources Information Center

    Fetter, C. W., Jr.; Hoffman, James I.

    1975-01-01

    Describes an activity which incorporates topographic map interpretation, soils analysis, hydrogeology, and local geology in a five-week series of exercises for an introductory college earth science class. (CP)

  7. Patterns in Crew-Initiated Photography of Earth from ISS - Is Earth Observation a Salutogenic Experience?

    NASA Technical Reports Server (NTRS)

    Robinson, Julie A.; Slack, Kelley J.; Olson, Valerie A.; Trenchard, Mike; Willis, Kim; Baskin, Pam; Ritsher, Jennifer Boyd

    2006-01-01

    To provide for the well-being of crewmembers on future exploration missions, understanding how space station crewmembers handle the inherently stressful isolation and confinement during long-duration missions is important. A recent retrospective survey of previously flown astronauts found that the most commonly reported psychologically enriching aspects of spaceflight had to do with their Perceptions of Earth. Crewmembers onboard the International Space Station (ISS) photograph Earth through the station windows. Some of these photographs are in response to requests from scientists on the ground through the Crew Earth Observations (CEO) payload. Other photographs taken by crewmembers have not been in response to these formal requests. The automatically recorded data from the camera provides a dataset that can be used to test hypotheses about factors correlated with self-initiated crewmember photography. The present study used objective in-flight data to corroborate the previous questionnaire finding and to further investigate the nature of voluntary Earth-Observation activity. We examined the distribution of photographs with respect to time, crew, and subject matter. We also determined whether the frequency fluctuated in conjunction with major mission events such as vehicle dockings, and extra-vehicular activities (EVAs, or spacewalks), relative to the norm for the relevant crew. We also examined the influence of geographic and temporal patterns on frequency of Earth photography activities. We tested the hypotheses that there would be peak photography intensity over locations of personal interest, and on weekends. From December 2001 through October 2005 (Expeditions 4-11) crewmembers took 144,180 photographs of Earth with time and date automatically recorded by the camera. Of the time-stamped photographs, 84.5% were crew-initiated, and not in response to CEO requests. Preliminary analysis indicated some phasing in patterns of photography during the course of a

  8. Vitrinite reflectance and Raman spectra of carbonaceous material as indicators of frictional heating on faults: Constraints from friction experiments

    NASA Astrophysics Data System (ADS)

    Furuichi, Hiroyuki; Ujiie, Kohtaro; Kouketsu, Yui; Saito, Tsubasa; Tsutsumi, Akito; Wallis, Simon

    2015-08-01

    Vitrinite reflectance (Ro) and Raman spectra of carbonaceous material (RSCM) are both widely used as indicators of the maximum attained temperatures in sedimentary and metamorphic rocks. However, the potential of these methods to estimate temperature increases associated with fault slip has not been closely studied. To examine this issue, friction experiments were conducted on a mixture of powdered clay-rich fault material and carbonaceous material (CM) at slip rates of 0.15 mm/s and 1.3 m/s in nitrogen (N2) gas with or without distilled water. After the experiments, we measured Ro and RSCM and compared them to those in the starting material. The results indicate that when fault material suffers rapid heating at >500 °C in ∼9 s at 1.3 m/s, Ro and the intensity ratio of D1 and D2 Raman bands of CM (ID2/ID1) markedly increase. Comminution with very small temperature rise in ∼32 min at 0.15 mm/s is responsible for very limited changes in Ro and ID2/ID1. Our results demonstrate that Ro and RSCM could be useful for the detection of frictional heating on faults when the power density is ≥0.52 MW/m2. However, the conventionally used Ro and RSCM geothermometers are inadequate for the estimation of peak temperature during seismic fault slip. Reaction kinetics incorporating the effects of rapid heating at high slip rates and studies of the original microtexture and composition of CM are required to establish a reliable thermometer for frictional heating on faults.
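
    The power density threshold quoted above can be put in context using the usual definition of frictional power density as shear stress times slip rate; this is a standard relation, not a value derived in the paper beyond the quoted threshold:

    ```latex
    % Frictional power density on a sliding fault: shear stress times slip rate
    \[
      \Phi = \tau\,V
      \;\Rightarrow\;
      \Phi \ge 0.52~\mathrm{MW/m^2}\ \text{at}\ V = 1.3~\mathrm{m/s}
      \ \text{corresponds to}\ \tau \gtrsim 0.4~\mathrm{MPa}.
    \]
    ```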

  9. Grain size distribution of fault rocks: implication from natural gouges and high velocity friction experiments

    NASA Astrophysics Data System (ADS)

    Yang, X.; Chen, J.; Duan, B.

    2011-12-01

    The grain size distribution (GSD) is considered an important parameter for the characterization of fault rocks. The relative magnitude of energy radiated as seismic waves to fracture energy plays a fundamental role in earthquake rupture dynamics. Currently, the details of the grain size reduction mechanism and of the energy budget are not well known. Here we present GSD measurements on fault rocks (gouge and breccias) in the main slip zone associated with the Wenchuan earthquake of 12 May 2008, and on gouges produced by high velocity friction (HVF) experiments. The HVF experiments were carried out on air-dry granitic powder with grain sizes of 150 - 300 μm at a normal stress of 1.0 MPa, a slip rate of 1.0 m/s and slip distances from 10 m to 30 m. On log-log plots of N(r) versus equivalent radius, two distinct linear parts can be discriminated, with their intersection at 1 - 2 μm, defined as the critical radius rc. One power-law regime spans about four decades, from 4 μm to 16 mm, and the other covers the range 0.2 - 2.0 μm. Larger fractal dimensions, from 2.7 to 3.5, are obtained for the larger grain size regime, while lower values, from 1.7 to 2.1, characterize the smaller one. This two-stage distribution means the GSD is not self-similar (scale invariant) and that the dominant grain size reduction mechanisms may differ between the two regimes. XRD data show that the content of quartz drops greatly or disappears at 0.5 - 0.25 μm. The GSD of the HVF experimental products shows features similar to the natural gouges. For instance, they all show the two-stage GSD with a critical radius rc of 1 - 2 μm. Grains smaller than 1 μm show rounded edges and equiaxial shapes. A variation in grain shapes can be observed in grains larger than 5 μm. Some implications can be drawn from the measurements and experiments. (1) rc corresponds to the average grinding limit of the rock-forming minerals. Further grain size reduction could be
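
    For reference, the fractal dimension D reported above is conventionally read off such log-log plots from the slope of each linear segment; this is the standard relation, not specific to this study:

    ```latex
    % Power-law (fractal) cumulative grain size distribution
    \[
      N(>r) \propto r^{-D}
      \quad\Longleftrightarrow\quad
      \log N = -D \log r + \mathrm{const},
    \]
    % so D is minus the slope of each linear segment on the log-log plot, with
    % the break at r_c \approx 1\text{--}2~\mu\mathrm{m} separating the two regimes.
    ```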

  10. Modelling Active Faults in Probabilistic Seismic Hazard Analysis (PSHA) with OpenQuake: Definition, Design and Experience

    NASA Astrophysics Data System (ADS)

    Weatherill, Graeme; Garcia, Julio; Poggi, Valerio; Chen, Yen-Shin; Pagani, Marco

    2016-04-01

    The Global Earthquake Model (GEM) has, since its inception in 2009, made many contributions to the practice of seismic hazard modeling in different regions of the globe. The OpenQuake-engine (hereafter referred to simply as OpenQuake), GEM's open-source software for calculation of earthquake hazard and risk, has found application in many countries, spanning a diversity of tectonic environments. GEM itself has produced a database of national and regional seismic hazard models, harmonizing into OpenQuake's own definition the varied seismogenic sources found therein. The characterization of active faults in probabilistic seismic hazard analysis (PSHA) is at the centre of this process, motivating many of the developments in OpenQuake and presenting hazard modellers with the challenge of reconciling seismological, geological and geodetic information for the different regions of the world. Faced with these challenges, and from the experience gained in the process of harmonizing existing models of seismic hazard, four critical issues are addressed. The challenge GEM has faced in the development of software is how to define a representation of an active fault (both in terms of geometry and earthquake behaviour) that is sufficiently flexible to adapt to different tectonic conditions and levels of data completeness. By exploring the different fault typologies supported by OpenQuake we illustrate how seismic hazard calculations can, and do, take into account complexities such as geometrical irregularity of faults in the prediction of ground motion, highlighting some of the potential pitfalls and inconsistencies that can arise. This exploration leads to the second main challenge in active fault modeling, what elements of the fault source model impact most upon the hazard at a site, and when does this matter? Through a series of sensitivity studies we show how different configurations of fault geometry, and the corresponding characterisation of near-fault phenomena (including
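
    One concrete piece of the fault characterization problem described above is turning geological or geodetic slip rates into earthquake rates. A common approach is moment balancing against a characteristic magnitude; the sketch below is a generic, hedged illustration in plain Python, deliberately not using OpenQuake's actual API, and the shear modulus, fault dimensions, slip rate and magnitude are all illustrative assumptions:

    ```python
    # Hedged sketch: earthquake recurrence from a fault slip rate by moment balance.
    MU = 3.0e10  # crustal shear modulus, Pa (typical assumed value)

    def characteristic_rate(area_km2, slip_rate_mm_yr, mag_char):
        """Annual rate of a characteristic earthquake that balances the moment rate."""
        area_m2 = area_km2 * 1.0e6
        slip_m_yr = slip_rate_mm_yr * 1.0e-3
        moment_rate = MU * area_m2 * slip_m_yr        # N*m accumulated per year
        m0_char = 10.0 ** (1.5 * mag_char + 9.05)     # Hanks & Kanamori (1979), N*m
        return moment_rate / m0_char

    # Illustrative fault: 30 km x 15 km, slipping at 1 mm/yr, characteristic Mw 6.5.
    rate = characteristic_rate(area_km2=450.0, slip_rate_mm_yr=1.0, mag_char=6.5)
    print(f"~{rate:.4f} events/yr, i.e. roughly one every {1.0/rate:.0f} years")
    ```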

  11. SLSF in-reactor local fault safety experiment P4. Final report

    SciT

    Thompson, D. H.; Holland, J. W.; Braid, T. H.

    The Sodium Loop Safety Facility (SLSF), a major facility in the US fast-reactor safety program, has been used to simulate a variety of sodium-cooled fast reactor accidents. SLSF experiment P4 was conducted to investigate the behavior of a "worse-than-case" local fault configuration. Objectives of this experiment were to eject molten fuel into a 37-pin bundle of full-length Fast-Test-Reactor-type fuel pins from heat-generating fuel canisters, to characterize the severity of any molten fuel-coolant interaction, and to demonstrate that any resulting blockage could either be tolerated during continued power operation or detected by global monitors to prevent fuel failure propagation. The design goal for molten fuel release was 10 to 30 g. Expulsion of molten fuel from the fuel canisters caused failure of adjacent pins and a partial flow channel blockage in the fuel bundle during full-power operation. Molten fuel and fuel debris also lodged against the inner surface of the test subassembly hex-can wall. The total fuel disruption of 310 g evaluated from posttest examination data was in excellent agreement with results from the SLSF delayed neutron detection system, but exceeded the target molten fuel release by an order of magnitude. This report contains a summary description of the SLSF in-reactor loop and support systems and of the experiment operations. Results of the detailed macro- and microexamination of disrupted fuel and metal and results from the analysis of the on-line experimental data are described, as are the interpretations and conclusions drawn from the posttest evaluations. 60 refs., 74 figs.

  12. Permeability Variations Associated With Fault Reactivation in a Claystone Formation Investigated by Field Experiments and Numerical Simulations

    NASA Astrophysics Data System (ADS)

    Jeanne, Pierre; Guglielmi, Yves; Rutqvist, Jonny; Nussbaum, Christophe; Birkholzer, Jens

    2018-02-01

    We studied the relation between rupture and changes in permeability within a fault zone intersecting the Opalinus Clay formation at 300 m depth in the Mont Terri Underground Research Laboratory (Switzerland). A series of water injection experiments were performed in a borehole straddle interval set within the damage zone of the main fault. A three-component displacement sensor allowed an estimation of the displacement of a minor fault plane reactivated during a succession of step rate pressure tests. The experiment reveals that the fault hydromechanical (HM) behavior differs from one test to the next, with varying pressure levels needed to trigger rupture and different slip behavior under similar pressure conditions. Numerical simulations were performed to better understand the reason for such different behavior and to investigate the relation between rupture nucleation, permeability change, pressure diffusion, and rupture propagation. Our main findings are as follows: (i) a rate frictional law and a rate-and-state permeability law can reproduce the first test, but it appears that the rate constitutive parameters must be pressure dependent to reproduce the complex HM behavior observed during the successive injection tests; (ii) nearly identical ruptures can either create or destroy fluid diffusion pathways; (iii) too high or too low a diffusivity created by the main rupture prevents secondary rupture events from occurring, whereas an "intermediate" diffusivity favors the nucleation of a secondary rupture associated with the fluid diffusion. However, because rupture may in certain cases destroy permeability, this succession of ruptures may not necessarily create a continuous hydraulic pathway.
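
    The constitutive framework invoked in finding (i) can be summarized compactly. The sketch below implements a standard rate-and-state friction law with the Dieterich "aging" state evolution and a simple velocity-step test; the parameter values are generic illustrative numbers, not the pressure-dependent parameters calibrated against the Mont Terri injection tests, and no permeability law is included.

        import numpy as np

        # Minimal sketch of rate-and-state friction with the Dieterich "aging" law:
        #   mu = mu0 + a*ln(v/v0) + b*ln(v0*theta/dc),   dtheta/dt = 1 - v*theta/dc
        # Parameter values are generic illustrative numbers, not the calibrated,
        # pressure-dependent parameters of the Mont Terri simulations.
        MU0, A, B, DC, V0 = 0.6, 0.010, 0.015, 1e-5, 1e-6

        def friction(v, theta):
            return MU0 + A * np.log(v / V0) + B * np.log(V0 * theta / DC)

        def evolve_state(v, theta, dt):
            return theta + dt * (1.0 - v * theta / DC)   # forward Euler update

        # Impose a tenfold step in slip velocity and track friction re-equilibration
        theta, dt, mu_log = DC / V0, 0.01, []
        for step in range(2000):
            v = V0 if step < 1000 else 10.0 * V0
            theta = evolve_state(v, theta, dt)
            mu_log.append(friction(v, theta))
        print(f"friction before step: {mu_log[900]:.4f}, after: {mu_log[-1]:.4f}")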

  13. Earth observations and photography experiment: Summary of significant results

    NASA Technical Reports Server (NTRS)

    El-Baz, F.

    1978-01-01

    Observation and photographic data from the Apollo Soyuz Test Project are analyzed. The discussion is structured according to the fields of investigation including: geology, desert studies, oceanography, hydrology, and meteorology. The data were obtained by: (1) visual observations of selected Earth features, (2) hand-held camera photography to document observations, and (3) stereo mapping photography of areas of significant scientific interest.

  14. Mission requirements for a manned earth observatory. Volume 1, task 1: Experiment selection, definition, and documentation

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Information related to proposed earth observation experiments for shuttle sortie missions (SSM) in the 1980's is presented. The step-wise progression of study activities and the development of the rationale that led to the identification, selection, and description of earth observation experiments for SSM are listed. The selected experiments are described, defined, and documented by individual disciplines. These disciplines include: oceanography; meteorology; agriculture, forestry, and rangeland; geology; hydrology; and environmental impact.

  15. On the Possibility of Estimation of the Earth Crust's Properties from the Observations of Electric Field of Electrokinetic Origin, Generated by Tidal Deformation within the Fault Zone

    NASA Astrophysics Data System (ADS)

    Alekseev, D. A.; Gokhberg, M. B.

    2018-05-01

    A 2-D boundary problem formulation in terms of pore pressure in Biot poroelasticity model is discussed, with application to a vertical contact model mechanically excited by a lunar-solar tidal deformation wave, representing a fault zone structure. A problem parametrization in terms of permeability and Biot's modulus contrasts is proposed and its numerical solution is obtained for a series of models differing in the values of the above parameters. The behavior of pore pressure and its gradient is analyzed. From those, the electric field of the electrokinetic nature is calculated. The possibilities of estimation of the elastic properties and permeability of geological formations from the observations of the horizontal and vertical electric field measured inside the medium and at the earth's surface near the block boundary are discussed.
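
    The last step described above, obtaining the electric field of electrokinetic origin from the computed pore-pressure field, is commonly written as E = -C*grad(p), with C the streaming-potential coupling coefficient. A minimal one-dimensional sketch is given below; the coupling coefficient and the pressure profile are assumed, order-of-magnitude illustrations, not values from the cited study.

        import numpy as np

        # Electrokinetic (streaming-potential) field from a pore-pressure gradient,
        # E = -C * grad(p).  The coupling coefficient below is an assumed
        # order-of-magnitude value, not one taken from the study.
        C_COUPLING = 1e-8          # V/Pa, assumed

        def electrokinetic_field(pressure_pa, dx_m):
            """Return the 1-D electric field (V/m) implied by a pressure profile."""
            grad_p = np.gradient(pressure_pa, dx_m)      # Pa/m
            return -C_COUPLING * grad_p                  # V/m

        # Example: a 1 kPa tidal pressure perturbation decaying over ~100 m near a fault
        x = np.linspace(0.0, 200.0, 201)                 # metres
        p = 1e3 * np.exp(-x / 100.0)                     # Pa
        E = electrokinetic_field(p, x[1] - x[0])
        print(f"peak |E| ~ {np.abs(E).max():.2e} V/m")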

  16. Susceptibility of experimental faults to pore pressure increase: insights from load-controlled experiments on calcite-bearing rocks

    NASA Astrophysics Data System (ADS)

    Spagnuolo, Elena; Violay, Marie; Nielsen, Stefan; Cornelio, Chiara; Di Toro, Giulio

    2017-04-01

    Fluid pressure has been indicated as a major factor controlling natural (e.g., L'Aquila, Italy, 2009 Mw 6.3) and induced seismicity (e.g., Wilzetta, Oklahoma, 2011 Mw 5.7). Terzaghi's principle states that the effective normal stress is linearly reduced by a pore pressure (Pf) increase: σeff = σn − αPf, where the effective stress parameter α may be related to the fraction of the fault area that is flooded. A value of α = 1 is often used by default, with Pf shifting the Mohr circle towards lower normal effective stresses and anticipating failure on pre-existing faults. However, within a complex fault core of inhomogeneous permeability, α may vary in a yet poorly understood way. To shed light on this problem, we conducted experiments on calcite-bearing rock samples (Carrara marble) at room humidity conditions and in the presence of pore fluids (drained conditions) using a rotary apparatus (SHIVA). A pre-cut fault is loaded by constant shear stress τ under constant normal stress σn = 15 MPa until a target value corresponding roughly to 80% of the frictional fault strength. The pore pressure Pf is then raised with regular pressure and time steps to induce fault instability. Assuming α = 1 and a threshold for instability τp_eff = μp σeff, the experiments reveal that an increase of Pf does not necessarily induce an instability even when the effective strength threshold is largely surpassed (e.g., when the applied shear stress reaches 1.3 μp σeff). This result may indicate that the Pf increase did not instantly diffuse throughout the slip zone, but took a finite time to equilibrate with the externally imposed pressure increase due to finite permeability. Under our experimental conditions, a significant departure from α = 1 is observed provided that the Pf step is shorter than about 20 s. We interpret this delay as indicative of the diffusion time (td), which is related to the fluid penetration length l by l = √(κ td), where κ is the hydraulic diffusivity on the fault plane. We show that a
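
    The penetration-length relation quoted above is straightforward to evaluate. The snippet below combines the roughly 20 s pressure-step duration with an assumed hydraulic diffusivity; the diffusivity value is illustrative only, not a number reported by the authors.

        import math

        # Quick check of the fluid penetration length l = sqrt(kappa * t_d).
        kappa = 1e-6     # m^2/s, assumed hydraulic diffusivity of the slip zone
        t_d = 20.0       # s, diffusion time suggested by the pressure-step duration

        l = math.sqrt(kappa * t_d)
        print(f"penetration length ~ {l * 1000:.1f} mm")   # ~4.5 mm for these values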

  17. Mantle-crust interaction at the Blanco Ridge segment of the Blanco Transform Fault Zone: Results from the Blanco Transform Fault OBS Experiment

    NASA Astrophysics Data System (ADS)

    Kuna, V. M.; Nabelek, J.; Braunmiller, J.

    2016-12-01

    We present results of the Blanco Transform OBS Experiment, which consists of the deployment of 55 three-component broadband and short-period ocean bottom seismometers in the vicinity of the Blanco Fault Zone for the period between September 2012 and October 2013. Our research concentrates on the Blanco Ridge, a purely transform segment of the Blanco Fault Zone that spans over 130 km between the Cascadia and the Gorda pull-apart depressions. Almost 3,000 well-constrained earthquakes were detected and located along the Blanco Ridge by an automatic procedure (using BRTT Antelope) and relocated using a relative location algorithm (hypoDD). The catalog magnitude of completeness is M=2.2 with an overall b-value of 1. Earthquakes extend from 0 km to 20 km depth, but cluster predominantly at two depth levels: in the crust (5-7 km) and in the uppermost mantle (12-17 km). Statistical analysis reveals striking differences between crustal and mantle seismicity. The temporal distribution of crustal events follows common patterns given by Omori's law, while most mantle seismicity occurs in spatially tight sequences of unusually short durations lasting 30 minutes or less. These sequences cannot be described by known empirical laws. Moreover, we observe increased seismic activity in the uppermost mantle about 30 days before the largest (M=5.4) earthquake. Two mantle sequences occurred in a small area of 3x3 km about 4 and 2 weeks before the M=5.4 event. In the week leading up to the M=5.4 event we observe a significant downward migration of crustal seismicity, which results in the subsequent nucleation of the main event at the base of the crust. We hypothesize that the highly localized uppermost mantle seismicity is triggered by aseismic slow slip of the surrounding ductile mantle. We also suggest that the mantle slip loads the crust, eventually resulting in relatively large crustal earthquakes.
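
    For reference, the b-value quoted above is conventionally estimated with the Aki maximum-likelihood formula b = log10(e)/(mean(M) − Mc) applied to events above the magnitude of completeness. The helper below demonstrates the estimate on a synthetic Gutenberg-Richter catalogue; it is a generic sketch, not the catalogue processing actually used in this study (which would also need to correct for magnitude binning).

        import numpy as np

        def aki_b_value(magnitudes, m_c):
            """Aki (1965) maximum-likelihood b-value for a catalogue complete above m_c:
            b = log10(e) / (mean(M) - m_c)."""
            m = np.asarray(magnitudes)
            m = m[m >= m_c]
            return np.log10(np.e) / (m.mean() - m_c)

        # Synthetic check: an exponential (Gutenberg-Richter) catalogue with b = 1
        rng = np.random.default_rng(0)
        mags = 2.2 + rng.exponential(scale=1.0 / np.log(10), size=3000)
        print(f"recovered b ~ {aki_b_value(mags, m_c=2.2):.2f}")   # close to 1.0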

  18. Experimenting with Sensor Webs Using Earth Observing 1

    NASA Technical Reports Server (NTRS)

    Mandl, Dan

    2004-01-01

    The New Millennium Program (NMP) Earth Observing 1 (EO-1) satellite was launched on November 21, 2000, as a one-year technology validation mission. After an almost flawless first year of operations, EO-1 continued to operate as a test bed to validate additional technologies and concepts that will be applicable to future sensor webs. A sensor web is a group of sensors, whether space-based, ground-based, or airplane-based, which act in a collaborative autonomous manner to produce more value than would otherwise result from the individual observations.

  19. Earth

    2012-01-30

    Behold one of the more detailed images of the Earth yet created. This Blue Marble Earth montage shown above -- created from photographs taken by the Visible/Infrared Imager Radiometer Suite (VIIRS) instrument on board the new Suomi NPP satellite -- shows many stunning details of our home planet. The Suomi NPP satellite was launched last October and renamed last week after Verner Suomi, commonly deemed the father of satellite meteorology. The composite was created from the data collected during four orbits of the robotic satellite taken earlier this month and digitally projected onto the globe. Many features of North America and the Western Hemisphere are particularly visible on a high resolution version of the image. http://photojournal.jpl.nasa.gov/catalog/PIA18033

  20. Soil warming response: field experiments to Earth system models

    NASA Astrophysics Data System (ADS)

    Todd-Brown, K. E.; Bradford, M.; Wieder, W. R.; Crowther, T. W.

    2017-12-01

    The soil carbon response to climate change is extremely uncertain at the global scale, in part because of the uncertainty in the magnitude of the temperature response. To address this uncertainty we collected data from 48 soil warming manipulation studies and examined the temperature response using two different methods. First, we constructed a mixed effects model and extrapolated the effect of soil warming on soil carbon stocks under anticipated shifts in surface temperature during the 21st century. We saw significant vulnerability of soil carbon stocks, especially in high carbon soils. To place this effect in the context of anticipated changes in carbon inputs and moisture shifts, we applied a one-pool decay model with temperature sensitivities to the field data and imposed a post-hoc correction on the Earth system model simulations to integrate the field data with the simulated temperature response. We found that there was a slight elevation in the overall soil carbon losses, but that the field uncertainty of the temperature sensitivity parameter was as large as the variation among the model soil carbon projections. This implies that model-data integration is unlikely to constrain soil carbon simulations and highlights the importance of representing parameter uncertainty in these Earth system models to inform emissions targets.
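
    The "one pool decay model with temperature sensitivities" mentioned above can be written as dC/dt = I − k(T)·C. The sketch below uses a Q10 form for k(T); the Q10 parameterization and every numerical value are assumptions chosen for illustration, not the sensitivities fitted to the field data in this study.

        # Minimal one-pool soil-carbon decay model with a Q10-style temperature
        # sensitivity.  All parameter values below are illustrative assumptions.
        def soil_carbon_trajectory(c0, inputs, k_ref, q10, warming, years, t_ref=15.0):
            """dC/dt = I - k_ref * Q10**((T - T_ref)/10) * C, forward Euler, 1-yr steps."""
            c = c0
            for yr in range(years):
                temp = t_ref + warming * yr / years          # linear warming ramp
                k = k_ref * q10 ** ((temp - t_ref) / 10.0)
                c = c + inputs - k * c
            return c

        c_end = soil_carbon_trajectory(c0=10.0, inputs=0.25, k_ref=0.025, q10=2.0,
                                       warming=3.0, years=85)
        print(f"soil C after 85 yr of 3 K warming: {c_end:.2f} kg C m^-2 (start 10.00)")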

  1. Livingstone Model-Based Diagnosis of Earth Observing One Infusion Experiment

    NASA Technical Reports Server (NTRS)

    Hayden, Sandra C.; Sweet, Adam J.; Christa, Scott E.

    2004-01-01

    The Earth Observing One satellite, launched in November 2000, is an active earth science observation platform. This paper reports on the progress of an infusion experiment in which the Livingstone 2 Model-Based Diagnostic engine is deployed on Earth Observing One, demonstrating the capability to monitor the nominal operation of the spacecraft under command of an on-board planner, and demonstrating on-board diagnosis of spacecraft failures. Design and development of the experiment, specification and validation of diagnostic scenarios, characterization of performance results and benefits of the model-based approach are presented.

  2. An expert system for fault management assistance on a space sleep experiment

    NASA Technical Reports Server (NTRS)

    Atamer, A.; Delaney, M.; Young, L. R.

    2002-01-01

    The expert system, Principal Investigator-in-a-box, or [PI], was designed to assist astronauts or other operators in performing experiments outside their expertise. Currently, the software helps astronauts calibrate instruments for a Sleep and Respiration Experiment without contact with the investigator on the ground. It flew on the Space Shuttle missions STS-90 and STS-95. [PI] displays electrophysiological signals in real time, alerts astronauts via indicator lights when poor signal quality is detected, and advises astronauts how to restore good signal quality. Thirty subjects received training on the sleep instrumentation and the [PI] interface. Beneficial effects of [PI] and of training included reduced troubleshooting time. [PI] benefited subjects on the most difficult scenarios, even though its lights were not 100% accurate. Further, questionnaires showed that most subjects preferred monitoring waveforms with [PI] assistance rather than monitoring waveforms alone. This study addresses problems of complex troubleshooting and the extended time between training and execution that are common to many human operator situations on earth, such as power plant operation and marine exploration.

  3. Frictional properties of Alpine Fault gouge in high-velocity shear experiments

    NASA Astrophysics Data System (ADS)

    Morgan, C.; Reches, Z.

    2015-12-01

    The Alpine Fault, New Zealand, is a plate boundary with a slip rate of ~37 mm/yr and major historic seismic events. The Deep Fault Drilling Program (DFDP) into the Alpine Fault had two phases, in 2011 and 2014, with main objectives of fault-zone sampling and borehole instrumentation. As complementary work to the drilling, we analyze the frictional properties of Alpine Fault gouge using samples collected at three field exposures (Waikukupa, Cataclasite, and Gaunt) at distances up to 70 km away from DFDP-2. The bulk samples (1-3 kg) were first manually disintegrated without shear, and then sieved to the 250-350 micron fraction. The gouge was sheared in a Confined Rotary Cell (CROC) under natural moisture conditions, at a slip-velocity range of 0.01 m/s to 0.5 m/s (constant and stepped) under normal stresses of 2-3 MPa. Runs included monitoring the CO2 and H2O emission, in addition to the standard mechanical parameters. The preliminary results show an initial friction coefficient of ~0.6. Initial slip at low velocities (0.01 m/s) displays gentle velocity strengthening that changed to a drastic weakening (~50%) at a velocity of 0.5 m/s. This weakening was associated with intense slip localization along a hard, dark slip surface within the gouge zone. After the establishment of this slip surface, the low friction persists during the following low slip-velocity steps. Future work will include: (1) systematic investigation of the dynamic friction dependence on the slip-velocity and slip-distance; (2) analysis of the relations between friction, mineralogy and the release of CO2/H2O; and (3) application of the experimental results to characterize natural fault behavior.

  4. Big Earth Data: the Film, the Experience, and some Thoughts

    NASA Astrophysics Data System (ADS)

    Baumann, P.

    2016-12-01

    Scientists have to get out of the ivory tower and tell society, which ultimately finances them, about their work, their results, and implications, be they good or bad. This is commonly accepted ethics. But how would you "tell society" at large what you are doing? Scientific work typically is difficult to convey to lay people, and finding suitable simplifications and paraphrasings requires considerable effort. Estimating societal implications is as dangerous as swimming with sharks, some of which are your own colleagues. The media are not always interested - unless results are particularly spectacular, well, in a press sense. Again, sharks are lurking. All this makes informing the public a tedious, time-consuming task which tends to receive little appreciation in tenure negotiations, where indexed publications are the first and foremost measure. As part of the EU-funded EarthServer initiative we tried it. Having promised a "video about the project", we found it boring to do another 10-minute repetition of the grant contract and started aiming at a full TV documentary explaining "Big Earth Data" to interested citizens. It took more than one year to convince a TV production company and TV stations that this is not another feature about the beauty of nature or catastrophes, but about human insight gained from computer-supported sifting through all those observations and simulations available. After they got the gist they were fully on board and supported the production financially with a substantial amount. The final 53-minute "Big Earth Data" film was broadcast in February 2015 in German and French (English version available from ). Several smaller spin-off features originated around it, such as an uptake of the theme (and material) in a popular German science TV series. Of course, this is but one contribution and cannot be made a continuous activity. In the talk we want to present and discuss the "making of" from a scientist's perspective, highlighting the ups and downs in the

  5. Big Earth Data: the Film, the Experience, and some Thoughts

    NASA Astrophysics Data System (ADS)

    Baumann, P.; Hoenig, H.

    2015-12-01

    Scientists have to get out of the ivory tower and tell society, which ultimately finances them, about their work, their results, and implications, be they good or bad. This is commonly accepted ethics. But how would you "tell society" at large what you are doing? Scientific work typically is difficult to convey to lay people, and finding suitable simplifications and paraphrasings requires considerable effort. Estimating societal implications is as dangerous as swimming with sharks, some of which are your own colleagues. The media are not always interested - unless results are particularly spectacular, well, in a press sense. Again, sharks are lurking. All this makes informing the public a tedious, time-consuming task which tends to receive little appreciation in tenure negotiations, where indexed publications are the first and foremost measure. As part of the EU-funded EarthServer initiative we tried it. Having promised a "video about the project", we found it boring to do another 10-minute repetition of the grant contract and started aiming at a full TV documentary explaining "Big Earth Data" to interested citizens. It took more than one year to convince a TV production company and TV stations that this is not another feature about the beauty of nature or catastrophes, but about human insight gained from computer-supported sifting through all those observations and simulations available. After they got the gist they were fully on board and supported the production financially with a substantial amount. The final 53-minute "Big Earth Data" film was broadcast in February 2015 in German and French (English version available from ). Several smaller spin-off features originated around it, such as an uptake of the theme (and material) in a popular German science TV series. Of course, this is but one contribution and cannot be made a continuous activity. In the talk we want to present and discuss the "making of" from a scientist's perspective, highlighting the ups and downs in the

  6. Permeability evolution associated to creep and episodic slow slip of a fault affecting clay formations: Results from the FS fault activation experiment in Mt Terri (Switzerland).

    NASA Astrophysics Data System (ADS)

    Guglielmi, Y.; Nussbaum, C.; Birkholzer, J. T.; De Barros, L.; Cappa, F.

    2017-12-01

    There is a large spectrum of slow fault rupture processes, such as stable creep and slow slip, that radiate little or no seismic energy, and whose relationships to ordinary earthquakes and to fault permeability variations remain enigmatic. Here we present measurements of slow fault rupture, permeability variation and seismicity induced by fluid injection in a fault affecting the Opalinus Clay (Mont Terri URL, Switzerland) at a depth of 300 m. We observe multiple dilatant slow slip events (0.1 to 30 microm/s) associated with a factor-of-1000 increase in permeability, terminated by a magnitude -2.5 main seismic event accompanied by a swarm of very small-magnitude events. Using fully coupled numerical modeling, we calculate that the short-term velocity-strengthening behavior observed experimentally at laboratory scale is overcome by longer-term slip weakening that may be favored by slip-induced dilation. Two monitoring points set across the fault allow us to estimate that, at the onset of the seismicity, the radius of the fault patch invaded by pressurized fluid is 9 to 11 m, in good accordance with fault instability being triggered once the critical slip patch dimensions are exceeded. We then observe that the long-term slip weakening is associated with an exponential permeability increase caused by a cumulative effective normal stress drop of about 3.4 MPa, which controls the successive slip activation of multiple fracture planes and induces a 0.1 MPa shear stress drop in the fault zone. Therefore, our data suggest that the induced earthquake that terminated the rupture sequence may have represented enough dynamic stress release to arrest the fault permeability increase, highlighting the high sensitivity of slow rupture processes to the structural heterogeneity of the fault zone's hydromechanical properties.

  7. Extrapolating subsurface geometry by surface expressions in transpressional strike slip fault, deduced from analogue experiments with settings of rheology and convergence angle

    NASA Astrophysics Data System (ADS)

    Hsieh, Shang Yu; Neubauer, Franz

    2015-04-01

    The internal structure of major strike-slip faults is still poorly understood, particularly how to extrapolate subsurface structures from surface expressions. A series of brittle analogue experiments by Leever et al. (2011) showed that the convergence angle is the most influential factor for surface structures. Further analogue models with different ductile settings allow a better understanding of how to extrapolate surface structures to the subsurface geometry of strike-slip faults. Fifteen analogue experiments were constructed to represent strike-slip faults in nature in different geological settings. The key parameters investigated in this study include: (a) the angle of convergence, (b) the thickness of the brittle layer, (c) the influence of a rheologically weak layer within the crust, and (d) the influence of a thick and rheologically weak layer at the base of the crust. The experiments aim to explain first-order structures along major transcurrent strike-slip faults such as the Altyn, Kunlun, San Andreas and Greendale (Darfield earthquake 2010) faults. The preliminary results show that convergence angle significantly influences the overall geometry of the transpressional system, with greater convergence angles resulting in wider fault zones and higher elevation. Different positions, densities and viscosities of weak rheological layers produce not only different surface expressions but also different fault geometries in the subsurface. For instance, rheologically weak material in the bottom layer results in stretching once the experiment reaches a certain displacement and a buildup of a less segmented, wide positive flower structure. At the surface, a wide fault valley in the middle of the fault zone is the reflection of stretching along the velocity discontinuity at depth. In models with a thin and rheologically weaker layer in the middle of the brittle layer, deformation is distributed over more faults and the geometry of the fault zone below and above the weak zone shows significant

  8. Manifestations of the rotation and gravity of the Earth in high-energy physics experiments

    NASA Astrophysics Data System (ADS)

    Obukhov, Yuri N.; Silenko, Alexander J.; Teryaev, Oleg V.

    2016-08-01

    The inertial (due to rotation) and gravitational fields of the Earth affect the motion of an elementary particle and its spin dynamics. This influence is not negligible and should be taken into account in high-energy physics experiments. Earth's influence is manifest in perturbations in the particle motion, in an additional precession of the spin, and in a change of the constitutive tensor of the Maxwell electrodynamics. Bigger corrections are oscillatory, and their contributions average to zero. Other corrections due to the inhomogeneity of the inertial field are not oscillatory but they are very small and may be important only for the storage ring electric dipole moment experiments. Earth's gravity causes the Newton-like force, the reaction force provided by a focusing system, and additional torques acting on the spin. However, there are no observable indications of the electromagnetic effects due to Earth's gravity.

  9. Study of the effect of cloud inhomogeneity on the earth radiation budget experiment

    NASA Technical Reports Server (NTRS)

    Smith, Phillip J.

    1988-01-01

    The Earth Radiation Budget Experiment (ERBE) is the most recent and probably the most intensive mission designed to gather precise measurements of the Earth's radiation components. The data obtained from ERBE are of great importance for future climatological studies. A statistical study reveals that the ERBE scanner data are highly correlated and that instantaneous measurements corresponding to neighboring pixels contain almost the same information. Analyzing only a fraction of the data set through subsampling is therefore suggested, and applications of this strategy are given for the calculation of the albedo of the Earth and of the cloud forcing over ocean.

  10. Review of Low Earth Orbital (LEO) flight experiments

    NASA Technical Reports Server (NTRS)

    Leger, L.; Santosmason, B.; Visentine, J.; Kuminecz, J.

    1987-01-01

    The atomic oxygen flux exposure experiments flown on Space Shuttle flights STS-5 and STS-8 are described along with the results of measurements made on hardware returned from the Solar Maximum repair mission (Space Shuttle flight 41-C). In general, these experiments have essentially provided for passive exposure of samples to oxygen fluences of approximately 1 to 3.5 x 10^20 atoms/sq cm. Atmospheric density is used to derive fluence and is dependent on solar activity, which has been on the declining side of the 11-year cycle. Thus, relatively low flight altitudes of less than 300 km were used to acquire these exposures. After exposure, the samples were analyzed using various methods ranging from mass loss to extensive scanning electron microscopy and surface analysis techniques. Results are summarized and implications for the space station are discussed.

  11. Earth Radiation Budget Experiment (ERBE) Data Sets for Global Environment and Climate Change Studies

    NASA Technical Reports Server (NTRS)

    Bess, T. Dale; Carlson, Ann B.; Denn, Fredrick M.

    1997-01-01

    For a number of years there has been considerable interest in the earth's radiation budget (ERB) or energy balance, which entails making the best measurements possible of absorbed solar radiation, reflected shortwave radiation (RSW), thermal outgoing longwave radiation (OLR), and net radiation. ERB data are fundamental to the development of realistic climate models and to studying natural and anthropogenic perturbations of the climate. Much of the interest and investigations in the earth's energy balance predated the age of earth-orbiting satellites (Hunt et al., 1986). Beginning in the mid-1960s, earth-orbiting satellites began to play an important role in making measurements of the earth's radiation flux, although much effort had gone into measuring ERB parameters prior to 1960 (House et al., 1986). Beginning in 1974 and extending until the present time, three different satellite experiments (not all operating at the same time) have been making radiation budget measurements almost continually. Two of the experiments were totally dedicated to making radiation budget measurements of the earth, and the other experiment, flown on NOAA sun-synchronous AVHRR weather satellites, produced radiation budget parameters as a by-product. The AVHRR heat budget instruments began collecting data in June 1974 and have operated almost continuously for 23 years, producing valuable data for long-term climate monitoring.

  12. Abnormal fault-recovery characteristics of the fault-tolerant multiprocessor uncovered using a new fault-injection methodology

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1991-01-01

    An investigation was made in AIRLAB of the fault handling performance of the Fault Tolerant MultiProcessor (FTMP). Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once in every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine or lying faults. Byzantine faults behave such that the faulted unit points to a working unit as the source of errors. The design's problems involve: (1) the design and interface between the simplex error detection hardware and the error processing software, (2) the functional capabilities of the FTMP system bus, and (3) the communication requirements of a multiprocessor architecture. These weak areas in the FTMP's design increase the probability that, for any hardware fault, a good line replacement unit (LRU) is mistakenly disabled by the fault management software.

  13. Influence of Fault Surface Heterogeneity on Apparent Frictional Strength, Slip Mode and Rupture Mode: Insights from Meter-Scale Rock Friction Experiments

    NASA Astrophysics Data System (ADS)

    Xu, S.; Fukuyama, E.; Yamashita, F.; Mizoguchi, K.; Takizawa, S.; Kawakata, H.

    2016-12-01

    Influence of fault zone heterogeneity on the behavior of fault motion has been studied in many aspects, such as strain partitioning, heat generation, slip mode, rupture mode, and effective friction law. However, a multi-scale investigation of fault behavior due to heterogeneity was difficult in nature, because of the limited access to natural fault zones at the seismogenic depth and the lack of in situ high-resolution observations. To overcome these difficulties, we study the behavior of a meter-scale synthetic fault made of Indian metagabbro during laboratory direct shear experiments, utilizing high-density arrays of strain gauges mounted close to the fault. We focus on two target experiments that are loaded under the same normal stress of 6.7 MPa and loading rate of 0.01 mm/s, but with different initial surface conditions. To change the surface condition, we applied a fast loading experiment under a rate of 1 mm/s between the two target experiments. It turned out the fast loading activated many foreshocks before the mainshock and caused a roaming of the mainshock nucleation site. These features were closely related to the re-distribution of the real contact area and surface wear, which together reflected a more heterogeneous state of the surface condition. During the first target experiment before the fast loading, the synthetic fault moved in a classic stick-slip fashion and the typical rupture mode was subshear within the range of the fault length. However, during the second target experiment, the synthetic fault inherited the heterogeneous features generated from the previous fast loading, showing a macroscopic creep-like behavior that actually consisted of many small stick-slip events. The apparent frictional strength increased while the recurrence interval and the stress drop decreased, compared to the levels seen in the first target experiment. The rupture mode became more complicated; supershear phases sometimes emerged but may only exist transiently

  14. Machine Learning of Fault Friction

    NASA Astrophysics Data System (ADS)

    Johnson, P. A.; Rouet-Leduc, B.; Hulbert, C.; Marone, C.; Guyer, R. A.

    2017-12-01

    We are applying machine learning (ML) techniques to continuous acoustic emission (AE) data from laboratory earthquake experiments. Our goal is to apply explicit ML methods to the AE data in order to infer frictional properties of a laboratory fault. The experiment is a double direct shear apparatus consisting of fault blocks surrounding fault gouge composed of glass beads or quartz powder. Fault characteristics are recorded, including shear stress, applied load (bulk friction = shear stress/normal load) and shear velocity. The raw acoustic signal is continuously recorded. We rely on explicit decision tree approaches (Random Forest and Gradient Boosted Trees) that allow us to identify important features linked to the fault friction. A training procedure that employs both the AE and the recorded shear stress from the experiment is first conducted. Then, testing takes place on data the algorithm has never seen before, using only the continuous AE signal. We find that these methods provide rich information regarding frictional processes during slip (Rouet-Leduc et al., 2017a; Hulbert et al., 2017). In addition, similar machine learning approaches predict failure times, as well as slip magnitudes in some cases. We find that these methods work for both stick-slip and slow-slip experiments, for periodic slip and for aperiodic slip. We also derive a fundamental relationship between the AE and the friction describing the frictional behavior of any earthquake slip cycle in a given experiment (Rouet-Leduc et al., 2017b). Our goal is to ultimately scale these approaches to Earth geophysical data to probe fault friction. References: Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros, C. Humphreys and P. A. Johnson, Machine learning predicts laboratory earthquakes, in review (2017), https://arxiv.org/abs/1702.05774; Rouet-LeDuc, B. et al., Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning (2017), AGU Fall Meeting Session S025
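
    A minimal version of the workflow described here, computing summary statistics of the continuous signal in moving windows and regressing them against the measured shear stress with a tree ensemble, might look like the sketch below. The synthetic "acoustic" signal, the feature set and the train/test split are placeholders standing in for the laboratory data; they are not the actual features or data of the cited studies.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        # Windowed statistical features of a continuous signal, regressed against the
        # concurrent shear stress.  Synthetic stand-in data, for illustration only.
        rng = np.random.default_rng(1)
        n, win = 200_000, 1000
        stress = np.sin(np.linspace(0, 40 * np.pi, n)) + 1.5          # fake slip cycles
        acoustic = rng.normal(scale=0.1 + 0.2 * stress, size=n)       # variance tracks stress

        windows = acoustic[: n - n % win].reshape(-1, win)
        features = np.column_stack([windows.var(axis=1),
                                    np.abs(windows).mean(axis=1),
                                    np.percentile(windows, 95, axis=1)])
        target = stress[: n - n % win].reshape(-1, win).mean(axis=1)

        split = int(0.7 * len(target))                                # train on early windows,
        model = RandomForestRegressor(n_estimators=200, random_state=0)  # test on unseen ones
        model.fit(features[:split], target[:split])
        print(f"R^2 on held-out windows: {model.score(features[split:], target[split:]):.2f}")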

  15. Revealing the Earth's mantle from the tallest mountains using the Jinping Neutrino Experiment.

    PubMed

    Šrámek, Ondřej; Roskovec, Bedřich; Wipperfurth, Scott A; Xi, Yufei; McDonough, William F

    2016-09-09

    The Earth's engine is driven by unknown proportions of primordial energy and heat produced in radioactive decay. Unfortunately, competing models of Earth's composition reveal an order of magnitude uncertainty in the amount of radiogenic power driving mantle dynamics. Recent measurements of the Earth's flux of geoneutrinos, electron antineutrinos from terrestrial natural radioactivity, reveal the amount of uranium and thorium in the Earth and set limits on the residual proportion of primordial energy. Comparison of the flux measured at large underground neutrino experiments with geologically informed predictions of geoneutrino emission from the crust provides the critical test needed to define the mantle's radiogenic power. Measurement at an oceanic location, distant from nuclear reactors and continental crust, would best reveal the mantle flux; however, no such experiment is anticipated. We predict the geoneutrino flux at the site of the Jinping Neutrino Experiment (Sichuan, China). Within 8 years, the combination of existing data and measurements from soon-to-come experiments, including Jinping, will exclude end-member models at the 1σ level, define the mantle's radiogenic contribution to the surface heat loss, set limits on the composition of the silicate Earth, and provide significant parameter bounds for models defining the mode of mantle convection.

  16. Modeling and characterization of the Earth Radiation Budget Experiment (ERBE) nonscanner and scanner sensors

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Pandey, Dhirendra K.; Taylor, Deborah B.

    1989-01-01

    The Earth Radiation Budget Experiment (ERBE) is making high-absolute-accuracy measurements of the reflected solar and Earth-emitted radiation as well as the incoming solar radiation from three satellites: ERBS, NOAA-9, and NOAA-10. Each satellite has four Earth-looking nonscanning radiometers and three scanning radiometers. A fifth nonscanner, the solar monitor, measures the incoming solar radiation. The development of the ERBE sensor characterization procedures is described using the calibration data for each of the Earth-looking nonscanners and scanners. Sensor models for the ERBE radiometers are developed, including the radiative exchange, conductive heat flow, and electronics processing for transient and steady state conditions. The steady state models are used to interpret the sensor outputs, resulting in the data reduction algorithms for the ERBE instruments. Both ground calibration and flight calibration procedures are treated and analyzed. The ground and flight calibration coefficients for the data reduction algorithms are presented.

  17. Design of experiment for earth rotation and baseline parameter determination from very long baseline interferometry

    NASA Technical Reports Server (NTRS)

    Dermanis, A.

    1977-01-01

    The possibility of recovering earth rotation and network geometry (baseline) parameters is emphasized. The numerical simulation experiments performed are set up in an environment where station coordinates vary with respect to inertial space according to a simulated earth rotation model similar to the actual but unknown rotation of the earth. The basic technique of VLBI and its mathematical model are presented. The parametrization of earth rotation chosen is described and the resulting model is linearized. A simple analysis of the geometry of the observations leads to some useful hints on achieving maximum sensitivity of the observations with respect to the parameters considered. The basic philosophy for the simulation of data and their analysis through standard least squares adjustment techniques is presented. A number of characteristic network designs based on present and candidate station locations are chosen. The results of the simulations for each design are presented together with a summary of the conclusions.
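
    The core VLBI observable behind such simulations is the geometric delay, τ = −(B·ŝ)/c, for baseline vector B and unit source direction ŝ. A minimal sketch follows; the baseline and source direction are arbitrary example values, and the report's full estimation model additionally carries earth-rotation, clock and other nuisance parameters.

        import numpy as np

        C = 299792458.0   # speed of light, m/s

        def geometric_delay(baseline_m, ra_rad, dec_rad):
            """First-order VLBI geometric delay tau = -(B . s_hat)/c, with the baseline
            expressed in the same frame as the source unit vector (snapshot sketch)."""
            s_hat = np.array([np.cos(dec_rad) * np.cos(ra_rad),
                              np.cos(dec_rad) * np.sin(ra_rad),
                              np.sin(dec_rad)])
            return -np.dot(baseline_m, s_hat) / C

        # Example: a ~4000 km baseline observing a source at 40 deg declination
        tau = geometric_delay(np.array([4.0e6, 0.0, 0.0]),
                              ra_rad=0.3, dec_rad=np.radians(40.0))
        print(f"delay = {tau * 1e3:.3f} ms")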

  18. Passive exposure of Earth radiation budget experiment components. LDEF experiment AO-147: Post-flight examinations and tests

    NASA Technical Reports Server (NTRS)

    Hickey, John R.

    1992-01-01

    The flight spare sensors of the Earth Radiation Budget (ERB) experiment of the Nimbus 6 and 7 missions were flown aboard the LDEF. The preliminary post retrieval examination and test results are presented here for the sensor windows and filters, the thermopile sensors and a cavity radiometer.

  19. Real World Experience With Ion Implant Fault Detection at Freescale Semiconductor

    NASA Astrophysics Data System (ADS)

    Sing, David C.; Breeden, Terry; Fakhreddine, Hassan; Gladwin, Steven; Locke, Jason; McHugh, Jim; Rendon, Michael

    2006-11-01

    The Freescale automatic fault detection and classification (FDC) system has logged data from over 3.5 million implants in the past two years. The Freescale FDC system is a low cost system which collects summary implant statistics at the conclusion of each implant run. The data is collected by either downloading implant data log files from the implant tool workstation, or by exporting summary implant statistics through the tool's automation interface. Compared to the traditional FDC systems which gather trace data from sensors on the tool as the implant proceeds, the Freescale FDC system cannot prevent scrap when a fault initially occurs, since the data is collected after the implant concludes. However, the system can prevent catastrophic scrap events due to faults which are not detected for days or weeks, leading to the loss of hundreds or thousands of wafers. At the Freescale ATMC facility, the practical applications of the FD system fall into two categories: PM trigger rules which monitor tool signals such as ion gauges and charge control signals, and scrap prevention rules which are designed to detect specific failure modes that have been correlated to yield loss and scrap. PM trigger rules are designed to detect shifts in tool signals which indicate normal aging of tool systems. For example, charging parameters gradually shift as flood gun assemblies age, and when charge control rules start to fail a flood gun PM is performed. Scrap prevention rules are deployed to detect events such as particle bursts and excessive beam noise, events which have been correlated to yield loss. The FDC system does have tool log-down capability, and scrap prevention rules often use this capability to automatically log the tool into a maintenance state while simultaneously paging the sustaining technician for data review and disposition of the affected product.
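
    The two rule categories described above can be pictured as simple threshold checks applied to each run's summary statistics after the implant completes. The toy sketch below illustrates only that idea; the signal names, thresholds and actions are invented for illustration and are not Freescale's actual rules, limits or automation interface.

        # Toy post-implant rule checks on summary statistics, in the spirit of the
        # PM-trigger and scrap-prevention rules described above.  All names and
        # thresholds are invented for illustration.
        PM_TRIGGER_RULES = {
            "ion_gauge_pressure_torr": lambda v: v > 5e-6,      # assumed drift limit
            "flood_gun_emission_ma":   lambda v: v < 0.5,       # assumed aging limit
        }
        SCRAP_PREVENTION_RULES = {
            "particle_count":          lambda v: v > 50,        # assumed burst limit
            "beam_noise_pct":          lambda v: v > 2.0,       # assumed noise limit
        }

        def evaluate_run(summary: dict) -> dict:
            """Classify a completed implant run's summary statistics."""
            pm_due = [k for k, rule in PM_TRIGGER_RULES.items()
                      if k in summary and rule(summary[k])]
            hold = [k for k, rule in SCRAP_PREVENTION_RULES.items()
                    if k in summary and rule(summary[k])]
            return {"schedule_pm": pm_due, "log_tool_down": hold}

        print(evaluate_run({"ion_gauge_pressure_torr": 7e-6,
                            "flood_gun_emission_ma": 1.2,
                            "particle_count": 12,
                            "beam_noise_pct": 3.5}))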

  20. Slip on Ridge Transform Faults: Insights From Earthquakes and Laboratory Experiments

    DTIC Science & Technology

    2005-06-01

    [Abstract text garbled in the source record. Recoverable fragments refer to the seismic moment release reported by the CMT catalog for each RTF (ridge transform fault); cite Turcotte (1986), Aviles et al. (1987), King et al. (1988), Hirata et al., King, Stein, and Rundle (1988), and doi:10.1029/2002GL016454; and note that mylonites collected from the Shaka fracture zone on the Southwest Indian Ridge, with temperatures of T < 600°C, provide additional evidence.]

  1. Thermal and orbital analysis of Earth monitoring Sun-synchronous space experiments

    NASA Technical Reports Server (NTRS)

    Killough, Brian D.

    1990-01-01

    The fundamentals of an Earth monitoring Sun-synchronous orbit are presented. A Sun-synchronous Orbit Analysis Program (SOAP) was developed to calculate orbital parameters for an entire year. The output from this program provides the required input data for the TRASYS thermal radiation computer code, which in turn computes the infrared, solar and Earth albedo heat fluxes incident on a space experiment. Direct incident heat fluxes can be used as input to a generalized thermal analyzer program to size radiators and predict instrument operating temperatures. The SOAP computer code and its application to the thermal analysis methodology presented should prove useful to the thermal engineer during the design phases of Earth monitoring Sun-synchronous space experiments.
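
    The basic orbital relation that such a program must satisfy is the sun-synchronous condition: the J2-driven nodal regression must match the Sun's mean apparent motion of about 0.9856 degrees per day. The sketch below solves that condition for inclination at a given altitude; the constants are standard Earth values, and the 700 km altitude is an arbitrary example rather than a value from the report.

        import math

        MU = 3.986004418e14       # Earth's gravitational parameter, m^3/s^2
        RE = 6378.137e3           # Earth equatorial radius, m
        J2 = 1.08263e-3           # Earth oblateness coefficient
        OMEGA_SUN = 2.0 * math.pi / (365.2422 * 86400.0)   # ~0.9856 deg/day, rad/s

        def sun_synchronous_inclination(altitude_m):
            """Inclination whose J2 nodal regression matches the Sun's apparent motion
            (circular orbit assumed)."""
            a = RE + altitude_m
            n = math.sqrt(MU / a ** 3)                      # mean motion, rad/s
            cos_i = -OMEGA_SUN / (1.5 * n * J2 * (RE / a) ** 2)
            return math.degrees(math.acos(cos_i))

        print(f"inclination for a 700 km orbit: {sun_synchronous_inclination(700e3):.2f} deg")
        # about 98.2 deg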

  2. Skylab S191 visible-infrared spectrometer. [in Earth Resources Experiment Package

    NASA Technical Reports Server (NTRS)

    Barnett, T. L.; Juday, R. D.

    1977-01-01

    The paper describes the S191 visible-infrared spectrometer of the Skylab Earth Resources Experiment Package - a manually pointed two-channel instrument operating in the reflective (0.4-2.5 micron) and thermal emissive (6-15 micron) regions. A sensor description is provided and attention is given to data quality in the short wavelength and thermal infrared regions.

  3. Data analysis and software support for the Earth radiation budget experiment

    NASA Technical Reports Server (NTRS)

    Edmonds, W.; Natarajan, S.

    1987-01-01

    Computer programming and data analysis efforts were performed in support of the Earth Radiation Budget Experiment (ERBE) at NASA/Langley. A brief description of the ERBE is presented, followed by sections describing software development and data analysis for both prelaunch and postlaunch instrument data.

  4. An Experience of Science Theatre to Introduce Earth Interior and Natural Hazards to Children

    ERIC Educational Resources Information Center

    Musacchio, Gemma; Lanza, Tiziana; D'Addezio, Giuliana

    2015-01-01

    The present paper describes an experience of science theatre addressed to children of primary and secondary school, with the main purpose of making them acquainted with a topic, the interior of the Earth, largely underestimated in compulsory school curricula worldwide. A not less important task was to encourage a positive attitude towards natural…

  5. Through-the-earth communication: Experiment results from Billie Mine and Mississippi Chemical Mine

    NASA Astrophysics Data System (ADS)

    Buettner, H. M.; Didwall, E. M.; Bukofzer, D. C.

    1988-06-01

    As part of the Lawrence Livermore National Laboratory (LLNL) effort to evaluate Through-the-Earth Communication (TEC) as an option for military communication systems, experiments were conducted involving transmission, reception, and performance monitoring of digital electromagnetic communication signals propagating through the earth. The two experiments reported on here not only demonstrated that TEC is useful for transmissions at digital rates above a few bits per second, but also provided data on performance parameters with which to evaluate TEC in various military applications. The most important aspect of these experiments is that the bit error rate (BER) is measured rather than just estimated from purely analytic developments. By measuring this important parameter, not only has more credibility been lent to the proof of concept goals of the experiment, but also a means for judging the effects of assumptions in BER theoretical models has been provided.

  6. Inflated concepts for the earth science geostationary platform and an associated flight experiment

    NASA Technical Reports Server (NTRS)

    Friese, G.

    1992-01-01

    Large parabolic reflectors and solar concentrators are of great interest for microwave transmission, solar powered rockets, and Earth observations. Collector subsystems have been under slow development for a decade. Inflated paraboloids have a great weight and package volume advantage over mechanically erected systems and, therefore, have been receiving greater attention recently. The objective of this program was to produce a 'conceptual definition of an experiment to assess in-space structural damping characteristics and effects of the space meteoroid environment upon structural integrity and service life of large inflatable structures.' The flight experiment was to have been based upon an inflated solar concentrator, but much of that work was being done on other programs. To avoid redundancy, the Earth Science Geostationary Platform (ESGP) was selected as a focus mission for the experiment. Three major areas were studied: the ESGP reflector configuration; the flight experiment; and meteoroids.

  7. Urban fifth graders' connections-making between formal earth science content and their lived experiences

    NASA Astrophysics Data System (ADS)

    Brkich, Katie Lynn

    2014-03-01

    Earth science education, as it is traditionally taught, involves presenting concepts such as weathering, erosion, and deposition using relatively well-known examples—the Grand Canyon, beach erosion, and others. However, these examples—which resonate well with middle- and upper-class students—ill-serve students of poverty attending urban schools who may have never traveled farther from home than the corner store. In this paper, I explore the use of a place-based educational framework in teaching earth science concepts to urban fifth graders and explore the connections they make between formal earth science content and their lived experiences using participant-driven photo elicitation techniques. I argue that students are able to gain a sounder understanding of earth science concepts when they are able to make direct observations between the content and their lived experiences and that when such direct observations are impossible they make analogies of appearance, structure, and response to make sense of the content. I discuss additionally the importance of expanding earth science instruction to include man-made materials, as these materials are excluded traditionally from the curriculum yet are most immediately available to urban students for examination.

  8. On-orbit characterizations of Earth Radiation Budget Experiment broadband shortwave active cavity radiometer sensor responses

    NASA Astrophysics Data System (ADS)

    Lee, Robert B., III; Wilson, Robert S.; Smith, G. Louis; Bush, Kathryn A.; Thomas, Susan; Pandey, Dhirendra K.; Paden, Jack

    2004-12-01

    The NASA Earth Radiation Budget Experiment (ERBE) missions were designed to monitor long-term changes in the earth radiation budget components which may cause climate changes. During the October 1984 through September 2004 period, the NASA Earth Radiation Budget Satellite (ERBS)/ERBE nonscanning active cavity radiometers (ACR) were used to monitor long-term changes in the earth radiation budget components of the incoming total solar irradiance (TSI), earth-reflected TSI, and earth-emitted outgoing longwave radiation (OLR). The earth-reflected total solar irradiances were measured using broadband shortwave fused, waterless quartz (Suprasil) filters and ACRs that were covered with a black paint absorbing surface. Using on-board calibration systems from 1984 through 1999, long-term ERBS/ERBE ACR sensor response changes were determined from direct observations of the incoming TSI in the 0.2-5 micrometer shortwave broadband spectral region. During the October 1984 through September 1999 period, the ERBS shortwave sensor responses were found to decrease as much as 8.8% when the quartz filter transmittances decreased due to direct exposure to TSI. On October 6, 1999, the on-board ERBS calibration systems failed. To estimate the 1999-2004 ERBS sensor response changes, the 1984-1997 NOAA-9 and 1986-1995 NOAA-10 spacecraft ERBE ACR responses were used to characterize response changes as a function of exposure time. The NOAA-9 and NOAA-10 ACR responses decreased as much as 10% due to higher integrated TSI exposure times. In this paper, for each of the ERBS, NOAA-9, and NOAA-10 spacecraft platforms, the solar calibrations of the ERBE sensor responses are described as well as the derived ERBE sensor response changes as a function of TSI exposure time. For the 1984-2003 ERBS data sets, it is estimated that the calibrated ERBE earth-reflected TSI measurements have precisions approaching 0.2 watts per square meter at satellite altitudes.

  9. Interacting faults

    NASA Astrophysics Data System (ADS)

    Peacock, D. C. P.; Nixon, C. W.; Rotevatn, A.; Sanderson, D. J.; Zuluaga, L. F.

    2017-04-01

    The way that faults interact with each other controls fault geometries, displacements and strains. Faults rarely occur individually but as sets or networks, with the arrangement of these faults producing a variety of different fault interactions. Fault interactions are characterised in terms of the following: 1) Geometry - the spatial arrangement of the faults. Interacting faults may or may not be geometrically linked (i.e. physically connected), when fault planes share an intersection line. 2) Kinematics - the displacement distributions of the interacting faults and whether the displacement directions are parallel, perpendicular or oblique to the intersection line. Interacting faults may or may not be kinematically linked, where the displacements, stresses and strains of one fault influences those of the other. 3) Displacement and strain in the interaction zone - whether the faults have the same or opposite displacement directions, and if extension or contraction dominates in the acute bisector between the faults. 4) Chronology - the relative ages of the faults. This characterisation scheme is used to suggest a classification for interacting faults. Different types of interaction are illustrated using metre-scale faults from the Mesozoic rocks of Somerset and examples from the literature.

  10. Radiation Information for Designing and Interpreting Biological Experiments Onboard Missions Beyond Low Earth Orbit

    NASA Technical Reports Server (NTRS)

    Straume, T.; Slaba, T.; Bhattacharya, S.; Braby, L. A.

    2017-01-01

    There is growing interest in flying biological experiments beyond low-Earth orbit (LEO) to measure biological responses potentially relevant to those expected during a human mission to Mars. Such experiments could be payloads onboard precursor missions, including unmanned private-public partnerships, as well as small low-cost spacecraft (satellites) designed specifically for biosentinel type missions. Designing such experiments requires knowledge of the radiation environment and its interactions with both the spacecraft and the experimental payload. Information is provided here that is useful for designing such experiments.

  11. Magnetic shielding in a low temperature torsion pendulum experiment. [superconducting cylinders for attenuation earth field

    NASA Technical Reports Server (NTRS)

    Phillips, P. R.

    1979-01-01

    A new type of ether drift experiment searches for anomalous torques on a permanent magnet. A torsion pendulum is used at liquid helium temperature, so that superconducting cylinders can be used to shield magnetic fields. Lead shields attenuate the earth's field, while Nb-Sn shields fastened to the pendulum contain the fields of the magnet. The paper describes the technique by which the earth's field can be reduced below 0.0001 G while simultaneously the moment of the magnet can be reduced by a factor of 7 x 10^4.

  12. Radiative Energy Budget Studies Using Observations from the Earth Radiation Budget Experiment (ERBE)

    NASA Technical Reports Server (NTRS)

    Ackerman, Steven A.; Frey, R.; Shie, M.; Olson, R.; Collimore, C.; Friedman, M.

    1997-01-01

    Our research activities under this NASA grant have focused on two broad topics associated with the Earth Radiation Budget Experiment (ERBE): (1) the role of clouds and the surface in modifying the radiative balance; and (2) the spatial and temporal variability of the earth's radiation budget. Each of these broad topics is discussed separately in the text that follows. The major points of the thesis are summarized in section 3 of this report. The other dissertation focuses on deriving the radiation budget over the TOGA COARE region.

  13. Ste. Genevieve Fault Zone, Missouri and Illinois. Final report

    SciT

    Nelson, W.J.; Lumm, D.K.

    1985-07-01

    The Ste. Genevieve Fault Zone is a major structural feature which strikes NW-SE for about 190 km on the NE flank of the Ozark Dome. There is up to 900 m of vertical displacement on high angle normal and reverse faults in the fault zone. At both ends the Ste. Genevieve Fault Zone dies out into a monocline. Two periods of faulting occurred. The first was in late Middle Devonian time and the second from latest Mississippian through early Pennsylvanian time, with possible minor post-Pennsylvanian movement. No evidence was found to support the hypothesis that the Ste. Genevieve Fault Zone is part of a northwestward extension of the late Precambrian-early Cambrian Reelfoot Rift. The magnetic and gravity anomalies cited in support of the ''St. Louis arm'' of the Reelfoot Rift possibly reflect deep crustal features underlying, and older than, the volcanic terrain of the St. Francois Mountains (1.2 to 1.5 billion years old). In regard to neotectonics, no displacements of Quaternary sediments have been detected, but small earthquakes occur from time to time along the Ste. Genevieve Fault Zone. Many faults in the zone appear capable of slipping under the current stress regime of east-northeast to west-southwest horizontal compression. We conclude that the zone may continue to experience small earth movements, but catastrophic quakes similar to those at New Madrid in 1811-12 are unlikely. 32 figs., 1 tab.

  14. Earth orbital experiment program and requirements study, volume 1, sections 1 - 6

    NASA Technical Reports Server (NTRS)

    1971-01-01

    A reference manual for planners of manned earth-orbital research activity is presented. The manual serves as a systems approach to experiment and mission planning based on an integrated consideration of candidate research programs and the appropriate vehicle, mission, and technology development requirements. Long range goals and objectives for NASA activities during the 1970 to 1980 time period are analyzed. The useful and proper roles of manned and automated spacecraft for implementing NASA experiments are described. An integrated consideration of NASA long range goals and objectives, the system and mission requirements, and the alternative implementation plans are developed. Specific areas of investigation are: (1) manned space flight requirements, (2) space biology, (3) spaceborne astronomy, (4) space communications and navigation, (5) earth observation, (6) supporting technology development requirements, (7) data management system matrices, (8) instrumentation matrices, and (9) biotechnology laboratory experiments.

  15. Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning

    NASA Astrophysics Data System (ADS)

    Rouet-Leduc, B.; Hulbert, C.; Ren, C. X.; Bolton, D. C.; Marone, C.; Johnson, P. A.

    2017-12-01

    Fault friction controls nearly all aspects of fault rupture, yet it can only be measured in the laboratory. Here we describe laboratory experiments where acoustic emissions are recorded from the fault. We find that by applying a machine learning approach known as "extreme gradient boosting trees" to the continuous acoustical signal, the fault friction can be directly inferred, showing that instantaneous characteristics of the acoustic signal are a fingerprint of the frictional state. This machine learning-based inference leads to a simple law that links the acoustic signal to the friction state, and holds for every stress cycle the laboratory fault goes through. The approach uses no measured parameter other than instantaneous statistics of the acoustic signal. This finding may have importance for inferring frictional characteristics from seismic waves in the Earth, where fault friction cannot be measured.
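
    A minimal sketch of the kind of pipeline the abstract describes: gradient-boosted trees regressing fault friction on instantaneous statistics of the continuous acoustic signal. The window length, feature set, synthetic arrays, and the use of scikit-learn's GradientBoostingRegressor (standing in for the "extreme gradient boosting trees" implementation the authors name) are all illustrative assumptions, not the authors' code.

    ```python
    # Illustrative sketch only: regress friction on rolling statistics of a
    # continuous acoustic signal with gradient-boosted trees.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    def window_stats(signal, window):
        """Instantaneous statistics of the signal over non-overlapping windows."""
        n = len(signal) // window
        chunks = signal[:n * window].reshape(n, window)
        return np.column_stack([
            chunks.mean(axis=1),
            chunks.std(axis=1),
            np.percentile(chunks, 90, axis=1),
            ((chunks[:, 1:] * chunks[:, :-1]) < 0).mean(axis=1),  # zero-crossing rate
        ])

    # Synthetic placeholders for the acoustic record and co-recorded friction.
    rng = np.random.default_rng(0)
    acoustic = rng.normal(size=200_000)                      # continuous signal
    friction = 0.6 + 0.05 * np.sin(np.linspace(0, 20, 200))  # one value per window

    X = window_stats(acoustic, window=1000)                  # 200 windows x 4 features
    model = GradientBoostingRegressor(n_estimators=300, max_depth=3)
    model.fit(X[:150], friction[:150])        # train on early stress cycles
    predicted = model.predict(X[150:])        # infer friction later in the run
    ```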

  16. An Invitation to Kitchen Earth Sciences, an Example of MISO Soup Convection Experiment in Classroom

    NASA Astrophysics Data System (ADS)

    Kurita, K.; Kumagai, I.; Davaille, A.

    2008-12-01

    In recent frontiers of earth sciences, such as computer simulations and large-scale observations and experiments, the researchers involved are usually remote from their targets and find it difficult to get a sense of touching the phenomena with their own hands. This results in a loss of sympathy for natural phenomena, particularly among young researchers, which we consider a serious problem. We believe that analog experiments, such as the subjects of the "kitchen earth sciences" proposed here, can be a remedy for this. Analog experiments have been used as an important tool in various research fields of earth science, particularly in developing new ideas. The experiment by H. Ramberg using silicone putty is famous for guiding the concept of mantle dynamics. The term "analog" means something not directly related to the target of the research, but for which a parallel comparison is possible in an analogical sense. The advantages of analog experiments, however, seem to have been overwhelmed by the rapid progress of computer simulations. Although we still believe in this present-day meaning, we have recently come to recognize another aspect of their significance. The essence of "kitchen earth science" as an analog experiment is to provide experimental setups and materials easily from the kitchen, so that everyone can start experiments and participate in the discussion without special preparation, because the materials are a matter of daily experience. Here we show one such example, which can be used as a heuristic subject in classrooms at the introductory level of earth science as well as in the lunch-time breaks of advanced researchers. In heated miso soup the fluid motion can be easily traced by the motion of miso "particles". At high heating the immiscible part of the miso convects with the aqueous fluid. At intermediate heating the miso part precipitates to form a sediment layer at the bottom. This layered structure is destroyed regularly, as a bursting, by an instability caused by heat accumulated in the miso layer. By showing

  17. Earth at Rest. Aesthetic Experience and Students' Grounding in Science Education

    NASA Astrophysics Data System (ADS)

    Østergaard, Edvin

    2017-07-01

    The focus of this article is the current situation characterized by students' de-rootedness and possible measures to improve the situation within the frame of education for sustainable development. My main line of argument is that science teachers can practice teaching in such a way that students are brought into deeper contact with the environment. I discuss efforts to promote aesthetic experience in science class and in science teacher education. Within a wide range of definitions, my main understanding of aesthetic experience is that of pre-conceptual experience, relational to the environment and incorporated in students' embodied knowledge. I ground the idea of Earth at rest in Husserl's phenomenological philosophy and Heidegger's notion of science's deprivation of the world. A critique of the ontological reversal leads to an ontological re-reversal that implies giving lifeworld experience back its value and rooting scientific concepts in students' everyday lives. Six aspects of facilitating grounding in sustainability-oriented science teaching and teacher education are highlighted and discussed: students' everyday knowledge and experience, aesthetic experience and grounding, fostering aesthetic sensibility, cross-curricular integration with art, ontological and epistemological aspects, and belongingness and (re-)connection to Earth. I conclude that both science students and student-teachers need to practice their sense of caring and belonging, as well as refining their sensibility towards the world. With an intention of educating for sustainable development, there is an urgent need for a critical discussion in science education when it comes to engaging learners for a sustainable future.

  18. Passive exposure of Earth radiation budget experiment components LDEF experiment AO-147: Post-flight examinations and tests

    NASA Technical Reports Server (NTRS)

    Hickey, John R.

    1991-01-01

    The Passive Exposure of Earth Radiation Budget Experiment Components (PEERBEC) experiment of the Long Duration Exposure Facility (LDEF) mission was composed of sensors and components associated with the measurement of the earth radiation budget (ERB) from satellites. These components included the flight spare sensors from the ERB experiment which operated on Nimbus 6 and 7 satellites. The experiment components and materials as well as the pertinent background and ancillary information necessary for the understanding of the intended mission and the results are described. The extent and timing of the LDEF mission brought the exposure from solar minimum between cycles 21 and 22 through the solar maximum of cycle 22. The orbital decay, coupled with the events of solar maximum, caused the LDEF to be exposed to a broader range of space environmental effects than were anticipated. The mission spanned almost six years concurrent with the 12 year (to date) Nimbus 7 operations. Preliminary information is presented on the following: (1) the changes in transmittance experienced by the interference filters; (2) the results of retesting of the thermopile sensors, which appear to be relatively unaffected by the exposure; and (3) the results of the recalibration of the APEX cavity radiometer. The degradation and recovery of the filters of the Nimbus 7 ERB are also discussed relative to the apparent atomic oxygen cleaning which also applies to the LDEF.

  19. The Mini-Earth facility and present status of habitation experiment program.

    PubMed

    Nitta, Keiji

    2005-01-01

    The history of construction of the CEEF (the Mini-Earth), the configuration and scale of the CEEF are initially described. The effective usable areas in plant cultivation and animal holding and habitation modules and the accommodation equipment installed in each module are also explained. Mechanisms of the material circulation systems belonging to each module and subsystems in each material circulation system are introduced. Finally the results of pre-habitation experiments conducted until the year 2004 for clarifying the requirements in order to promote final closed habitation experiments are shown. c2005 Published by Elsevier Ltd on behalf of COSPAR.

  20. Earth's core-mantle boundary - Results of experiments at high pressures and temperatures

    NASA Technical Reports Server (NTRS)

    Knittle, Elise; Jeanloz, Raymond

    1991-01-01

    Laboratory experiments document that liquid iron reacts chemically with silicates at high pressures (above 2.4 x 10^10 Pa) and temperatures. In particular, (Mg,Fe)SiO3 perovskite, the most abundant mineral of earth's lower mantle, is expected to react with liquid iron to produce metallic alloys (FeO and FeSi) and nonmetallic silicates (SiO2 stishovite and MgSiO3 perovskite) at the pressures of the core-mantle boundary, 14 x 10^10 Pa. The experimental observations, in conjunction with seismological data, suggest that the lowermost 200 to 300 km of earth's mantle, the D-double-prime layer, may be an extremely heterogeneous region as a result of chemical reactions between the silicate mantle and the liquid iron alloy of earth's core. The combined thermal-chemical-electrical boundary layer resulting from such reactions offers a plausible explanation for the complex behavior of seismic waves near the core-mantle boundary and could influence earth's magnetic field observed at the surface.

  1. Impact into the earth's ocean floor - Preliminary experiments, a planetary model, and possibilities for detection

    NASA Technical Reports Server (NTRS)

    Mckinnon, W. B.

    1982-01-01

    Impact processes and plate tectonics are invoked in an experimental study of craters larger than 100 km in diameter on the ocean floor. Although the results obtained from 22-caliber (383 m/sec) ammunition experiments using dense, saturated sand as a target medium cannot be directly scaled to large events, the phenomenology exhibited is that expected of actual craters on the ocean floor: steep, mixed ejecta plume, gravitational adjustment of the crater to form a shallow basin, and extensive reworking of the ejecta, rim, and floor materials by violent collapse of the transient water cavity. Excavation into the mantle is predicted, although asthenospheric influence on outer ring formation is not. The clearest geophysical signature of such a crater is not topography; detection should instead be based on gravity and geoid anomalies due to uplift of the Moho, magnetic anomalies, and seismic resolution of the Moho uplift and crater formation fault planes.

  2. An investigation of ESSA 7 radiation data for use in long-term earth energy experiments, phases 1 and 2

    NASA Technical Reports Server (NTRS)

    House, F. B.

    1974-01-01

    The results are presented of an investigation of ESSA 7 satellite radiation data for use in long-term earth energy experiments. Satellite systems for performing long-term earth radiation balance measurements over geographical areas, hemispheres, and the entire earth for periods of 10 to 30 years are examined. The ESSA 7 satellite employed plate and cone radiometers to measure earth albedo and emitted radiation. Each instrument had a black and white radiometer which discriminated the components of albedo and emitted radiation. Earth measurements were made continuously from ESSA 7 for ten months. The ESSA 7 raw data are processed to a point where they can be further analyzed for: (1) the development of long-term earth energy experiments; and (2) the documentation of climate trends.

  3. Why the 2002 Denali fault rupture propagated onto the Totschunda fault: implications for fault branching and seismic hazards

    Schwartz, David P.; Haeussler, Peter J.; Seitz, Gordon G.; Dawson, Timothy E.

    2012-01-01

    The propagation of the rupture of the Mw7.9 Denali fault earthquake from the central Denali fault onto the Totschunda fault has provided a basis for dynamic models of fault branching in which the angle of the regional or local prestress relative to the orientation of the main fault and branch plays a principal role in determining which fault branch is taken. GeoEarthScope LiDAR and paleoseismic data allow us to map the structure of the Denali-Totschunda fault intersection and evaluate controls of fault branching from a geological perspective. LiDAR data reveal the Denali-Totschunda fault intersection is structurally simple with the two faults directly connected. At the branch point, 227.2 km east of the 2002 epicenter, the 2002 rupture diverges southeast to become the Totschunda fault. We use paleoseismic data to propose that differences in the accumulated strain on each fault segment, which express differences in the elapsed time since the most recent event, was one important control of the branching direction. We suggest that data on event history, slip rate, paleo offsets, fault geometry and structure, and connectivity, especially on high slip rate-short recurrence interval faults, can be used to assess the likelihood of branching and its direction. Analysis of the Denali-Totschunda fault intersection has implications for evaluating the potential for a rupture to propagate across other types of fault intersections and for characterizing sources of future large earthquakes.

  4. The stress shadow effect: a mechanical analysis of the evenly-spaced parallel strike-slip faults in the San Andreas fault system

    NASA Astrophysics Data System (ADS)

    Zuza, A. V.; Yin, A.; Lin, J. C.

    2015-12-01

    Parallel evenly-spaced strike-slip faults are prominent in the southern San Andreas fault system, as well as other settings along plate boundaries (e.g., the Alpine fault) and within continental interiors (e.g., the North Anatolian, central Asian, and northern Tibetan faults). In southern California, the parallel San Jacinto, Elsinore, Rose Canyon, and San Clemente faults to the west of the San Andreas are regularly spaced at ~40 km. In the Eastern California Shear Zone, east of the San Andreas, faults are spaced at ~15 km. These characteristic spacings provide unique mechanical constraints on how the faults interact. Despite the common occurrence of parallel strike-slip faults, the fundamental questions of how and why these fault systems form remain unanswered. We address this issue by using the stress shadow concept of Lachenbruch (1961)—developed to explain extensional joints by using the stress-free condition on the crack surface—to present a mechanical analysis of the formation of parallel strike-slip faults that relates fault spacing and brittle-crust thickness to fault strength, crustal strength, and the crustal stress state. We discuss three independent models: (1) a fracture mechanics model, (2) an empirical stress-rise function model embedded in a plastic medium, and (3) an elastic-plate model. The assumptions and predictions of these models are quantitatively tested using scaled analogue sandbox experiments that show that strike-slip fault spacing is linearly related to the brittle-crust thickness. We derive constraints on the mechanical properties of the southern San Andreas strike-slip faults and fault-bounded crust (e.g., local fault strength and crustal/regional stress) given the observed fault spacing and brittle-crust thickness, which is obtained by defining the base of the seismogenic zone with high-resolution earthquake data. Our models allow direct comparison of the parallel faults in the southern San Andreas system with other similar strike
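
    A toy illustration, with made-up numbers, of the linear spacing-thickness relation the sandbox experiments are reported to show; the fitted slope k is the kind of quantity the three mechanical models relate to fault strength, crustal strength, and the crustal stress state.

    ```python
    # Illustrative only: fit spacing ≈ k * H, with hypothetical data.
    import numpy as np

    H = np.array([10.0, 12.0, 15.0, 18.0, 20.0])        # brittle-crust thickness, km
    spacing = np.array([22.0, 26.0, 31.0, 38.0, 41.0])  # fault spacing, km

    # Least-squares slope through the origin: k = sum(H * s) / sum(H^2)
    k = np.dot(H, spacing) / np.dot(H, H)
    print(f"spacing / thickness ratio k = {k:.2f}")

    # It is this scaling that lets measured spacing and seismogenic thickness be
    # inverted for fault and crustal strength in the models described above.
    ```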

  5. The ISIS Project: Real Experience with a Fault Tolerant Programming System

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth; Cooper, Robert

    1990-01-01

    The ISIS project has developed a distributed programming toolkit and a collection of higher level applications based on these tools. ISIS is now in use at more than 300 locations worldwide. The lessons (and surprises) gained from this experience with the real world are discussed.

  6. Challenges and Opportunities for Developing Capacity in Earth Observations for Agricultural Monitoring: The GEOGLAM Experience

    NASA Astrophysics Data System (ADS)

    Whitcraft, A. K.; Di Bella, C. M.; Becker Reshef, I.; Deshayes, M.; Justice, C. O.

    2015-12-01

    Since 2011, the Group on Earth Observations Global Agricultural Monitoring (GEOGLAM) Initiative has been working to strengthen the international community's capacity to use Earth observation (EO) data to derive timely, accurate, and transparent information on agriculture, with the goals of reducing market volatility and promoting food security. GEOGLAM aims to develop capacity for EO-based agricultural monitoring at multiple scales, from national to regional to global. This is accomplished through training workshops, developing and transferring of best-practices, establishing networks of broad and sustainable institutional support, and designing or adapting tools and methodologies to fit localized contexts. Over the past four years, capacity development activities in the context of GEOGLAM have spanned all agriculture-containing continents, with much more work to be done, particularly in the domains of promoting access to large, computationally-costly datasets. This talk will detail GEOGLAM's experiences, challenges, and opportunities surrounding building international collaboration, ensuring institutional buy-in, and developing sustainable programs.

  7. Implications for Core Formation of the Earth from High Pressure-Temperature Au Partitioning Experiments

    NASA Technical Reports Server (NTRS)

    Danielson, L. R.; Sharp, T. G.; Hervig, R. L.

    2005-01-01

    Siderophile elements in the Earth's mantle are depleted relative to chondrites. This is most pronounced for the highly siderophile elements (HSEs), which are approximately 400x lower than chondrites. Also remarkable are the relatively chondritic abundances of the HSEs. This signature has been interpreted as representing their sequestration into an iron-rich core during the separation of metal from silicate liquids early in the Earth's history, followed by a late addition of chondritic material. Alternative efforts to explain this trace element signature have centered on element partitioning experiments at varying pressures, temperatures, and compositions (P-T-X). However, first results from experiments conducted at 1 bar did not match the observed mantle abundances, which motivated the model described above, a "late veneer" of chondritic material deposited on the earth and mixed into the upper mantle. Alternatively, the mantle trace element signature could be the result of equilibrium partitioning between metal and silicate in the deep mantle, under P-T-X conditions which are not yet completely identified. An earlier model determined that equilibrium between metal and silicate liquids could occur at a depth of approximately 700 km, 27 (plus or minus 6) GPa and approximately 2000 (plus or minus 200) C, based on an extrapolation of partitioning data for a variety of moderately siderophile elements obtained at lower pressures and temperatures. Based on Ni-Co partitioning, the magma ocean may have been as deep as 1450 km. At present, only a small range of possible P-T-X trace element partitioning conditions has been explored, necessitating large extrapolations from experimental to mantle conditions for tests of equilibrium models. Our primary objective was to reduce or remove the additional uncertainty introduced by extrapolation by testing the equilibrium core formation hypothesis at P-T-X conditions appropriate to the mantle.

  8. Determining Appropriate Coupling between User Experiences and Earth Science Data Services

    NASA Astrophysics Data System (ADS)

    Moghaddam-Taaheri, E.; Pilone, D.; Newman, D. J.; Mitchell, A. E.; Goff, T. D.; Baynes, K.

    2012-12-01

    NASA's Earth Observing System ClearingHOuse (ECHO) is a format agnostic metadata repository supporting over 3000 collections and 100M granules. ECHO exposes FTP and RESTful Data Ingest APIs in addition to both SOAP and RESTful search and order capabilities. Built on top of ECHO is a human facing search and order web application named Reverb. Reverb exposes ECHO's capabilities through an interactive, Web 2.0 application designed around searching for Earth Science data and downloading or ordering data of interest. ECHO and Reverb have supported the concept of Earth Science data services for several years but only for discovery. Invocation of these services was not a primary capability of the user experience. As more and more Earth Science data moves online and away from the concept of data ordering, progress has been made in making on demand services available for directly accessed data. These concepts have existed through access mechanisms such as OPeNDAP but are proliferating to accommodate a wider variety of services and service providers. Recently, the EOSDIS Service Interface (ESI) was defined and integrated into the ECS system. The ESI allows data providers to expose a wide variety of service capabilities including reprojection, reformatting, spatial and band subsetting, and resampling. ECHO and Reverb were tasked with making these services available to end-users in a meaningful and usable way that integrated into its existing search and ordering workflow. This presentation discusses the challenges associated with exposing disparate service capabilities while presenting a meaningful and cohesive user experience. Specifically, we'll discuss: - Benefits and challenges of tightly coupling the user interface with underlying services - Approaches to generic service descriptions - Approaches to dynamic user interfaces that better describe service capabilities while minimizing application coupling - Challenges associated with traditional WSDL / UDDI style service

  9. Earthdata 3.0: A Unified Experience and Platform for Earth Science Discovery

    NASA Astrophysics Data System (ADS)

    Plofchan, P.; McLaughlin, B. D.

    2015-12-01

    NASA's EOSDIS (Earth Observing System Data and Information System) has a multitude of websites and applications focused on serving the Earth Science community's extensive data needs. With no central user interface, theme, or mechanism for accessing that data, interrelated systems are confusing and potentially disruptive in users' searches for EOSDIS data holdings. To bring consistency across these systems, an effort was undertaken to develop Earthdata 3.0: a complete information architecture overhaul of the Earthdata website, a significant update to the Earthdata user experience and user interface, and an increased focus on searching across EOSDIS data holdings, including those housed and made available through DAAC websites. As part of this effort, and in a desire to unify the user experience across related websites, the Earthdata User Interface (EUI) was developed. The EUI is a collection of responsive design components and layouts geared toward creating websites and applications within the Earthdata ecosystem. Each component and layout has been designed specifically for Earth science-related projects, which eliminates some of the complexities of building a website or application from the ground up. Its adoption will ensure both consistent markup and a unified look and feel for end users, thereby increasing usability and accessibility. Additionally, through the use of a Google Search Appliance, custom Clojure code, and in cooperation with DAACs, Earthdata 3.0 presents a variety of search results upon a user's keyword(s) entry. These results are not just textual links, but also direct links to downloadable datasets, visualizations of datasets and collections of data, and related articles and videos for further research. The end result of the development of the EUI and the enhanced multi-response type search is a consistent and usable platform for Earth scientists and users to navigate and locate data to further their research.

  10. Fault finder

    DOEpatents

    Bunch, Richard H.

    1986-01-01

    A fault finder for locating faults along a high voltage electrical transmission line. Real time monitoring of background noise and improved filtering of input signals is used to identify the occurrence of a fault. A fault is detected at both a master and remote unit spaced along the line. A master clock synchronizes operation of a similar clock at the remote unit. Both units include modulator and demodulator circuits for transmission of clock signals and data. All data is received at the master unit for processing to determine an accurate fault distance calculation.
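
    The patent abstract does not state the distance formula, but synchronized master and remote units permit the standard double-ended traveling-wave calculation sketched below; the wave speed and example numbers are assumptions for illustration, not the patented algorithm.

    ```python
    # Illustrative double-ended fault location from synchronized arrival times.

    def fault_distance_from_master(line_length_km: float,
                                   t_master_s: float,
                                   t_remote_s: float,
                                   wave_speed_km_s: float = 2.9e5) -> float:
        """Distance from the master unit to the fault.

        A fault at distance d launches a surge reaching the master at d / v and
        the remote at (L - d) / v, so d = (L + v * (t_master - t_remote)) / 2.
        """
        return 0.5 * (line_length_km +
                      wave_speed_km_s * (t_master_s - t_remote_s))

    # Example: 120 km line, surge detected 50 microseconds earlier at the master.
    print(fault_distance_from_master(120.0, 0.0, 50e-6))  # about 52.75 km
    ```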

  11. Open-Loop HIRF Experiments Performed on a Fault Tolerant Flight Control Computer

    NASA Technical Reports Server (NTRS)

    Koppen, Daniel M.

    1997-01-01

    During the third quarter of 1996, the Closed-Loop Systems Laboratory was established at the NASA Langley Research Center (LaRC) to study the effects of High Intensity Radiated Fields on complex avionic systems and control system components. This new facility provided a link and expanded upon the existing capabilities of the High Intensity Radiated Fields Laboratory at LaRC that were constructed and certified during 1995-96. The scope of the Closed-Loop Systems Laboratory is to place highly integrated avionics instrumentation into a high intensity radiated field environment, interface the avionics to a real-time flight simulation that incorporates aircraft dynamics, engines, sensors, actuators and atmospheric turbulence, and collect, analyze, and model aircraft performance. This paper describes the layout and functionality of the Closed-Loop Systems Laboratory, and the open-loop calibration experiments that led up to the commencement of closed-loop real-time flight experiments.

  12. Comparison of the measured and predicted response of the Earth Radiation Budget Experiment active cavity radiometer during solar observations

    NASA Technical Reports Server (NTRS)

    Mahan, J. R.; Tira, N. E.; Lee, Robert B., III; Keynton, R. J.

    1989-01-01

    The Earth Radiation Budget Experiment consists of an array of radiometric instruments placed in earth orbit by the National Aeronautics and Space Administration to monitor the longwave and visible components of the earth's radiation budget. Presented is a dynamic electrothermal model of the active cavity radiometer used to measure the earth's total radiative exitance. Radiative exchange is modeled using the Monte Carlo method and transient conduction is treated using the finite element method. Also included is the feedback circuit which controls electrical substitution heating of the cavity. The model is shown to accurately predict the dynamic response of the instrument during solar calibration.
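
    For orientation, the sketch below shows only the electrical-substitution feedback principle on a lumped, first-order stand-in for the cavity; the model described above instead couples Monte Carlo radiative exchange, finite-element conduction, and the actual feedback circuit, so every parameter here is an illustrative assumption.

    ```python
    # Illustrative electrical-substitution loop: feedback holds the cavity at a
    # set temperature; the drop in heater power when radiation enters measures
    # the absorbed radiative power.
    import numpy as np

    C, G = 2.0, 0.05        # heat capacity (J/K) and link conductance (W/K), assumed
    T_set, dt = 10.0, 0.01  # set-point rise above heat sink (K), time step (s)
    Kp, Ki = 5.0, 2.0       # PI feedback gains, assumed

    T, integ = 0.0, 0.0
    heater_log = []
    for step in range(60_000):
        t = step * dt
        P_rad = 0.02 if 200.0 < t < 400.0 else 0.0     # shutter open: 20 mW absorbed
        err = T_set - T
        integ += err * dt
        P_heater = max(0.0, Kp * err + Ki * integ)     # electrical substitution power
        T += dt * (P_heater + P_rad - G * T) / C       # lumped cavity energy balance
        heater_log.append(P_heater)

    # Radiative power recovered from the heater-power drop while viewing:
    print(np.mean(heater_log[10_000:15_000]) - np.mean(heater_log[25_000:35_000]))  # ~0.02 W
    ```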

  13. A purely local experiment - Poynting and the mean density of the Earth

    NASA Astrophysics Data System (ADS)

    Falconer, Isobel

    1999-06-01

    These days John Henry Poynting is best known for his association with the Poynting vector, which describes the flow of energy in an electromagnetic field, and little is known of his life or work. Yet in the 1890s he caught the popular imagination as `the man who weighed the Earth'. His experiment, using a novel method with a common balance, was part of a heroic tradition, gained him Cambridge University's Adams Prize, and set new standards of precision. Yet in performing this experiment, Poynting seemed to step outside the traditions of late nineteenth century Cambridge University where he was educated, and it is probably significant that he was a Unitarian throughout his life. This paper examines Poynting's experiment and his interpretation of it in the context of the rest of his life and work.

  14. Large earthquakes and creeping faults

    Harris, Ruth A.

    2017-01-01

    Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce similar fault-surface rupture areas and similar peak ground shaking as their locked fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

  15. Mobile Bay, Alabama area seen in Skylab 4 Earth Resources Experiment Package

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A near vertical view of the Mobile Bay, Alabama area is seen in this Skylab 4 Earth Resources Experiment Package S190-B (five-inch earth terrain camera) photograph taken from the Skylab space station in earth orbit. North of Mobile the Tombigbee and Alabama Rivers join to form the Mobile River. Detailed configuration of the individual stream channels and boundaries can be defined as the Mobile River flows into Mobile Bay and into the Gulf of Mexico. The Mobile River Valley with its numerous stream channels is a distinct light shade in contrast to the dark green shade of the adjacent areas. The red coloration of Mobile Bay reflects the sediment load carried into the bay by the rivers. The westerly movement of the shore currents at the mouth of Mobile Bay is shown by the contrasting light blue of the sediment-laden current and the blue of the Gulf. Agricultural areas east and west of Mobile Bay are characterized by a rectangular pattern in green to white shades. Color variations may reflect

  16. An experiment to study energetic particle fluxes in and beyond the earth's outer magnetosphere

    NASA Technical Reports Server (NTRS)

    Anderson, K. A.; Lin, R. P.; Paoli, R. J.; Parks, G. K.; Lin, C. S.; Reme, H.; Bosqued, J. M.; Martel, F.; Cotin, F.; Cros, A.

    1978-01-01

    This experiment is designed to take advantage of the ISEE Mother/Daughter dual spacecraft system to study energetic particle phenomena in the earth's outer magnetosphere and beyond. Large geometric factor fixed voltage electrostatic analyzers and passively cooled semiconductor detector telescopes provide high time resolution coverage of the energy range from 1.5 to 300 keV for both ions and electrons. Essentially identical instrumentation is placed on the two spacecraft to separate temporal from spatial effects in the observed particle phenomena.

  17. Does fault strengthening in laboratory rock friction experiments really depend primarily upon time and not slip?

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Pathikrit; Rubin, Allan M.; Beeler, Nicholas M.

    2017-08-01

    The popular constitutive formulations of rate-and-state friction offer two end-member views on whether friction evolves only with slip (Slip law) or with time even without slip (Aging law). While rate stepping experiments show support for the Slip law, laboratory-observed frictional behavior at near-zero slip rates has traditionally been inferred as supporting Aging law style time-dependent healing, in particular, from the slide-hold-slide experiments of Beeler et al. (1994). Using a combination of new analytical results and explicit numerical (Bayesian) inversion, we show instead that the slide-hold-slide data of Beeler et al. (1994) favor slip-dependent state evolution during holds. We show that, while the stiffness-independent rate of growth of peak stress (following reslides) with hold duration is a property shared by both the Aging and (under a more restricted set of parameter combinations) Slip laws, the observed stiffness dependence of the rate of stress relaxation during long holds is incompatible with the Aging law with constant rate-state parameters. The Slip law consistently fits the evolution of the stress minima at the end of the holds well, whether fitting jointly with peak stresses or otherwise. But neither the Aging nor Slip laws fit all the data well when a - b is constrained to values derived from prior velocity steps. We also attempted to fit the evolution of stress peaks and minima with the Kato-Tullis hybrid law and the shear stress-dependent Nagata law, both of which, even with the freedom of an extra parameter, generally reproduced the best Slip law fits to the data.
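
    For readers unfamiliar with the two end-member laws, the sketch below integrates the standard Aging and Slip state-evolution equations through a long hold at a small residual slip rate; the parameter values are illustrative and are not taken from Beeler et al. (1994).

    ```python
    # Illustrative comparison of state evolution during a hold. At strictly zero
    # slip rate the Aging law heals linearly with time while the Slip law state
    # is frozen; here a small residual slip rate is assumed, as in a real
    # slide-hold-slide test where the sample keeps creeping against the machine.
    import numpy as np

    D_c = 10e-6     # critical slip distance, m (assumed)
    V_hold = 1e-9   # residual slip rate during the hold, m/s (assumed)
    dt, t_end = 1.0, 3000.0
    t = np.arange(0.0, t_end, dt)

    def integrate(law, theta0=10.0):          # theta0: initial state, s (assumed)
        theta = np.empty_like(t)
        theta[0] = theta0
        for i in range(1, len(t)):
            x = V_hold * theta[i - 1] / D_c
            if law == "aging":
                dtheta = 1.0 - x              # d(theta)/dt = 1 - V*theta/Dc
            else:
                dtheta = -x * np.log(x)       # d(theta)/dt = -(V*theta/Dc) * ln(V*theta/Dc)
            theta[i] = theta[i - 1] + dt * dtheta
        return theta

    theta_aging, theta_slip = integrate("aging"), integrate("slip")
    # Aging-law state grows by orders of magnitude; Slip-law state barely changes.
    print(theta_aging[-1] / theta_aging[0], theta_slip[-1] / theta_slip[0])
    ```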

  18. Mobile Bay, Alabama area seen in Skylab 4 Earth Resources Experiment Package

    1974-02-01

    SL4-92-300 (February 1974) --- A near vertical view of the Mobile Bay, Alabama area is seen in this Skylab 4 Earth Resources Experiments Package S190-B (five-inch earth terrain camera) photograph taken from the Skylab space station in Earth orbit. North of Mobile the Tombigbee and Alabama Rivers join to form the Mobile River. Detailed configuration of the individual stream channels and boundaries can be defined as the Mobile River flows into Mobile Bay, and thence into the Gulf of Mexico. The Mobile River Valley with its numerous stream channels is a distinct light shade in contrast to the dark green shade of the adjacent areas. The red coloration of Mobile Bay reflects the sediment load carried into the Bay by the rivers. Variations in red color indicate sediment load and the current paths within Mobile Bay. The westerly movement of the alongshore currents at the mouth of Mobile Bay is shown by the contrasting light blue of the sediment-laden current and the predominantly blue water of the Gulf. Agricultural areas east and west of Mobile Bay are characterized by a rectangular pattern in green to white shades. Color variations may reflect the type and growth cycle of crops. Agricultural areas (light gray-greens) are also clearly visible in other parts of the photograph. Interstate 10 extends from near Pascagoula, Mississippi eastward through Mobile to the outskirts of Pensacola, Florida. Analysis of the EREP photographic data will be undertaken by the U.S. Corps of Engineers to determine bay dynamic processes. Federal agencies participating with NASA on the EREP project are the Departments of Agriculture, Commerce, Interior, the Environmental Protection Agency and the Corps of Engineers. All EREP photography is available to the public through the Department of Interior's Earth Resources Observations Systems Data Center, Sioux Falls, South Dakota 57198. Photo credit: NASA

  19. First decadal lunar results from the Moon and Earth Radiation Budget Experiment.

    PubMed

    Matthews, Grant

    2018-03-01

    A need to gain more confidence in computer model predictions of coming climate change has resulted in greater analysis of the quality of orbital Earth radiation budget (ERB) measurements being used today to constrain, validate, and hence improve such simulations. These studies conclude from time series analysis that for around a quarter of a century, no existing satellite ERB climate data record is of a sufficient standard to partition changes to the Earth from those of un-tracked and changing artificial instrumentation effects. This led to the creation of the Moon and Earth Radiation Budget Experiment (MERBE), which instead takes existing decades old climate data to a higher calibration standard using thousands of scans of Earth's Moon. The Terra and Aqua satellite ERB climate records have been completely regenerated using signal-processing improvements, combined with a substantial increase in precision from more comprehensive in-flight spectral characterization techniques. This study now builds on previous Optical Society of America work by describing new Moon measurements derived using accurate analytical mapping of telescope spatial response. That then allows a factor of three reduction in measurement noise along with an order of magnitude increase in the number of retrieved independent lunar results. Given decadal length device longevity and the use of solar and thermal lunar radiance models to normalize the improved ERB results to the International System of Units traceable radiance scale of the "MERBE Watt," the same established environmental time series analysis techniques are applied to MERBE data. They evaluate it to perhaps be of sufficient quality to immediately begin narrowing the largest of climate prediction uncertainties. It also shows that if such Terra/Aqua ERB devices can operate into the 2020s, it could become possible to halve these same uncertainties decades sooner than would be possible with existing or even planned new observing systems.

  20. Fault compaction and overpressured faults: results from a 3-D model of a ductile fault zone

    NASA Astrophysics Data System (ADS)

    Fitzenz, D. D.; Miller, S. A.

    2003-10-01

    investigated. Significant leakage perpendicular to the fault strike (in the case of a young fault), or cracks hydraulically linking the fault core to the damaged zone (for a mature fault) are probable mechanisms for keeping the faults strong and might play a significant role in modulating fault pore pressures. Therefore, fault-normal hydraulic properties of fault zones should be a future focus of field and numerical experiments.

  1. Haze production rates in super-Earth and mini-Neptune atmosphere experiments

    NASA Astrophysics Data System (ADS)

    Hörst, Sarah M.; He, Chao; Lewis, Nikole K.; Kempton, Eliza M.-R.; Marley, Mark S.; Morley, Caroline V.; Moses, Julianne I.; Valenti, Jeff A.; Vuitton, Véronique

    2018-04-01

    Numerous Solar System atmospheres possess photochemically generated hazes, including the characteristic organic hazes of Titan and Pluto. Haze particles substantially impact atmospheric temperature structures and may provide organic material to the surface of a world, potentially affecting its habitability. Observations of exoplanet atmospheres suggest the presence of aerosols, especially in cooler (<800 K), smaller (<0.3× Jupiter's mass) exoplanets. It remains unclear whether the aerosols muting the spectroscopic features of exoplanet atmospheres are condensate clouds or photochemical hazes1-3, which is difficult to predict from theory alone4. Here, we present laboratory haze simulation experiments that probe a broad range of atmospheric parameters relevant to super-Earth- and mini-Neptune-type planets5, the most frequently occurring type of planet in our galaxy6. It is expected that photochemical haze will play a much greater role in the atmospheres of planets with average temperatures below 1,000 K (ref. 7), especially those planets that may have enhanced atmospheric metallicity and/or enhanced C/O ratios, such as super-Earths and Neptune-mass planets8-12. We explored temperatures from 300 to 600 K and a range of atmospheric metallicities (100×, 1,000× and 10,000× solar). All simulated atmospheres produced particles, and the cooler (300 and 400 K) 1,000× solar metallicity (`H2O-dominated' and CH4-rich) experiments exhibited haze production rates higher than our standard Titan simulation ( 10 mg h-1 versus 7.4 mg h-1 for Titan13). However, the particle production rates varied greatly, with measured rates as low as 0.04 mg h-1 (for the case with 100× solar metallicity at 600 K). Here, we show that we should expect great diversity in haze production rates, as some—but not all—super-Earth and mini-Neptune atmospheres will possess photochemically generated haze.

  2. Haze production rates in super-Earth and mini-Neptune atmosphere experiments

    NASA Astrophysics Data System (ADS)

    Hörst, Sarah M.; He, Chao; Lewis, Nikole K.; Kempton, Eliza M.-R.; Marley, Mark S.; Morley, Caroline V.; Moses, Julianne I.; Valenti, Jeff A.; Vuitton, Véronique

    2018-03-01

    Numerous Solar System atmospheres possess photochemically generated hazes, including the characteristic organic hazes of Titan and Pluto. Haze particles substantially impact atmospheric temperature structures and may provide organic material to the surface of a world, potentially affecting its habitability. Observations of exoplanet atmospheres suggest the presence of aerosols, especially in cooler (<800 K), smaller (<0.3× Jupiter's mass) exoplanets. It remains unclear whether the aerosols muting the spectroscopic features of exoplanet atmospheres are condensate clouds or photochemical hazes1-3, which is difficult to predict from theory alone4. Here, we present laboratory haze simulation experiments that probe a broad range of atmospheric parameters relevant to super-Earth- and mini-Neptune-type planets5, the most frequently occurring type of planet in our galaxy6. It is expected that photochemical haze will play a much greater role in the atmospheres of planets with average temperatures below 1,000 K (ref. 7), especially those planets that may have enhanced atmospheric metallicity and/or enhanced C/O ratios, such as super-Earths and Neptune-mass planets8-12. We explored temperatures from 300 to 600 K and a range of atmospheric metallicities (100×, 1,000× and 10,000× solar). All simulated atmospheres produced particles, and the cooler (300 and 400 K) 1,000× solar metallicity (`H2O-dominated' and CH4-rich) experiments exhibited haze production rates higher than our standard Titan simulation ( 10 mg h-1 versus 7.4 mg h-1 for Titan13). However, the particle production rates varied greatly, with measured rates as low as 0.04 mg h-1 (for the case with 100× solar metallicity at 600 K). Here, we show that we should expect great diversity in haze production rates, as some—but not all—super-Earth and mini-Neptune atmospheres will possess photochemically generated haze.

  3. Comparative Analysis of Thaumatin Crystals Grown on Earth and in Microgravity. Experiment 23

    NASA Technical Reports Server (NTRS)

    Ng, Joseph D.; Lorber, Bernard; Giege, Richard; Koszelak, Stanley; Day, John; Greenwood, Aaron; McPherson, Alexander

    1998-01-01

    The protein thaumatin was studied as a model macromolecule for crystallization in microgravity environment experiments conducted on two U.S. Space Shuttle missions (second United States Microgravity Laboratory (USML-2) and Life and Microgravity Spacelab (LMS)). In this investigation we evaluated and compared the quality of space- and Earth-grown thaumatin crystals using x-ray diffraction analysis and characterized them according to crystal size, diffraction resolution limit, and mosaicity. Two different approaches for growing thaumatin crystals in the microgravity environment, dialysis and liquid-liquid diffusion, were employed as a joint experiment by our two investigative teams. Thaumatin crystals grown under a microgravity environment were generally larger in volume with fewer total crystals. They diffracted to significantly higher resolution and with improved diffraction properties as judged by relative Wilson plots. The mosaicity for space-grown crystals was significantly less than for those grown on Earth. Increasing concentrations of protein in the crystallization chambers under microgravity lead to larger crystals. The data presented here lend further support to the idea that protein crystals of improved quality can be obtained in a microgravity environment.

  4. Refurbishment of the cryogenic coolers for the Skylab earth resources experiment package

    NASA Technical Reports Server (NTRS)

    Smithson, J. C.; Luksa, N. C.

    1975-01-01

    Skylab Earth Resources Experiment Package (EREP) experiments, S191 and S192, required a cold temperature reference for operation of a spectrometer. This cold temperature reference was provided by a subminiature Stirling cycle cooler. However, the failure of the cooler to pass the qualification test made it necessary for additional cooler development, refurbishment, and qualification. A description of the failures and the cause of these failures for each of the coolers is presented. The solutions to the various failure modes are discussed along with problems which arose during the refurbishment program. The rationale and results of various tests are presented. The successful completion of the cryogenic cooler refurbishment program resulted in four of these coolers being flown on Skylab. The system operation during the flight is presented.

  5. Visual Earth observation performance in the space environment. Human performance measurement 4: Flight experiments

    NASA Technical Reports Server (NTRS)

    Huth, John F.; Whiteley, James D.; Hawker, John E.

    1993-01-01

    A wide variety of secondary payloads have flown on the Space Transportation System (STS) since its first flight in the 1980's. These experiments have typically addressed specific issues unique to the zero-gravity environment. Additionally, the experiments use the experience and skills of the mission and payload specialist crew members to facilitate data collection and ensure successful completion. This paper presents the results of the Terra Scout experiment, which flew aboard STS-44 in November 1991. This unique Earth Observation experiment specifically required a career imagery analyst to operate the Spaceborne Direct-View Optical System (SpaDVOS), a folded optical path telescope system designed to mount inside the shuttle on the overhead aft flight deck windows. Binoculars and a small telescope were used as backup optics. Using his imagery background, coupled with extensive target and equipment training, the payload specialist was tasked with documenting the following: (1) the utility of the equipment; (2) his ability to acquire and track ground targets; (3) the level of detail he could discern; (4) the atmospheric conditions; and (5) other in-situ elements which contributed to or detracted from his ability to analyze targets. Special emphasis was placed on the utility of a manned platform for research and development of future spaceborne sensors. The results and lessons learned from Terra Scout will be addressed including human performance and equipment design issues.

  6. Fault lubrication during earthquakes.

    PubMed

    Di Toro, G; Han, R; Hirose, T; De Paola, N; Nielsen, S; Mizoguchi, K; Ferri, F; Cocco, M; Shimamoto, T

    2011-03-24

    The determination of rock friction at seismic slip rates (about 1 m s(-1)) is of paramount importance in earthquake mechanics, as fault friction controls the stress drop, the mechanical work and the frictional heat generated during slip. Given the difficulty in determining friction by seismological methods, elucidating constraints are derived from experimental studies. Here we review a large set of published and unpublished experiments (∼300) performed in rotary shear apparatus at slip rates of 0.1-2.6 m s(-1). The experiments indicate a significant decrease in friction (of up to one order of magnitude), which we term fault lubrication, both for cohesive (silicate-built, quartz-built and carbonate-built) rocks and non-cohesive rocks (clay-rich, anhydrite, gypsum and dolomite gouges) typical of crustal seismogenic sources. The available mechanical work and the associated temperature rise in the slipping zone trigger a number of physicochemical processes (gelification, decarbonation and dehydration reactions, melting and so on) whose products are responsible for fault lubrication. The similarity between (1) experimental and natural fault products and (2) mechanical work measures resulting from these laboratory experiments and seismological estimates suggests that it is reasonable to extrapolate experimental data to conditions typical of earthquake nucleation depths (7-15 km). It seems that faults are lubricated during earthquakes, irrespective of the fault rock composition and of the specific weakening mechanism involved.

  7. Kinematics and dynamics of salt movement driven by sub-salt normal faulting and supra-salt sediment accumulation - combined analogue experiments and analytical calculations

    NASA Astrophysics Data System (ADS)

    Warsitzka, Michael; Kukowski, Nina; Kley, Jonas

    2017-04-01

    In extensional sedimentary basins, the movement of ductile salt is mainly controlled by the vertical displacement of the salt layer, differential loading due to syn-kinematic deposition, and tectonic shearing at the top and the base of the salt layer. During basement normal faulting, salt either tends to flow downward to the basin centre driven by its own weight or it is squeezed upward due to differential loading. In analogue experiments and analytical models, we address the interplay between normal faulting of the sub-salt basement, compaction and density inversion of the supra-salt cover and the kinematic response of the ductile salt layer. The analogue experiments consist of a ductile substratum (silicone putty) beneath a denser cover layer (sand mixture). Both layers are displaced by normal faults mimicked through a downward moving block within the rigid base of the experimental apparatus and the resulting flow patterns in the ductile layer are monitored and analysed. In the computational models, using an approximate analytical solution of the Navier-Stokes equations, the steady-state flow velocity in an idealized natural salt layer is calculated in order to evaluate how flow patterns observed in the analogue experiments can be translated to nature. The analytical calculations provide estimations of the prevailing direction and velocity of salt flow above a sub-salt normal fault. The results of both modelling approaches show that under most geological conditions salt moves downwards to the hanging wall side as long as vertical offset and compaction of the cover layer are small. As soon as an effective average density of the cover is exceeded, the direction of the flow velocity reverses and the viscous material is squeezed towards the elevated footwall side. The analytical models reveal that upward flow occurs even if the average density of the overburden does not exceed the density of salt. By testing various scenarios with different layer thicknesses
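
    The analytical solution itself is not given in the abstract; as a stand-in, the sketch below uses the standard lubrication (channel-flow) estimate of the depth-averaged flow speed in a thin viscous salt layer driven by a lateral pressure gradient from differential loading, with every parameter value assumed purely for illustration.

    ```python
    # Illustrative lubrication-theory estimate, not the authors' solution.

    def mean_salt_flow_speed(thickness_m: float,
                             viscosity_pa_s: float,
                             pressure_gradient_pa_m: float) -> float:
        """Poiseuille (channel-flow) mean velocity: u = h^2 * |dP/dx| / (12 * mu)."""
        return thickness_m ** 2 * abs(pressure_gradient_pa_m) / (12.0 * viscosity_pa_s)

    # Assumed numbers: 1 km thick salt, viscosity 1e18 Pa s, and a lateral load
    # contrast from 100 m of extra sediment (density contrast 200 kg/m^3) over 1 km.
    dP_dx = 200.0 * 9.81 * 100.0 / 1_000.0        # about 196 Pa/m
    u = mean_salt_flow_speed(1000.0, 1e18, dP_dx)
    print(f"{u:.2e} m/s = {u * 3.15e7 * 1000:.2f} mm/yr")
    ```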

  8. Characteristics of the Nordic Seas overflows in a set of Norwegian Earth System Model experiments

    NASA Astrophysics Data System (ADS)

    Guo, Chuncheng; Ilicak, Mehmet; Bentsen, Mats; Fer, Ilker

    2016-08-01

    Global ocean models with an isopycnic vertical coordinate are advantageous in representing overflows, as they do not suffer from topography-induced spurious numerical mixing commonly seen in geopotential coordinate models. In this paper, we present a quantitative diagnosis of the Nordic Seas overflows in four configurations of the Norwegian Earth System Model (NorESM) family that features an isopycnic ocean model. For intercomparison, two coupled ocean-sea ice and two fully coupled (atmosphere-land-ocean-sea ice) experiments are considered. Each pair consists of a (non-eddying) 1° and a (eddy-permitting) 1/4° horizontal resolution ocean model. In all experiments, overflow waters remain dense and descend to the deep basins, entraining ambient water en route. Results from the 1/4° pair show similar behavior in the overflows, whereas the 1° pair show distinct differences, including temperature/salinity properties, volume transport (Q), and large scale features such as the strength of the Atlantic Meridional Overturning Circulation (AMOC). The volume transport of the overflows and degree of entrainment are underestimated in the 1° experiments, whereas in the 1/4° experiments, there is a two-fold downstream increase in Q, which matches observations well. In contrast to the 1/4° experiments, the coarse 1° experiments do not capture the inclined isopycnals of the overflows or the western boundary current off the Flemish Cap. In all experiments, the pathway of the Iceland-Scotland Overflow Water is misrepresented: a major fraction of the overflow proceeds southward into the West European Basin, instead of turning westward into the Irminger Sea. This discrepancy is attributed to excessive production of Labrador Sea Water in the model. The mean state and variability of the Nordic Seas overflows have significant consequences on the response of the AMOC, hence their correct representations are of vital importance in global ocean and climate modelling.

  9. Benchmark Shock Tube Experiments for Radiative Heating Relevant to Earth Re-Entry

    NASA Technical Reports Server (NTRS)

    Brandis, A. M.; Cruden, B. A.

    2017-01-01

    Detailed spectrally and spatially resolved radiance has been measured in the Electric Arc Shock Tube (EAST) facility for conditions relevant to high speed entry into a variety of atmospheres, including Earth, Venus, Titan, Mars and the Outer Planets. The tests that measured radiation relevant for Earth re-entry are the focus of this work and are taken from campaigns 47, 50, 52 and 57. These tests covered conditions from 8 km/s to 15.5 km/s at initial pressures ranging from 0.05 Torr to 1 Torr, of which shots at 0.1 and 0.2 Torr are analyzed in this paper. These conditions cover a range of points of interest for potential flight missions, including return from Low Earth Orbit, the Moon and Mars. The large volume of testing available from EAST is useful for statistical analysis of radiation data, but is problematic for identifying representative experiments for performing detailed analysis. Therefore, the intent of this paper is to select a subset of benchmark test data that can be considered for further detailed study. These benchmark shots are intended to provide more accessible data sets for future code validation studies and facility-to-facility comparisons. The shots that have been selected as benchmark data are the ones in closest agreement to a line of best fit through all of the EAST results, whilst also showing the best experimental characteristics, such as test time and convergence to equilibrium. The EAST data are presented in different formats for analysis. These data include the spectral radiance at equilibrium, the spatial dependence of radiance over defined wavelength ranges and the mean non-equilibrium spectral radiance (so-called 'spectral non-equilibrium metric'). All the information needed to simulate each experimental trace, including free-stream conditions, shock time of arrival (i.e. x-t) relation, and the spectral and spatial resolution functions, is provided.

  10. MER surface fault protection system

    NASA Technical Reports Server (NTRS)

    Neilson, Tracy

    2005-01-01

    The Mars Exploration Rovers surface fault protection design was influenced by the fact that the solar-powered rovers must recharge their batteries during the day to survive the night. The rovers needed to autonomously maintain thermal stability and initiate safe and reliable communication with orbiting assets or directly with Earth, all while maintaining energy balance. This paper will describe the system fault protection design for the surface phase of the mission.

  11. The UAH Spinning Terrella Experiment: A Laboratory Analog for the Earth's Magnetosphere

    NASA Technical Reports Server (NTRS)

    Sheldon, R. B.; Gallagher, D. L.; Craven, P. D.; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    The UAH Spinning Terrella Experiment has been modified to include the effect of a second magnet. This is a simple laboratory demonstration of the well-known double-dipole approximation to the Earth's magnetosphere. In addition, the magnet has been biased to approximately -400 V, which generates a DC glow discharge and traps it in a ring current around the magnet. This ring current is easily imaged with a digital camera and illustrates several significant topological properties of a dipole field. In particular, when the two dipoles are aligned, and therefore repel, they emulate a northward IMF Bz magnetosphere. Such a geometry traps plasma in the high latitude cusps, as can be clearly seen in the movies. Likewise, when the two magnets are anti-aligned, they emulate a southward IMF Bz magnetosphere with direct feeding of plasma through the x-line. We present evidence for trapping and heating of the plasma, comparing the dipole-trapped ring current to the cusp-trapped population. We also present a peculiar asymmetric ring current produced by the plasma at low plasma densities. We discuss the similarities and dissimilarities of the laboratory analog to the collisionless Earth plasma, and implications for the interpretation of IMAGE data.

  12. Composition of the earth's atmosphere by shock-layer radiometry during the PAET entry probe experiment.

    NASA Technical Reports Server (NTRS)

    Whiting, E. E.; Arnold, J. O.; Page, W. A.; Reynolds, R. M.

    1973-01-01

    A determination of the composition of the earth's atmosphere obtained from onboard radiometer measurements of the spectra emitted from the bow shock layer of a high-speed entry probe is reported. The N2, O2, CO2, and noble gas concentrations in the earth's atmosphere were determined to good accuracy by this technique. The results demonstrate unequivocally the feasibility of determining the composition of an unknown planetary atmosphere by means of a multichannel radiometer viewing optical emission from the heated atmospheric gases in the region between the bow shock wave and the vehicle surface. The spectral locations in this experiment were preselected to enable the observation of CN violet, N2(+) first negative and atomic oxygen emission at 3870, 3910, and 7775 A, respectively. The atmospheric gases were heated and compressed by the shock wave to a peak temperature of about 6100 K and a corresponding pressure of 0.4 atm. Complete descriptions of the data analysis technique and the onboard radiometer and its calibration are given.

  13. Partitioning experiments in the laser-heated diamond anvil cell: volatile content in the Earth's core.

    PubMed

    Jephcoat, Andrew P; Bouhifd, M Ali; Porcelli, Don

    2008-11-28

    The present state of the Earth evolved from energetic events that were determined early in the history of the Solar System. A key process in reconciling this state and the observable mantle composition with models of the original formation relies on understanding the planetary processing that has taken place over the past 4.5 Ga. Planetary size plays a key role and ultimately determines the pressure and temperature conditions at which the materials of the early solar nebula segregated. We summarize recent developments with the laser-heated diamond anvil cell that have made possible extension of the conventional pressure limit for partitioning experiments as well as the study of volatile trace elements. In particular, we discuss liquid-liquid, metal-silicate (M-Sil) partitioning results for several elements in a synthetic chondritic mixture, spanning a wide range of atomic number, from helium to iodine. We examine the role of the core as a possible host of both siderophile and trace elements and the implications that early segregation processes at deep magma ocean conditions have for current mantle signatures, both compositional and isotopic. The results provide some of the first experimental evidence that the core is the obvious replacement for the long-sought, deep mantle reservoir. If so, they also indicate the need to understand the detailed nature and scale of core-mantle exchange processes, from atomic to macroscopic, throughout the age of the Earth to the present day.

  14. Environmental optimization and shielding for NMR experiments and imaging in the earth's magnetic field.

    PubMed

    Favre, B; Bonche, J P; Meheir, H; Peyrin, J O

    1990-02-01

    For many years, a number of laboratories have been working on the applications of very low field NMR. In 1985, our laboratory presented the first NMR images using the earth's magnetic field. However, the use of this technique was limited by the weakness of the signal and the disturbing effects of the environment on the signal-to-noise ratio and on the homogeneity of the static magnetic field. Therefore, experiments had to be performed in places with low environmental disturbances, such as open country or large parks. In 1986, we installed a new station in Lyon, in the town's hostile environment. Good NMR signals can now be obtained (with a signal-to-noise ratio better than 200 and a time constant T2 better than 3 s for 200-ml water samples at a temperature of about 40 degrees C). We report results obtained at this station, located on the terrace roof of our faculty building. Gradient coils were used to correct the local inhomogeneities of the earth's magnetic field. We show FIDs and MR images of water-filled tubes made with or without these improvements.

  15. Advanced Concepts, Technologies and Flight Experiments for NASA's Earth Science Enterprise

    NASA Technical Reports Server (NTRS)

    Meredith, Barry D.

    2000-01-01

    Over the last 25 years, NASA Langley Research Center (LaRC) has established a tradition of excellence in scientific research and leading-edge system developments, which have contributed to improved scientific understanding of our Earth system. Specifically, LaRC advances knowledge of atmospheric processes to enable proactive climate prediction and, in that role, develops first-of-a-kind atmospheric sensing capabilities that permit a variety of new measurements to be made within a constrained enterprise budget. These advances are enabled by the timely development and infusion of new, state-of-the-art (SOA), active and passive instrument and sensor technologies. In addition, LaRC's center-of-excellence in structures and materials is being applied to the technological challenges of reducing measurement system size, mass, and cost through the development and use of space-durable materials; lightweight, multi-functional structures; and large deployable/inflatable structures. NASA Langley is engaged in advancing these technologies across the full range of readiness levels from concept, to components, to prototypes, to flight experiments, and on to actual science mission infusion. The purpose of this paper is to describe current activities and capabilities, recent achievements, and future plans of the integrated science, engineering, and technology team at Langley Research Center who are working to enable the future of NASA's Earth Science Enterprise.

  16. Observations of premonitory acoustic emission and slip nucleation during a stick slip experiment in smooth faulted Westerly granite

    Thompson, B.D.; Young, R.P.; Lockner, D.A.

    2005-01-01

    To investigate laboratory earthquakes, stick-slip events were induced on a saw-cut Westerly granite sample by triaxial loading at 150 MPa confining pressure. Acoustic emissions (AE) were monitored using an innovative continuous waveform recorder. The first motion of each stick slip was recorded as a large-amplitude AE signal. These events source-locate onto the saw-cut fault plane, implying that they represent the nucleation sites of the dynamic stick-slip failure events. The precise location of nucleation varied between events and was probably controlled by heterogeneity of stress or surface conditions on the fault. The initial nucleation diameter of each dynamic instability was inferred to be less than 3 mm. A small number of AE were recorded prior to each macroscopic slip event. For the second and third slip events, premonitory AE source mechanisms mimic the large-scale fault plane geometry. Copyright 2005 by the American Geophysical Union.

  17. Geological modeling of a fault zone in clay rocks at the Mont-Terri laboratory (Switzerland)

    NASA Astrophysics Data System (ADS)

    Kakurina, M.; Guglielmi, Y.; Nussbaum, C.; Valley, B.

    2016-12-01

    Clay-rich formations are considered to be a natural barrier against the migration of radionuclides or fluids (water, hydrocarbons, CO2). However, little is known about the architecture of faults affecting clay formations because of their quick alteration at the Earth's surface. The Mont Terri Underground Research Laboratory provides exceptional conditions to investigate an un-weathered, perfectly exposed clay fault zone architecture and to conduct fault activation experiments that allow exploring the conditions for the stability of such clay faults. Here we show first results from a detailed geological model of the Mont Terri Main Fault architecture, built in GoCad software from a detailed structural analysis of six fully cored and logged boreholes, 30 to 50 m long and 3 to 15 m apart, crossing the fault zone. These high-definition geological data were acquired within the Fault Slip (FS) experiment project, which consisted of fluid injections in different intervals within the fault using the SIMFIP probe to explore the conditions for the fault's mechanical and seismic stability. The Mont Terri Main Fault "core" consists of a thrust zone about 0.8 to 3 m wide that is bounded by two major fault planes. Between these planes, there is an assembly of distinct slickensided surfaces and various facies including scaly clays, fault gouge and fractured zones. Scaly clay, including S-C bands and microfolds, occurs in larger zones at the top and bottom of the Main Fault. A cm-thin layer of gouge, known to accommodate high strain, runs along the upper fault zone boundary. The non-scaly part mainly consists of undeformed rock blocks bounded by slickensides. Such complexity, as well as the continuity of the two major surfaces, is hard to correlate between the different boreholes even with the high density of geological data within the relatively small volume of the experiment. This may show that poor strain localization occurred during faulting, giving some perspectives about the potential for

  18. Shuttle imaging radar views the Earth from Challenger: The SIR-B experiment

    NASA Technical Reports Server (NTRS)

    Ford, J. P.; Cimino, J. B.; Holt, B.; Ruzek, M. R.

    1986-01-01

    In October 1984, SIR-B obtained digital image data of about 6.5 million km2 of the Earth's surface. The coverage is mostly of selected experimental test sites located between latitudes 60 deg north and 60 deg south. Programmed adjustments made to the look angle of the steerable radar antenna and to the flight attitude of the shuttle during the mission permitted collection of multiple-incidence-angle coverage or extended mapping coverage as required for the experiments. The SIR-B images included here are representative of the coverage obtained for scientific studies in geology, cartography, hydrology, vegetation cover, and oceanography. The relations between radar backscatter and incidence angle for discriminating various types of surfaces, and the use of multiple-incidence-angle SIR-B images for stereo measurement and viewing, are illustrated with examples. Interpretation of the images is facilitated by corresponding images or photographs obtained by different sensors or by sketch maps or diagrams.

  19. Mississippi Sound remote sensing study. [NASA Earth Resources Laboratory seasonal experiments

    NASA Technical Reports Server (NTRS)

    Atwell, B. H.; Thomann, G. C.

    1973-01-01

    A study of the Mississippi Sound was initiated in early 1971 by personnel of the NASA Earth Resources Laboratory. Four separate seasonal experiments consisting of quasi-synoptic remote and surface measurements over the entire area were planned. Approximately 80 stations distributed throughout Mississippi Sound were occupied. Surface water temperature and Secchi extinction depth were measured at each station, and water samples were collected for water quality analyses. The surface distributions of three water parameters of interest from a remote sensing standpoint - temperature, salinity and chlorophyll content - are displayed in map form. Areal variations in these parameters are related to tides and winds. A brief discussion of the general problem of radiative measurements of water temperature is followed by a comparison of remotely measured temperatures (PRT-5) to surface vessel measurements.

  20. Skylab program earth resources experiment package: Ground truth data for test sites (SL-2)

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Field measurements were performed at selected ground sites in order to provide comparative calibration measurements of sensors for the Earth Resources Experiment Package. Specifically, the solar radiation (400 to 1300 nanometers) and thermal radiation (8-14 micrometers) were measured. Sites employed for the thermal measurements consisted of warm and cold water lakes. The thermal brightness temperature of the lake water, the temperature and humidity profile above the lake, and near surface meteorology (wind speed, pressure, etc.) were measured near the time of overpass. Sites employed for the solar radiation measurements were two desert type sites. Ground measurements consisted of: (1) direct solar radiation - optical depth; (2) diffuse solar radiation; (3) total solar radiation; (4) target directional (normal) reflectance; (5) target hemispherical reflectance; and (6) near surface meteorology.

  1. Science Results from Colorado Student Space Weather Experiment (CSSWE): Energetic Particle Distribution in Near Earth Environment

    NASA Astrophysics Data System (ADS)

    Li, Xinlin

    2013-04-01

    The Colorado Student Space Weather Experiment (CSSWE) is a 3-unit (10cm x 10cm x 30cm) CubeSat mission funded by the National Science Foundation, launched into a low-Earth, polar orbit on 13 September 2012 as a secondary payload under NASA's Educational Launch of Nanosatellites (ELaNa) program. The science objectives of CSSWE are to investigate the relationship of the location, magnitude, and frequency of solar flares to the timing, duration, and energy spectrum of solar energetic particles reaching Earth, and to determine the precipitation loss and the evolution of the energy spectrum of trapped radiation belt electrons. CSSWE contains a single science payload, the Relativistic Electron and Proton Telescope integrated little experiment (REPTile), which is a miniaturization of the Relativistic Electron and Proton Telescope (REPT) built at the Laboratory for Atmospheric and Space Physics for NASA/Van Allen Probes mission, which consists of two identical spacecraft, launched 30 August 2012, that traverse the heart of the radiation belts in a low inclination orbit. CSSWE's REPTile is designed to measure the directional differential flux of protons ranging from 10 to 40 MeV and electrons from 0.5 to >3.3 MeV. The commissioning phase was completed and REPTile was activated on 4 October 2012. The data are very clean, far exceeding expectations! A number of engineering challenges had to be overcome to achieve such clean measurements under the mass and power limits of a CubeSat. The CSSWE is also an ideal class project, providing training for the next generation of engineers and scientists over the full life-cycle of a satellite project.

  2. Earth Observations

    2011-05-28

    ISS028-E-006059 (28 May 2011) --- One of the Expedition 28 crew members, photographing Earth images onboard the International Space Station while docked with the space shuttle Endeavour and flying at an altitude of just under 220 miles, captured this frame of the Salton Sea. The body of water, easily identifiable from low orbit spacecraft, is a saline, endorheic rift lake located directly on the San Andreas Fault. The agricultural area is within the Coachella Valley.

  3. Fault diagnosis

    NASA Technical Reports Server (NTRS)

    Abbott, Kathy

    1990-01-01

    The objective of the research in this area of fault management is to develop and implement a decision aiding concept for diagnosing faults, especially faults which are difficult for pilots to identify, and to develop methods for presenting the diagnosis information to the flight crew in a timely and comprehensible manner. The requirements for the diagnosis concept were identified by interviewing pilots, analyzing actual incident and accident cases, and examining psychology literature on how humans perform diagnosis. The diagnosis decision aiding concept developed from those requirements takes as input abnormal sensor readings identified by a fault monitor. Based on these abnormal sensor readings, the diagnosis concept identifies the cause or source of the fault and all components affected by the fault. This concept was implemented for diagnosis of aircraft propulsion and hydraulic subsystems in a computer program called Draphys (Diagnostic Reasoning About Physical Systems). Draphys is unique in two important ways. First, it uses models of both functional and physical relationships in the subsystems. Using both models enables the diagnostic reasoning to identify the fault propagation as the faulted system continues to operate, and to diagnose physical damage. Draphys also reasons about behavior of the faulted system over time, to eliminate possibilities as more information becomes available, and to update the system status as more components are affected by the fault. The crew interface research is examining display issues associated with presenting diagnosis information to the flight crew. One study examined issues for presenting system status information. One lesson learned from that study was that pilots found fault situations to be more complex if they involved multiple subsystems. Another was that pilots could identify the faulted systems more quickly if the system status was presented in pictorial or text format. Another study is currently under way to

  4. Effects of Fault Segmentation, Mechanical Interaction, and Structural Complexity on Earthquake-Generated Deformation

    NASA Astrophysics Data System (ADS)

    Haddad, David Elias

    Earth's topographic surface forms an interface across which the geodynamic and geomorphic engines interact. This interaction is best observed along crustal margins where topography is created by active faulting and sculpted by geomorphic processes. Crustal deformation manifests as earthquakes at centennial to millennial timescales. Given that nearly half of Earth's human population lives along active fault zones, a quantitative understanding of the mechanics of earthquakes and faulting is necessary to build accurate earthquake forecasts. My research relies on the quantitative documentation of the geomorphic expression of large earthquakes and the physical processes that control their spatiotemporal distributions. The first part of my research uses high-resolution topographic lidar data to quantitatively document the geomorphic expression of historic and prehistoric large earthquakes. Lidar data allow for enhanced visualization and reconstruction of structures and stratigraphy exposed by paleoseismic trenches. Lidar surveys of fault scarps formed by the 1992 Landers earthquake document the centimeter-scale erosional landforms developed by repeated winter storm-driven erosion. The second part of my research employs a quasi-static numerical earthquake simulator to explore the effects of fault roughness, friction, and structural complexities on earthquake-generated deformation. My experiments show that fault roughness plays a critical role in determining fault-to-fault rupture jumping probabilities. These results corroborate the accepted 3-5 km rupture jumping distance for smooth faults. However, my simulations show that the rupture jumping threshold distance is highly variable for rough faults due to heterogeneous elastic strain energies. Furthermore, fault roughness controls spatiotemporal variations in slip rates such that rough faults exhibit lower slip rates relative to their smooth counterparts. The central implication of these results lies in guiding the

  5. 1999-2003 Shortwave Characterizations of Earth Radiation Budget Satellite (ERBS)/Earth Radiation Budget Experiment (ERBE) Broadband Active Cavity Radiometer Sensors

    NASA Technical Reports Server (NTRS)

    Lee, Robert B., III; Smith, George L.; Wong, Takmeng

    2008-01-01

    From October 1984 through May 2005, the NASA Earth Radiation Budget Satellite (ERBS)/Earth Radiation Budget Experiment (ERBE) nonscanning active cavity radiometers (ACR) were used to monitor long-term changes in the earth radiation budget components of the incoming total solar irradiance (TSI), earth-reflected TSI, and earth-emitted outgoing longwave radiation (OLR). From September 1984 through September 1999, using on-board calibration systems, the ERBS/ERBE ACR sensor response changes, in gains and offsets, were determined from on-orbit calibration sources and from direct observations of the incoming TSI through calibration solar ports, at measurement precision levels approaching 0.5 W/sq m at satellite altitudes. On October 6, 1999, the onboard radiometer calibration system elevation drive failed. Thereafter, special spacecraft maneuvers were performed to observe cold space and the sun in order to define the post-September 1999 geometry of the radiometer measurements, and to determine the October 1999-September 2003 ERBS sensor response changes. Analyses of these special solar and cold space observations indicate that the radiometers were pointing approximately 16 degrees away from the spacecraft nadir and on the anti-solar side of the spacecraft. The special observations indicated that the radiometers' responses were stable at precision levels approaching 0.5 W/sq m. In this paper, the measurement geometry determinations and the determinations of the radiometers' gains and offsets are presented, which will permit the accurate processing of the October 1999 through September 2003 ERBE data products at satellite and top-of-the-atmosphere altitudes.
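
    The sensor responses above are characterized by gains and offsets of a linear radiometer model. As a hedged illustration only (the actual ERBE count conversion and calibration procedure is more involved), a least-squares solve for gain and offset from calibration views with known reference irradiances could look like the following; the numbers are hypothetical.

```python
import numpy as np

def fit_gain_offset(counts, reference_irradiance):
    """Least-squares gain/offset for a linear radiometer model:
    irradiance ~= gain * counts + offset.
    counts and reference_irradiance come from calibration views
    (e.g. cold space and a known solar or internal source)."""
    counts = np.asarray(counts, dtype=float)
    reference_irradiance = np.asarray(reference_irradiance, dtype=float)
    A = np.column_stack([counts, np.ones_like(counts)])
    (gain, offset), *_ = np.linalg.lstsq(A, reference_irradiance, rcond=None)
    return gain, offset

# Hypothetical calibration views: cold space (~0 W/m^2) and two solar-port views.
counts = [10.0, 30.0, 52.0]
reference = [0.0, 680.0, 1361.0]
gain, offset = fit_gain_offset(counts, reference)
print(f"gain = {gain:.2f} W/m^2 per count, offset = {offset:.2f} W/m^2")
```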

  6. Pairing Essential Climate Science with Sustainable Energy Information: the "EARTH-The Operators' Manual" experiment

    NASA Astrophysics Data System (ADS)

    Akuginow, E.; Alley, R. B.; Haines-Stiles, G.

    2010-12-01

    considerable challenge of supplying clean energy to a growing population. Additional scenes have been filmed in Brazil, Spain, China, Morocco, Scotland, and across America, including at the National Renewable Energy Lab. in Denver, CO, and New Orleans. Program 3 (presently untitled and targeted for 2012) will feature American communities seeking to increase energy efficiency and minimize carbon emissions. The Fall 2010 AGU presentation will include video clips from the series, initial findings from focus groups (coordinated by project evaluator, Rockman Et Al) as to what information has been found most compelling to potential audiences, and a description of plans being developed by the project's science center partners in San Diego CA, Portland OR, Minneapolis-St. Paul, Fort Worth TX and Raleigh NC. "EARTH-The Operators' Manual" is an experiment to determine the effectiveness of these activities to reach audiences who, according to surveys, have actually become less convinced of anthropogenic climate change, while remaining supportive of investments in advancing clean energy opportunities.

  7. Predeployment validation of fault-tolerant systems through software-implemented fault insertion

    NASA Technical Reports Server (NTRS)

    Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.

    1989-01-01

    The fault injection-based automated testing (FIAT) environment, which can be used to experimentally characterize and evaluate distributed real-time systems under fault-free and faulted conditions, is described. A survey of validation methodologies is presented, and the need for fault insertion within such methodologies is demonstrated. The origins and models of faults, and the motivation for the FIAT concept, are reviewed. FIAT employs a validation methodology which builds confidence in the system by first providing a baseline of fault-free performance data and then characterizing the behavior of the system with faults present. Fault insertion is accomplished through software and allows faults or the manifestation of faults to be inserted either by seeding faults into memory or by triggering error detection mechanisms. FIAT is capable of emulating a variety of fault-tolerant strategies and architectures, can monitor system activity, and can automatically orchestrate experiments involving the insertion of faults. There is a common system interface which allows ease of use and decreases experiment development and run time. Fault models chosen for experiments on FIAT have generated system responses which parallel those observed in real systems under faulty conditions. These capabilities are shown by two example experiments, each using a different fault-tolerance strategy.
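
    As a rough illustration of the 'seeding faults into memory' idea described above (not the FIAT implementation itself, which targeted distributed real-time systems), a software-implemented fault injector can be as simple as flipping a randomly chosen bit in a workload's memory image and logging what was changed:

```python
import random

def inject_bit_flip(memory: bytearray, rng: random.Random):
    """Seed a single-bit fault into a mutable memory image.
    Returns (byte_index, bit_index) so the experiment can log what was injected."""
    byte_index = rng.randrange(len(memory))
    bit_index = rng.randrange(8)
    memory[byte_index] ^= (1 << bit_index)
    return byte_index, bit_index

# Hypothetical workload state: a 64-byte memory image, first fault-free, then faulted.
rng = random.Random(42)
image = bytearray(64)
baseline = bytes(image)                  # fault-free baseline for later comparison
where = inject_bit_flip(image, rng)
print("injected fault at (byte, bit):", where, "- image changed:", bytes(image) != baseline)
```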

  8. How to Communicate Near Earth Objects with the Public - Klet Observatory Experience

    NASA Astrophysics Data System (ADS)

    Ticha, Jana; Tichy, Milos; Kocer, Michal

    2015-08-01

    Near-Earth Object (NEO) research is counted among the most popular parts of communicating astronomy with the public. Increasing research results in the field of Near-Earth Objects as well as impact hazard investigations cause growing interest among the general public and media. Furthermore, NEO-related issues have outstanding educational value. Thus, communicating NEO detection, NEO characterization, possible impact effects, space missions to NEOs, ways of mitigation and impact warnings to the public and media belongs to the most important tasks of scientists and research institutions. Our institution represents a unique liaison of a small professional research institution devoted especially to NEO studies (the Klet Observatory, Czech Republic) and an educational and public outreach branch (the Observatory and Planetarium Ceske Budejovice, Czech Republic). This has given us an excellent opportunity to bring NEO information to a wider audience, and we have been gaining wide experience in communicating NEOs with the public for more than twenty years. There is a wide spectrum of public outreach tools aimed at NEO research and hazard. As the most useful ones we consider two special on-line magazines (e-zines) devoted to asteroids (www.planetky.cz) and comets (www.komety.cz) in the Czech language, educational multimedia presentations for schools at different levels in the planetarium, summer excursions for the wider public at the Klet Observatory on the top of the Klet mountain, public lectures, meetings and exhibitions. The public also appears to particularly value opportunities for more or less informal meetings with NEO researchers from time to time. A very important part of NEO public outreach consists of continuous contact with journalists and media, including press releases, interviews, news, and periodical programs. The increasing role of social media is taken into account through Facebook and Twitter profiles. The essential goal of all mentioned NEO

  9. Earth analog image digitization of field, aerial, and lab experiment studies for Planetary Data System archiving.

    NASA Astrophysics Data System (ADS)

    Williams, D. A.; Nelson, D. M.

    2017-12-01

    A portion of the earth analog image archive at the Ronald Greeley Center for Planetary Studies (RGCPS), the NASA Regional Planetary Information Facility at Arizona State University, is being digitized and will be added to the Planetary Data System (PDS) for public use. This will be the first addition of terrestrial data to the PDS specifically for comparative planetology studies. Digitization is separated into four tasks. The first is the scanning of aerial photographs of volcanic and aeolian structures and flows. The second task is to scan field site images taken from the ground and from low-altitude aircraft of volcanic structures, lava flows, lava tubes, dunes, and wind streaks. The third image set to be scanned includes photographs of lab experiments from the NASA Planetary Aeolian Laboratory wind tunnels, vortex generator, and wax models. Finally, rare NASA documents are being scanned and formatted as PDF files. Thousands of images are to be scanned for this project. Archiving of the data will follow the PDS4 standard, where the entire project is classified as a single bundle, with individual subjects (i.e., the Amboy Crater volcanic structure in the Mojave Desert of California) as collections. Within the collections, each image is considered a product, with a unique ID and an associated XML document. Documents describing the image data, including the subject and context, will be included with each collection. Once complete, the data will be hosted by a PDS data node and available for public search and download. As one of the first earth analog datasets to be archived by the PDS, this project could prompt other facilities to digitize their historic datasets and make them available to the scientific community.
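
    The bundle/collection/product hierarchy sketched in the abstract can be laid out as a simple directory skeleton. The sketch below uses hypothetical names and writes only stub label files; it does not generate schema-compliant PDS4 XML labels.

```python
from pathlib import Path

def lay_out_bundle(root, bundle_name, collections):
    """Create a bundle/collection/product directory skeleton.

    collections maps a collection name (e.g. 'amboy_crater') to a list of
    product IDs; each product gets a stub .xml file standing in for the
    label that a real PDS4 pipeline would generate."""
    for collection_name, product_ids in collections.items():
        collection_dir = Path(root) / bundle_name / collection_name
        collection_dir.mkdir(parents=True, exist_ok=True)
        for product_id in product_ids:
            stub = collection_dir / f"{product_id}.xml"
            stub.write_text(f"<!-- stub label for product {product_id} -->\n")

# Hypothetical layout: one bundle, two collections, three products.
lay_out_bundle("archive", "rgcps_earth_analogs",
               {"amboy_crater": ["amboy_0001", "amboy_0002"],
                "wind_tunnel": ["wt_0001"]})
```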

  10. Fault-Related Sanctuaries

    NASA Astrophysics Data System (ADS)

    Piccardi, L.

    2001-12-01

    Beyond the study of historical surface faulting events, this work investigates the possibility, in specific cases, of identifying pre-historical events whose memory survives in myths and legends. The myths of many famous sacred places of the ancient world contain relevant telluric references: "sacred" earthquakes, openings to the Underworld and/or chthonic dragons. Given the strong correspondence with local geological evidence, these myths may be considered as describing natural phenomena. It has been possible in this way to shed light on the geologic origin of famous myths (Piccardi, 1999, 2000 and 2001). Interdisciplinary research reveals that the origin of several ancient sanctuaries may be linked in particular to peculiar geological phenomena observed on local active faults (such as ground shaking and coseismic surface ruptures, gas and flame emissions, and strong underground rumblings). In many of these sanctuaries the sacred area lies directly above the active fault. In a few cases, faulting has also affected the archaeological relics, right through the main temple (e.g. Delphi, Cnidus, Hierapolis of Phrygia). As such, the arrangement of the cult site and the content of the related myths suggest that specific points along the trace of active faults have been noticed in the past and worshiped as special 'sacred' places, most likely interpreted as Hades' Doors. The mythological stratification of most of these sanctuaries dates back to prehistory, and points to a common derivation from the cult of the Mother Goddess (the Lady of the Doors), which was widespread since at least 25,000 BC. The cult itself was later reconverted into various different divinities, while the 'sacred doors' of the Great Goddess and/or the dragons (offspring of Mother Earth and generally regarded as Keepers of the Doors) persisted in more recent mythologies. Piccardi L., 1999: The "Footprints" of the Archangel: Evidence of Early-Medieval Surface Faulting at Monte Sant'Angelo (Gargano, Italy

  11. Understanding Coupled Earth-Surface Processes through Experiments and Models (Invited)

    NASA Astrophysics Data System (ADS)

    Overeem, I.; Kim, W.

    2013-12-01

    Traditionally, both numerical models and experiments have been purposefully designed to 'isolate' singular components or certain processes of a larger mountain-to-deep-ocean interconnected source-to-sink (S2S) transport system. Controlling factors driven by processes outside of the domain of immediate interest were treated and simplified as input or as boundary conditions. Increasingly, earth surface processes scientists appreciate feedbacks and explore these feedbacks with more dynamically coupled approaches to their experiments and models. Here, we discuss key concepts and recent advances made in coupled modeling and experimental setups. In addition, we emphasize challenges and new frontiers in coupled experiments. Experiments have highlighted the important role of self-organization; river and delta systems do not always need to be forced by external processes to change or develop characteristic morphologies. Similarly, modeling has shown, for example, that intricate networks in tidal deltas are stable because of the interplay between river avulsions and tidal current scouring, with both processes being important to develop and maintain the dendritic networks. Both models and experiments have demonstrated that seemingly stable systems can be perturbed slightly and show dramatic responses. Source-to-sink models were developed for both the Fly River system in Papua New Guinea and the Waipaoa River in New Zealand. These models pointed to the importance of upstream-downstream effects and reinforced our view of the S2S system as a signal transfer and dampening conveyor belt. Coupled modeling showed that deforestation had extreme effects on sediment fluxes draining from the catchment of the Waipaoa River in New Zealand, and that this increase in sediment production rapidly shifted the locus of offshore deposition. The challenge in designing coupled models and experiments is both technological and intellectual. Our community advances to make numerical model coupling more

  12. Forecast experiment: do temporal and spatial b value variations along the Calaveras fault portend M ≥ 4.0 earthquakes?

    Parsons, Tom

    2007-01-01

    The power law distribution of earthquake magnitudes and frequencies is a fundamental scaling relationship used for forecasting. However, can its slope (b value) be used on individual faults as a stress indicator? Some have concluded that b values drop just before large shocks. Others suggested that temporally stable low b value zones identify future large-earthquake locations. This study assesses the frequency of b value anomalies portending M ≥ 4.0 shocks versus how often they do not. I investigated M ≥ 4.0 Calaveras fault earthquakes because there have been 25 over the 37-year duration of the instrumental catalog on the most active southern half of the fault. With that relatively large sample, I conducted retrospective time and space earthquake forecasts. I calculated temporal b value changes in 5-km-radius cylindrical volumes of crust that were significant at 90% confidence, but these changes were poor forecasters of M ≥ 4.0 earthquakes. M ≥ 4.0 events were as likely to happen at times of high b values as they were at low ones. However, I could not rule out a hypothesis that spatial b value anomalies portend M ≥ 4.0 events; of 20 M ≥ 4 shocks that could be studied, 6 to 8 (depending on calculation method) occurred where b values were significantly less than the spatial mean, 1 to 2 happened above the mean, and 10 to 13 occurred within 90% confidence intervals of the mean and were thus inconclusive. Thus spatial b value variation might be a useful forecast tool, but resolution is poor, even on seismically active faults.
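
    The retrospective test above rests on estimating b values in local volumes of the catalog and comparing them with a spatial mean. A minimal sketch of the standard maximum-likelihood b-value estimator (Aki, 1965), shown here as the usual starting point rather than the paper's exact processing, is:

```python
import numpy as np

def b_value_ml(magnitudes, mag_completeness, bin_width=0.1):
    """Maximum-likelihood b value (Aki, 1965) with the usual binning correction.
    Returns (b, approximate one-sigma uncertainty b / sqrt(N))."""
    m = np.asarray(magnitudes, dtype=float)
    m = m[m >= mag_completeness]
    b = np.log10(np.e) / (m.mean() - (mag_completeness - bin_width / 2.0))
    return b, b / np.sqrt(m.size)

# Synthetic Gutenberg-Richter sample standing in for one 5-km-radius volume.
rng = np.random.default_rng(0)
mags = 1.5 + rng.exponential(scale=1.0 / np.log(10), size=400)
b, sigma = b_value_ml(mags, mag_completeness=1.5)
print(f"b = {b:.2f} +/- {sigma:.2f}")
```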

  13. Earth-satellite propagation above 10 GHz: Papers from the 1972 spring URSI session on experiments utilizing the ATS-5 satellite

    NASA Technical Reports Server (NTRS)

    Ippolito, L. J. (Compiler)

    1972-01-01

    Papers are reported from the Special Session on Earth-Satellite Propagation Above 10 GHz, presented at The 1972 Spring Meeting of the United States National Committee, International Union of Radio Science, April 1972, Washington, D. C. This session was devoted to propagation measurements associated with the Applications Technology Satellite (ATS-5), which provided the first operational earth-space links at frequencies above 15 GHz. A comprehensive summary is presented of the major results of the ATS-5 experiment measurements and related radiometric, radar and meteorological studies. The papers are organized around seven selected areas of interest, with the results of the various investigators combined into a single paper presented by a principal author for that area. A comprehensive report is provided on the results of the ATS-5 satellite to earth transmissions. A complete list of published reports and presentations related to the ATS-5 Millimeter Wave Experiment is included.

  14. The Kickstart of the Age of the Earth Race: Revisiting the Experiment of the Comte de Buffon at School

    ERIC Educational Resources Information Center

    Pincelli, M. M.; Prat, M. R.; Lescano, G. M.; Formichella, M. del C.; Brustle, M.; Otranto, S.

    2018-01-01

    In this work, the first experiment ever done to determine the age of the Earth is revisited. The benefits of its application at primary and secondary school levels are presented and discussed. In particular, emphasis is placed on the advantage of facing students with the challenges that scientists have had to overcome during the past three…

  15. Spatial autocorrelation of radiation measured by the Earth Radiation Budget Experiment: Scene inhomogeneity and reciprocity violation

    NASA Technical Reports Server (NTRS)

    Davies, Roger

    1994-01-01

    The spatial autocorrelation functions of broad-band longwave and shortwave radiances measured by the Earth Radiation Budget Experiment (ERBE) are analyzed as a function of view angle in an investigation of the general effects of scene inhomogeneity on radiation. For nadir views, the correlation distance of the autocorrelation function is about 900 km for longwave radiance and about 500 km for shortwave radiance, consistent with higher degrees of freedom in shortwave reflection. Both functions rise monotonically with view angle, but there is a substantial difference in the relative angular dependence of the shortwave and longwave functions, especially for view angles less than 50 deg. In this range, the increase with angle of the longwave functions is found to depend only on the expansion of pixel area with angle, whereas the shortwave functions show an additional dependence on angle that is attributed to the occlusion of inhomogeneities by cloud height variations. Beyond a view angle of about 50 deg, both longwave and shortwave functions appear to be affected by cloud sides. The shortwave autocorrelation functions do not satisfy the principle of directional reciprocity, thereby proving that the average scene is horizontally inhomogeneous over the scale of an ERBE pixel (1500 sq km). Coarse stratification of the measurements by cloud amount, however, indicates that the average cloud-free scene does satisfy directional reciprocity on this scale.
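
    A simplified, one-dimensional analogue of the analysis above, estimating correlation as a function of separation distance and reading off a correlation (1/e) distance, is sketched below; the along-track framing, grid spacing, and synthetic radiance series are assumptions for illustration, not the ERBE processing.

```python
import numpy as np

def correlation_vs_separation(radiance, spacing_km, max_lag=60):
    """Autocorrelation of an along-track radiance series vs. separation distance."""
    x = np.asarray(radiance, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x) / x.size
    lags = np.arange(1, max_lag + 1)
    corr = np.array([np.dot(x[:-k], x[k:]) / ((x.size - k) * var) for k in lags])
    return lags * spacing_km, corr

def correlation_distance(separation_km, corr, threshold=np.exp(-1)):
    """First separation at which the correlation drops below 1/e."""
    below = np.flatnonzero(corr < threshold)
    return separation_km[below[0]] if below.size else separation_km[-1]

# Synthetic along-track series with a ~900 km decorrelation scale, 50 km spacing.
rng = np.random.default_rng(1)
n, dx, scale_km = 4000, 50.0, 900.0
alpha = np.exp(-dx / scale_km)
series = np.zeros(n)
for i in range(1, n):
    series[i] = alpha * series[i - 1] + np.sqrt(1.0 - alpha**2) * rng.standard_normal()
sep, corr = correlation_vs_separation(series, dx)
print("correlation distance ~", correlation_distance(sep, corr), "km")
```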

  16. GEOSTEP: A gravitation experiment in Earth-orbiting satellite to test the Equivalence Principle

    NASA Astrophysics Data System (ADS)

    Bonneville, R.

    2003-10-01

    Testing the Equivalence Principle has been recognized by the scientific community as a short-term prime objective for fundamental physics in space. In 1994, a Phase 0/A study of the GEOSTEP mission has been initiated by CNES in order to design a space experiment to test the Equivalence Principle to an accuracy of 10^-17, with the constraint to be compatible with the small versatile platform PROTEUS under study. The GEOSTEP payload comprises a set of four differential accelerometers placed at cryogenic temperature on board a drag-free, 3-axis stabilized satellite in low-Earth orbit. Each accelerometer contains a pair of test masses A-A, A-B, A-C, B-C (inner mass - outer mass) made of three different materials A, B, C with decreasing densities. The accelerometer concept is the fully electrostatic levitation and read-out device proposed by ONERA, called SAGE (Space Accelerometer for Gravitation Experiment). The drag-free and attitude control system (DFACS) is monitored by the common-mode data of the accelerometers along their three axes, while the possible violation signal is detected by the differential-mode data along the longitudinal sensitive axis. The cryostat is a single chamber supercritical Helium dewar designed by CEA. Helium boiling off from the dewar feeds a set of proportional gas thrusters performing the DFACS. Error analysis and data processing preparation is managed by OCA/CERGA. The satellite will be on a 6 am - 6 pm near-polar, near-circular, Sun-synchronous orbit, at an altitude of 600 to 900 km, depending on the atmospheric density at the time of launch. GEOSTEP could be launched in 2002; the nominal mission duration is at least four months.

  17. Earth Sciences as a Vehicle for Gifted Education--The Hong Kong Experience

    ERIC Educational Resources Information Center

    Murphy, Phillip J.; Chan, Lung Sang; Murphy, Elizabeth

    2012-01-01

    The development and delivery of an Earth-science-focused short course designed to prepare Hong Kong students for university level study is described. Earth sciences provide an inspirational and challenging context for learning and teaching in Hong Kong's increasingly skills-based curriculum. (Contains 3 figures and 4 online resources.)

  18. Investigation of Strategies to Promote Effective Teacher Professional Development Experiences in Earth Science

    ERIC Educational Resources Information Center

    Engelmann, Carol A.

    2014-01-01

    This dissertation serves as a call to geoscientists to share responsibility with K-12 educators for increasing Earth science literacy. When partnerships are created among K-12 educators and geoscientists, the synergy created can promote Earth science literacy in students, teachers, and the broader community. The research described here resulted in…

  19. Strong ground motions generated by earthquakes on creeping faults

    Harris, Ruth A.; Abrahamson, Norman A.

    2014-01-01

    A tenet of earthquake science is that faults are locked in position until they abruptly slip during the sudden strain-relieving events that are earthquakes. Whereas it is expected that locked faults when they finally do slip will produce noticeable ground shaking, what is uncertain is how the ground shakes during earthquakes on creeping faults. Creeping faults are rare throughout much of the Earth's continental crust, but there is a group of them in the San Andreas fault system. Here we evaluate the strongest ground motions from the largest well-recorded earthquakes on creeping faults. We find that the peak ground motions generated by the creeping fault earthquakes are similar to the peak ground motions generated by earthquakes on locked faults. Our findings imply that buildings near creeping faults need to be designed to withstand the same level of shaking as those constructed near locked faults.

  20. How do normal faults grow?

    NASA Astrophysics Data System (ADS)

    Jackson, Christopher; Bell, Rebecca; Rotevatn, Atle; Tvedt, Anette

    2016-04-01

    Normal faulting accommodates stretching of the Earth's crust, and it is arguably the most fundamental tectonic process leading to continent rupture and oceanic crust emplacement. Furthermore, the incremental and finite geometries associated with normal faulting dictate landscape evolution, sediment dispersal and hydrocarbon systems development in rifts. Displacement-length scaling relationships compiled from global datasets suggest normal faults grow via a sympathetic increase in these two parameters (the 'isolated fault model'). This model has dominated the structural geology literature for >20 years and underpins the structural and tectono-stratigraphic models developed for active rifts. However, relatively recent analysis of high-quality 3D seismic reflection data suggests faults may grow by rapid establishment of their near-final length prior to significant displacement accumulation (the 'coherent fault model'). The isolated and coherent fault models make very different predictions regarding the tectono-stratigraphic evolution of rift basins, thus assessing their applicability is important. To date, however, very few studies have explicitly set out to critically test the coherent fault model; thus, it may be argued, it has yet to be widely accepted in the structural geology community. Displacement backstripping is a simple graphical technique typically used to determine how faults lengthen and accumulate displacement; this technique should therefore allow us to test the competing fault models. In this talk, however, we use several subsurface case studies to show that the most commonly used backstripping methods (the 'original' and 'modified' methods) are of limited value, because application of one over the other requires an a priori assumption of the model most applicable to any given fault; we argue this is illogical given that the style of growth is exactly what the analysis is attempting to determine. We then revisit our case studies and demonstrate
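
    Displacement backstripping, mentioned above, in its simplest form differences the displacement profiles recorded by successively younger horizons to recover incremental growth. The sketch below is a hedged illustration under stated assumptions (each horizon records cumulative displacement since its deposition, no later erosion); it is not the 'original' or 'modified' method as formalized in the literature.

```python
import numpy as np

def backstrip_increments(displacement_by_horizon):
    """Incremental displacement profiles by differencing successively younger horizons.

    displacement_by_horizon: dict ordered oldest -> youngest, mapping a horizon
    name to its displacement profile sampled along the fault trace. Assumes each
    horizon records cumulative displacement since its deposition."""
    names = list(displacement_by_horizon)
    profiles = [np.asarray(displacement_by_horizon[name], dtype=float) for name in names]
    increments = {}
    for older, younger, d_old, d_young in zip(names[:-1], names[1:], profiles[:-1], profiles[1:]):
        increments[f"{older} to {younger}"] = d_old - d_young
    increments[f"{names[-1]} to present"] = profiles[-1]
    return increments

# Hypothetical fault: three mapped horizons sampled every 1 km along a 10 km trace.
x = np.linspace(0.0, 10000.0, 11)
shape = np.sin(np.pi * x / x.max())
horizons = {"H1 (oldest)": 120.0 * shape, "H2": 70.0 * shape, "H3 (youngest)": 30.0 * shape}
for interval, profile in backstrip_increments(horizons).items():
    print(interval, "- max incremental displacement:", round(float(profile.max()), 1), "m")
```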

  1. Quantifying Anderson's fault types

    Simpson, R.W.

    1997-01-01

    Anderson [1905] explained three basic types of faulting (normal, strike-slip, and reverse) in terms of the shape of the causative stress tensor and its orientation relative to the Earth's surface. Quantitative parameters can be defined which contain information about both shape and orientation [Célérier, 1995], thereby offering a way to distinguish fault-type domains on plots of regional stress fields and to quantify, for example, the degree of normal-faulting tendencies within strike-slip domains. This paper offers a geometrically motivated generalization of Angelier's [1979, 1984, 1990] shape parameters Φ and Ψ to new quantities named AΦ and AΨ. In their simple forms, AΦ varies from 0 to 1 for normal, 1 to 2 for strike-slip, and 2 to 3 for reverse faulting, and AΨ ranges from 0° to 60°, 60° to 120°, and 120° to 180°, respectively. After scaling, AΦ and AΨ agree to within 2% (or 1°), a difference of little practical significance, although AΨ has smoother analytical properties. A formulation distinguishing horizontal axes as well as the vertical axis is also possible, yielding an AΦ ranging from -3 to +3 and AΨ from -180° to +180°. The geometrically motivated derivation in three-dimensional stress space presented here may aid intuition and offers a natural link with traditional ways of plotting yield and failure criteria. Examples are given, based on models of Bird [1996] and Bird and Kong [1994], of the use of Anderson fault parameters AΦ and AΨ for visualizing tectonic regimes defined by regional stress fields. Copyright 1997 by the American Geophysical Union.
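
    The parameter AΦ is commonly quoted in the stress-inversion literature, following Simpson [1997], in the closed form AΦ = (n + 0.5) + (-1)^n (Φ - 0.5), with n = 0, 1, 2 for normal, strike-slip, and reverse regimes and Φ = (σ2 - σ3)/(σ1 - σ3). The sketch below assumes that form; see the original paper for the derivation and for AΨ.

```python
def a_phi(sigma1, sigma2, sigma3, regime):
    """Simpson-style A_phi from principal stress magnitudes (sigma1 >= sigma2 >= sigma3)
    and the Andersonian regime, using the commonly quoted closed form
    A_phi = (n + 0.5) + (-1)**n * (phi - 0.5),
    with n = 0 (normal), 1 (strike-slip), 2 (reverse) and
    phi = (sigma2 - sigma3) / (sigma1 - sigma3)."""
    n = {"normal": 0, "strike-slip": 1, "reverse": 2}[regime]
    phi = (sigma2 - sigma3) / (sigma1 - sigma3)
    return (n + 0.5) + (-1) ** n * (phi - 0.5)

# Same stress shape (phi = 1/3) in the three Andersonian regimes:
print(a_phi(100.0, 60.0, 40.0, "normal"))       # ~0.33 (0-1: normal faulting)
print(a_phi(100.0, 60.0, 40.0, "strike-slip"))  # ~1.67 (1-2: strike-slip)
print(a_phi(100.0, 60.0, 40.0, "reverse"))      # ~2.33 (2-3: reverse faulting)
```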

  2. PanEurasian Experiment (PEEX): Modelling Platform for Earth System Observations and Forecasting

    NASA Astrophysics Data System (ADS)

    Baklanov, Alexander; Mahura, Alexander; Penenko, Vladimir; Zilitinkevich, Sergej; Kulmala, Markku

    2014-05-01

    models, analysing scenarios, inverse modelling, modelling based on measurement needs and processes; • Model validation by remote sensing data and assimilation of satellite observations to constrain models to better understand processes, e.g., emissions and fluxes with top-down modelling; • Geophysical/chemical model validation with experiments at various spatial and temporal scales. To realize the added value of the comprehensive multi-platform observations and modelling, a network of monitoring stations with the capacity to quantify interactions between neighboring areas ranging from the Arctic and the Mediterranean to the Chinese industrial areas and the Asian steppes is needed. For example, apart from the development of Russian stations in the PEEX area, a strong co-operation with surrounding research infrastructures on the model of the ACTRIS network needs to be established in order to obtain a global perspective of the transport, transformation and ageing of pollutants entering and exiting the PEEX area. The PEEX-MP aims to simulate and predict the physical aspects of the Earth system and to improve understanding of the bio-geochemical cycles in the PEEX domain, and beyond. The environmental change in this region implies that, from the point of view of atmospheric flow, the lower boundary conditions are changing. This is important for applications with immediate relevance for society, such as numerical weather prediction. The PEEX infrastructure will provide a unique view of the physical properties of the Earth surface, which can be used to improve assessment and prediction models. This will directly benefit citizens of the North in terms of better early warning of hazardous events, for instance. On longer time-scales, models of the bio-geochemical cycles in the PEEX domain absolutely need support from the new monitoring infrastructure to better measure and quantify soil and vegetation properties. In the most basic setup, the atmospheric and oceanic Global Circulation

  3. Innovating the Experience of Peer Learning and Earth Science Education in the Field

    NASA Astrophysics Data System (ADS)

    Scoates, J. S.; Hanano, D. W.; Weis, D.; Bilenker, L.; Sherman, S. B.; Gilley, B.

    2017-12-01

    development of professional skills in three key areas: (1) project and time management, (2) teamwork and communication, and (3) critical thinking and problem-solving. The MAGNET experience with peer learning represents a model that can readily be adapted for future field instruction in the Earth Sciences.

  4. Mechanisms, Monitoring and Modeling Earth Fissure generation and Fault activation due to subsurface Fluid exploitation (M3EF3): A UNESCO-IGCP project in partnership with the UNESCO-IHP Working Group on Land Subsidence

    NASA Astrophysics Data System (ADS)

    Teatini, P.; Carreon-Freyre, D.; Galloway, D. L.; Ye, S.

    2015-12-01

    Land subsidence due to groundwater extraction was recently mentioned as one of the most urgent threats to sustainable development in the latest UNESCO IHP-VIII (2014-2020) strategic plan. Although advances have been made in understanding, monitoring, and predicting subsidence, the influence of differential vertical compaction, horizontal displacements, and hydrostratigraphic and structural features in groundwater systems on localized near-surface ground ruptures is still poorly understood. The nature of ground failure may range from fissuring, i.e., formation of an open crack, to faulting, i.e., differential offset of the opposite sides of the failure plane. Ground ruptures associated with differential subsidence have been reported from many alluvial basins in semiarid and arid regions, e.g. China, India, Iran, Mexico, Saudi Arabia, Spain, and the United States. These ground ruptures strongly impact urban, industrial, and agricultural infrastructures, and affect socio-economic and cultural development. Leveraging previous collaborations, this year the UNESCO Working Group on Land Subsidence began the scientific cooperative project M3EF3 in collaboration with the UNESCO International Geosciences Programme (IGCP n.641; www.igcp641.org) to improve understanding of the processes involved in ground rupturing associated with the exploitation of subsurface fluids, and to facilitate the transfer of knowledge regarding sustainable groundwater management practices in vulnerable aquifer systems. The project is developing effective tools to help manage geologic risks associated with these types of hazards, and formulating recommendations pertaining to the sustainable use of subsurface fluid resources for urban and agricultural development in susceptible areas. The partnership between the UNESCO IHP and IGCP is ensuring that multiple scientific competencies required to optimally investigate earth fissuring and faulting caused by groundwater withdrawals are being employed.

  5. Providing Authentic Research Experiences for Pre-Service Teachers through UNH's Transforming Earth System Science Education (TESSE) Program

    NASA Astrophysics Data System (ADS)

    Varner, R. K.; Furman, T.; Porter, W.; Darwish, A.; Graham, K.; Bryce, J.; Brown, D.; Finkel, L.; Froburg, E.; Guertin, L.; Hale, S. R.; Johnson, J.; von Damm, K.

    2007-12-01

    The University of New Hampshire's Transforming Earth System Science Education (UNH TESSE) project is designed to enrich the education and professional development of in-service and pre-service teachers, who teach or will teach Earth science curricula. As part of this program, pre-service teachers participated in an eight-week summer Research Immersion Experience (RIE). The main goal of the RIE is to provide authentic research experiences in Earth system science for teachers early in their careers in an effort to increase future teachers' comfort and confidence in bringing research endeavors to their students. Moreover, authentic research experiences for teachers will complement teachers' efforts to enhance inquiry-based instruction in their own classrooms. Eighteen pre-service teachers associated with our four participating institutions - Dillard University (4), Elizabeth City State University (4), Pennsylvania State University (5), and University of New Hampshire (UNH) (5) - participated in the research immersion experience. Pre-service teachers were matched with a faculty mentor who advised their independent research activities. Each pre-service teacher was expected to collect and analyze his or her own data to address their research question. Some example topics researched by participants included: processes governing barrier island formation, comparison of the formation and track of hurricanes Hugo and Katrina, environmental consequences of Katrina, numerical models of meander formation, climatic impacts on the growth of wetland plants, and the visual estimation of hydrothermal vent properties. Participants culminated their research experience with a public presentation to an audience of scientists and in-service teachers.

  6. The United States earth resources survey program and ERTS experiments benefit highlights

    NASA Technical Reports Server (NTRS)

    Jaffe, L.

    1974-01-01

    With the launch of the first Earth Resources Technology Satellite in July 1972 a major new tool has become available for decision making in the assessment, exploitation, and management of the earth's resources on a national and international basis. The current status of the earth resources survey program is discussed and the future potential is reviewed. The supportive roles of all stages of the system, including surface, aircraft, and satellite components are noted. Specific cases of application of ERTS data are presented together with a discussion of benefits that might accrue. Need for cooperative, coordinated efforts between participants is emphasized.

  7. The World's Largest Experiment Manipulating Solar Energy Input To Earth Resumed In 2003

    NASA Astrophysics Data System (ADS)

    Ward, P. L.

    2010-12-01

    Small amounts of solar-ultraviolet-energy absorbing gases such as ozone, SO2, and NO2 play an unusually large role warming the atmosphere. A mere 3 to 8 ppmv ozone at elevations of 15 to 50 km and associated exothermic chemical reactions warm the atmosphere by >50°C, forming the stratosphere. All three molecules have an asymmetric top shape that, unlike linear molecules of CO2, forms a permanent electromagnetic dipole enhancing interaction with electromagnetic radiation. Planck’s postulate (Energy = a constant times frequency) implies that solar ultraviolet energy strongly absorbed by SO2 is 43 times greater than infrared energy radiated by earth and strongly absorbed by CO2. Solar energy in the blue visible spectrum and ultraviolet causes electronic transitions and an absorption spectrum that is a continuum, absorbing far more energy per unit gas than spectral line absorption of infrared energy caused by rotational and vibrational transitions. Absorption of electromagnetic energy by atmospheric gases increases rapidly with increasing frequency, an observation not accounted for by the use of specific heat in atmospheric models to link energy flux with temperature. While SO2 in the stratosphere is oxidized to a sulfuric acid aerosol that reflects sunlight, cooling the earth, SO2 in the troposphere is oxidized much more slowly than commonly assumed. Well-documented concentrations of tens of ppbv SO2 emitted by humans burning fossil fuels, especially coal, in northern mid-latitudes are contemporaneous, with suitable time delays for warming the ocean, with increased global warming during the 20th century, greatest by nearly a factor of two in the northern hemisphere. A decrease by 18% of anthropogenic SO2 emissions between 1979 and 2000 aimed at reducing acid rain had the unintended effect of reducing the global mean rate of temperature increase to zero by 1998. By 2003, global SO2 emissions began to rise sharply due to the rapid increase in number of new coal

  8. Measuring the Value of Earth Observation Information with the Gravity Recovery and Climate Experiment (GRACE) Satellite

    NASA Astrophysics Data System (ADS)

    Bernknopf, R.; Kuwayama, Y.; Brookshire, D.; Macauley, M.; Zaitchik, B.; Pesko, S.; Vail, P.

    2014-12-01

    Determining how much to invest in earth observation technology depends in part on the value of information (VOI) that can be derived from the observations. We design a framework and then evaluate the value-in-use of the NASA Gravity Recovery and Climate Experiment (GRACE) for regional water use and reliability in the presence of drought. As a technology that allows measurement of water storage, the GRACE Data Assimilation System (DAS) provides information that is qualitatively different from that generated by other water data sources. It provides a global, reproducible grid of changes in surface and subsurface water resources on a frequent and regular basis. Major damages from recent events such as the 2012 Midwest drought and the ongoing drought in California motivate the need to understand the VOI from remotely sensed data such as that derived from GRACE DAS. Our conceptual framework models a dynamic risk management problem in agriculture. We base the framework on information from stakeholders and subject experts. The economic case for GRACE DAS involves providing better water availability information. In the model, individuals have a "willingness to pay" (wtp) for GRACE DAS - essentially, wtp is an expression of savings in reduced agricultural input costs and for costs that are influenced by regional policy decisions. Our hypothesis is that improvements in decision making can be achieved with GRACE DAS measurements of water storage relative to data collected from groundwater monitoring wells and soil moisture monitors that would be relied on in the absence of GRACE DAS. The VOI is estimated as a comparison of outcomes. The California wine grape industry has features that allow it to be a good case study and a basis for extrapolation to other economic sectors. We model water use in this sector as a sequential decision highlighting the attributes of GRACE DAS input as information for within-season production decisions as well as for longer-term water reliability.
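
    The comparison of outcomes described above (the expected outcome of decisions informed by the GRACE DAS signal minus the expected outcome without it) can be sketched as a generic discrete value-of-information calculation. The states, actions, payoffs, and signal accuracies below are hypothetical placeholders, not the authors' calibrated model.

```python
import numpy as np

def value_of_information(prior, payoff, likelihood):
    """Expected value of imperfect information for a discrete decision problem.

    prior[s]         : probability of state s (e.g. drought vs. normal year)
    payoff[a, s]     : net return of action a in state s
    likelihood[m, s] : probability of observing message m (e.g. a storage-anomaly
                       class from GRACE DAS) given state s"""
    prior = np.asarray(prior, dtype=float)
    payoff = np.asarray(payoff, dtype=float)
    lik = np.asarray(likelihood, dtype=float)

    ev_without = np.max(payoff @ prior)                    # best action on the prior alone
    msg_prob = lik @ prior                                 # P(m)
    posterior = (lik * prior) / msg_prob[:, None]          # P(s | m), one row per message
    ev_with = np.sum(msg_prob * np.max(payoff @ posterior.T, axis=0))
    return ev_with - ev_without

# Hypothetical numbers: states (drought, normal), actions (cut irrigation, irrigate as planned).
prior = [0.3, 0.7]
payoff = [[80.0, 60.0],     # cut irrigation
          [20.0, 100.0]]    # irrigate as planned
likelihood = [[0.8, 0.2],   # "dry" signal
              [0.2, 0.8]]   # "wet" signal
print("willingness to pay per season ~", round(value_of_information(prior, payoff, likelihood), 1))
```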

  9. The Third Tibetan Plateau Atmospheric Scientific Experiment for Understanding the Earth-Atmosphere Coupled System

    NASA Astrophysics Data System (ADS)

    Zhao, P.; Xu, X.; Chen, F.; Guo, X.; Zheng, X.; Liu, L. P.; Hong, Y.; Li, Y.; La, Z.; Peng, H.; Zhong, L. Z.; Ma, Y.; Tang, S. H.; Liu, Y.; Liu, H.; Li, Y. H.; Zhang, Q.; Hu, Z.; Sun, J. H.; Zhang, S.; Dong, L.; Zhang, H.; Zhao, Y.; Yan, X.; Xiao, A.; Wan, W.; Zhou, X.

    2016-12-01

    The Third Tibetan Plateau atmospheric scientific experiment (TIPEX-III) was initiated jointly by the China Meteorological Administration, the National Natural Science Foundation of China, and the Chinese Academy of Sciences. This paper presents the background, scientific objectives, and overall experimental design of TIPEX-III. It was designed to conduct an integrated observation of the earth-atmosphere coupled system over the Tibetan Plateau (TP), from the land surface and planetary boundary layer (PBL) to the troposphere and stratosphere, over eight to ten years by coordinating ground- and air-based measurement facilities, with the aim of understanding the spatial heterogeneity of complex land-air interactions, cloud-precipitation physical processes, and interactions between the troposphere and stratosphere. TIPEX-III began in 2014 and is ongoing. It has established multiscale land-surface and PBL observation networks over the TP and a tropospheric meteorological radiosonde network over the western TP, and has executed an integrated observation mission for cloud-precipitation physical features using ground-based radar systems and aircraft campaigns, as well as an observation task for atmospheric ozone, aerosol, and water vapor. The archiving, management, and sharing policy for the observation data is also introduced herein. Some TIPEX-III data have been applied in preliminary analyses of surface sensible and latent heat fluxes, cloud-precipitation physical processes, and atmospheric water vapor and ozone over the TP, and to improve local precipitation forecasts. Furthermore, TIPEX-III intends to promote greater scientific and technological cooperation with international research communities and broader organizations. Scientists working internationally are invited to participate in the field campaigns and to use the TIPEX-III data for their own research.

  10. Long-Term Soil Experiments: A Key to Managing Earth's Rapidly Changing Critical Zones

    NASA Astrophysics Data System (ADS)

    Richter, D., Jr.

    2014-12-01

    In a few decades, managers of Earth's Critical Zones (biota, humans, land, and water) will be challenged to double food and fiber production while diminishing the adverse effects of management on the wider environment. To meet these challenges, an array of scientific approaches is being used to increase understanding of Critical Zone functioning and evolution, and one among these approaches needs to be long-term soil field studies, to move us beyond black-boxing the belowground Critical Zone, i.e., to further our understanding of the processes driving changes in the soil environment. Long-term soil experiments (LTSEs) provide direct observations of soil change and functioning across time scales of decades, data critical for biological, biogeochemical, and environmental assessments of sustainability; for predictions of soil fertility, productivity, and soil-environment interactions; and for developing models at a wide range of temporal and spatial scales. Unfortunately, LTSEs globally are not in a good state: they take years to mature, are vulnerable to loss, and even today remain to be fully inventoried. Results from the 250 LTSEs in a web-based network demonstrate that soils and belowground Critical Zones are highly dynamic and responsive to human management. The objective of this study is to review the contemporary state of LTSEs and consider how they contribute to three open questions: (1) can soils sustain a doubling of food production in the coming decades without further impinging on the wider environment, (2) how do soils interact with the global C cycle, and (3) how can soil management establish greater control over nutrient cycling. While LTSEs produce significant data and perspectives for all three questions, there is on-going need and opportunity for reviews of the long-term soil-research base, for establishment of an efficiently run network of LTSEs aimed at sustainability and improving management control over C and nutrient cycling, and for research teams that

  11. Optimal trajectories for the Aeroassisted Flight Experiment. Part 1: Equations of motion in an Earth-fixed system

    NASA Technical Reports Server (NTRS)

    Miele, A.; Zhao, Z. G.; Lee, W. Y.

    1989-01-01

    The determination of optimal trajectories for the aeroassisted flight experiment (AFE) is discussed. The AFE refers to the study of the free flight of an autonomous spacecraft, shuttle-launched and shuttle-recovered. Its purpose is to gather atmospheric entry environmental data for use in designing aeroassisted orbital transfer vehicles (AOTV). It is assumed that: (1) the spacecraft is a particle of constant mass; (2) the Earth is rotating with constant angular velocity; (3) the Earth is an oblate planet, and the gravitational potential depends on both the radial distance and the latitude (harmonics of order higher than four are ignored); and (4) the atmosphere is at rest with respect to the Earth. Under these assumptions, the equations of motion for hypervelocity atmospheric flight (which can be used not only for AFE problems, but also for AOT problems and space shuttle problems) are derived in an Earth-fixed system. Transformation relations are supplied which allow one to pass from quantities computed in an Earth-fixed system to quantities computed in an inertial system, and vice versa.
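
    For reference, assumption (3) corresponds to the familiar zonal-harmonic expansion of the geopotential truncated at the fourth harmonic; the sketch below uses one common textbook form with standard published coefficient values, not necessarily the exact formulation or constants used in the paper.

```python
# Gravitational potential of an oblate Earth with zonal harmonics J2-J4,
# consistent with assumption (3) above (harmonics of order higher than four
# ignored).  Constants are standard published estimates and may differ from
# the values adopted in the AFE study.
import math

MU = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
RE = 6378137.0           # equatorial radius, m
J = {2: 1.08263e-3, 3: -2.532e-6, 4: -1.610e-6}   # zonal harmonic coefficients

def legendre(n, x):
    """Legendre polynomial P_n(x) for n = 0..4."""
    if n == 0: return 1.0
    if n == 1: return x
    if n == 2: return 0.5 * (3*x**2 - 1)
    if n == 3: return 0.5 * (5*x**3 - 3*x)
    if n == 4: return 0.125 * (35*x**4 - 30*x**2 + 3)
    raise ValueError("n > 4 not supported")

def potential(r, lat_rad):
    """Per-unit-mass potential U(r, latitude) including J2, J3, J4 (m^2/s^2)."""
    s = math.sin(lat_rad)
    series = sum(J[n] * (RE / r)**n * legendre(n, s) for n in (2, 3, 4))
    return (MU / r) * (1.0 - series)

# Example: potential at 120 km altitude over 30 deg latitude
print(potential(RE + 120e3, math.radians(30.0)))
```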

  12. Rare Earth Element Partition Coefficients from Enstatite/Melt Synthesis Experiments

    NASA Technical Reports Server (NTRS)

    Schwandt, Craig S.; McKay, Gordon A.

    1997-01-01

    Enstatite (En(80)Fs(19)Wo(01)) was synthesized from a hypersthene-normative basaltic melt doped simultaneously with La, Ce, Nd, Sm, Eu, Dy, Er, Yb and Lu. The rare earth element concentrations were measured in both the basaltic glass and the enstatite. Rare earth element concentrations in the glass were determined by electron microprobe analysis with uncertainties less than two percent relative. Rare earth element concentrations in enstatite were determined by secondary ion mass spectrometry with uncertainties less than five percent relative. The resulting rare earth element partition signature for enstatite is similar to previously calculated and composite low-Ca pigeonite signatures, but is better defined and differs in several details. The partition coefficients are consistent with crystal-structural constraints.
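
    A partition coefficient is simply the ratio of an element's concentration in the crystal to that in the coexisting melt; the sketch below uses hypothetical concentrations together with the relative uncertainties quoted above.

```python
# Mineral/melt partition coefficient D = C_mineral / C_melt, with simple
# propagation of the relative uncertainties quoted above (<5% for SIMS on
# enstatite, <2% for microprobe on glass).  Concentrations are hypothetical,
# not measurements from the synthesis experiments.
import math

c_enstatite = 0.12     # ppm, hypothetical Sm concentration in enstatite
c_glass     = 12.0     # ppm, hypothetical Sm concentration in basaltic glass

D = c_enstatite / c_glass
rel_err = math.sqrt(0.05**2 + 0.02**2)   # combined relative uncertainty

print(f"D(Sm) enstatite/melt = {D:.4f} +/- {D * rel_err:.4f}")
```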

  13. How Do Normal Faults Grow?

    NASA Astrophysics Data System (ADS)

    Jackson, C. A. L.; Bell, R. E.; Rotevatn, A.; Tvedt, A. B. M.

    2015-12-01

    Normal faulting accommodates stretching of the Earth's crust and is one of the fundamental controls on landscape evolution and sediment dispersal in rift basins. Displacement-length scaling relationships compiled from global datasets suggest normal faults grow via a sympathetic increase in these two parameters (the 'isolated fault model'). This model has dominated the structural geology literature for >20 years and underpins the structural and tectono-stratigraphic models developed for active rifts. However, relatively recent analysis of high-quality 3D seismic reflection data suggests faults may grow by rapid establishment of their near-final length prior to significant displacement accumulation (the 'coherent fault model'). The isolated and coherent fault models make very different predictions regarding the tectono-stratigraphic evolution of rift basins, thus assessing their applicability is important. To date, however, very few studies have explicitly set out to critically test the coherent fault model; it may therefore be argued that it has yet to be widely accepted in the structural geology community. Displacement backstripping is a simple graphical technique typically used to determine how faults lengthen and accumulate displacement; this technique should therefore allow us to test the competing fault models. In this talk, however, we use several subsurface case studies to show that the most commonly used backstripping methods (the 'original' and 'modified' methods) are of limited value, because application of one over the other requires an a priori assumption of the model most applicable to any given fault; we argue this is illogical given that the style of growth is exactly what the analysis is attempting to determine. We then revisit our case studies and demonstrate that, in the case of seismic-scale growth faults, growth strata thickness patterns and relay zone kinematics, rather than displacement backstripping, should be assessed to directly constrain

  14. Laboratory experiments on the impact disruption of iron meteorites at temperature of near-Earth space

    NASA Astrophysics Data System (ADS)

    Katsura, Takekuni; Nakamura, Akiko M.; Takabe, Ayana; Okamoto, Takaya; Sangen, Kazuyoshi; Hasegawa, Sunao; Liu, Xun; Mashimo, Tsutomu

    2014-10-01

    Iron meteorites and some M-class asteroids are generally understood to be fragments that were originally part of cores of differentiated planetesimals or part of local melt pools on primitive bodies. The parent bodies of iron meteorites may have formed in the terrestrial planet region, from which they were then scattered into the main belt (Bottke, W.F., Nesvorný, D., Grimm, R.E., Morbidelli, A., O'Brien, D.P. [2006]. Nature 439, 821-824). Therefore, a wide range of collisional events at different mass scales, temperatures, and impact velocities would have occurred between the time when the iron was segregated and the impact that eventually exposed the iron meteorites to interplanetary space. In this study, we performed impact disruption experiments on iron meteorite specimens, used as projectiles or targets, at room temperature to increase understanding of the disruption process of iron bodies in near-Earth space. Our iron specimens (as projectiles or targets) were almost all smaller in size than their counterparts (as targets or projectiles, respectively). Impact experiments on steel specimens were also conducted for comparison. The fragment mass distribution of the iron material was different from that of rocks. In the iron fragmentation, a higher percentage of the mass was concentrated in larger fragments, probably due to the ductile nature of the material at room temperature. The largest fragment mass fraction f was dependent not only on the energy density but also on the size d of the specimen. We assumed a power-law dependence of the largest fragment mass fraction on the initial peak pressure P0 normalized by a dynamic strength, Y, which was defined to be dependent on the size of the iron material. A least squares fit to the data for the iron meteorite specimens resulted in a power-law relationship in which f depends strongly on d, indicating a large size dependence of f. Additionally, the deformation of the iron materials in high-velocity shots was found to be most significant when the

  15. Tectonic lineations and frictional faulting on a relatively simple body (Ariel)

    NASA Astrophysics Data System (ADS)

    Nyffenegger, Paul; Davis, Dan M.; Consolmagno, Guy J.

    1997-09-01

    Anderson's model of faulting and the Mohr-Coulomb failure criterion can predict the orientations of faults generated in laboratory triaxial compression experiments, but do a much poorer job of explaining the orientations of outcrop- and map-scale faults on Earth. This failure may be due to the structural complexity of the Earth's lithosphere, the failure of laboratory experiments to predict accurately the strength of natural faults, or some fundamental flaw in the model. A simpler environment, such as the lithosphere of an icy satellite, allows us to test whether this model can succeed in less complex settings. A mathematical method is developed to analyze patterns in fracture orientations that can be applied to fractures in the lithospheres of icy satellites. In an initial test of the method, more than 300 lineations on Uranus' satellite Ariel are examined. A nonrandom pattern of lineations is sought, and the source of the stresses that caused those features and the strength of the material in which they occur are constrained. It is impossible to observe directly the slip on these fractures. However, their orientations are clearly nonrandom and appear to be consistent with Andersonian strike-slip faulting in a relatively weak frictional lithosphere during one or more episodes of tidal flexing.
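
    As a reminder of what the Mohr-Coulomb/Anderson prediction looks like in practice, the sketch below computes the expected angle between an optimally oriented fault plane and the maximum compressive stress for a few friction coefficients; it illustrates the general criterion only, not the specific lineation-analysis method developed in the paper.

```python
# Andersonian fault orientations from the Mohr-Coulomb criterion: optimally
# oriented failure planes lie at (45 - phi/2) degrees from sigma_1, where
# phi = arctan(mu) is the internal friction angle.  Friction values below are
# generic illustrations, not measurements for Ariel's icy lithosphere.
import math

def anderson_angles(mu):
    phi = math.degrees(math.atan(mu))       # internal friction angle
    theta = 45.0 - phi / 2.0                # angle between fault plane and sigma_1
    return phi, theta

for mu in (0.85, 0.6, 0.2):                 # strong "Byerlee" rock vs. weaker materials
    phi, theta = anderson_angles(mu)
    print(f"mu = {mu:4.2f}: phi = {phi:5.1f} deg, "
          f"fault plane at {theta:5.1f} deg to sigma_1 "
          f"(conjugate strike-slip pair {2*theta:5.1f} deg apart)")
```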

  16. Earth Observing System (EOS) Aqua Launch and Early Mission Attitude Support Experiences

    NASA Technical Reports Server (NTRS)

    Tracewell, D.; Glickman, J.; Hashmall, J.; Natanson, G.; Sedlak, J.

    2003-01-01

    The Earth Observing System (EOS) Aqua satellite was successfully launched on May 4, 2002. Aqua is the second in the series of EOS satellites. EOS is part of NASA's Earth Science Enterprise Program, whose goals are to advance the scientific understanding of the Earth system. Aqua is a three-axis stabilized, Earth-pointing spacecraft in a nearly circular, sun-synchronous orbit at an altitude of 705 km. The Goddard Space Flight Center (GSFC) Flight Dynamics attitude team supported all phases of the launch and early mission. This paper presents the main results and lessons learned during this period, including: real-time attitude mode transition support, sensor calibration, onboard computer attitude validation, response to spacecraft emergencies, postlaunch attitude analyses, and anomaly resolution. In particular, Flight Dynamics support proved to be invaluable for successful Earth acquisition, fine-point mode transition, and recognition and correction of several anomalies, including support for the resolution of problems observed with the MODIS instrument.

  17. Piloting a Global Collaborative Experiment to Determine your Place on the Planet and the Circumference of the Earth

    NASA Astrophysics Data System (ADS)

    Solie, D. J.; Paniwozik, R. L.; Wallace, P.

    2012-12-01

    As part of the laboratory component in Bush Physics for the 21st Century, a distance-delivered physics course geared toward rural and Indigenous students in Alaska, students determine their village location on earth from simple sun angle measurements at local-noon during the spring equinox. Students measure the length of the sun shadow cast by a rod mounted on a horizontal surface, over short time intervals on or near the spring equinox during mid-day. Local-noon occurs when the sun is highest and its corresponding shadow shortest. Local noon, when expressed in Universal Time, can be directly converted to the local longitude in degrees. Local latitude, in degrees, is obtained from the local-noon shadow length on the spring equinox and simple trigonometry. As an added bonus, using data from different sites, students can collaborate to approximate the circumference of the earth from their measurements. In the spirit of Eratosthenes, students envision an earth-sized pie wedge cut from a polar great-circle where the curve of the wedge on the earth's surface is the North-South distance between two often road-less sites (determined using Google Earth, a map or a globe), and the angle of the wedge is the difference between the site latitudes. The earth's circumference is calculated from this wedge. In 2012, with the aim of including Indigenous groups from other regions of the planet, we expanded this experiment to include teams from Japan, Puerto Rico, American Samoa, and New Zealand. We present our results from this pilot year.
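
    The calculation the students carry out reduces to a few lines of arithmetic; the sketch below uses hypothetical rod, shadow, timing, and distance values (not data from the pilot) and ignores the small equation-of-time correction.

```python
# Sketch of the site-location and Earth-circumference calculation described
# above, with hypothetical measurements.  At the equinox the sun's declination
# is ~0, so latitude = arctan(shadow / rod) at local noon; longitude follows
# from the Universal Time of local noon (15 deg per hour).
import math

rod = 1.000                 # m, vertical rod (gnomon) height - hypothetical
shadow_noon = 1.570         # m, shortest shadow length - hypothetical
ut_local_noon = 22.45       # hours UT of local noon - hypothetical Alaskan village

latitude = math.degrees(math.atan(shadow_noon / rod))
longitude_west = (ut_local_noon - 12.0) * 15.0        # degrees west of Greenwich

print(f"latitude  ~ {latitude:.1f} deg N")
print(f"longitude ~ {longitude_west:.1f} deg W")

# Eratosthenes-style circumference from two sites on roughly the same meridian:
lat_site2 = 21.3            # deg N, hypothetical partner-team site
ns_distance_km = 4030.0     # km, N-S separation read off a map - hypothetical
delta_lat = abs(latitude - lat_site2)
circumference = 360.0 / delta_lat * ns_distance_km
print(f"circumference ~ {circumference:.0f} km (true polar value ~40,008 km)")
```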

  18. Active faults in Africa: a review

    NASA Astrophysics Data System (ADS)

    Skobelev, S. F.; Hanon, M.; Klerkx, J.; Govorova, N. N.; Lukina, N. V.; Kazmin, V. G.

    2004-03-01

    The active fault database and Map of active faults in Africa, at a scale of 1:5,000,000, were compiled according to the ILP Project II-2 "World Map of Major Active Faults". The data were collected at the Royal Museum for Central Africa, Tervuren, Belgium, and at the Geological Institute, Moscow, where the final editing was carried out. Active faults of Africa form three groups. The first group is represented by thrusts and reverse faults associated with compressed folds in northwestern Africa. They belong to the western part of the Alpine-Central Asian collision belt. The faults disturb only the Earth's crust, and some of them do not penetrate deeper than the sedimentary cover. The second group comprises the faults of the Great African rift system. The faults form the known Western and Eastern branches, which are rifts with anomalous mantle below. The deep-seated mantle "hot" anomaly probably relates to the eastern volcanic branch. In the north, it joins the Aden-Red Sea rift zone. Active faults in Egypt, Libya and Tunisia may represent a link between the East African rift system and the Pantellerian rift zone in the Mediterranean. The third group includes rare faults in the west of Equatorial Africa. The data were scarce, so most of the faults of this group were identified solely by interpretation of satellite imagery and seismicity. Some longer faults of the group may continue the transverse faults of the Atlantic and thus can penetrate into the mantle. This seems evident for the Cameroon fault line.

  19. The effect of the low Earth orbit environment on space solar cells: Results of the Advanced Photovoltaic Experiment (S0014)

    NASA Technical Reports Server (NTRS)

    Brinker, David J.; Hickey, John R.; Scheiman, David A.

    1993-01-01

    The results of post-flight performance testing of the solar cells flown on the Advanced Photovoltaic Experiment are reported. Comparison of post-flight current-voltage characteristics with similar pre-flight data revealed little or no change in solar cell conversion efficiency, confirming the reliability and endurance of space photovoltaic cells. This finding is in agreement with the lack of significant physical changes in the solar cells despite nearly six years in the low Earth orbit environment.

  20. The kickstart of the age of the Earth race: revisiting the experiment of the Comte de Buffon at school

    NASA Astrophysics Data System (ADS)

    Pincelli, M. M.; Prat, M. R.; Lescano, G. M.; Formichella, M. del C.; Brustle, M.; Otranto, S.

    2018-01-01

    In this work, the first experiment ever done to determine the age of the Earth is revisited. The benefits of its application at primary and secondary school levels are presented and discussed. In particular, emphasis is placed on the advantage of facing students with the challenges that scientists have had to overcome during the past three centuries to reach our present knowledge in contrast to the mere transmission of the latest facts.

  1. Perspective View, Garlock Fault

    NASA Technical Reports Server (NTRS)

    2000-01-01

    California's Garlock Fault, marking the northwestern boundary of the Mojave Desert, lies at the foot of the mountains, running from the lower right to the top center of this image, which was created with data from NASA's Shuttle Radar Topography Mission (SRTM), flown in February 2000. The data will be used by geologists studying fault dynamics and landforms resulting from active tectonics. These mountains are the southern end of the Sierra Nevada, and the prominent canyon emerging at the lower right is Lone Tree canyon. In the distance, the San Gabriel Mountains cut across from the left side of the image. At their base lies the San Andreas Fault, which meets the Garlock Fault near the left edge at Tejon Pass. The dark linear feature running from lower right to upper left is State Highway 14, leading from the town of Mojave in the distance to Inyokern and the Owens Valley in the north. The lighter parallel lines are dirt roads related to power lines and the Los Angeles Aqueduct, which run along the base of the mountains.

    This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

    Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast

  2. The Global Non-Holonomity of the Rotating Space of the Earth Affects Hafele-Keating Experiment

    NASA Astrophysics Data System (ADS)

    Rabounski, Dmitri; Borissova, Larissa

    2013-04-01

    The deviation of time registered in the ``around-the-world clocks experiment'' (Hafele J. and Keating R., Science, 14 July 1972, 166-170) was originally explained as due to: 1) General Relativity (gravitation is weaker at the flying airplane's altitude); 2) Special Relativity (the airplane's speed and the Earth's rotation). However, as was shown in the 1940s by Schouten and then Zelmanov, if the observer cannot be moved to a rotation-free frame, the space rotation is a non-vanishing effect of General Relativity, and is due to the non-holonomity of space (the non-orthogonality of the three-space to the lines of time). This is the case in the Hafele-Keating experiment (the Earth's rotation cannot be stopped). We thus constructed the metric of the real space of the Earth, which bears the gravitational field and rotation. We then proved that this metric satisfies Einstein's equations. Finally, an exact formula is deduced for the Hafele-Keating experiment. Although the time correction is only on the order of a hundred nanoseconds and GPS navigation is widely used, the result remains useful in cases where no GPS connection is available, for instance during long-term submarine travel.

  3. Cross-Cultural Field Experiences in Earth and Social Sciences for Chilean and American Graduate Students

    NASA Astrophysics Data System (ADS)

    Duffin, J.; Russell, M.; Fuentes, B.; Riffo, A.; Link, T. E.; Caamaño, D.; King, R.; Barra, R.

    2017-12-01

    the plans of both countries. This project is an example of the value of supplemental field experiences in graduate education, as it stimulated conversations on earth science subjects that transcend disciplines, cultures, and scales, and provided students a practicum for applying interdisciplinary research techniques.

  4. GeoBrain for Facilitating Earth Science Education in Higher-Education Institutes--Experience and Lessons-learned

    NASA Astrophysics Data System (ADS)

    Deng, M.; di, L.

    2007-12-01

    Data integration and analysis are the foundation for scientific investigation in Earth science. In the past several decades, huge amounts of Earth science data have been collected, mainly through remote sensing. Those data have become a treasure for Earth science research. Training students to discover and use the huge volume of Earth science data in research has become one of the most important parts of making a student a qualified scientist. Developed by a NASA-funded project, the GeoBrain system has adopted and implemented the latest Web services and knowledge management technologies to provide innovative methods for publishing, accessing, visualizing, and analyzing geospatial data and for building and sharing geoscience knowledge. It provides a data-rich online learning and research environment enabled by the wealth of data and information available from the NASA Earth Observing System (EOS) Data and Information System (EOSDIS). Students, faculty members, and researchers from institutes worldwide can easily access, analyze, and model with the huge amount of NASA EOS data as if they possessed such vast resources locally at their desktops. Although still in development, the GeoBrain system has been operational since 2005. A number of education materials have been developed to facilitate the use of GeoBrain as a powerful education tool for Earth science education at both undergraduate and graduate levels. Thousands of online higher-education users worldwide have used GeoBrain services. A number of faculty members in multiple universities have been funded as GeoBrain education partners to explore the use of GeoBrain in classroom teaching and student research. By summarizing and analyzing the feedback from the online users and the education partners, this presentation presents the user experiences of using GeoBrain in Earth science teaching and research. The feedback on classroom use of GeoBrain has demonstrated that GeoBrain is very useful for

  5. Detection of faults and software reliability analysis

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1986-01-01

    Multiversion or N-version programming was proposed as a method of providing fault tolerance in software. The approach requires the separate, independent preparation of multiple versions of a piece of software for some application. Specific topics addressed are: failure probabilities in N-version systems, consistent comparison in N-version systems, descriptions of the faults found in the Knight and Leveson experiment, analytic models of comparison testing, characteristics of the input regions that trigger faults, fault tolerance through data diversity, and the relationship between failures caused by automatically seeded faults.
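
    As a reminder of the basic mechanism, N-version execution runs independently developed versions of the same function on the same input and accepts a majority result; the toy sketch below is illustrative only and bears no relation to the programs studied in the Knight and Leveson experiment.

```python
# Minimal sketch of N-version majority voting: run several independently
# developed "versions" of the same function and accept the majority result.
# The three versions here are trivial stand-ins for illustration only.
from collections import Counter

def version_a(x): return x * x
def version_b(x): return x ** 2
def version_c(x): return x * x if x != 3 else 10   # deliberately faulty version

VERSIONS = (version_a, version_b, version_c)

def n_version_execute(x):
    results = [v(x) for v in VERSIONS]
    value, votes = Counter(results).most_common(1)[0]
    if votes >= len(VERSIONS) // 2 + 1:
        return value                     # a majority of versions agrees
    raise RuntimeError(f"no majority among versions: {results}")

print(n_version_execute(3))   # 9: the faulty version is out-voted
```

    A key finding of the Knight and Leveson experiment was that independently developed versions can still fail on the same inputs, so the voter's protection is weaker than an assumption of independent failures would suggest.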

  6. 8 years of experience in international, interdisciplinary and structured doctoral training in Earth system modelling

    NASA Astrophysics Data System (ADS)

    Weitz, Antje; Stevens, Bjorn; Marotzke, Jochem

    2010-05-01

    The mission of the International Max Planck Research School on Earth System Modelling (IMPRS-ESM) is to provide a high quality, modern and structured graduate education to students pursuing a doctoral degree in Earth system modelling. In so doing, the IMPRS-ESM also strives to advance the emerging discipline (or cross-discipline) of Earth system modelling; to provide a framework for attracting the most talented and creative young women and men from around the world to pursue their doctoral education in Germany; to provide advanced as well as specialized academic training and scientific guidance to doctoral students; to encourage academic networking and publication of research results; to better integrate doctoral research at the Max Planck Institute for Meteorology (MPI-M) with education and research at the University of Hamburg and other cooperating institutions. Core elements are rigorous selection of doctoral students, effective academic supervision, advanced academic training opportunities and interdisciplinary communication as well as administrative support. IMPRS-ESM graduates have been recognized with a variety of awards. 85% of our alumni continue a career in research. In this presentation we review the challenges for an interdisciplinary PhD program in Earth system sciences and the types of routines we have implemented to surmount them as well as key elements that we believe contribute to the success of our doctoral program.

  7. Nonlinear softening of unconsolidated granular earth materials

    NASA Astrophysics Data System (ADS)

    Lieou, Charles K. C.; Daub, Eric G.; Guyer, Robert A.; Johnson, Paul A.

    2017-09-01

    Unconsolidated granular earth materials exhibit softening behavior due to external perturbations such as seismic waves: the wave speed and elastic modulus decrease upon increasing the strain amplitude above dynamic strains of about 10⁻⁶ under near-surface conditions. In this letter, we describe a theoretical model for such behavior. The model is based on the idea that shear transformation zones—clusters of grains that are loose and susceptible to contact changes, particle displacement, and rearrangement—are responsible for plastic deformation and softening of the material. We apply the theory to experiments on simulated fault gouge composed of glass beads and demonstrate that the theory predicts nonlinear resonance shifts, reduction of the P wave modulus, and attenuation, in agreement with experiments. The theory thus offers insights into the nature of nonlinear elastic properties of a granular medium and potentially into phenomena such as triggering on earthquake faults.

  8. Soil and crop management experiments in the Laboratory Biosphere: an analogue system for the Mars on Earth(R) facility.

    PubMed

    Silverstone, S; Nelson, M; Alling, A; Allen, J P

    2005-01-01

    During the years 2002 and 2003, three closed-system experiments were carried out in the "Laboratory Biosphere" facility located in Santa Fe, New Mexico. The program involved experimentation with "Hoyt" soybeans (experiment #1), USU Apogee wheat (experiment #2) and TU-82-155 sweet potato (experiment #3) using a 5.37 m2 soil planting bed which was 30 cm deep. The soil texture, 40% clay, 31% sand and 28% silt (a clay loam), was collected from an organic farm in New Mexico to avoid chemical residues. Soil management practices involved minimal tillage, mulching, returning crop residues to the soil after each experiment and increasing soil biota by introducing worms, soil bacteria and mycorrhizal fungi. The high pH of the original soil appeared to be a factor affecting the first two experiments. Hence, between experiments #2 and #3, the top 15 cm of the soil was amended using a mix of peat moss, green sand, humates and pumice to improve soil texture, lower soil pH and increase nutrient availability. This lowered the initial pH from 8.0 to 6.7 at the start of experiment #3. At the end of the experiment, the pH was 7.6. Soil nitrogen and phosphorus were adequate, but some chlorosis was evident in the first two experiments. Aphid infestation was the only crop pest problem during the three experiments and was handled by introducing Hippodamia convergens. Experimentation showed that there were environmental differences even in this 1200 cubic foot ecological system facility, such as temperature and humidity gradients caused by ventilation and airflow patterns, which resulted in variations in plant growth and yield. Additional humidifiers were added to counteract low humidity and helped optimize conditions for the sweet potato experiment. The experience and information gained from these experiments are being applied to the future design of the Mars On Earth(R) facility (Silverstone et al., Development and research program for a soil

  9. Soil and crop management experiments in the Laboratory Biosphere: An analogue system for the Mars on Earth ® facility

    NASA Astrophysics Data System (ADS)

    Silverstone, S.; Nelson, M.; Alling, A.; Allen, J. P.

    During the years 2002 and 2003, three closed-system experiments were carried out in the "Laboratory Biosphere" facility located in Santa Fe, New Mexico. The program involved experimentation with "Hoyt" soybeans (experiment #1), USU Apogee wheat (experiment #2) and TU-82-155 sweet potato (experiment #3) using a 5.37 m² soil planting bed which was 30 cm deep. The soil texture, 40% clay, 31% sand and 28% silt (a clay loam), was collected from an organic farm in New Mexico to avoid chemical residues. Soil management practices involved minimal tillage, mulching, returning crop residues to the soil after each experiment and increasing soil biota by introducing worms, soil bacteria and mycorrhizal fungi. The high pH of the original soil appeared to be a factor affecting the first two experiments. Hence, between experiments #2 and #3, the top 15 cm of the soil was amended using a mix of peat moss, green sand, humates and pumice to improve soil texture, lower soil pH and increase nutrient availability. This lowered the initial pH from 8.0 to 6.7 at the start of experiment #3. At the end of the experiment, the pH was 7.6. Soil nitrogen and phosphorus were adequate, but some chlorosis was evident in the first two experiments. Aphid infestation was the only crop pest problem during the three experiments and was handled by introducing Hippodamia convergens. Experimentation showed that there were environmental differences even in this 1200 cubic foot ecological system facility, such as temperature and humidity gradients caused by ventilation and airflow patterns, which resulted in variations in plant growth and yield. Additional humidifiers were added to counteract low humidity and helped optimize conditions for the sweet potato experiment. The experience and information gained from these experiments are being applied to the future design of the Mars On Earth® facility (Silverstone et al., Development and research program for a soil

  10. Creating the Public Connection: Interactive Experiences with Real-Time Earth and Space Science Data

    NASA Technical Reports Server (NTRS)

    Reiff, Patricia H.; Ledley, Tamara S.; Sumners, Carolyn; Wyatt, Ryan

    1995-01-01

    The Houston Museum of Natural Science is less than two miles from Rice University, a major hub on the Internet. This project links these two institutions so that NASA real-time data and imagery can flow via Rice to the Museum, where it reaches the public in the form of planetarium programs, computer-based interactive kiosks, and space and Earth science problem-solving simulations. Through this program at least 200,000 visitors annually (including every 4th and 7th grader in the Houston Independent School District) will have direct exposure to the Earth and space research being conducted by NASA and available over the Internet. Each information conduit established between Rice University and the Houston Museum of Natural Science will become a model for public information dissemination that can be replicated nationally in museums, planetariums, Challenger Centers, and schools.

  11. Geodesy and gravity experiment in earth orbit using a superconducting gravity gradiometer

    NASA Technical Reports Server (NTRS)

    Paik, H. J.

    1985-01-01

    A superconducting gravity gradiometer is under development with NASA support for space application. It is planned that a sensitive three-axis gravity gradiometer will be flown in a low-altitude (about 160 km) polar orbit in the 1990's for the purpose of obtaining a high-resolution gravity map of the earth. The large twice-per-orbit term in the harmonic expansion of gravity, arising from the oblateness of the earth, can be analyzed to obtain a precision test of the inverse square law at distances of 100-1000 km. In this paper, the design, operating principle, and performance of the superconducting gravity gradiometer are described. The concept of a gravity-gradiometer mission (GGM), which is in an initial stage of development, is discussed. In particular, requirements that such a mission imposes on the design of the cryogenic spacecraft are addressed.

  12. TRAPPED PROTON FLUXES AT LOW EARTH ORBITS MEASURED BY THE PAMELA EXPERIMENT

    SciT

    Adriani, O.; Bongi, M.; Barbarino, G. C.

    2015-01-20

    We report an accurate measurement of the geomagnetically trapped proton fluxes for kinetic energy above ∼70 MeV performed by the PAMELA mission at low Earth orbits (350-610 km). Data were analyzed in the frame of the adiabatic theory of charged particle motion in the geomagnetic field. Flux properties were investigated in detail, providing a full characterization of the particle radiation in the South Atlantic Anomaly region, including locations, energy spectra, and pitch angle distributions. PAMELA results significantly improve the description of the Earth's radiation environment at low altitudes, placing important constraints on the trapping and interaction processes, and can be used to validate current trapped particle radiation models.

  13. Total solar irradiance values determined using Earth Radiation Budget Experiment (ERBE) radiometers

    NASA Technical Reports Server (NTRS)

    Lee, Robert B., III; Gibson, Michael A.; Natarajan, Sudha

    1988-01-01

    During the October 1984 through January 1988 period, the ERBE solar monitors on the NASA Earth Radiation Budget Satellite (ERBS) and on the National Oceanic and Atmospheric Administration NOAA 9 and NOAA 10 spacecraft were used to obtain mean total solar irradiance values of 1365, 1365, and 1363 W/sq m, respectively. Secular variations in the solar irradiance have been observed, and they appear to be correlated with solar activity.

  14. Subaru FATS (fault tracking system)

    NASA Astrophysics Data System (ADS)

    Winegar, Tom W.; Noumaru, Junichi

    2000-07-01

    The Subaru Telescope requires a fault tracking system to record the problems and questions that staff experience during their work, and the solutions provided by technical experts to these problems and questions. The system records each fault and routes it to a pre-selected 'solution-provider' for each type of fault. The solution provider analyzes the fault and writes a solution that is routed back to the fault reporter and recorded in a 'knowledge-base' for future reference. The specifications of our fault tracking system were unique. (1) Dual language capacity -- Our staff speak both English and Japanese. Our contractors speak Japanese. (2) Heterogeneous computers -- Our computer workstations are a mixture of SPARCstations, Macintosh and Windows computers. (3) Integration with prime contractors -- Mitsubishi and Fujitsu are primary contractors in the construction of the telescope. In many cases, our 'experts' are our contractors. (4) Operator scheduling -- Our operators spend 50% of their work-month operating the telescope, the other 50% is spent working day shift at the base facility in Hilo, or day shift at the summit. We plan for 8 operators, with a frequent rotation. We need to keep all operators informed on the current status of all faults, no matter the operator's location.
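
    The routing behaviour described above (each fault report sent to a pre-selected solution provider for its category, with the answer returned to the reporter and kept in a knowledge base) might be sketched roughly as follows; the categories, names, and fields are hypothetical, not Subaru's actual schema.

```python
# Rough sketch of fault routing as described above.  Categories, provider
# names, and record fields are invented for illustration only.
from dataclasses import dataclass

SOLUTION_PROVIDERS = {            # fault category -> responsible expert (hypothetical)
    "telescope_control": "contractor_mitsubishi",
    "instrument":        "staff_instrument_team",
    "network":           "staff_computer_group",
}

@dataclass
class FaultReport:
    reporter: str
    category: str
    description: str
    solution: str | None = None

knowledge_base: list[FaultReport] = []

def submit(report: FaultReport) -> str:
    """Route the report to the pre-selected solution provider for its category."""
    return SOLUTION_PROVIDERS.get(report.category, "staff_duty_manager")

def resolve(report: FaultReport, solution: str) -> None:
    """Record the expert's answer and keep it for future reference."""
    report.solution = solution
    knowledge_base.append(report)

r = FaultReport("operator_1", "network", "summit-base link dropped during run")
print("routed to:", submit(r))
resolve(r, "switch rebooted; firmware updated")
print("knowledge base entries:", len(knowledge_base))
```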

  15. Software-implemented fault insertion: An FTMP example

    NASA Technical Reports Server (NTRS)

    Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.

    1987-01-01

    This report presents a model for fault insertion through software; describes its implementation on a fault-tolerant computer, FTMP; presents a summary of fault detection, identification, and reconfiguration data collected with software-implemented fault insertion; and compares the results to hardware fault insertion data. Experimental results show detection time to be a function of time of insertion and system workload. For fault detection time, there is no correlation between software-inserted faults and hardware-inserted faults; this is because hardware-inserted faults must manifest as errors before detection, whereas software-inserted faults immediately exercise the error detection mechanisms. In summary, software-implemented fault insertion can be used as an evaluation technique for the fault-handling capabilities of a system in fault detection, identification and recovery. Although software-inserted faults do not map directly to hardware-inserted faults, experiments show that software-implemented fault insertion is capable of emulating hardware fault insertion, with greater ease and automation.
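
    Conceptually, software-implemented fault insertion corrupts system state directly and lets the existing detection machinery respond, which is why such faults exercise the detectors immediately. The toy sketch below flips a bit in a simulated memory word and checks stored parity; FTMP's injector operated on real processor and memory state, so this is only a caricature of the idea.

```python
# Toy illustration of software-implemented fault insertion: flip a bit in a
# simulated memory word and let a parity-style scrubber detect it.  This is a
# conceptual sketch, not the FTMP injection mechanism.
import random

memory = [0b1011_0010] * 8                         # simulated memory words
parity = [bin(w).count("1") % 2 for w in memory]   # stored parity bits

def insert_fault(addr: int, bit: int) -> None:
    """Software fault insertion: corrupt state immediately (no fault latency)."""
    memory[addr] ^= 1 << bit

def scrub() -> list[int]:
    """Detection pass: report addresses whose parity no longer matches."""
    return [a for a, w in enumerate(memory) if bin(w).count("1") % 2 != parity[a]]

insert_fault(addr=3, bit=random.randrange(8))
print("detected faulty words at addresses:", scrub())   # -> [3]
```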

  16. IEDA Integrated Services: Improving the User Experience for Interdisciplinary Earth Science Research

    NASA Astrophysics Data System (ADS)

    Carter-Orlando, M.; Ferrini, V. L.; Lehnert, K.; Carbotte, S. M.; Richard, S. M.; Morton, J. J.; Shane, N.; Ash, J.; Song, L.

    2017-12-01

    The Interdisciplinary Earth Data Alliance (IEDA) is an NSF-funded data facility that provides data tools and services to support the Ocean, Earth, and Polar Sciences. IEDA systems, developed and maintained primarily by the IEDA partners EarthChem and the Marine Geoscience Data System (MGDS), serve as primary community data collections for global geochemistry and marine geoscience research and support the preservation, discovery, retrieval, and analysis of a wide range of observational field and analytical data types. Individual IEDA systems originated independently and differ from one another in purpose and scope. Some IEDA systems are data repositories (EarthChem Library, Marine Geo-Digital Library), while others are actively maintained data syntheses (GMRT, PetDB, EarthChem Portal, Geochron). Still others are data visualization and analysis tools (GeoMapApp). Although the diversity of IEDA's data types, tools, and services is a major strength and of high value to investigators, it can be a source of confusion. And while much of the data managed in IEDA systems is appropriate for interdisciplinary research, investigators may be unfamiliar with the user interfaces and services of each system, especially if it is not in their primary discipline. This presentation will highlight new ways in which IEDA helps researchers to more efficiently navigate data submission and data access. It will also discuss how IEDA promotes discovery and access within and across its systems, to serve interdisciplinary science while also remaining aware of and responsive to the more specific needs of its disciplinary user communities. The IEDA Data Submission Hub (DaSH), which is currently under development, aspires to streamline the submission process for both the science data contributor and the repository data curator. Instead of users deciding a priori which system they should contribute their data to, the DaSH helps route them to the appropriate repository based primarily on data

  17. A Geograns update. New experiences to teach earth sciences to students older than 55

    NASA Astrophysics Data System (ADS)

    Cerdà, A.; Pinazo, S.

    2009-04-01

    How to teach earth science to students who enter the university after the age of 55 is a challenge, due to the varied backgrounds of the students. They range from those with only basic education (sometimes they finished school at the age of 9) to well-educated students such as university professors, physicians or engineers. Students older than 55 are enrolled in the university programme NauGran at the University of Valencia. They follow diverse topics, from health science to the Arts. Since 2006 the Department of Geography and the NauGran project have developed the Club for Geographers and Walkers called Geograns. The objective is to teach Earth Science in the field as a strategy to improve the knowledge of the students through direct contact with the territory. This initiative has drawn a strong response from the students, with 70 students registered. The successful strategy we have developed since then is to base our teaching on field work. Every lecture is related to field visits. A pre-excursion lecture introduces the key questions of the study site (hydrology, geology, botany, geomorphology…). During the field work we review all the topics and the students are encouraged to ask about and discuss any of the topics studied. Finally, a post-excursion lecture is given to review the acquired knowledge. During the 2007-2008 academic year the excursions focussed on: (i) energy sources: problems and solutions, with visits to nuclear, wind and hydraulic power stations; (ii) human disturbances and humankind as landscaper, with visits to wetlands, river gorges and Iberian settlements; and (iii) human activities and economic resources, with visits to vineyards, wineries and orange fields devoted to organic farming. This has been a positive strategy for teaching Earth Science to a wide and heterogeneous group of students, as they improve their knowledge with a direct contact with the landscape, other colleagues and teachers in the

  18. Oceanic transform faults: how and why do they form? (Invited)

    NASA Astrophysics Data System (ADS)

    Gerya, T.

    2013-12-01

    Oceanic transform faults at mid-ocean ridges are often considered to be the direct product of the plate breakup process (cf. review by Gerya, 2012). In contrast, recent 3D thermomechanical numerical models suggest that transform faults are plate growth structures, which develop gradually on a timescale of a few million years (Gerya, 2010, 2013a,b). Four subsequent stages are predicted for the transition from rifting to spreading (Gerya, 2013b): (1) crustal rifting, (2) nucleation and propagation of multiple spreading centers, (3) initiation and rotation of proto-transform faults and (4) mature ridge-transform spreading. The geometry of the mature ridge-transform system is governed by geometrical requirements for simultaneous accretion and displacement of new plate material within two offset spreading centers connected by a sustained, rheologically weak transform fault. According to these requirements, the characteristic spreading-parallel orientation of oceanic transform faults is the only thermomechanically consistent steady-state orientation. Comparison of modeling results with the Woodlark Basin suggests that the development of this incipient spreading region (Taylor et al., 2009) closely matches numerical predictions (Gerya, 2013b). The model reproduces well the characteristic 'rounded' contours of the spreading centers as well as the presence of a remnant of the broken continental crustal bridge observed in the Woodlark basin. As in the model, the Moresby (proto)transform terminates in the oceanic rather than in the continental crust. Transform margins and the truncated tip of one spreading center present in the model are documented in nature. In addition, numerical experiments suggest that transform faults can develop gradually at mature linear mid-ocean ridges as the result of dynamical instability (Gerya, 2010). Boundary instability from asymmetric plate growth can spontaneously start in alternate directions along successive ridge sections; the resultant curved ridges become

  19. Advanced Diagnostic System on Earth Observing One

    NASA Technical Reports Server (NTRS)

    Hayden, Sandra C.; Sweet, Adam J.; Christa, Scott E.; Tran, Daniel; Shulman, Seth

    2004-01-01

    In this infusion experiment, the Livingstone 2 (L2) model-based diagnosis engine, developed by the Computational Sciences Division at NASA Ames Research Center, has been uploaded to the Earth Observing One (EO-1) satellite. L2 is integrated with the Autonomous Sciencecraft Experiment (ASE), which provides an on-board planning capability and a software bridge to the spacecraft's 1773 data bus. Using a model of the spacecraft subsystems, L2 predicts nominal state transitions initiated by control commands, monitors the spacecraft sensors, and, in the case of failure, isolates the fault based on the discrepant observations. Fault detection and isolation is done by determining a set of component modes, including the most likely failures, which satisfy the current observations. All mode transitions and diagnoses are telemetered to the ground for analysis. The initial L2 model is scoped to EO-1's imaging instruments and solid state recorder. Diagnostic scenarios for EO-1's nominal imaging timeline are demonstrated by injecting simulated faults on board the spacecraft. The solid state recorder stores the science images and also hosts the experiment software. The main objective of the experiment is to mature the L2 technology to Technology Readiness Level (TRL) 7. Experiment results are presented, as well as a discussion of the challenging technical issues encountered. Future extensions may explore coordination with the planner, and model-based ground operations.
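
    Model-based diagnosis of the kind L2 performs can be caricatured as a search for component-mode assignments consistent with the observations; the two-component model below is invented for illustration and is not the EO-1 Livingstone 2 model, which also weighs commanded transitions and failure likelihoods when ranking candidates.

```python
# Highly simplified caricature of mode estimation in model-based diagnosis:
# find component-mode assignments whose predicted observations match what was
# actually observed.  The "camera + recorder" model is hypothetical.
from itertools import product

MODES = {
    "camera":   ["on", "off", "failed"],
    "recorder": ["recording", "idle", "failed"],
}

def predict(camera: str, recorder: str) -> dict:
    """Toy model: image data is present only if both components are working."""
    return {"image_data_present": camera == "on" and recorder == "recording"}

def diagnose(observation: dict) -> list:
    """Return every mode assignment whose prediction matches the observation."""
    return [(cam, rec)
            for cam, rec in product(MODES["camera"], MODES["recorder"])
            if predict(cam, rec) == observation]

# Commanded: camera on, recorder recording -- but no image data is observed.
candidates = diagnose({"image_data_present": False})
print(candidates)   # includes ('failed', 'recording'), ('on', 'failed'), ...
```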

  20. Fault geometries in basement-induced wrench faulting under different initial stress states

    NASA Astrophysics Data System (ADS)

    Naylor, M. A.; Mandl, G.; Sijpesteijn, C. H. K.

    Scaled sandbox experiments were used to generate models for relative ages, dip, strike and three-dimensional shape of faults in basement-controlled wrench faulting. The basic fault sequence runs from early en échelon Riedel shears and splay faults through 'lower-angle' shears to P shears. The Riedel shears are concave upwards and define a tulip structure in cross-section. In three dimensions, each Riedel shear has a helicoidal form. The sequence of faults and three-dimensional geometry are rationalized in terms of the prevailing stress field and Coulomb-Mohr theory of shear failure. The stress state in the sedimentary overburden before wrenching begins has a substantial influence on the fault geometries and on the final complexity of the fault zone. With the maximum compressive stress (σ1) initially parallel to the basement fault (transtension), Riedel shears are only slightly en échelon, sub-parallel to the basement fault, steeply dipping with a reduced helicoidal aspect. Conversely, with σ1 initially perpendicular to the basement fault (transpression), Riedel shears are strongly oblique to the basement fault strike, have lower dips and an exaggerated helicoidal form; the final fault zone is both wide and complex. We find good agreement between the models and both mechanical theory and natural examples of wrench faulting.

  1. The PROCESS experiment: an astrochemistry laboratory for solid and gaseous organic samples in low-earth orbit.

    PubMed

    Cottin, Hervé; Guan, Yuan Yong; Noblet, Audrey; Poch, Olivier; Saiagh, Kafila; Cloix, Mégane; Macari, Frédérique; Jérome, Murielle; Coll, Patrice; Raulin, François; Stalport, Fabien; Szopa, Cyril; Bertrand, Marylène; Chabin, Annie; Westall, Frances; Chaput, Didier; Demets, René; Brack, André

    2012-05-01

    The PROCESS (PRebiotic Organic ChEmistry on the Space Station) experiment was part of the EXPOSE-E payload outside the European Columbus module of the International Space Station from February 2008 to August 2009. During this interval, organic samples were exposed to space conditions to simulate their evolution in various astrophysical environments. The samples used represent organic species related to the evolution of organic matter on the small bodies of the Solar System (carbonaceous asteroids and comets), the photolysis of methane in the atmosphere of Titan, and the search for organic matter at the surface of Mars. This paper describes the hardware developed for this experiment as well as the results for the glycine solid-phase samples and the gas-phase samples that were used with regard to the atmosphere of Titan. Lessons learned from this experiment are also presented for future low-Earth orbit astrochemistry investigations.

  2. Frictional heterogeneities on carbonate-bearing normal faults: Insights from the Monte Maggio Fault, Italy

    NASA Astrophysics Data System (ADS)

    Carpenter, B. M.; Scuderi, M. M.; Collettini, C.; Marone, C.

    2014-12-01

    Observations of heterogeneous and complex fault slip are often attributed to the complexity of fault structure and/or spatial heterogeneity of fault frictional behavior. Such complex slip patterns have been observed for earthquakes on normal faults throughout central Italy, where many of the Mw 6 to 7 earthquakes in the Apennines nucleate at depths where the lithology is dominated by carbonate rocks. To explore the relationship between fault structure and heterogeneous frictional properties, we studied the exhumed Monte Maggio Fault, located in the northern Apennines. We collected intact specimens of the fault zone, including the principal slip surface and hanging wall cataclasite, and performed experiments at a normal stress of 10 MPa under saturated conditions. Experiments designed to reactivate slip between the cemented principal slip surface and cataclasite show a 3 MPa stress drop as the fault surface fails, then velocity-neutral frictional behavior and significant frictional healing. Overall, our results suggest that (1) earthquakes may readily nucleate in areas of the fault where the slip surface separates massive limestone and are likely to propagate in areas where fault gouge is in contact with the slip surface; (2) postseismic slip is more likely to occur in areas of the fault where gouge is present; and (3) high rates of frictional healing and low creep relaxation observed between solid fault surfaces could lead to significant aftershocks in areas of low stress drop.

  3. Landing in the future: Biological experiments on Earth and in space orbit

    NASA Astrophysics Data System (ADS)

    Pokrovskiy, A.

    1980-09-01

    The development of an Earth biosatellite to duplicate the parameters of pressure, temperature, humidity and others in a space environment onboard Cosmos-1129 is discussed. Effects of a space environment on fruit flies, dogs, laboratory rats in procreation, behavior, stress, biorhythm, body composition, gravitation preference, and cell cultures are examined. The space environment for agricultural products is also studied. The effects of heavy nuclei of galactic space radiation on biological objects inside and outside the satellite is studied, and methods of electrostatic protection are developed.

  4. Landing in the future: Biological experiments on Earth and in space orbit

    NASA Technical Reports Server (NTRS)

    Pokrovskiy, A.

    1980-01-01

    The development of an Earth biosatellite to duplicate the parameters of pressure, temperature, humidity and others in a space environment onboard Cosmos-1129 is discussed. Effects of a space environment on fruit flies, dogs, laboratory rats in procreation, behavior, stress, biorhythm, body composition, gravitation preference, and cell cultures are examined. The space environment for agricultural products is also studied. The effects of heavy nuclei of galactic space radiation on biological objects inside and outside the satellite is studied, and methods of electrostatic protection are developed.

  5. Photographic coronagraph, Skylab particulate experiment T025. [earth atmospheric pollution and Kohoutek Comet monitoring

    NASA Technical Reports Server (NTRS)

    Giovane, F.; Schuerman, D. W.; Greenberg, J. M.

    1977-01-01

    A photographic coronagraph, built to monitor Skylab's extravehicular contamination, is described. This versatile instrument was used to observe the earth's vertical aerosol distribution and Comet Kohoutek (1973f) near perihelion. Although originally designed for deployment from the solar airlock, the instrument was modified for EVA operation when the airlock was rendered unusable. The results of the observations made during four EVAs were almost completely ruined by the failure of a Skylab operational camera used with the coronagraph. Nevertheless, an aerosol layer at 48 km was discovered in the southern hemisphere from the few useful photographs.

  6. Differential Fault Analysis on CLEFIA

    NASA Astrophysics Data System (ADS)

    Chen, Hua; Wu, Wenling; Feng, Dengguo

    CLEFIA is a new 128-bit block cipher proposed recently by SONY Corporation. The fundamental structure of CLEFIA is a generalized Feistel structure consisting of 4 data lines. In this paper, the strength of CLEFIA against differential fault attack is explored. Our attack adopts the byte-oriented model of random faults. By randomly inducing a one-byte fault in one round, four byte faults can be obtained simultaneously in the next round, which efficiently reduces the total number of fault inductions needed in the attack. After attacking the encryptions of the last several rounds, the original secret key can be recovered through analysis of the key schedule. The data complexity analysis and experiments show that only about 18 faulty ciphertexts are needed to recover the entire 128-bit secret key, and about 54 faulty ciphertexts for 192/256-bit keys.
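
    The byte-oriented random-fault model is easy to state in code: one randomly chosen byte of an intermediate state is replaced by a random value, and the attacker collects pairs of correct and faulty ciphertexts. The sketch below shows only the injection step on an invented stand-in state; the CLEFIA key-schedule analysis itself is not reproduced.

```python
# Illustration of the byte-oriented random fault model: one randomly chosen
# byte of a 16-byte intermediate state is XORed with a nonzero random value.
# Only the injection step is shown; the key-recovery analysis that exploits
# correct/faulty ciphertext pairs is not reproduced here.
import os, random

def inject_byte_fault(state: bytes) -> tuple[bytes, int]:
    """Return a faulty copy of the state and the faulted byte position."""
    pos = random.randrange(len(state))
    faulty = bytearray(state)
    faulty[pos] ^= random.randrange(1, 256)     # ensure the byte actually changes
    return bytes(faulty), pos

state = os.urandom(16)                          # stand-in intermediate state
faulty_state, pos = inject_byte_fault(state)
print(f"fault injected at byte {pos}:")
print(" correct:", state.hex())
print(" faulty :", faulty_state.hex())
```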

  7. Faults Get Colder Through Transient Granular Vortices

    NASA Astrophysics Data System (ADS)

    Einav, I.; Rognon, P.; Miller, T.; Sulem, J.

    2018-03-01

    Fault temperatures govern fault weakening and control the dynamics of earthquakes during slip. Despite predictions of significant temperature rise within fault gouges during earthquake events, observations of frictional melting zones along exhumed faults are relatively rare. Could there be a heat transfer mechanism, previously not considered, that results in ubiquitously colder faults during earthquakes? We demonstrate that the remarkable, previously neglected mechanism of heat transfer through transient granular vortices may be at the core of this. We present and analyze results from perpetual simple shear experiments on a system of granular disks with which we are able to quantify the sizes and lifetimes of granular vortices within fault gouges during earthquakes. We then develop a formula that captures the contribution of these vortices to heat transfer. Using this formula, we show that crustal faults such as those in the San Andreas system may experience a maximum temperature rise 5 to 10 times lower than previously thought.

  8. Watching Faults Grow in Sand

    NASA Astrophysics Data System (ADS)

    Cooke, M. L.

    2015-12-01

    Accretionary sandbox experiments provide a rich environment for investigating the processes of fault development. These experiments engage students because 1) they enable direct observation of fault growth, which is impossible in the crust (type 1 physical model), 2) they are not only representational but can also be manipulated (type 2 physical model), 3) they can be used to test hypotheses (type 3 physical model), and 4) they resemble experiments performed by structural geology researchers around the world. The structural geology courses at UMass Amherst utilize a series of accretionary sandbox experiments in which students first watch a video of an experiment and then perform a group experiment. The experiments motivate discussions of what conditions the students would change and what outcomes they would expect from these changes, that is, hypothesis development. These discussions inevitably lead to calculations of the scaling relationships between model and crustal fault growth and provide insight into the crustal processes represented within the dry sand. Sketching of the experiments has proven to be a very effective assessment method, as the students reveal which features they are analyzing. Another approach used at UMass is a forensic experiment: the sandbox is prepared with spatially varying basal friction before the class meeting, and students must figure out what the basal conditions are through the experiment. This experiment leads to discussions of equilibrium and force balance within the accretionary wedge. Displacement fields can be captured throughout the experiment using inexpensive digital image correlation techniques to foster quantitative analysis of the experiments.
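
    Displacement fields of the kind mentioned above can be extracted from successive sandbox photographs with very little code. The following is a minimal patch-wise digital image correlation pass using OpenCV's phase correlation; the file names, patch size, and grid spacing are illustrative assumptions rather than details from the course.

        # Minimal patch-wise digital image correlation between two sandbox photographs.
        # Assumes two grayscale frames of identical size; file names are placeholders.
        import cv2
        import numpy as np

        f0 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
        f1 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

        patch, step = 64, 32                  # interrogation window and grid spacing (pixels)
        rows, cols = f0.shape
        for y in range(0, rows - patch, step):
            for x in range(0, cols - patch, step):
                w0 = np.ascontiguousarray(f0[y:y + patch, x:x + patch])
                w1 = np.ascontiguousarray(f1[y:y + patch, x:x + patch])
                (dx, dy), response = cv2.phaseCorrelate(w0, w1)
                if response > 0.1:            # keep only well-correlated patches
                    print(f"patch at ({x},{y}): displacement ({dx:+.2f}, {dy:+.2f}) px")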

  9. Ultra-thin clay layers facilitate seismic slip in carbonate faults.

    PubMed

    Smeraglia, Luca; Billi, Andrea; Carminati, Eugenio; Cavallo, Andrea; Di Toro, Giulio; Spagnuolo, Elena; Zorzi, Federico

    2017-04-06

    Many earthquakes propagate up to the Earth's surface producing surface ruptures. Seismic slip propagation is facilitated by along-fault low dynamic frictional resistance, which is controlled by a number of physico-chemical lubrication mechanisms. In particular, rotary shear experiments conducted at seismic slip rates (1 m s⁻¹) show that phyllosilicates can facilitate co-seismic slip along faults during earthquakes. This evidence is crucial for hazard assessment along oceanic subduction zones, where pelagic clays participate in seismic slip propagation. Conversely, the reason why, in continental domains, co-seismic slip along faults can propagate up to the Earth's surface is still poorly understood. We document the occurrence of micrometer-thick phyllosilicate-bearing layers along a carbonate-hosted seismogenic extensional fault in the central Apennines, Italy. Using friction experiments, we demonstrate that, at seismic slip rates (1 m s⁻¹), similar calcite gouges with pre-existing phyllosilicate-bearing (clay content ≤3 wt.%) micro-layers weaken faster than calcite gouges or mixed calcite-phyllosilicate gouges. We thus propose that, within calcite gouge, ultra-low clay content (≤3 wt.%) localized along micrometer-thick layers can facilitate seismic slip propagation during earthquakes in continental domains, possibly enhancing surface displacement.

  10. ISO 19115 Experiences in NASA's Earth Observing System (EOS) ClearingHOuse (ECHO)

    NASA Astrophysics Data System (ADS)

    Cechini, M. F.; Mitchell, A.

    2011-12-01

    Metadata is an important entity in the process of cataloging, discovering, and describing earth science data. As science research and the gathered data increase in complexity, so do the complexity and importance of descriptive metadata. To meet these growing needs, the required metadata models utilize richer and more mature metadata attributes. Categorizing, standardizing, and promulgating these metadata models to a politically, geographically, and scientifically diverse community is a difficult process. An integral component of metadata management within NASA's Earth Observing System Data and Information System (EOSDIS) is the Earth Observing System (EOS) ClearingHOuse (ECHO). ECHO is the core metadata repository for the EOSDIS data centers, providing a centralized mechanism for metadata and data discovery and retrieval. ECHO has undertaken an internal restructuring to meet the changing needs of scientists, the consistent advancement in technology, and the advent of new standards such as ISO 19115. These improvements were based on the following tenets for data discovery and retrieval:
    + There exists a set of 'core' metadata fields recommended for data discovery.
    + There exists a set of users who will require the entire metadata record for advanced analysis.
    + There exists a set of users who will require a 'core' set of metadata fields for discovery only.
    + There will never be a cessation of new formats or a total retirement of all old formats.
    + Users should be presented metadata in a consistent format of their choosing.
    In order to address the previously listed items, ECHO's new metadata processing paradigm utilizes the following approach:
    + Identify a cross-format set of 'core' metadata fields necessary for discovery.
    + Implement format-specific indexers to extract the 'core' metadata fields into an optimized query capability.
    + Archive the original metadata in its entirety for presentation to users requiring the full record.
    + Provide on-demand translation of
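
    The 'format-specific indexer' idea in the abstract can be pictured as a small dispatch table that maps each native metadata format onto the same core discovery fields while the full record is archived unchanged. The sketch below is our own illustration; the formats, field names, and functions are hypothetical and are not ECHO's actual interfaces.

        # Illustrative sketch of format-specific indexing into a shared set of 'core' fields.
        # Formats, field names, and record layouts are hypothetical, not ECHO's real schema.
        CORE_FIELDS = ("title", "time_start", "time_end", "bbox")

        def index_echo10(record):             # hypothetical extractor for a native format
            c = record["Collection"]
            return {"title": c["LongName"],
                    "time_start": c["Temporal"]["Begin"],
                    "time_end": c["Temporal"]["End"],
                    "bbox": c["Spatial"]}

        def index_iso19115(record):           # hypothetical ISO 19115 extractor
            ident = record["identificationInfo"]
            return {"title": ident["citation"]["title"],
                    "time_start": ident["extent"]["temporal"]["begin"],
                    "time_end": ident["extent"]["temporal"]["end"],
                    "bbox": ident["extent"]["geographic"]}

        INDEXERS = {"ECHO10": index_echo10, "ISO19115": index_iso19115}

        def ingest(native_format, record, archive, index):
            archive.append(record)                    # keep the full record for advanced users
            core = INDEXERS[native_format](record)    # extract only the core discovery fields
            assert set(core) == set(CORE_FIELDS)
            index.append(core)                        # goes to the optimized query store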

  11. From Earth to Heaven: Using "Newton's Cannon" Thought Experiment for Teaching Satellite Physics

    ERIC Educational Resources Information Center

    Velentzas, Athanasios; Halkia, Krystallia

    2013-01-01

    Thought Experiments are powerful tools in both scientific thinking and in the teaching of science. In this study, the historical Thought Experiment (TE) "Newton's Cannon" was used as a tool to teach concepts relating to the motion of satellites to students at upper secondary level. The research instruments were: (a) a…
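
    The physics behind the thought experiment reduces to equating gravitational and centripetal acceleration; a quick computation of the circular orbital speed just above Earth's surface, using standard values for Earth's gravitational parameter and radius, is sketched below.

        # "Newton's cannon": gravity supplies the centripetal force, so GM/r**2 = v**2/r,
        # hence v = sqrt(GM/r) for a circular orbit just above the surface.
        import math

        GM_EARTH = 3.986e14      # m^3/s^2, Earth's standard gravitational parameter
        R_EARTH = 6.371e6        # m, mean Earth radius

        v = math.sqrt(GM_EARTH / R_EARTH)
        T = 2 * math.pi * R_EARTH / v
        print(f"orbital speed ~ {v / 1e3:.1f} km/s, period ~ {T / 60:.0f} min")  # ~7.9 km/s, ~84 min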

  12. Experience in operating earth dams of the NIVA cascade of the Kola Regional Power administration constructed in 1930-1960

    SciT

    Nosova, O.N.; Margolina, O.G.; Sergeeva, N.S.

    1995-08-01

    This article discusses Russian experience in monitoring earth-filled dams of the Niva region, low- and medium-head facilities that have been in operation from 30 to 60 years. The long-term operation of earth structures in this area, and of embankments constructed by dumping soil into water, shows that more stringent requirements must be imposed for determining the steepness of such slopes to increase their stability, as is done when structures are constructed dry. To organize successful monitoring of seepage processes in the investigated structures, which have substantial soil anisotropy, special recommendations on the placement of piezometers under these specific conditions should be worked out, as should recommendations for cases where the groundwater regime of the surrounding territory has a noticeable effect on the seepage regime of the hydro development. Since the calculations made in this work, which indicated instability of many slopes, are not always confirmed in practice, it is advisable to correct the calculation method to account for how the seepage flow forms in the downstream shoulder of dams with pronounced soil anisotropy.

  13. Preliminary Results of NASA's First Autonomous Formation Flying Experiment: Earth Observing-1 (EO-1)

    NASA Technical Reports Server (NTRS)

    Folta, David; Hawkins, Albin

    2001-01-01

    NASA's first autonomous formation flying mission is completing a primary goal of demonstrating an advanced technology called enhanced formation flying. To enable this technology, the Guidance, Navigation, and Control center at the Goddard Space Flight Center has implemented an autonomous universal three-axis formation flying algorithm in executive flight code onboard the New Millennium Program's (NMP) Earth Observing-1 (EO-1) spacecraft. This paper describes the mathematical background of the autonomous formation flying algorithm and the onboard design and presents the preliminary validation results of this unique system. Results from functionality assessment and autonomous maneuver control are presented as comparisons between the onboard EO-1 operational autonomous control system called AutoCon(tm), its ground-based predecessor, and a stand-alone algorithm.

  14. Results of NASA's First Autonomous Formation Flying Experiment: Earth Observing-1 (EO-1)

    NASA Technical Reports Server (NTRS)

    Folta, David C.; Hawkins, Albin; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    NASA's first autonomous formation flying mission completed its primary goal of demonstrating an advanced technology called enhanced formation flying. To enable this technology, the Guidance, Navigation, and Control center at the Goddard Space Flight Center (GSFC) implemented a universal 3-axis formation flying algorithm in an autonomous executive flight code onboard the New Millennium Program's (NMP) Earth Observing-1 (EO-1) spacecraft. This paper describes the mathematical background of the autonomous formation flying algorithm and the onboard flight design and presents the validation results of this unique system. Results from functionality assessment through fully autonomous maneuver control are presented as comparisons between the onboard EO-1 operational autonomous control system called AutoCon(tm), its ground-based predecessor, and a standalone algorithm.

  15. Convection experiments in a centrifuge and the generation of plumes in a very viscous fluid. [for earth mantle models

    NASA Technical Reports Server (NTRS)

    Nataf, H.-C.; Hager, B. H.; Scott, R. F.

    1984-01-01

    In this paper, experiments are described for which inertial effects are negligible. A small-aspect-ratio tank filled with a very viscous fluid (Pr = 10^6) is used to observe the behavior of convection for Rayleigh numbers up to 6.3 x 10^5. These high values are reached by conducting the experiment in a centrifuge, which provides a 130-fold increase in apparent gravity. Rotational effects are small but cannot be totally dismissed. In this geometry, thermal boundary layer instabilities are indeed observed and are found to be very similar to their lower-Prandtl-number counterparts. It is tentatively concluded that, once given a certain degree of 'vulnerability', convection can develop 'plume'-like instabilities even when the Prandtl number is infinite. The concept is applied to the earth's mantle, and it is speculated that 'plumes' could well be the dominant mode of small-scale convection under the lithospheric plates.
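
    For context, the Rayleigh and Prandtl numbers quoted above are the usual dimensionless groups for thermal convection; in the standard notation (ours, not necessarily the paper's),

        \mathrm{Ra} = \frac{\alpha\, g\, \Delta T\, d^{3}}{\kappa \nu}, \qquad \mathrm{Pr} = \frac{\nu}{\kappa},

    where α is the thermal expansivity, g the (apparent) gravity, ΔT the temperature difference across the layer, d the layer depth, κ the thermal diffusivity, and ν the kinematic viscosity. Because Ra is linear in g, spinning the tank in a centrifuge at 130 times Earth gravity raises Ra by the same factor, which is how the high values reported above were reached in a small, very viscous layer.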

  16. The PROCESS experiment: amino and carboxylic acids under Mars-like surface UV radiation conditions in low-earth orbit.

    PubMed

    Noblet, Audrey; Stalport, Fabien; Guan, Yuan Yong; Poch, Olivier; Coll, Patrice; Szopa, Cyril; Cloix, Mégane; Macari, Frédérique; Raulin, Francois; Chaput, Didier; Cottin, Hervé

    2012-05-01

    The search for organic molecules at the surface of Mars is a top priority of the next Mars exploration space missions: Mars Science Laboratory (NASA) and ExoMars (ESA). The detection of organic matter could provide information about the presence of a prebiotic chemistry or even biological activity on this planet. Therefore, a key step in interpretation of future data collected by these missions is to understand the preservation of organic matter in the martian environment. Several laboratory experiments have been devoted to quantifying and qualifying the evolution of organic molecules under simulated environmental conditions of Mars. However, these laboratory simulations are limited, and one major constraint is the reproduction of the UV spectrum that reaches the surface of Mars. As part of the PROCESS experiment of the European EXPOSE-E mission on board the International Space Station, a study was performed on the photodegradation of organics under filtered extraterrestrial solar electromagnetic radiation that mimics Mars-like surface UV radiation conditions. Glycine, serine, phthalic acid, phthalic acid in the presence of a mineral phase, and mellitic acid were exposed to these conditions for 1.5 years, and their evolution was determined by Fourier transform infrared spectroscopy after their retrieval. The results were compared with data from laboratory experiments. A 1.5-year exposure to Mars-like surface UV radiation conditions in space resulted in complete degradation of the organic compounds. Half-lives between 50 and 150 h for martian surface conditions were calculated from both laboratory and low-Earth orbit experiments. The results highlight that none of those organics are stable under low-Earth orbit solar UV radiation conditions.
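
    Half-lives of this kind are commonly back-calculated by assuming first-order photolysis kinetics; the first-order assumption here is ours, not a statement from the abstract. With N_0 the initial amount of a compound and N(t) the amount surviving an exposure of duration t,

        N(t) = N_{0}\, e^{-kt}, \qquad t_{1/2} = \frac{\ln 2}{k} = \frac{t \ln 2}{\ln\bigl(N_{0}/N(t)\bigr)},

    which is the standard relation used to convert a measured degradation fraction over a known exposure into a half-life.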

  17. Fault tolerant operation of switched reluctance machine

    NASA Astrophysics Data System (ADS)

    Wang, Wei

    experiments. With the proposed optimal waveform, torque production is greatly improved under the same Root Mean Square (RMS) current constraint. Additionally, position sensorless operation methods under phase faults are investigated to account for the combination of physical position sensor and phase winding faults. A comprehensive solution for position sensorless operation under single and multiple phases fault are proposed and validated through experiments. Continuous position sensorless operation with seamless transition between various numbers of phase fault is achieved.

  18. Flight elements: Fault detection and fault management

    NASA Technical Reports Server (NTRS)

    Lum, H.; Patterson-Hine, A.; Edge, J. T.; Lawler, D.

    1990-01-01

    Fault management for an intelligent computational system must be developed using a top-down, integrated engineering approach. The proposed approach integrates the overall environment involving sensors and their associated data; design knowledge capture; operations; fault detection, identification, and reconfiguration; testability; causal models including digraph matrix analysis; and overall performance impacts on the hardware and software architecture. Implementation of the concept to achieve a real-time intelligent fault detection and management system will be accomplished through several objectives: development of fault-tolerant/FDIR requirements and specifications at the systems level, carried through from conceptual design to implementation and mission operations; implementation of monitoring, diagnosis, and reconfiguration at all system levels, providing fault isolation and system integration; optimization of system operations to manage degraded system performance through system integration; and lowering of development and operations costs through the implementation of an intelligent real-time fault detection and fault management system and an information management system.

  19. Raduga experiment: Multizonal photographing the Earth from the Soyuz-22 spacecraft

    NASA Technical Reports Server (NTRS)

    Ziman, Y.; Chesnokov, Y.; Dunayev, B.; Aksenov, V.; Bykovskiy, V.; Ioaskhim, R.; Myuller, K.; Choppe, V.; Volter, V.

    1980-01-01

    The main results of the scientific research and 'Raduga' experiment are reported. Technical parameters are presented for the MKF-6 camera and the MSP-4 projector. Characteristics of the obtained materials and certain results of their processing are reported.

  20. Low Earth Orbital Mission Aboard the Space Test Experiments Platform (STEP-3)

    NASA Technical Reports Server (NTRS)

    Brinza, David E.

    1992-01-01

    A discussion of the Space Active Modular Materials Experiments (SAMMES) is presented in vugraph form. The discussion is divided into three sections: (1) a description of SAMMES; (2) a SAMMES/STEP-3 mission overview; and (3) SAMMES follow-on efforts. The SAMMES/STEP-3 mission objectives are as follows: assess LEO space environmental effects on SDIO materials; quantify orbital and local environments; and demonstrate the modular experiment concept.

  1. Experimental study on propagation of fault slip along a simulated rock fault

    NASA Astrophysics Data System (ADS)

    Mizoguchi, K.

    2015-12-01

    Around pre-existing geological faults in the crust, we often observe off-fault damage zones containing many fractures at scales from millimeters to meters, whose density typically increases with proximity to the fault. One of the processes thought to form these fractures is dynamic shear rupture propagation on the faults, which leads to the occurrence of earthquakes. Here, I have conducted experiments on the propagation of fault slip along a pre-cut rock surface to investigate the damaging behavior of rocks during slip propagation. For the experiments, I used a pair of metagabbro blocks from Tamil Nadu, India, whose contacting surfaces simulate a fault 35 cm long and 1 cm wide. The experiments were done with a uniaxial loading configuration similar to that of Rosakis et al. (2007). The axial load σ is applied so that the fault plane makes an angle of 60° with the loading direction. When σ is 5 kN, the normal and shear stresses on the fault are 1.25 MPa and 0.72 MPa, respectively. The timing and direction of slip propagation on the fault during the experiments were monitored with several strain gauges arrayed at intervals along the fault. The gauge data were digitally recorded at a 1 MHz sampling rate with 16-bit resolution. When σ = 4.8 kN is applied, we observe slip events in which slip nucleates spontaneously in a subsection of the fault and propagates over the whole fault. However, the propagation speed is about 1.2 km/s, much lower than the S-wave velocity of the rock, indicating that these slip events were not earthquake-like dynamic ruptures. More effort is needed to reproduce earthquake-like slip events in the experiments. This work is supported by JSPS KAKENHI (26870912).
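
    The normal and shear stresses quoted above follow directly from resolving the 5 kN axial load onto the 35 cm x 1 cm fault plane inclined at 60° to the loading direction; the short check below uses only the geometry given in the abstract and the standard decomposition of a force onto a plane.

        # Resolve a uniaxial load onto a fault plane inclined at 60 degrees to the load axis.
        # Geometry from the abstract: simulated fault 35 cm long, 1 cm wide, axial load 5 kN.
        import math

        F = 5.0e3                        # axial load, N
        area = 0.35 * 0.01               # fault surface area, m^2
        theta = math.radians(60.0)       # angle between fault plane and loading direction

        sigma_n = F * math.sin(theta) / area   # component of the load normal to the fault
        tau = F * math.cos(theta) / area       # component parallel to the fault (shear)
        print(f"normal stress ~ {sigma_n / 1e6:.2f} MPa, shear stress ~ {tau / 1e6:.2f} MPa")
        # ~1.24 MPa and ~0.71 MPa, consistent with the 1.25 MPa and 0.72 MPa quoted above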

  2. Selenium Sequestration in a Cationic Layered Rare Earth Hydroxide: A Combined Batch Experiments and EXAFS Investigation.

    PubMed

    Zhu, Lin; Zhang, Linjuan; Li, Jie; Zhang, Duo; Chen, Lanhua; Sheng, Daopeng; Yang, Shitong; Xiao, Chengliang; Wang, Jianqiang; Chai, Zhifang; Albrecht-Schmitt, Thomas E; Wang, Shuao

    2017-08-01

    Selenium is of great concern owing to its acute toxicity at elevated dosage and the long-term radiotoxicity of ⁷⁹Se. The selenium content of industrial wastewater, agricultural runoff, and drinking water has to be constrained to a maximum concentration limit of 50 μg/L. We report here selenium uptake using a structurally well-defined cationic layered rare earth hydroxide, Y₂(OH)₅Cl·1.5H₂O. The sorption kinetics, isotherms, selectivity, and desorption of selenite and selenate on Y₂(OH)₅Cl·1.5H₂O at pH 7 and 8.5 were systematically investigated using a batch method. The maximum sorption capacities for selenite and selenate are 207 and 124 mg/g, respectively, both new records among inorganic sorbents. In the low-concentration region, Y₂(OH)₅Cl·1.5H₂O is able to almost completely remove selenium from aqueous solution even in the presence of competitive anions such as NO₃⁻, Cl⁻, CO₃²⁻, SO₄²⁻, and HPO₄²⁻. The resulting selenium concentration is below 10 μg/L, well within the strictest criterion for drinking water. The selenate in loaded samples could be desorbed by rinsing with concentrated noncomplexing NaCl solutions, whereas complexing ligands have to be employed to elute selenite for material regeneration. After desorption, Y₂(OH)₅Cl·1.5H₂O could be reused to remove selenate and selenite. In addition, the sorption mechanism was unraveled by a combination of EDS, FT-IR, Raman, PXRD, and EXAFS techniques. Specifically, selenate ions were exchanged with chloride ions in the interlayer space, forming outer-sphere complexes. In comparison, besides the anion exchange mechanism, selenite ions were directly bound to the Y³⁺ centers in the positively charged [Y₂(OH)₅(H₂O)]⁺ layer through strong bidentate binuclear inner-sphere complexation, consistent with the observed higher uptake of selenite over selenate. The results presented in

  3. Airborne hunt for faults in the Portland-Vancouver area

    Blakely, Richard J.; Wells, Ray E.; Yelin, Thomas S.; Stauffer, Peter H.; Hendley, James W.

    1996-01-01

    Geologic hazards in the Portland-Vancouver area include faults entirely hidden by river sediments, vegetation, and urban development. A recent aerial geophysical survey revealed patterns in the Earth's magnetic field that confirm the existence of a previously suspected fault running through Portland. It also indicated that this fault may pose a significant seismic threat. This discovery has enabled the residents of the populous area to better prepare for future earthquakes.

  4. Measurements of the earth radiation budget from satellites during the first GARP global experiment

    NASA Technical Reports Server (NTRS)

    Vonder Haar, T. H.; Campbell, G. G.; Smith, E. A.; Arking, A.; Coulson, K.; Hickey, J.; House, F.; Ingersoll, A.; Jacobowitz, H.; Smith, L.

    1981-01-01

    Radiation budget data (which will aid in climate model development) and solar constant measurements (both to be used for the study of long-term climate change and interannual seasonal weather variability) are presented, obtained during Nimbus-6 and Nimbus-7 satellite flights using wide-field-of-view, scanner, and black cavity detectors. Data on the solar constant, described as a function of the date of measurement, are given. The unweighted mean is 1377 ± 20 W/m², with a standard deviation of 8 W/m². The new solar data are combined with earlier measurements, and it is suggested that the total absolute energy output of the sun is a minimum at 'solar maximum' and vice versa. Attention is given to measurements of the net radiation budget, the planetary albedo, and the infrared radiant exitance. The annual and semiannual cycles of normal variability explain most of the variance of energy exchange between the earth and space. Examination of separate ocean and atmospheric energy budgets implies a net continent-ocean region energy exchange.

  5. Modeling the evolution of the lower crust with laboratory derived rheological laws under an intraplate strike slip fault

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Sagiya, T.

    2015-12-01

    The earth's crust can be divided into the brittle upper crust and the ductile lower crust on the basis of deformation mechanism. Observations show that heterogeneities in the lower crust are associated with fault zones. One candidate mechanism for strain concentration is shear heating in the lower crust, which has been considered in theoretical studies of interplate faults [e.g. Thatcher & England 1998, Takeuchi & Fialko 2012]. On the other hand, almost no studies have been done for intraplate faults, which are generally much less mature than interplate faults and are characterized by their finite lengths and slow displacement rates. To understand the structural characteristics of the lower crust and its temporal evolution on a geological time scale, we conduct a 2-D numerical experiment on an intraplate strike-slip fault. The lower crust is modeled as a 20 km thick viscous layer overlain by a rigid upper crust with steady relative motion across a vertical strike-slip fault. Strain rate in the lower crust is assumed to be the sum of dislocation creep and diffusion creep components, each of which follows an experimental flow law. The geothermal gradient is assumed to be 25 K/km. We have tested different total velocities in the model: for an intraplate fault the total velocity is less than 1 mm/yr, and for comparison we use 30 mm/yr for an interplate fault. Results show that at low slip rates, dislocation creep dominates in the shear zone near the deeper extension of the intraplate fault, while diffusion creep dominates outside the shear zone. This differs from the interplate case, where dislocation creep dominates the whole region. Because of the power-law effect of dislocation creep, the effective viscosity in the shear zone under an intraplate fault is much higher than that under an interplate fault; the shear zone under an intraplate fault will therefore have much higher viscosity and lower shear stress. Viscosity contrast between
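
    The experimentally derived flow laws referred to above are usually written in the standard power-law creep form (the notation below is the conventional one, not taken from this particular study):

        \dot{\varepsilon} = A\, \sigma^{n}\, d^{-m} \exp\!\left(-\frac{Q}{RT}\right),

    where A is a material constant, σ the differential stress, d the grain size, Q the activation energy, R the gas constant, and T the absolute temperature. Dislocation creep is grain-size insensitive (m = 0) with a stress exponent n of roughly 3 to 3.5, while diffusion creep has n = 1 and a strong grain-size dependence; the model described above sums the two strain-rate contributions.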

  6. Ultrareliable fault-tolerant control systems

    NASA Technical Reports Server (NTRS)

    Webster, L. D.; Slykhouse, R. A.; Booth, L. A., Jr.; Carson, T. M.; Davis, G. J.; Howard, J. C.

    1984-01-01

    It is demonstrated that fault-tolerant computer systems, such as those on the Shuttles, based on redundant, independent operation are a viable alternative in fault-tolerant system design. The ultrareliable fault-tolerant control system (UFTCS) was developed and tested in laboratory simulations of a UH-1H helicopter. UFTCS includes asymptotically stable independent control elements in a parallel, cross-linked system environment, with static redundancy providing the fault tolerance. Polling is performed among the computers, with the results allowing time-delay channel variations within tight bounds. Compared with laboratory and actual flight data for the helicopter, the probability of a fault in the first 10 hr of flight, given quintuple computer redundancy, was found to be 1 in 290 billion. Two weeks of untended Space Station operations would experience a fault probability of 1 in 24 million. Techniques for avoiding channel divergence problems are identified.

  7. Free Space Laser Communication Experiments from Earth to the Lunar Reconnaissance Orbiter in Lunar Orbit

    NASA Technical Reports Server (NTRS)

    Sun, Xiaoli; Skillman, David R.; Hoffman, Evan D.; Mao, Dandan; McGarry, Jan F.; Zellar, Ronald S.; Fong, Wai H; Krainak, Michael A.; Neumann, Gregory A.; Smith, David E.

    2013-01-01

    Laser communication and ranging experiments were successfully conducted from the satellite laser ranging (SLR) station at NASA Goddard Space Flight Center (GSFC) to the Lunar Reconnaissance Orbiter (LRO) in lunar orbit. The experiments used 4096-ary pulse position modulation (PPM) for the laser pulses during one-way LRO Laser Ranging (LR) operations. Reed-Solomon forward error correction codes were used to correct the PPM symbol errors due to atmosphere turbulence and pointing jitter. The signal fading was measured and the results were compared to the model.
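
    In 4096-ary PPM each symbol carries log2(4096) = 12 bits, encoded as the position of a single laser pulse within 4096 time slots. The minimal encode/decode sketch below illustrates only this bit-to-slot mapping; slot timing, the Reed-Solomon code, and the LRO-specific framing are outside its scope.

        # Minimal 4096-ary pulse position modulation (PPM): 12 bits -> one pulse slot index.
        M = 4096
        BITS = M.bit_length() - 1        # 12 bits per PPM symbol

        def ppm_encode(bits):
            """Group a bit string (length a multiple of 12) into slot indices 0..4095."""
            return [int(bits[i:i + BITS], 2) for i in range(0, len(bits), BITS)]

        def ppm_decode(slots):
            """Recover the bit string from received slot indices."""
            return "".join(format(s, f"0{BITS}b") for s in slots)

        msg = "110000111010" * 2         # 24 bits -> 2 PPM symbols
        slots = ppm_encode(msg)
        assert ppm_decode(slots) == msg
        print(slots)                     # [3130, 3130]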

  8. Effects of soil type on leaching and runoff transport of rare earth elements and phosphorous in laboratory experiments.

    PubMed

    Wang, Lingqing; Liang, Tao; Chong, Zhongyi; Zhang, Chaosheng

    2011-01-01

    Through leaching experiments and simulated rainfall experiments, the characteristics of vertical leaching of exogenous rare earth elements (REEs) and phosphorus (P), and their losses with surface runoff during simulated rainfall, were investigated in different types of soils (terra nera soil, cinnamon soil, red soil, loess soil, and purple soil). Results of the leaching experiments showed that vertical transport of REEs and P was relatively low, with transport depths less than 6 cm. The vertical leaching rates of REEs and P in the different soils followed the order purple soil > terra nera soil > red soil > cinnamon soil > loess soil. Results of the simulated rainfall experiments (83 mm h⁻¹) revealed that more than 92% of the REEs and P were transported with soil particles in runoff. The loss rates of REEs and P in surface runoff in the different soil types were in the order loess soil > terra nera soil > cinnamon soil > red soil > purple soil. The total amounts of REE and P losses in runoff were significantly correlated.

  9. Training for Skill in Fault Diagnosis

    ERIC Educational Resources Information Center

    Turner, J. D.

    1974-01-01

    The Knitting, Lace and Net Industry Training Board has developed a training innovation called fault diagnosis training. The entire training process concentrates on teaching based on the experiences of troubleshooters or any other employees whose main tasks involve fault diagnosis and rectification. (Author/DS)

  10. Reverse fault growth and fault interaction with frictional interfaces: insights from analogue models

    NASA Astrophysics Data System (ADS)

    Bonanno, Emanuele; Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Seno, Silvio

    2017-04-01

    The association of faulting and folding is a common feature in mountain chains, fold-and-thrust belts, and accretionary wedges. Kinematic models have been developed and are widely used to explain a range of relationships between faulting and folding. However, these models may not be fully appropriate for explaining shortening in mechanically heterogeneous rock bodies. Weak layers, bedding surfaces, or pre-existing faults ahead of a propagating fault tip may influence the fault propagation rate and the associated fold shape. In this work, we employed clay analogue models to investigate how mechanical discontinuities affect the propagation rate and the associated fold shape during the growth of reverse master faults. The simulated master faults dip at 30° and 45°, covering the range of the most frequent dip angles for active reverse faults in nature. The mechanical discontinuities are simulated by pre-cutting the clay pack. For both experimental setups (30° and 45° dipping faults) we analyzed three configurations: 1) isotropic, i.e. without precuts; 2) with one precut in the middle of the clay pack; and 3) with two evenly spaced precuts. To test the repeatability of the processes and to obtain a statistically valid dataset, we replicated each configuration three times. The experiments were monitored by collecting successive snapshots with a high-resolution camera pointing at the side of the model. The pictures were then processed using the Digital Image Correlation (DIC) method in order to extract the displacement and shear-rate fields. These two quantities effectively show both the on-fault and off-fault deformation, indicating the activity along the newly formed faults and whether, and at what stage, the discontinuities (precuts) are reactivated. To study fault propagation and fold shape variability, we marked the position of the fault tips and the fold profiles for every successive step of deformation. Then we compared

  11. Resting Lightly on Mother Earth: The Aboriginal Experience in Urban Educational Settings.

    ERIC Educational Resources Information Center

    Ward, Angela; Bouvier, Rita

    This book examines the differential educational experiences of Aboriginal peoples in urban centers--primarily in Canada, but also in Australia and the United States. Major themes of the book are maintenance of individual and collective Aboriginal identity, the impact on that identity of disconnection from the land, spirituality as the key to…

  12. A conceptual design for cosmo-biology experiments in Earth's Orbit.

    PubMed

    Hashimoto, H; Greenberg, M; Brack, A; Colangeli, L; Horneck, G; Navarro-Gonzalez, R; Raulin, F; Kouchi, A; Saito, T; Yamashita, M; Kobayashi, K

    1998-06-01

    A conceptual design was developed for a cosmo-biology experiment intended to expose simulated interstellar ice materials deposited on dust grains to the space environment. The experimental system consists of a cryogenic system to keep the solidified gas sample cold and an optical device to select and amplify the ultraviolet part of the solar light for irradiation. By this approach, the long-lasting chemical evolution of icy species could be examined in a much shorter exposure time through amplification of the light intensity. The removal of light at longer wavelengths, which is ineffective at inducing photochemical reactions, reduces the heat load on the cryogenic system that holds solidified reactants, including CO as a constituent species of interstellar materials. Other major hardware components were also defined in order to achieve the scientific objectives of this experiment: a cold trap maintained at liquid nitrogen temperature to prevent contamination of the sample during exposure, a mechanism to exchange multiple samples, and a system to bake out the sample exposure chamber. This experimental system is proposed as a candidate payload on the exposed facility of the Japanese Experiment Module on the International Space Station.

  13. Apollo-Soyuz Pamphlet No. 5: The Earth from Orbit. Apollo-Soyuz Experiments in Space.

    ERIC Educational Resources Information Center

    Page, Lou Williams; Page, Thornton

    This booklet is the fifth in a series of nine that describe the Apollo-Soyuz mission and experiments. This set is designed as a curriculum supplement for high school and college teachers, supervisors, curriculum specialists, textbook writers, and the general public. These booklets provide sources of ideas, examples of the scientific method,…

  14. Shaker Table Experiments with Rare Earth Elements Sorption from Geothermal Brine

    DOE Data Explorer

    Gary Garland

    2015-07-21

    This dataset describes shaker table experiments run with sieved -50/+100 mesh media #1 in brine #1 containing 2 ppm each of the 7 REE metals, at starting pH values of 3.5, 4.5, and 5.5. The experimental conditions are 2 g of media per 150 mL of REE solution, at 70 °C.

  15. Analysis of data from the plasma composition experiment on the International Sun-Earth Explorer (ISEE 1)

    NASA Technical Reports Server (NTRS)

    Lennartsson, O. W.

    1994-01-01

    The Lockheed plasma composition experiment on the ISEE 1 spacecraft has provided one of the largest and most varied sets of data on earth's energetic plasma environment, covering both the solar wind, well beyond the bow shock, and the near equatorial magnetosphere to a distance of almost 23 earth radii. This report is an overview of the last four years of data analysis and archiving. The archiving for NSSDC includes most data obtained during the initial 28-months of instrument operation, from early November 1977 through the end of February 1980. The data products are a combination of spectra (mass and energy angle) and velocity moments. A copy of the data user's guide and examples of the data products are attached as appendix A. The data analysis covers three major areas: solar wind ions upstream and downstream of the day side bowshock, especially He(++) ions; terrestrial ions flowing upward from the auroral regions, especially H(+), O(+), and He(+) ions; and ions of both solar and terrestrial origins in the tail plasma sheet and lobe regions. Copies of publications are attached.

  16. (Re)engineering Earth System Models to Expose Greater Concurrency for Ultrascale Computing: Practice, Experience, and Musings

    NASA Astrophysics Data System (ADS)

    Mills, R. T.

    2014-12-01

    As the high performance computing (HPC) community pushes towards the exascale horizon, the importance and prevalence of fine-grained parallelism in new computer architectures is increasing. This is perhaps most apparent in the proliferation of so-called "accelerators" such as the Intel Xeon Phi or NVIDIA GPGPUs, but the trend also holds for CPUs, where serial performance has grown slowly and effective use of hardware threads and vector units are becoming increasingly important to realizing high performance. This has significant implications for weather, climate, and Earth system modeling codes, many of which display impressive scalability across MPI ranks but take relatively little advantage of threading and vector processing. In addition to increasing parallelism, next generation codes will also need to address increasingly deep hierarchies for data movement: NUMA/cache levels, on node vs. off node, local vs. wide neighborhoods on the interconnect, and even in the I/O system. We will discuss some approaches (grounded in experiences with the Intel Xeon Phi architecture) for restructuring Earth science codes to maximize concurrency across multiple levels (vectors, threads, MPI ranks), and also discuss some novel approaches for minimizing expensive data movement/communication.

  17. An application of miniscale experiments on Earth to refine microgravity analysis of adiabatic multiphase flow in space

    NASA Technical Reports Server (NTRS)

    Rothe, Paul H.; Martin, Christine; Downing, Julie

    1994-01-01

    Adiabatic two-phase flow is of interest to the design of multiphase fluid and thermal management systems for spacecraft. This paper presents original data and unifies existing data for capillary tubes as a step toward assessing existing multiphase flow analysis and engineering software. Comparisons of theory with these data once again confirm the broad accuracy of the theory. Due to the simplicity and low cost of the capillary tube experiments, which were performed on earth, we were able to closely examine for the first time a flow situation that had not previously been examined appreciably by aircraft tests: a slug flow at high quality, near the transition to annular flow. Our comparison of software calculations with these data revealed overprediction of pipeline pressure drop by up to a factor of three. In turn, this finding motivated a reexamination of the existing theory and the development of a new analytical model that is in far better agreement with the data. This sequence of discovery illustrates the role of inexpensive miniscale modeling on earth in anticipating microgravity behavior in space and in complementing and helping to define the needs for aircraft tests.

  18. Protecting Against Faults in JPL Spacecraft

    NASA Technical Reports Server (NTRS)

    Morgan, Paula

    2007-01-01

    A paper discusses techniques for protecting against faults in spacecraft designed and operated by NASA s Jet Propulsion Laboratory (JPL). The paper addresses, more specifically, fault-protection requirements and techniques common to most JPL spacecraft (in contradistinction to unique, mission specific techniques), standard practices in the implementation of these techniques, and fault-protection software architectures. Common requirements include those to protect onboard command, data-processing, and control computers; protect against loss of Earth/spacecraft radio communication; maintain safe temperatures; and recover from power overloads. The paper describes fault-protection techniques as part of a fault-management strategy that also includes functional redundancy, redundant hardware, and autonomous monitoring of (1) the operational and health statuses of spacecraft components, (2) temperatures inside and outside the spacecraft, and (3) allocation of power. The strategy also provides for preprogrammed automated responses to anomalous conditions. In addition, the software running in almost every JPL spacecraft incorporates a general-purpose "Safe Mode" response algorithm that configures the spacecraft in a lower-power state that is safe and predictable, thereby facilitating diagnosis of more complex faults by a team of human experts on Earth.
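
    The monitoring-plus-preprogrammed-response strategy described above can be pictured as a loop over fault monitors, each paired with a canned response, with a low-power safe mode as the fallback for complex or ambiguous faults. The sketch below is a generic illustration of that pattern, not JPL flight software; every threshold, telemetry name, and response is invented for the example.

        # Generic fault-protection pattern: monitors with canned responses, safe mode fallback.
        # Thresholds, telemetry names, and responses are illustrative only (not JPL flight code).

        def safe_mode(telemetry):
            print("entering safe mode: low power, predictable attitude, await ground diagnosis")

        MONITORS = [
            # (name, fault predicate, preprogrammed response)
            ("comm loss", lambda t: t["hours_since_uplink"] > 72,
             lambda t: print("swap to backup transponder")),
            ("over-temperature", lambda t: t["battery_temp_C"] > 45,
             lambda t: print("shed heater loads")),
            ("power overload", lambda t: t["bus_power_W"] > t["power_limit_W"],
             lambda t: print("drop non-critical loads")),
        ]

        def fault_protection_step(telemetry):
            tripped = [(name, respond) for name, detect, respond in MONITORS if detect(telemetry)]
            for name, respond in tripped:
                respond(telemetry)
            if len(tripped) > 1:         # ambiguous or compound fault: fall back to safe mode
                safe_mode(telemetry)

        fault_protection_step({"hours_since_uplink": 80, "battery_temp_C": 50,
                               "bus_power_W": 300, "power_limit_W": 350})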

  19. The ClearEarth Project: Preliminary Findings from Experiments in Applying the CLEARTK NLP Pipeline and Annotation Tools Developed for Biomedicine to the Earth Sciences

    NASA Astrophysics Data System (ADS)

    Duerr, R.; Thessen, A.; Jenkins, C. J.; Palmer, M.; Myers, S.; Ramdeen, S.

    2016-12-01

    The ability to quickly find, easily use and effortlessly integrate data from a variety of sources is a grand challenge in Earth sciences, one around which entire research programs have been built. A myriad of approaches to tackling components of this challenge have been demonstrated, often with some success. Yet finding, assessing, accessing, using and integrating data remains a major challenge for many researchers. A technology that has shown promise in nearly every aspect of the challenge is semantics. Semantics has been shown to improve data discovery, facilitate assessment of a data set, and through adoption of the W3C's Linked Data Platform to have improved data integration and use at least for data amenable to that paradigm. Yet the creation of semantic resources has been slow. Why? Amongst a plethora of other reasons, it is because semantic expertise is rare in the Earth and Space sciences; the creation of semantic resources for even a single discipline is labor intensive and requires agreement within the discipline; best practices, methods and tools for supporting the creation and maintenance of the resources generated are in flux; and the human and financial capital needed are rarely available in the Earth sciences. However, other fields, such as biomedicine, have made considerable progress in these areas. The NSF-funded ClearEarth project is adapting the methods and tools from these communities for the Earth sciences in the expectation that doing so will enhance progress and the rate at which the needed semantic resources are created. We discuss progress and results to date, lessons learned from this adaptation process, and describe our upcoming efforts to extend this knowledge to the next generation of Earth and data scientists.

  20. Heaven can wait - or down to earth in real time: Near-death experience revisited.

    PubMed

    van Tellingen, C

    2008-10-01

    Near-death experience (NDE) is an intriguing phenomenon that invites more questions than answers. Hitherto, emphasis has been laid on apparent similarities in accounts of NDE to prove a supernatural origin, while in fact unique differences alongside gross similarities support a neurophysiological explanation. A teleological approach is suggested to explain the neuroprotective strategies involved, and accordingly a forme fruste of the biological concept of hibernation is put forward as a unifying hypothesis. (Neth Heart J 2008;16:359-62.)

  1. Skylab program earth resources experiment package. Volume 4: Sensor performance evaluation (S193 R/S). [radiometer/scatterometer

    NASA Technical Reports Server (NTRS)

    Kenney, G. P.

    1975-01-01

    This volume presents the results of the sensor performance evaluation of the 13.9 GHz radiometer/scatterometer, which was part of the earth resources experiment package on Skylab. Findings are presented in the areas of housekeeping parameters, antenna gain and scanning performance, dynamic range, linearity, precision, resolution, stability, integration time, and transmitter output. Supplementary analyses covering performance anomalies, data stream peculiarities, aircraft sensor data comparisons, scatterometer saturation characteristics, and RF heating effects are reported. Results of the evaluation show that instrument performance was generally as expected, but capability degradations were observed to result from three major anomalies. Conclusions are drawn from the evaluation results, and recommendations for improving the effectiveness of a future program are offered. An addendum describes the special evaluation techniques developed and applied in the sensor performance evaluation tasks.

  2. An Experiment to Evaluate Skylab Earth Resources Sensors for Detection of the Gulf Stream. [Straits of Florida

    NASA Technical Reports Server (NTRS)

    Maul, G. A. (Principal Investigator); Gordon, H. R.; Baig, S. R.; Mccaslin, M.; Devivo, R. J.

    1976-01-01

    The author has identified the following significant results. An experiment to evaluate the Skylab earth resources package for observing ocean currents was performed in the Straits of Florida in January 1974. Data from the S190 photographic facility, S191 spectroradiometer, and S192 multispectral scanner were compared with surface observations. The anticyclonic edge of the Gulf Stream could be identified in the Skylab S190A and B photographs, but the cyclonic edge was obscured by clouds. The aircraft photographs were judged not useful for spectral analysis because vignetting caused the blue/green ratios to depend on position in the photograph. The spectral measurement technique could not identify the anticyclonic front, but a mass of Florida Bay water that was flowing into the Straits could be identified and classified. Monte Carlo simulations of the visible spectrum showed that the aerosol concentration could be estimated, and a correction technique was devised.

  3. The effect of the low Earth orbit environment on space solar cells: Results of the advanced photovoltaic experiment (S0014)

    NASA Technical Reports Server (NTRS)

    Brinker, David J.; Hickey, John R.

    1992-01-01

    The Advanced Photovoltaic Experiment (APEX), containing over 150 solar cells and sensors, was designed to generate laboratory reference standards as well as to explore the durability of a wide variety of space solar cells. Located on the leading edge of the Long Duration Exposure Facility (LDEF), APEX received the maximum possible dosage of atomic oxygen and ultraviolet radiation, as well as enormous numbers of impacts from micrometeoroids and debris. The effect of the low earth orbital (LEO) environment on the solar cells and materials of APEX will be discussed in this paper. The on-orbit performance of the solar cells, as well as a comparison of pre- and postflight laboratory performance measurements, will be presented.

  4. Improving Science Literacy and Earth Science Awareness Through an Intensive Summer Research Experience in Paleobiology

    NASA Astrophysics Data System (ADS)

    Heim, N. A.; Saltzman, J.; Payne, J.

    2014-12-01

    The chasm between classroom science and scientific research is bridged in the History of Life Internships at Stanford University. The primary foci of the internships are collection of new scientific data and original scientific research. While traditional high school science courses focus on learning content and laboratory skills, students are rarely engaged in real scientific research. Even in experiential learning environments, students investigate phenomena with known outcomes under idealized conditions. In the History of Life Internships, high school youth worked full time during the summers of 2013 and 2014 to collect body size data on fossil Echinoderms and Ostracods, measuring more than 20,000 species in total. These data are contributed to the larger research efforts in the Stanford Paleobiology Lab, but they also serve as a source of data for interns to conduct their own scientific research. Over the course of eight weeks, interns learn about previous research on body size evolution, collect data, develop their own hypotheses, test their hypotheses, and communicate their results to their peers and the larger scientific community: the 2014 interns have submitted eight abstracts to this meeting for the youth session entitled Bright STaRS where they will present their research findings. Based on a post-internship survey, students in the 2013 History of Life cohort had more positive attitudes towards science and had a better understanding of how to conduct scientific research compared to interns in the Earth Sciences General Internship Program, where interns typically do not complete their own research project from start to finish. In 2014, we implemented both pre- and post-internship surveys to determine if these positive attitudes were developed over the course of the internship. Conducting novel research inspires both the students and instructors. Scientific data collection often involves many hours of repetitive work, but answering big questions typically

  5. Launching an Undergraduate Earth System Science Curriculum with a Focus on Global Sustainability: the Loma Linda University Experience

    NASA Astrophysics Data System (ADS)

    Ford, R. E.; Dunbar, S. G.; Soret, S.; Wiafe, S.; Gonzalez, D.; Rossi, T.

    2004-12-01

    The vision of the School of Science and Technology (SST) at Loma Linda University (LLU) is to develop an interdisciplinary approach to doing science that bridges the social, biological, earth, and health sciences. It will provide opportunities for undergraduate, graduate, and professional students to apply new tools and concepts to the promotion of global service and citizenship while addressing issues of global poverty, health and disease, environmental degradation, poverty, and social inequality. A primary teaching strategy will be to involve students with faculty in applied field social and science policy research on "global sustainability" issues and problems in real places such as Fiji, Jamaica, Honduras, Bahamas, East Africa, and the US southwest (Great Basin, Salton Sea, coastal California, southern Utah). Recently we became a partner in the NASA/USRA ESSE21 Project (Earth System Science Education for the 21st Century). We bring to that consortium strengths and experience in areas such as social policy, sustainable development, medicine, environmental health, disaster mitigation, humanitarian relief, geoinformatics and bioinformatics. This can benefit ESSE21, the NASA Earth Enterprise Mission, and the wider geosciences education community by demonstrating the relevance of such tools, and methods outside the geosciences. Many of the graduate and undergraduate students who will participate in the new program come from around the world while many others represent underserved populations in the United States. The PI and Co-PIs have strong global as well as domestic experience serving underrepresented communities, e.g. Seth Wiafe from Ghana, Sam Soret from Spain, Stephen Dunbar from the South Pacific, and Robert Ford from Latin America and Africa. Our partnership in implementation will include other institutions such as: La Sierra University, the California State University, Pomona, Center for Geographic Information Science Research, ESRI, Inc., the University of

  6. Two-way laser ranging and time transfer experiments between LOLA and an Earth-based satellite laser ranging station

    NASA Astrophysics Data System (ADS)

    Mao, D.; Sun, X.; Neumann, G. A.; Barker, M. K.; Mazarico, E. M.; Hoffman, E.; Zagwodzki, T. W.; Torrence, M. H.; Mcgarry, J.; Smith, D. E.; Zuber, M. T.

    2017-12-01

    Satellite Laser Ranging (SLR) has established time-of-flight measurements with mm precision to targets orbiting the Earth and the Moon using single-ended round-trip laser ranging to passive optical retro-reflectors. These high-precision measurements enable advances in fundamental physics and solar system dynamics. However, the received signal strength suffers from a 1/R⁴ decay, which makes the technique impractical for measuring distances beyond the Moon's orbit. For a two-way laser transponder pair, in contrast, where laser pulses are both transmitted to and received from each end of the laser link, the signal strength at each terminal decreases only as 1/R², allowing a much greater range of distances to be covered. The asynchronous transponder concept was previously demonstrated in a 2005 test between the Mercury Laser Altimeter (MLA) aboard the MESSENGER (MErcury Surface, Space ENvironment, Geochemistry, and Ranging) spacecraft and NASA's Goddard Geophysical and Astronomical Observatory (GGAO) at a distance of ~0.16 AU. In October 2013, regular two-way transponder-type range measurements were obtained over 15 days between the Lunar Laser Communication Demonstration (LLCD) aboard the Lunar Atmosphere and Dust Environment Explorer (LADEE) spacecraft and NASA's ground station at White Sands, NM. The Lunar Orbiter Laser Altimeter (LOLA) aboard the Lunar Reconnaissance Orbiter (LRO) provides a unique capability to test time transfer beyond near-Earth orbit. Here we present results from two-way transponder-type experiments between LOLA and GGAO conducted in March 2014 and 2017. As in the time-transfer by laser link (T2L2) experiments between a ground station and an Earth-orbiting satellite, LOLA and GGAO ranged to each other simultaneously in these two-way tests at lunar distance. We measured the time of flight while cross-referencing the spacecraft clock to the ground station time. On May 4th, 2017, about 20 minutes of two-way measurements were collected. The
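
    The practical consequence of the 1/R² versus 1/R⁴ scaling is easy to quantify. The short calculation below compares how much signal is lost in going from lunar distance to the ~0.16 AU of the MESSENGER demonstration for the two architectures; the distances are round numbers and every other link-budget term is ignored.

        # Signal-strength scaling: passive retroreflector ranging (1/R^4) versus a
        # two-way transponder link (1/R^2), from lunar distance out to ~0.16 AU.
        AU = 1.496e11                    # m
        R_MOON = 3.84e8                  # m, mean Earth-Moon distance
        R_FAR = 0.16 * AU                # m, MESSENGER demonstration distance

        ratio = R_FAR / R_MOON           # about 62 times farther
        print(f"distance ratio: {ratio:.0f}x")
        print(f"passive (1/R^4) signal drops by ~{ratio ** 4:.1e}")
        print(f"transponder (1/R^2) signal drops by ~{ratio ** 2:.1e}")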

  7. Central Japan's Atera Active Fault's Wide-Fractured Zone: An Examination of the Structure and In-situ Crustal Stress

    NASA Astrophysics Data System (ADS)

    Ikeda, R.; Omura, K.; Matsuda, T.; Mizuochi, Y.; Uehara, D.; Chiba, A.; Kikuchi, A.; Yamamoto, T.

    2001-12-01

    In-situ downhole measurements and coring within and around an active fault zone are needed to better understand the structure and material properties of fault rocks as well as the physical state of active faults and the intra-plate crust. In particular, the relationship between the state of stress concentration and the heterogeneous strength of an earthquake fault zone is important for estimating earthquake occurrence mechanisms, which bears on earthquake prediction. It is necessary to compare several active faults in different conditions of the chrysalis stage and their relation to subsequent earthquake occurrence. To better understand such conditions, the "Active Fault Zone Drilling Project" has been conducted in the central part of Japan by the National Research Institute for Earth Science and Disaster Prevention. The Nojima fault, which ruptured the surface in the 1995 Great Kobe earthquake (M=7.2), and the Neodani fault, created by the 1891 Nobi earthquake, the largest inland earthquake (M=8.0) in Japan, have been drilled through their fault fracture zones. During the past four years, a similar experiment and research program has been undertaken at the Atera fault, parts of which appear to have been dislocated by the 1586 Tensyo earthquake. The features of the Atera fault are as follows: (1) total length is about 70 km; (2) the general trend is NW45° with left-lateral strike slip; (3) the slip rate is estimated as 3-5 m/1000 yrs and the average recurrence time as 1700 yrs; (4) seismicity is very low at present; and (5) the lithologies around the fault are basically granitic rocks and rhyolite. We have conducted integrated investigations by surface geophysical survey and drilling around the Atera fault. Six boreholes have been drilled to depths of 400 m to 630 m. Four of these boreholes are located on a line crossing the fracture zone of the Atera fault. Resistivity and gravity structures inferred from the surface geophysical surveys were compared with the physical properties

  8. Influence of slip-surface geometry on earth-flow deformation, Montaguto earth flow, southern Italy

    Guerriero, L.; Coe, Jeffrey A.; Revellio, P.; Grelle, G.; Pinto, F.; Guadagno, F.

    2016-01-01

    We investigated relations between slip-surface geometry and deformational structures and hydrologic features at the Montaguto earth flow in southern Italy between 1954 and 2010. We used 25 boreholes, 15 static cone-penetration tests, and 22 shallow-seismic profiles to define the geometry of basal- and lateral-slip surfaces; and 9 multitemporal maps to quantify the spatial and temporal distribution of normal faults, thrust faults, back-tilted surfaces, strike-slip faults, flank ridges, folds, ponds, and springs. We infer that the slip surface is a repeating series of steeply sloping surfaces (risers) and gently sloping surfaces (treads). Stretching of earth-flow material created normal faults at risers, and shortening of earth-flow material created thrust faults, back-tilted surfaces, and ponds at treads. Individual pairs of risers and treads formed quasi-discrete kinematic zones within the earth flow that operated in unison to transmit pulses of sediment along the length of the flow. The locations of strike-slip faults, flank ridges, and folds were not controlled by basal-slip surface topography but were instead dependent on earth-flow volume and lateral changes in the direction of the earth-flow travel path. The earth-flow travel path was strongly influenced by inactive earth-flow deposits and pre-earth-flow drainages whose positions were determined by tectonic structures. The implications of our results that may be applicable to other earth flows are that structures with strikes normal to the direction of earth-flow motion (e.g., normal faults and thrust faults) can be used as a guide to the geometry of basal-slip surfaces, but that depths to the slip surface (i.e., the thickness of an earth flow) will vary as sediment pulses are transmitted through a flow.

  9. Preliminary Experiments for the Assessment of V/W-Band Links for Space-Earth Communications

    NASA Technical Reports Server (NTRS)

    Nessel, James A.; Acosta, Roberto J.; Miranda, Felix A.

    2013-01-01

    Since September 2012, NASA Glenn Research Center has deployed a microwave profiling radiometer at White Sands, NM, to estimate atmospheric propagation effects on communications links in the V and W bands (71-86 GHz). Estimates of attenuation statistics in the millimeter wave due to gaseous and cloud components of the atmosphere show good agreement with current ITU-R models, but fail to predict link performance in the presence of moderate to heavy rain rates, due to the inherent limitations of passive radiometry. Herein, we discuss the preliminary results of these measurements and describe a design for a terrestrial link experiment to validate/refine existing rain attenuation models in the V/W-bands.

  10. Active faults newly identified in Pacific Northwest

    NASA Astrophysics Data System (ADS)

    Balcerak, Ernie

    2012-05-01

    The Bellingham Basin, which lies north of Seattle and south of Vancouver around the border between the United States and Canada in the northern part of the Cascadia subduction zone, is important for understanding the regional tectonic setting and current high rates of crustal deformation in the Pacific Northwest. Using a variety of new data, Kelsey et al. identified several active faults in the Bellingham Basin that had not been previously known. These faults lie more than 60 kilometers farther north of the previously recognized northern limit of active faulting in the area. The authors note that the newly recognized faults could produce earthquakes with magnitudes between 6 and 6.5 and thus should be considered in hazard assessments for the region. (Journal of Geophysical Research-Solid Earth, doi:10.1029/2011JB008816, 2012)

  11. Computing Fault Displacements from Surface Deformations

    NASA Technical Reports Server (NTRS)

    Lyzenga, Gregory; Parker, Jay; Donnellan, Andrea; Panero, Wendy

    2006-01-01

    Simplex is a computer program that calculates locations and displacements of subterranean faults from data on Earth-surface deformations. The calculation involves inversion of a forward model (given a point source representing a fault, the forward model calculates the surface deformations) for the displacements and strains caused by a fault located in an isotropic, elastic half-space. The inversion involves the use of nonlinear, multiparameter estimation techniques. The input surface-deformation data can be in multiple formats, with absolute or differential positioning. The input data can be derived from multiple sources, including interferometric synthetic-aperture radar, the Global Positioning System, and strain meters. Parameters can be constrained or free. Estimates can be calculated for single or multiple faults. Estimates of parameters are accompanied by reports of their covariances and uncertainties. Simplex has been tested extensively against forward models and against other means of inverting geodetic data and seismic observations. This work
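
    As a rough illustration of the kind of nonlinear inversion described above (a sketch only, not Simplex itself), the snippet below fits two fault parameters to synthetic surface displacements with a Nelder-Mead (downhill simplex) search; the screw-dislocation forward model, parameter names, and numbers are assumptions made for the example.

```python
"""Sketch only: invert a simple elastic forward model for fault parameters
from surface deformation using a Nelder-Mead (simplex) search. The 2-D
screw-dislocation formula stands in for half-space point-source models;
all names and numbers here are illustrative."""
import numpy as np
from scipy.optimize import minimize

def forward(params, x):
    # Fault-parallel surface velocity for an infinite strike-slip fault
    # creeping at rate `slip` below locking depth `depth` (elastic half-space).
    slip, depth = params
    return (slip / np.pi) * np.arctan2(x, depth)

# Synthetic "observations": true slip = 30 mm/yr, locking depth = 12 km, plus noise.
x_obs = np.linspace(-80e3, 80e3, 41)                 # distance from fault trace (m)
rng = np.random.default_rng(0)
u_obs = forward((0.030, 12e3), x_obs) + rng.normal(0, 0.001, x_obs.size)

def misfit(params):
    # Sum of squared residuals between model prediction and observations.
    return np.sum((forward(params, x_obs) - u_obs) ** 2)

result = minimize(misfit, x0=(0.010, 5e3), method="Nelder-Mead")
slip_est, depth_est = result.x
print(f"slip ~ {slip_est * 1e3:.1f} mm/yr, locking depth ~ {depth_est / 1e3:.1f} km")
```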

  12. Earth Science Project Office (ESPO) Field Experiences During ORACLES, ATom, KORUS and POSIDON

    NASA Technical Reports Server (NTRS)

    Salazar, Vidal; Zavaleta, Jhony

    2017-01-01

    Very often, scientific field campaigns entail years of planning and incur substantial cost, especially if they involve the operation of large research aircraft in remote locations. Deploying and operating these aircraft even for short periods of time poses challenges that, if not addressed properly, can have significant negative consequences and potentially jeopardize the success of a scientific campaign. Challenges vary from country to country and range from safety, health, and security risks to differences in cultural and social norms. Our presentation will focus on sharing experiences from the field campaigns ESPO conducted in 2016: ORACLES, ATom, KORUS and POSIDON. We will focus on the best practices, lessons learned, international relations, and coordination aspects of the country-specific experiences. This presentation will be part of the 2nd International Conference on Airborne Research for the Environment (ICARE 2017), which will focus on "Developing the infrastructure to meet future scientific challenges". This unique conference and gathering of facility support experts will not only allow for dissemination and sharing of knowledge but also promote collaboration and networking among groups that support scientific research using airborne platforms around the globe.

  13. EMSCOPE - Electromagnetic Component of EarthScope Backbone and Transportable Array Experiments 2006-2008

    NASA Astrophysics Data System (ADS)

    Egbert, G.; Evans, R.; Ingate, S.; Livelybrooks, D.; Mickus, K.; Park, S.; Schultz, A.; Unsworth, M.; Wannamaker, P.

    2007-12-01

    USArray (http://www.iris.edu/USArray), in conjunction with EMSOC (Electromagnetic Studies of the Continents) (http://emsoc.ucr.edu/emsoc), is installing magnetotelluric (MT) stations as part of EarthScope. The MT component of EarthScope consists of permanent (Backbone) and transportable long-period stations to record naturally occurring, time-varying electric and magnetic fields to produce a regional lithospheric/asthenospheric electrical conductivity map of the United States. The recent arrival of 28 long-period MT instruments allows for the final installation of the Backbone stations throughout the US and yearly transportable array studies. The Backbone MT survey consists of 7 stations spaced throughout the continental US, with preliminary installation at Soap Creek, Oregon; Parkfield, California; Braden, Missouri; and Socorro, New Mexico. Siting and permitting are underway or completed at stations in eastern Montana, northern Wisconsin and Virginia. These stations will be recording for at least five years to determine electrical conductivities at depths that extend into the mantle transition zone. The first transportable array experiment was performed in the summer and fall of 2006 in central and eastern Oregon (Oregon Pilot Project) using equipment loaned from EMSOC. Thirty-one long-period MT stations were recorded with 14- to 21-day occupations. Preliminary 3D inverse models indicate several lithospheric electrical conductivity anomalies, including a linear zone marked by a low-high conductivity transition along the Klamath-Blue Mountain Lineament associated with a linear trend of gravity minima. High electrical conductivity values occur in the upper crust under the accreted terranes in the Blue Mountains region. The second transportable array experiment was performed in the summer and fall of 2007 and completes coverage of Oregon, Washington, and western Idaho, targeting the Cascadia subduction zone, Precambrian boundaries, and sub-basalt lithologies. The 2008

  14. Fault orientations in extensional and conjugate strike-slip environments and their implications

    Thatcher, W.; Hill, D.P.

    1991-01-01

    Seismically active conjugate strike-slip faults in California and Japan typically have mutually orthogonal right- and left-lateral fault planes. Normal-fault dips at earthquake nucleation depths are concentrated between 40° and 50°. The observed orientations and their strong clustering are surprising, because conventional faulting theory suggests fault initiation with conjugate 60° and 120° intersecting planes and 60° normal-fault dip or fault reactivation with a broad range of permitted orientations. The observations place new constraints on the mechanics of fault initiation, rotation, and evolutionary development. We speculate that the data could be explained by fault rotation into the observed orientations and deactivation for greater rotation or by formation of localized shear zones beneath the brittle-ductile transition in Earth's crust. Initiation as weak frictional faults seems unlikely. -Authors
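
    For readers unfamiliar with the conventional predictions being contrasted here, the short calculation below reproduces them from Coulomb theory under assumed Andersonian conditions; the Byerlee-type friction coefficient of 0.6 and the cohesionless lock-up criterion are assumptions made for illustration, not values taken from the paper.

```python
"""Conventional Andersonian/Coulomb predictions for comparison with the
observed 40-50 degree normal-fault dips; mu = 0.6 is an assumed value."""
import numpy as np

mu = 0.6                                  # assumed static friction coefficient
phi = np.degrees(np.arctan(mu))           # angle of internal friction (~31 deg)

# New faults form at ~(45 - phi/2) degrees to the maximum compression sigma1.
theta_init = 45.0 - phi / 2.0             # angle between new fault and sigma1
dip_normal_init = 90.0 - theta_init       # sigma1 vertical for normal faulting -> ~60 deg dip

# Conjugate pairs would intersect at 2*theta_init (~60 deg), not 90 deg.
conjugate_angle = 2.0 * theta_init

# A cohesionless fault remains reactivatable until it rotates to ~2*theta_opt
# from sigma1 (frictional lock-up), with theta_opt = 0.5*arctan(1/mu).
theta_opt = 0.5 * np.degrees(np.arctan(1.0 / mu))
dip_lockup = 90.0 - 2.0 * theta_opt       # normal-fault lock-up dip, ~30 deg

print(f"predicted initiation dip      : {dip_normal_init:.0f} deg")
print(f"conjugate intersection angle  : {conjugate_angle:.0f} deg")
print(f"reactivation allowed to dip  >= {dip_lockup:.0f} deg")
```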

  15. Preliminary Experiments for the Assessment of V/W-band Links for Space-Earth Communications

    NASA Technical Reports Server (NTRS)

    Nessel, James A.; Acosta, Roberto J.; Miranda, Felix A.

    2013-01-01

    Since September 2012, NASA Glenn Research Center has deployed a microwave profiling radiometer at White Sands, NM, to estimate atmospheric propagation effects on communications links in the V and W bands (71-86 GHz). Estimates of attenuation statistics in the millimeter wave due to gaseous and cloud components of the atmosphere show good agreement with current ITU-R models, but fail to predict link performance in the presence of moderate to heavy rain rates, due to the inherent limitations of passive radiometry. Herein, we discuss the preliminary results of these measurements and describe a design for a terrestrial link experiment to validate/refine existing rain attenuation models in the V/W-bands.

  16. An Experiment to Study Sporadic Sodium Layers in the Earth's Mesosphere and Lower Thermosphere

    NASA Technical Reports Server (NTRS)

    Swenson, Charles M.

    2002-01-01

    The Utah State University / Space Dynamics Lab was funded under a NASA grant. This investigation has been part of Rockwell University's Sudden Atom Layer (SAL) investigation. USU/SDL provided an electron density measurement instrument, the plasma frequency probe, which was launched on vehicle 21.117 from Puerto Rico in February of 1998. The instrument successfully measured electron density as designed, and the measurement techniques included in this version of the plasma frequency probe provided valuable insight into the electron density structures associated with sudden sodium layers in a collisional plasma. Electron density data were furnished to Rockwell University, but no science meetings were held by Rockwell. Data from the instrument were presented to the scientific community at the URSI General Session in 1999. A paper is in preparation for publication in Geophysical Research Letters. The following document provides a summary of the experiment and data obtained as a final report on this grant.

  17. Ultraviolet spectroscopy of meteoric debris: In situ calibration experiments from Earth orbit

    NASA Technical Reports Server (NTRS)

    Nuth, J. A., III; Wdowiak, T. J.; Kubinec, W. R.

    1986-01-01

    It is proposed to carry out slitless spectroscopy at ultraviolet wavelengths from orbit of meteoric debris associated with comets. The Eta Aquarid, Orionid/Halley, and Perseid/1862 Swift-Tuttle showers would be principal targets. A low-light-level, ultraviolet video technique will be used during the night side of the orbit in a wide-field, earthward-viewing mode. Data will be stored in compact video cassette recorders. The experiment may be configured as a GAS package or in the HITCHHIKER mode. The latter would allow flexible pointing capability beyond that offered by shuttle orientation of the GAS package, and a doubling of the data record. The 1100 to 3200 A spectral region should show emissions of atomic, ionic, and molecular species of interest in cometary and solar system studies.

  18. Ultraviolet spectroscopy of meteoric debris: In situ calibration experiments from earth orbit

    NASA Technical Reports Server (NTRS)

    Nuth, Joseph A.; Wdowiak, Thomas J.; Kubinec, William R.

    1987-01-01

    It is proposed to carry out slitless spectroscopy at ultraviolet wavelengths from orbit of meteoric debris associated with comets. The Eta Aquarid, Orionid/Halley, and Perseid/1862 Swift-Tuttle showers would be principal targets. A low-light-level, ultraviolet video technique will be used during the night side of the orbit in a wide-field, earthward-viewing mode. Data will be stored in compact video cassette recorders. The experiment may be configured as a GAS package or in the HITCHHIKER mode. The latter would allow flexible pointing capability beyond that offered by shuttle orientation of the GAS package, and a doubling of the data record. The 1100 to 3200 A spectral region should show emissions of atomic, ionic, and molecular species of interest in cometary and solar system studies.

  19. Faults Discovery By Using Mined Data

    NASA Technical Reports Server (NTRS)

    Lee, Charles

    2005-01-01

    Fault discovery in complex systems consists of model-based reasoning, fault tree analysis, rule-based inference methods, and other approaches. Model-based reasoning builds models of the systems either by mathematical formulation or from experimental models. Fault tree analysis shows the possible causes of a system malfunction by enumerating the suspect components and their respective failure modes that may have induced the problem. Rule-based inference builds the model from expert knowledge. These models and methods have one thing in common: they presume some prior conditions. Complex systems often use fault trees to analyze faults. Fault diagnosis, when an error occurs, is performed by engineers and analysts through extensive examination of all data gathered during the mission. The International Space Station (ISS) control center operates on the data feedback from the system, and decisions are made based on threshold values using fault trees. Since those decision-making tasks are safety critical and must be done promptly, the engineers who manually analyze the data face a time challenge. To automate this process, this paper presents an approach that uses decision trees to discover faults from data in real time and captures the contents of fault trees as the initial state of the trees.
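
    A minimal sketch of the data-driven side of this idea (not the paper's ISS system) is shown below: a decision tree learned from labelled telemetry recovers threshold-style rules comparable to those an analyst would encode in a fault tree. The feature names, thresholds, and data are invented for the example.

```python
"""Illustrative sketch: learn a decision tree from labelled telemetry samples
so the induced thresholds stand in for hand-built fault-tree thresholds.
Feature names and data are invented, not ISS telemetry."""
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)

# Invented telemetry: two sensor channels; a "fault" occurs when pressure
# drifts high while flow drops low, mimicking a fault-tree AND condition.
pressure = rng.normal(100.0, 5.0, 500)
flow = rng.normal(20.0, 2.0, 500)
fault = ((pressure > 105.0) & (flow < 19.0)).astype(int)

X = np.column_stack([pressure, flow])
tree = DecisionTreeClassifier(max_depth=3).fit(X, fault)

# The learned splits approximate the threshold logic an analyst would encode
# manually, but they are recovered from the data.
print(export_text(tree, feature_names=["pressure", "flow"]))
```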

  20. Non-tectonic exposure Rates along Bedrock Fault Scarps in an active Mountain Belt of the central Apennines

    NASA Astrophysics Data System (ADS)

    Kastelic, Vanja; Burrato, Pierfrancesco; Carafa, Michele M. C.; Basili, Roberto

    2017-04-01

    The central Apennines (Italy) are a mountain chain affected by post-collisional active extension along NW-SE striking normal faults and well-documented regional-scale uplift. Moderate to strong earthquakes along the seismogenically active extensional faults are frequent in this area, so good knowledge of the characteristics of the hosting faults is necessary for realistic seismic hazard models. The studied bedrock fault surfaces are generally located at various heights on mountain fronts above the local base level of glacio-fluvial valleys and intermountain fluvio-lacustrine basins and are laterally confined to the extent of the related mountain fronts. In order to investigate the exposure of the bedrock fault scarps from under their slope-deposit cover, a process that has often been exclusively attributed to co-seismic earthquake slip and used as a proxy for tectonic slip rates and earthquake recurrence estimates, we set up a measurement experiment along several such structures. In this experiment we measure the relative position of chosen markers on the bedrock surface and on the material found directly at the contact with its hanging wall. We present the results of monitoring the contact between the exposed fault surfaces and slope deposits at 23 measurement points on 12 different faults over a 3.4-year-long observation period. We detected either downward or upward movements of the slope deposit with respect to the fault surface between consecutive measurements. Over the entire observation period all points except one registered a net downward movement, in the 2.9-25.6 mm/yr range, resulting in progressive exposure of the fault surface. During the monitoring period no major earthquakes occurred in the region, demonstrating that the measured exposure process is disconnected from seismic activity. We do, however, observe a positive correlation between higher exposure rates and higher average temperatures. Our results indicate that the fault surface

  1. Earth Experiments in a Virtual World: Introducing Climate & Coding to High School Girls

    NASA Astrophysics Data System (ADS)

    Singh, H. A.; Twedt, J. R.

    2017-12-01

    In our increasingly technologically-driven and information-saturated world, literacy in STEM fields can be crucial for career advancement. Nevertheless, both systemic and interpersonal barriers can prevent individuals, particularly members of under-represented groups, from engaging in these fields. Here, we present a high school-level workshop developed to foster basic understanding of climate science while exposing students to the Python programming language. For the past four years, the workshop has been a part of the annual Expanding Your Horizons conference for high school girls, whose mission is to spark interest in STEM fields. Moving through current events in the realm of global climate policy, the fundamentals of climate, and the mathematical representation of planetary energy balance, the workshop culminates in an under-the-hood exploration of a basic climate model coded in the Python programming language. Students interact directly with the underlying code to run `virtual world' experiments that explore the impact of solar insolation, planetary albedo, the greenhouse effect, and meridional energy transport on global temperatures. Engagement with Python is through the Jupyter Notebook interface, which permits direct interaction with the code but is more user-friendly for beginners than a command-line approach. We conclude with further ideas for providing online access to workshop materials for educators, and additional venues for presenting such workshops to under-represented groups in STEM.
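
    A minimal sketch of the kind of 'virtual world' experiment the workshop runs is shown below: a zero-dimensional energy-balance calculation in Python in which students can vary insolation and albedo and mimic the greenhouse effect with a bulk emissivity. The single-layer emissivity parameterization and the numbers are simplifications chosen for illustration, not the workshop's actual notebook.

```python
"""Minimal zero-dimensional energy-balance sketch: absorbed sunlight balances
emitted longwave radiation; emissivity < 1 mimics the greenhouse effect."""
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temperature(solar_constant=1361.0, albedo=0.30, emissivity=0.61):
    """Global-mean surface temperature (K) at radiative equilibrium."""
    absorbed = (1.0 - albedo) * solar_constant / 4.0   # averaged over the sphere
    return (absorbed / (emissivity * SIGMA)) ** 0.25

print(f"no greenhouse  : {equilibrium_temperature(emissivity=1.0):.1f} K")   # ~255 K
print(f"with greenhouse: {equilibrium_temperature():.1f} K")                 # ~288 K
print(f"higher albedo  : {equilibrium_temperature(albedo=0.35):.1f} K")
```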

  2. Theory of plasma contactors in ground-based experiments and low Earth orbit

    NASA Technical Reports Server (NTRS)

    Gerver, M. J.; Hastings, Daniel E.; Oberhardt, M. R.

    1990-01-01

    Previous theoretical work on plasma contactors as current collectors has fallen into two categories: collisionless double layer theory (describing space charge limited contactor clouds) and collisional quasineutral theory. Ground based experiments at low current are well explained by double layer theory, but this theory does not scale well to power generation by electrodynamic tethers in space, since very high anode potentials are needed to draw a substantial ambient electron current across the magnetic field in the absence of collisions (or effective collisions due to turbulence). Isotropic quasineutral models of contactor clouds, extending over a region where the effective collision frequency υ_e exceeds the electron cyclotron frequency ω_ce, have low anode potentials, but would collect very little ambient electron current, much less than the emitted ion current. A new model is presented, for an anisotropic contactor cloud oriented along the magnetic field, with υ_e less than ω_ce. The electron motion along the magnetic field is nearly collisionless, forming double layers in that direction, while across the magnetic field the electrons diffuse collisionally and the potential profile is determined by quasineutrality. Using a simplified expression for υ_e due to ion acoustic turbulence, an analytic solution has been found for this model, which should be applicable to current collection in space. The anode potential is low and the collected ambient electron current can be several times the emitted ion current.

  3. Contamination Examples and Lessons from Low Earth Orbit Experiments and Operational Hardware

    NASA Technical Reports Server (NTRS)

    Pippin, Gary; Finckenor, Miria M.

    2009-01-01

    Flight experiments flown on the Space Shuttle, the International Space Station, Mir, Skylab, and free flyers such as the Long Duration Exposure Facility, the European Retrievable Carrier, and the EFFU, provide multiple opportunities for the investigation of molecular contamination effects. Retrieved hardware from the Solar Maximum Mission satellite, Mir, and the Hubble Space Telescope has also provided a means of gaining insight into contamination processes. Images from the above-mentioned hardware show contamination effects due to materials processing, hardware storage, and pre-flight cleaning, as well as on-orbit events such as outgassing, mechanical failure of hardware in close proximity, impacts from man-made debris, and changes due to natural environment factors. Contamination effects include significant changes to thermal and electrical properties of thermal control surfaces, optics, and power systems. Data from several flights have been used to develop a rudimentary estimate of asymptotic values for absorptance changes due to long-term solar exposure (4000-6000 Equivalent Sun Hours) of silicone-based molecular contamination deposits of varying thickness. Recommendations and suggestions for processing changes and constraints based on the on-orbit observed results will be presented.

  4. Salt balance: From space experiments to revolutionizing new clinical concepts on earth - A historical review

    NASA Astrophysics Data System (ADS)

    Gerzer, Rupert

    2014-11-01

    For a long time, sodium balance appeared to be a "done deal" and was thought to be well understood. However, experiments in preparation for space missions showed that the concept of osmotic sodium storage and close correlations of sodium with water balance are only part of the regulatory mechanisms of body salt. By now it has turned out that the human skin is an important storage place and regulator for sodium, that sodium storage involves macrophages which in turn salt-dependently co-regulate blood pressure, that body sodium also strongly influences bone and protein metabolism, and that immune functions are also strongly influenced by sodium. In addition, the aging process appears to lead to increased body sodium storage, which in turn might influence the aging process of the human body. The current review article summarizes the developments that have led to these revolutionizing new findings and concepts, as well as the consequences deriving from them. It is therefore not intended in this article to give a complete literature overview of the whole field, but to focus on the key literature and considerations that led to the respective developments.

  5. Strontium-free rare earth perovskite ferrites with fast oxygen exchange kinetics: Experiment and theory

    NASA Astrophysics Data System (ADS)

    Berger, Christian; Bucher, Edith; Windischbacher, Andreas; Boese, A. Daniel; Sitte, Werner

    2018-03-01

    The Sr-free mixed ionic electronic conducting perovskites La0.8Ca0.2FeO3-δ (LCF82) and Pr0.8Ca0.2FeO3-δ (PCF82) were synthesized via a glycine-nitrate process. Crystal structure, phase purity, and lattice constants were determined by XRD and Rietveld analysis. The oxygen exchange kinetics and the electronic conductivity were obtained from in-situ dc-conductivity relaxation experiments at 600-800 °C and 1×10⁻³ ≤ pO2/bar ≤ 0.1. Both LCF82 and PCF82 show exceptionally fast chemical surface exchange coefficients and chemical diffusion coefficients of oxygen. The oxygen nonstoichiometry of LCF82 and PCF82 was determined by precision thermogravimetry. A point defect model was used to calculate the thermodynamic factors of oxygen and to estimate self-diffusion coefficients and ionic conductivities. Density Functional Theory (DFT) calculations on the crystal structure, oxygen vacancy formation as well as oxygen migration energies are in excellent agreement with the experimental values. Due to their favourable properties both LCF82 and PCF82 are of interest for applications in solid oxide fuel cell cathodes, solid oxide electrolyser cell anodes, oxygen separation membranes, catalysts, or electrochemical sensors.

  6. Long term fault system reorganization of convergent and strike-slip systems

    NASA Astrophysics Data System (ADS)

    Cooke, M. L.; McBeck, J.; Hatem, A. E.; Toeneboehn, K.; Beyer, J. L.

    2017-12-01

    Laboratory and numerical experiments representing deformation over many earthquake cycles demonstrate that fault evolution includes episodes of fault reorganization that optimize work on the fault system. Consequently, the mechanical and kinematic efficiencies of fault systems do not increase monotonically through their evolution. New fault configurations can optimize the external work required to accommodate deformation, suggesting that changes in system efficiency can drive fault reorganization. Laboratory evidence and numerical results show that fault reorganization within accretion, strike-slip and oblique convergent systems is associated with increasing efficiency due to increased fault slip (frictional work and seismic energy) and commensurate decreased off-fault deformation (internal work and work against gravity). Between episodes of fault reorganization, fault systems may become less efficient as they produce increasing off-fault deformation. For example, laboratory and numerical experiments show that the interference and interaction between different fault segments may increase local internal work or that increasing convergence can increase work against gravity produced by a fault system. This accumulation of work triggers fault reorganization as stored work provides the energy required to grow new faults that reorganize the system to a more efficient configuration. The results of laboratory and numerical experiments reveal that we should expect crustal fault systems to reorganize following periods of increasing inefficiency, even in the absence of changes to the tectonic regime. In other words, fault reorganization doesn't require a change in tectonic loading. The time frame of fault reorganization depends on fault system configuration, strain rate and processes that relax stresses within the crust. For example, stress relaxation may keep pace with stress accumulation, which would limit the increase in the internal work and gravitational work so that

  7. Imaging the Alpine Fault: preliminary results from a detailed 3D-VSP experiment at the DFDP-2 drill site in Whataroa, New Zealand

    NASA Astrophysics Data System (ADS)

    Lay, Vera; Bodenburg, Sascha; Buske, Stefan; Townend, John; Kellett, Richard; Savage, Martha; Schmitt, Douglas; Constantinou, Alexis; Eccles, Jennifer; Lawton, Donald; Hall, Kevin; Bertram, Malcolm; Gorman, Andrew

    2017-04-01

    The plate-bounding Alpine Fault in New Zealand is an 850 km long transpressive continental fault zone that is late in its earthquake cycle. The Deep Fault Drilling Project (DFDP) aims to deliver insight into the geological structure of this fault zone and its evolution by drilling and sampling the Alpine Fault at depth. Previously analysed 2D reflection seismic data image the main Alpine Fault reflector at a depth of 1.5-2.2 km with a dip of approximately 48° to the southeast below the DFDP-2 borehole. Additionally, there are indications of a more complex 3D fault structure with several fault branches which have not yet been clearly imaged in detail. For that reason we acquired a 3D-VSP seismic data set at the DFDP-2 drill site in January 2016. A zero-offset VSP and a walk-away VSP survey were conducted using a Vibroseis source. Within the borehole, a permanently installed "Distributed Acoustic Fibre Optic Cable" (down to 893 m) and a 3C Sercel slimwave tool (down to 400 m) were used to record the seismic wavefield. In addition, an array of 160 three-component receivers with a spacing of 10 m perpendicular and 20 m parallel to the main strike of the Alpine Fault was set up and moved successively along the valley to record reflections from the main Alpine Fault zone over a broad depth range and to derive a detailed 3D tomographic velocity model in the hanging wall. We will show a detailed 3D velocity model derived from first-arrival traveltime tomography. Subsets of the whole data set were analysed separately to estimate the corresponding ray coverage and the reliability of the observed features in the obtained velocity model. By testing various inversion parameters and starting models, we derived a detailed near-surface velocity model that reveals the significance of the old glacial valley structures. Hence, this new 3D model improves the velocity model derived previously from a 2D seismic profile line in that area. Furthermore, processing of the dense 3C data

  8. Negative Selection Algorithm for Aircraft Fault Detection

    NASA Technical Reports Server (NTRS)

    Dasgupta, D.; KrishnaKumar, K.; Wong, D.; Berry, M.

    2004-01-01

    We investigated a real-valued Negative Selection Algorithm (NSA) for fault detection in man-in-the-loop aircraft operation. The detection algorithm uses body-axes angular rate sensory data exhibiting the normal flight behavior patterns, to generate probabilistically a set of fault detectors that can detect any abnormalities (including faults and damages) in the behavior pattern of the aircraft flight. We performed experiments with datasets (collected under normal and various simulated failure conditions) using the NASA Ames man-in-the-loop high-fidelity C-17 flight simulator. The paper provides results of experiments with different datasets representing various failure conditions.
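
    A minimal real-valued negative-selection sketch (not the NASA/C-17 implementation) is shown below: random candidate detectors are kept only if they lie sufficiently far from normal 'self' samples, and any new sample matched by a detector is flagged as abnormal. The feature dimensionality, radii, and data are invented for the example.

```python
"""Minimal real-valued negative-selection sketch. Detectors are random points
retained only if they lie outside a 'self' radius of every normal sample; a
new sample falling inside any detector's radius is flagged as anomalous."""
import numpy as np

rng = np.random.default_rng(42)

# 'Self' set: normalized features from normal operation (invented data).
self_data = rng.normal(0.5, 0.05, size=(300, 3)).clip(0, 1)

SELF_RADIUS = 0.15       # candidates this close to any self sample are discarded
N_DETECTORS = 500

detectors = []
while len(detectors) < N_DETECTORS:
    cand = rng.uniform(0, 1, size=3)
    if np.min(np.linalg.norm(self_data - cand, axis=1)) > SELF_RADIUS:
        detectors.append(cand)
detectors = np.array(detectors)

def is_anomalous(sample, detect_radius=0.12):
    """Flag the sample if it falls inside any detector's hypersphere."""
    return bool(np.any(np.linalg.norm(detectors - sample, axis=1) < detect_radius))

print(is_anomalous(np.array([0.5, 0.5, 0.5])))   # normal-looking -> not flagged
print(is_anomalous(np.array([0.9, 0.1, 0.9])))   # abnormal pattern -> flagged
```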

  9. Fault zone hydrogeology

    NASA Astrophysics Data System (ADS)

    Bense, V. F.; Gleeson, T.; Loveless, S. E.; Bour, O.; Scibek, J.

    2013-12-01

    Deformation along faults in the shallow crust (< 1 km) introduces permeability heterogeneity and anisotropy, which has an important impact on processes such as regional groundwater flow, hydrocarbon migration, and hydrothermal fluid circulation. Fault zones have the capacity to be hydraulic conduits connecting shallow and deep geological environments, but simultaneously the fault cores of many faults often form effective barriers to flow. The direct evaluation of the impact of faults to fluid flow patterns remains a challenge and requires a multidisciplinary research effort of structural geologists and hydrogeologists. However, we find that these disciplines often use different methods with little interaction between them. In this review, we document the current multi-disciplinary understanding of fault zone hydrogeology. We discuss surface- and subsurface observations from diverse rock types from unlithified and lithified clastic sediments through to carbonate, crystalline, and volcanic rocks. For each rock type, we evaluate geological deformation mechanisms, hydrogeologic observations and conceptual models of fault zone hydrogeology. Outcrop observations indicate that fault zones commonly have a permeability structure suggesting they should act as complex conduit-barrier systems in which along-fault flow is encouraged and across-fault flow is impeded. Hydrogeological observations of fault zones reported in the literature show a broad qualitative agreement with outcrop-based conceptual models of fault zone hydrogeology. Nevertheless, the specific impact of a particular fault permeability structure on fault zone hydrogeology can only be assessed when the hydrogeological context of the fault zone is considered and not from outcrop observations alone. To gain a more integrated, comprehensive understanding of fault zone hydrogeology, we foresee numerous synergistic opportunities and challenges for the discipline of structural geology and hydrogeology to co-evolve and

  10. Perspective View, San Andreas Fault

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The prominent linear feature straight down the center of this perspective view is California's famous San Andreas Fault. The image, created with data from NASA's Shuttle Radar Topography Mission (SRTM), will be used by geologists studying fault dynamics and landforms resulting from active tectonics. This segment of the fault lies west of the city of Palmdale, Calif., about 100 kilometers (about 60 miles) northwest of Los Angeles. The fault is the active tectonic boundary between the North American plate on the right, and the Pacific plate on the left. Relative to each other, the Pacific plate is moving away from the viewer and the North American plate is moving toward the viewer along what geologists call a right lateral strike-slip fault. Two large mountain ranges are visible, the San Gabriel Mountains on the left and the Tehachapi Mountains in the upper right. Another fault, the Garlock Fault lies at the base of the Tehachapis; the San Andreas and the Garlock Faults meet in the center distance near the town of Gorman. In the distance, over the Tehachapi Mountains is California's Central Valley. Along the foothills in the right hand part of the image is the Antelope Valley, including the Antelope Valley California Poppy Reserve. The data used to create this image were acquired by SRTM aboard the Space Shuttle Endeavour, launched on February 11, 2000.

    This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

    SRTM uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space

  11. Fault rocks as indicators of slip behavior

    NASA Astrophysics Data System (ADS)

    Hayman, N. W.

    2017-12-01

    Forty years ago, Sibson ("Fault rocks and fault mechanisms", J. Geol. Soc. Lon., 1977) explored plastic flow mechanisms in the upper and lower crust which he attributed to deformation rates faster than tectonic ones, but slower than earthquakes. We can now combine observations of natural fault rocks with insights from experiments to interpret a broad range of length and time scales of fault slip in more detail. Fault rocks are generally weak, with predominantly frictionally stable materials in some fault segments, and more unstable materials in others. Both upper and lower crustal faults contain veins and mineralogical signatures of transiently elevated fluid pressure, and some contain relicts of pseudotachylite and bear other thermal-mechanical signatures of seismic slip. Varying strain rates and episodic-tremor-and-slip (ETS) have been attributed to fault zones with varying widths filled with irregular foliations, veins, and dismembered blocks of varying sizes. Particle-size distributions and orientations in gouge appear to differ between locked and creeping faults. These and other geologic observations can be framed in terms of constitutive behaviors derived from experiments and modeling. The experimental correlation of velocity-dependence with microstructure and the behavior of natural fault-rocks under shear suggest that friction laws may be applied liberally to fault-zone interpretation. Force-chains imaged in stress-sensitive granular aggregates or in numerical simulations show that stick-slip behavior with stress drops far below that of earthquakes can occur during quasi-periodic creep, yet localize shear in larger, aperiodic events; perhaps the systematic relationship between sub-mm shear bands and surrounding gouge and/or cataclasites causes such slip partitioning in nature. Fracture, frictional sliding, and viscous creep can experimentally produce a range of slip behavior, including ETS-like events. Perhaps a similar mechanism occurs to cause ETS at the

  12. Earth meandering

    NASA Astrophysics Data System (ADS)

    Asadiyan, H.; Zamani, A.

    2009-04-01

    In this paper we set aside the current global tectonic model and look at the tectonic evolution of the Earth from a new point of view. Our new dynamic model is based on the study of river meandering (RM), from which we infer a new concept, Earth meandering (EM). In a universal gravitational field, if we consider a clockwise spiral-galaxy-like model rotating above the Ninety East Ridge (the geotectonic axis, GA), this system, by applying a torsion field (akin to the geomagnetic field) in the lateral direction from the Rocky Mountains (the west geotectonic pole, WGP) to the Tibetan Plateau, TP (the east geotectonic pole, EGP), appears to pull mass from the WGP and push it toward the EGP through its rolling dynamics. Consistent with this idea, the topographic map shows North America and Greenland as a tongue pulled from the Pacific mouth toward the TP. In effect this system rolls, or meanders, the Earth over itself fractally from small scales to large scales, and river meandering and Earth meandering are two faces of the same coin. Rivers transport water and sediment from high elevation to lower elevation; likewise, in EM, mass is transported from high altitude (the Rocky Mountains) to lower altitude (the Himalaya) along an 'S'-shaped geodetic line, an optimum path connecting points of high and low altitude as a kind of Euler Elastica (EE). These curves are responsible for mass spreading (source) and mass concentration (sink). In this regard the tilt of the Earth's spin axis plays an important role; the 'S' curves are part of a sigmoidal shape formed by the intersection of the Earth's rolling with the globe and are the actual expression of transform faults and river meandering. The longitudinal profile of mature rivers, as part of an 'S' curve, is also a kind of EE. The 'S' that bounds the whole Earth is named S-1 (S of order 1), and the corresponding cube representing Earth fracturing at the global scale is named C-1 (cube of order 1, or side-vergence cube, SVC). C-1 is the biggest cycle of the spiral polygon, so it is not completely closed and has a separation of about the diameter of C-7. Inside the SVC we introduce a cone

  13. Experiences in Bridging the Gap Between Science and Decision Making at NASAs GSFC Earth Sciences Data and Information Services Center (GES DISC)

    NASA Astrophysics Data System (ADS)

    Kempler, S.; Teng, W.; Friedl, L.; Lynnes, C.

    2008-12-01

    In recognizing the significance of NASA remote sensing Earth science data in monitoring and better understanding our planet's natural environment, NASA has implemented the 'Decision Support Through Earth Science Research Results' program to solicit "proposals that develop and demonstrate innovative and practicable applications of NASA Earth science observations and research ... that focus on improving decision making activities," as stated in the NASA ROSES-2008, A.18 solicitation. This very successful program has yielded several monitoring, surveillance, and decision support systems through collaborations with benefiting organizations in the areas of agriculture, air quality, disaster management, ecosystems, public health, water resources, and aviation weather. The Goddard Space Flight Center (GSFC) Earth Sciences Data and Information Services Center (GES DISC) has participated in this program on two projects (one complete, one ongoing), and has had opportune ad hoc collaborations, gaining much experience in the formulation, management, development, and implementation of decision support systems utilizing NASA Earth science data. Coupling this experience with the GES DISC's thorough understanding and extensive experience regarding Earth science missions and the resulting data and information, including data structures, data usability and interpretation, data interoperability, and information management systems, the GES DISC is in a unique position to readily identify the challenges that come with bringing science data to decision makers. These challenges consist of those that can be met within typical science data usage frameworks, as well as those that arise when utilizing science data for previously unplanned applications, such as decision support systems. The purpose of this presentation is to share GES DISC decision support system project experiences in regards to system sustainability, required data quality (versus timeliness), data provider understanding how

  14. Estimating Fault Friction From Seismic Signals in the Laboratory

    NASA Astrophysics Data System (ADS)

    Rouet-Leduc, Bertrand; Hulbert, Claudia; Bolton, David C.; Ren, Christopher X.; Riviere, Jacques; Marone, Chris; Guyer, Robert A.; Johnson, Paul A.

    2018-02-01

    Nearly all aspects of earthquake rupture are controlled by the friction along the fault that progressively increases with tectonic forcing but in general cannot be directly measured. We show that fault friction can be determined at any time, from the continuous seismic signal. In a classic laboratory experiment of repeating earthquakes, we find that the seismic signal follows a specific pattern with respect to fault friction, allowing us to determine the fault's position within its failure cycle. Using machine learning, we show that instantaneous statistical characteristics of the seismic signal are a fingerprint of the fault zone shear stress and frictional state. Further analysis of this fingerprint leads to a simple equation of state quantitatively relating the seismic signal power and the friction on the fault. These results show that fault zone frictional characteristics and the state of stress in the surroundings of the fault can be inferred from seismic waves, at least in the laboratory.
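
    The sketch below illustrates the flavour of this approach on purely synthetic data (it is not the laboratory dataset or the authors' model): simple statistics of windowed signal segments are regressed against an invented 'friction' value with a random forest, mimicking the idea that instantaneous signal statistics fingerprint the frictional state.

```python
"""Illustrative sketch on synthetic data: regress 'friction' against simple
statistics of windowed signal segments with a random forest."""
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in: signal variance grows as the fault approaches failure,
# and 'friction' is a made-up monotonic function of that state plus noise.
n_windows = 1000
state = rng.uniform(0, 1, n_windows)                        # position in failure cycle
signals = [rng.normal(0, 0.2 + 0.8 * s, 2048) for s in state]
friction = 0.4 + 0.2 * state + rng.normal(0, 0.01, n_windows)

def features(sig):
    # Instantaneous statistical characteristics of one signal window.
    return [sig.var(), np.abs(sig).mean(), sig.max() - sig.min(),
            ((sig[:-1] * sig[1:]) < 0).mean()]              # zero-crossing rate

X = np.array([features(s) for s in signals])
X_tr, X_te, y_tr, y_te = train_test_split(X, friction, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out windows: {model.score(X_te, y_te):.2f}")
```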

  15. The EOS Aqua/Aura Experience: Lessons Learned on Design, Integration, and Test of Earth-Observing Satellites

    NASA Technical Reports Server (NTRS)

    Nosek, Thomas P.

    2004-01-01

    NASA and NOAA earth observing satellite programs are flying a number of sophisticated scientific instruments which collect data on many phenomena and parameters of the earth's environment. The NASA Earth Observing System (EOS) Program originated the EOS Common Bus approach, which featured two spacecraft (Aqua and Aura) of virtually identical design but with completely different instruments. Significant savings were obtained by the Common Bus approach, and the resulting lessons learned are presented as information for future programs requiring multiple buses for new, diversified instruments with increased capabilities in the volume, accuracy, and type of earth environmental data acquired.

  16. Maneuver Classification for Aircraft Fault Detection

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.; Tumer, Irem Y.; Tumer, Kagan; Huff, Edward M.

    2003-01-01

    Automated fault detection is an increasingly important problem in aircraft maintenance and operation. Standard methods of fault detection assume the availability of either data produced during all possible faulty operation modes or a clearly-defined means to determine whether the data provide a reasonable match to known examples of proper operation. In the domain of fault detection in aircraft, identifying all possible faulty and proper operating modes is clearly impossible. We envision a system for online fault detection in aircraft, one part of which is a classifier that predicts the maneuver being performed by the aircraft as a function of vibration data and other available data. To develop such a system, we use flight data collected under a controlled test environment, subject to many sources of variability. We explain where our classifier fits into the envisioned fault detection system as well as experiments showing the promise of this classification subsystem.

  17. Classification of Aircraft Maneuvers for Fault Detection

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj; Tumer, Irem Y.; Tumer, Kagan; Huff, Edward M.; Koga, Dennis (Technical Monitor)

    2002-01-01

    Automated fault detection is an increasingly important problem in aircraft maintenance and operation. Standard methods of fault detection assume the availability of either data produced during all possible faulty operation modes or a clearly-defined means to determine whether the data provide a reasonable match to known examples of proper operation. In the domain of fault detection in aircraft, the first assumption is unreasonable and the second is difficult to determine. We envision a system for online fault detection in aircraft, one part of which is a classifier that predicts the maneuver being performed by the aircraft as a function of vibration data and other available data. To develop such a system, we use flight data collected under a controlled test environment, subject to many sources of variability. We explain where our classifier fits into the envisioned fault detection system as well as experiments showing the promise of this classification subsystem.

  18. Classification of Aircraft Maneuvers for Fault Detection

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.; Tumer, Irem Y.; Tumer, Kagan; Huff, Edward M.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Automated fault detection is an increasingly important problem in aircraft maintenance and operation. Standard methods of fault detection assume the availability of either data produced during all possible faulty operation modes or a clearly-defined means to determine whether the data is a reasonable match to known examples of proper operation. In our domain of fault detection in aircraft, the first assumption is unreasonable and the second is difficult to determine. We envision a system for online fault detection in aircraft, one part of which is a classifier that predicts the maneuver being performed by the aircraft as a function of vibration data and other available data. We explain where this subsystem fits into our envisioned fault detection system, as well as experiments showing the promise of this classification subsystem.

  19. Fault-Tree Compiler

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Boerschlein, David P.

    1993-01-01

    The Fault-Tree Compiler (FTC) program is a software tool used to calculate the probability of the top event in a fault tree. Gates of five different types allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language easy to understand and use. In addition, program supports hierarchical fault-tree definition feature, which simplifies tree-description process and reduces execution time. Set of programs created forming basis for reliability-analysis workstation: SURE, ASSIST, PAWS/STEM, and FTC fault-tree tool (LAR-14586). Written in PASCAL, ANSI-compliant C language, and FORTRAN 77. Other versions available upon request.
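
    A minimal sketch of the top-event calculation such a tool performs is given below, assuming statistically independent basic events; the gate set mirrors the five types listed above, but the expressions and probabilities are illustrative and are not FTC's input language or algorithm.

```python
"""Sketch of a fault-tree top-event probability calculation for independent
basic events; gate names mirror the five gate types listed above."""
from itertools import combinations
from math import prod

def and_gate(*p):   return prod(p)
def or_gate(*p):    return 1.0 - prod(1.0 - x for x in p)
def xor_gate(a, b): return a * (1.0 - b) + b * (1.0 - a)   # exactly one occurs
def invert_gate(p): return 1.0 - p

def m_of_n_gate(m, p):
    """Probability that at least m of the independent events in p occur."""
    n = len(p)
    return sum(
        prod(p[i] if i in chosen else 1.0 - p[i] for i in range(n))
        for k in range(m, n + 1)
        for chosen in combinations(range(n), k)
    )

# Example tree: TOP = OR( AND(A, B), 2-of-3(C, D, E) )
A, B, C, D, E = 1e-3, 2e-3, 5e-2, 5e-2, 5e-2
top = or_gate(and_gate(A, B), m_of_n_gate(2, [C, D, E]))
print(f"P(top event) = {top:.3e}")
```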

  20. Ras Labs-CASIS-ISS NL experiment for synthetic muscle returned to Earth: resistance to ionizing radiation

    NASA Astrophysics Data System (ADS)

    Rasmussen, Lenore; Albers, Leila N.; Rodriguez, Simone; Gentile, Charles; Meixler, Lewis D.; Ascione, George; Hitchner, Robert; Taylor, James; Hoffman, Dan; Cylinder, David; Gaza, Ramona; Moy, Leon; Mark, Patrick S.; Prillaman, Daniel L.; Nodarse, Robert; Menegus, Michael J.; Ratto, Jo Ann; Thellen, Christopher T.; Froio, Danielle; Valenza, Logan; Poirier, Catherine; Sinkler, Charles; Corl, Dylan; Hablani, Surbhi; Fuerst, Tyler; Gallucci, Sergio; Blocher, Whitney; Liffland, Stephanie

    2017-04-01

    In anticipation of deep space travel, new materials are being explored to assist and relieve humans in dangerous environments, such as high radiation, extreme temperature, and extreme pressure. Ras Labs Synthetic Muscle™ - electroactive polymers (EAPs) that contract and expand at low voltages - which mimic the unique gentle-yet-strong nature of human tissue, is a potential asset to manned space travel through protective gear and human assist robotics and for unmanned space exploration through deep space. Gen 3 Synthetic Muscle™ was proven to be resistant to extreme temperatures, and there were indications that these materials would also be radiation resistant. The purpose of the Ras Labs-CASIS-ISS Experiment was to test the radiation resistivity of the third and fourth generations of these EAPs, as well as to make them even more radiation resistant. On Earth, exposure of the Generation 3 and Generation 4 EAPs to a Cs-137 radiation source for 47.8 hours with a total dose of 305.931 kRad of gamma radiation was performed at the US Department of Energy's Princeton Plasma Physics Laboratory (PPPL) at Princeton University, followed by pH, peroxide, Shore Hardness durometer, and electroactivity testing to determine the inherent radiation resistivity of these contractile EAPs, and to determine whether the EAPs could be made even more radiation resistant through the application of appropriate additives and coatings. The preliminary on-Earth tests determined that selected Ras Labs EAPs were not only inherently radiation resistant, but with the appropriate coatings and additives, could be made even more radiation resistant. G-force testing to over 10 G's was performed at US Army's ARDEC Labs, with excellent results, in preparation for space flight to the International Space Station National Laboratory (ISS-NL). Selected samples of Generation 3 and Generation 4 Synthetic Muscle™, with various additives and coatings, were launched to the ISS-NL on April 14, 2015 on the

  1. Radiometer offsets and count conversion coefficients for the Earth Radiation Budget Experiment (ERBE) spacecraft for the years 1984, 1985, and 1986

    NASA Technical Reports Server (NTRS)

    Paden, Jack; Pandey, Dhirendra K.; Shivakumar, Netra D.; Stassi, Joseph C.; Wilson, Robert; Bolden, William; Thomas, Susan; Gibson, M. Alan

    1991-01-01

    A compendium is presented of the ground and inflight scanner and nonscanner offsets and count conversion (gain) coefficients used for the Earth Radiation Budget Experiment (ERBE) production processing of data from the ERBS, NOAA-9, and NOAA-10 satellites for the period 1 Nov. 1984 to 31 Dec. 1986.

  2. Precession of the Earth as the Cause of Geomagnetism: Experiments lend support to the proposal that precessional torques drive the earth's dynamo.

    PubMed

    Malkus, W V

    1968-04-19

    I have proposed that the precessional torques acting on the earth can sustain a turbulent hydromagnetic flow in the molten core. A gross balance of the Coriolis force, the Lorentz force, and the precessional force in the core fluid provided estimates of the fluid velocity and the interior magnetic field characteristic of such flow. Then these numbers and a balance of the processes responsible for the decay and regeneration of the magnetic field provided an estimate of the magnetic field external to the core. This external field is in keeping with the observations, but its value is dependent upon the speculative value for the electrical conductivity of core material. The proposal that turbulent flow due to precession can occur in the core was tested in a study of nonmagnetic laboratory flows induced by the steady precession of fluid-filled rotating spheroids. It was found that these flows exhibit both small wavelike instabilities and violent finite-amplitude instability to turbulent motion above critical values of the precession rate. The observed critical parameters indicate that a laminar flow in the core, due to the earth's precession, would have weak hydrodynamic instabilities at most, but that finite-amplitude hydromagnetic instability could lead to fully turbulent flow.

  3. Teaching earth science

    Alpha, Tau Rho; Diggles, Michael F.

    1998-01-01

    This CD-ROM contains 17 teaching tools: 16 interactive HyperCard 'stacks' and a printable model. They are separated into the following categories: Geologic Processes, Earthquakes and Faulting, and Map Projections and Globes. A 'navigation' stack, Earth Science, is provided as a 'launching' place from which to access all of the other stacks. You can also open the HyperCard Stacks folder and launch any of the 16 stacks yourself. In addition, a 17th tool, Earth and Tectonic Globes, is provided as a printable document. Each of the tools can be copied onto a 1.4-MB floppy disk and distributed freely.

  4. Experiences in Bridging the Gap between Science and Decision Making at NASA's GSFC Earth Science Data and Information Services Center (GES DISC)

    NASA Technical Reports Server (NTRS)

    Kempler, Steven; Teng, Bill; Friedl, Lawrence; Lynnes, Chris; Leptoukh, Gregory

    2008-01-01

    Recognizing the significance of NASA remote sensing Earth science data in monitoring and better understanding our planet's natural environment, NASA has implemented the Decision Support Through Earth Science Research Results program (NASA ROSES solicitations). a) This successful program has yielded several monitoring, surveillance, and decision support systems through collaborations with benefiting organizations. b) The Goddard Space Flight Center (GSFC) Earth Sciences Data and Information Services Center (GES DISC) has participated in this program on two projects (one complete, one ongoing), and has had opportune ad hoc collaborations gaining much experience in the formulation, management, development, and implementation of decision support systems utilizing NASA Earth science data. c) In addition, GES DISC's understanding of Earth science missions and resulting data and information, including data structures, data usability and interpretation, data interoperability, and information management systems, enables the GES DISC to identify challenges that come with bringing science data to decision makers. d) The purpose of this presentation is to share GES DISC decision support system project experiences in regards to system sustainability, required data quality (versus timeliness), data provider understanding of how decisions are made, and the data receivers' willingness to use new types of information to make decisions, as well as other topics. In addition, defining metrics that really evaluate success will be exemplified.

  5. Some characteristic differences in the earth's radiation budget over land and ocean derived from the Nimbus-7 ERB experiment

    NASA Technical Reports Server (NTRS)

    Kyle, H. L.; Vasanth, K. L.

    1986-01-01

    Broad spectral band data derived from the Nimbus-7 Earth Radiation Budget experiment are analyzed for the top-of-the-atmosphere noon vs. midnight variations in the exitant longwave flux density, spectral variations in the regional albedos, and differences in land and ocean net radiation budgets. The data were studied for a year (June 1979 to May 1980) on a global scale and for five selected study areas. The annual global total, near-UV visible, and near-IR albedo values obtained were 30.2, 34.6, and 25.9, respectively, with marked differences in behavior between oceanic and continental regions. Over the continents, clouds and snow sharply decreased the near-IR albedo. The over-the-continent noon-emitted flux density averages were 15-25 W/sq m larger than the midnight values, with large regional and seasonal variations. Over the oceans, the average noon and midnight outgoing longwave-flux densities were nearly identical, with regional and seasonal differences of several watts per square meter.

  6. Stability of fault submitted to fluid injections

    NASA Astrophysics Data System (ADS)

    Brantut, N.; Passelegue, F. X.; Mitchell, T. M.

    2017-12-01

    Elevated pore pressure can lead to slip reactivation on pre-existing fractures and faults when the Coulomb failure point is reached. From a static point of view, the reactivation of a fault subjected to a background stress (τ0) is a function of the peak strength of the fault, i.e. the quasi-static effective friction coefficient (µeff). However, this theory is valid only when the entire fault is affected by fluid pressure, which is not the case in nature or during human-induced seismicity. In this study, we present new results on the influence of the injection rate on the stability of faults. Experiments were conducted on a saw-cut sample of Westerly granite. The experimental fault was 8 cm in length. Injections were conducted through a 2 mm diameter hole reaching the fault surface. Experiments were conducted at fluid pressure injection rates spanning four orders of magnitude (from 1 MPa/minute to 1 GPa/minute), on a fault system subjected to 50 and 100 MPa confining pressure. Our results show that the peak fluid pressure leading to slip depends on injection rate: the faster the injection rate, the larger the peak fluid pressure leading to instability. Wave velocity surveys across the fault highlighted that decreasing the injection rate leads to an increase in the size of the fluid pressure perturbation. Our results demonstrate that the stability of the fault is not only a function of the fluid pressure required to reach the failure criterion, but is mainly a function of the ratio between the length of the fault affected by fluid pressure and the total fault length. In addition, we show that the slip rate increases with the background effective stress and with the intensity of the fluid pressure perturbation, i.e. with the excess shear stress acting on the part of the fault perturbed by fluid injection. Our results suggest that crustal faults can be reactivated by local high fluid overpressures. These results could explain the "large" magnitude human-induced earthquakes
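
    As a back-of-envelope companion to the static criterion discussed above, the snippet below computes the pore pressure at which a Coulomb fault would be expected to slip for a few assumed stress states; the friction coefficient, stresses, and cohesion are illustrative values, not the experimental ones, and the experiments show that the size of the pressurized patch relative to the fault length matters in addition to this criterion.

```python
"""Back-of-envelope Coulomb check for fluid-pressure-induced reactivation.
All numbers are illustrative: in the static view the fault slips once
tau0 >= cohesion + mu_eff * (sigma_n - p)."""
def reactivation_pressure(tau0, sigma_n, mu_eff=0.6, cohesion=0.0):
    """Pore pressure (MPa) at which the Coulomb criterion is first met."""
    return sigma_n - (tau0 - cohesion) / mu_eff

sigma_n = 100.0                      # assumed fault-normal stress, MPa
for tau0 in (30.0, 45.0, 55.0):      # assumed background shear stresses, MPa
    p_crit = reactivation_pressure(tau0, sigma_n)
    print(f"tau0 = {tau0:4.0f} MPa -> static criterion met once p >= {p_crit:5.1f} MPa")
```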

  7. Perspective View, San Andreas Fault

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The prominent linear feature straight down the center of this perspective view is the San Andreas Fault in an image created with data from NASA's shuttle Radar Topography Mission (SRTM), which will be used by geologists studying fault dynamics and landforms resulting from active tectonics. This segment of the fault lies west of the city of Palmdale, California, about 100 kilometers (about 60 miles) northwest of Los Angeles. The fault is the active tectonic boundary between the North American plate on the right, and the Pacific plate on the left. Relative to each other, the Pacific plate is moving away from the viewer and the North American plate is moving toward the viewer along what geologists call a right lateral strike-slip fault. This area is at the junction of two large mountain ranges, the San Gabriel Mountains on the left and the Tehachapi Mountains on the right. Quail Lake Reservoir sits in the topographic depression created by past movement along the fault. Interstate 5 is the prominent linear feature starting at the left edge of the image and continuing into the fault zone, passing eventually over Tejon Pass into the Central Valley, visible at the upper left.

    This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

    Elevation data used in this image were acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994.

  8. Measure the Earth's Radius and the Speed of Light with Simple and Inexpensive Computer-Based Experiments

    ERIC Educational Resources Information Center

    Martin, Michael J.

    2004-01-01

    With new and inexpensive computer-based methods, measuring the speed of light and the Earth's radius--historically difficult endeavors--can be simple enough to be tackled by high school and college students working in labs that have limited budgets. In this article, the author describes two methods of estimating the Earth's radius using two…

  9. FTAPE: A fault injection tool to measure fault tolerance

    NASA Technical Reports Server (NTRS)

    Tsai, Timothy K.; Iyer, Ravishankar K.

    1995-01-01

    The paper introduces FTAPE (Fault Tolerance And Performance Evaluator), a tool that can be used to compare fault-tolerant computers. The tool combines system-wide fault injection with a controllable workload. A workload generator is used to create high stress conditions for the machine. Faults are injected based on this workload activity in order to ensure a high level of fault propagation. The errors/fault ratio and performance degradation are presented as measures of fault tolerance.

  10. Experiences in Applying Earth Observing Satellite Technology in SERVIR Regions with an Emphasis on Disasters: Successes, Lessons and Paths Forward

    NASA Technical Reports Server (NTRS)

    Anderson, Eric

    2017-01-01

    Earth observing satellites offer a unique perspective of our environment from the vantage point of space. Repeated measurements of the Earth's subsystems, such as the biosphere, atmosphere, lithosphere, and hydrosphere, and of humans' interactions with their environments allow for a better understanding of Earth system processes, and they can provide input for decision making in areas of environmental management and disaster risk reduction. SERVIR is a joint initiative of the US National Aeronautics and Space Administration (NASA) and the US Agency for International Development (USAID) that began in 2005 and has been active in applying Earth observations for sustainable development in many regions around the world, most recently the Lower Mekong and West Africa regions. This talk will highlight some successes achieved and lessons learned through SERVIR in Central America, Eastern and Southern Africa, and the Hindu Kush-Himalaya region, focusing on disasters. We will also present opportunities for enhanced decision making with Earth observations and geospatial technologies in the Lower Mekong region.

  11. Albedo of cold sea ice with precipitated salt on the tropical ocean of Snowball Earth: field measurements and laboratory experiments

    NASA Astrophysics Data System (ADS)

    Light, B.; Black, T.; Carns, R.; Brandt, R.; Dadic, R.; Warren, S.

    2012-04-01

    During the initial freezing of the tropical ocean on Snowball Earth, the first ice to form would be sea ice, which contains salt within liquid brine inclusions. At temperatures below -23 C, significant amounts of salt begin to crystallize within the brine inclusions. These crystals scatter light, increasing the ice albedo. The most abundant salt is hydrohalite, NaCl·2H2O. A dry tropical atmosphere promoting ice surface sublimation would cause a salt crust to be left on the surface as a lag deposit. Such a high-albedo surface could be crucial during the snowball initiation. These processes must be considered when assigning albedos to sea ice in a climate model of Snowball Earth. Precipitation of salt within brine inclusions was observed on windswept bare ice of McMurdo Sound at the coast of Antarctica (78 S) in late winter. Consequently the albedo was higher at lower temperature. The precipitation process exhibited hysteresis, with hydrohalite precipitating at about -30 C and dissolving at about -23 C. The causes of the hysteresis are being investigated in laboratory experiments; they may involve biological macromolecules. Nowhere on the modern Earth does sea ice undergo sublimation at low temperatures for long enough to develop a salt crust before the summer melt begins, so this process is being investigated in our laboratory. A 1000-liter tank is used to grow artificial sea ice, and a system has been built to measure its albedo. A diffusely reflecting hemispherical dome of diameter 1.2 m is placed on top of the tank and illuminated from within. The interior of the dome illuminates the ice surface as well as serving as a platform for detecting the incident and backscattered radiance fields. The diffusely reflecting surfaces of the ice and the dome make it straightforward to estimate incoming and reflected irradiance as angular integrals of the radiance measurements. The albedo of the bare, cold (below -23 C) ice is 0.8 at visible wavelengths, decreasing toward the
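
    The irradiance-from-radiance step mentioned above lends itself to a short numerical sketch (assumed, uniform radiance values; not the authors' processing code): incident and reflected irradiances are hemispheric angular integrals of radiance, E = integral of L(theta, phi) cos(theta) dOmega, and the albedo is their ratio.

      import numpy as np

      # Sketch of albedo as the ratio of hemispheric angular integrals of radiance
      # (assumed, uniform radiance values; not the authors' processing code).
      theta = np.linspace(0.0, np.pi / 2, 91)            # zenith angle
      phi = np.linspace(0.0, 2.0 * np.pi, 181)           # azimuth angle
      dtheta, dphi = theta[1] - theta[0], phi[1] - phi[0]

      def irradiance(radiance):
          """radiance: W m-2 sr-1 sampled on the (theta, phi) grid."""
          weights = np.cos(theta)[:, None] * np.sin(theta)[:, None] * dtheta * dphi
          return float(np.sum(radiance * weights))

      L_down = np.full((theta.size, phi.size), 100.0)    # incident radiance (assumed diffuse)
      L_up = np.full((theta.size, phi.size), 80.0)       # backscattered radiance (assumed)
      print(f"albedo ~ {irradiance(L_up) / irradiance(L_down):.2f}")   # 0.80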

  12. Creating Research-Rich Learning Experiences and Quantitative Skills in a 1st Year Earth Systems Course

    NASA Astrophysics Data System (ADS)

    King, P. L.; Eggins, S.; Jones, S.

    2014-12-01

    We are creating a 1st year Earth Systems course at the Australian National University that is built around research-rich learning experiences and quantitative skills. The course has top students including ≤20% indigenous/foreign students; nonetheless, students' backgrounds in math and science vary considerably posing challenges for learning. We are addressing this issue and aiming to improve knowledge retention and deep learning by changing our teaching approach. In 2013-2014, we modified the weekly course structure to a 1hr lecture; a 2hr workshop with hands-on activities; a 2hr lab; an assessment piece covering all face-to-face activities; and a 1hr tutorial. Our new approach was aimed at: 1) building student confidence with data analysis and quantitative skills through increasingly difficult tasks in science, math, physics, chemistry, climate science and biology; 2) creating effective learning groups using name tags and a classroom with 8-person tiered tables; 3) requiring students to apply new knowledge to new situations in group activities, two 1-day field trips and assessment items; 4) using pre-lab and pre-workshop exercises to promote prior engagement with key concepts; 5) adding open-ended experiments to foster structured 'scientific play' or enquiry and creativity; and 6) aligning the assessment with the learning outcomes and ensuring that it contains authentic and challenging southern hemisphere problems. Students were asked to design their own ocean current experiment in the lab and we were astounded by their ingenuity: they simulated the ocean currents off Antarctica; varied water density to verify an equation; and examined the effect of wind and seafloor topography on currents. To evaluate changes in student learning, we conducted surveys in 2013 and 2014. In 2014, we found higher levels of student engagement with the course: >~80% attendance rates and >~70% satisfaction (20% neutral). The 2014 cohort felt that they were more competent in writing

  13. Stacking-fault nucleation on Ir(111).

    PubMed

    Busse, Carsten; Polop, Celia; Müller, Michael; Albe, Karsten; Linke, Udo; Michely, Thomas

    2003-08-01

    Variable temperature scanning tunneling microscopy experiments reveal that in Ir(111) homoepitaxy islands nucleate and grow both in the regular fcc stacking and in the faulted hcp stacking. Analysis of this effect in dependence on deposition temperature leads to an atomistic model of stacking-fault formation: The large, metastable stacking-fault islands grow by sufficiently fast addition of adatoms to small mobile adatom clusters which occupy in thermal equilibrium the hcp sites with a significant probability. Using parameters derived independently by field ion microscopy, the model accurately describes the results for Ir(111) and is expected to be valid also for other surfaces.

  14. Distributed Fault-Tolerant Control of Networked Uncertain Euler-Lagrange Systems Under Actuator Faults.

    PubMed

    Chen, Gang; Song, Yongduan; Lewis, Frank L

    2016-05-03

    This paper investigates the distributed fault-tolerant control problem of networked Euler-Lagrange systems with actuator and communication link faults. An adaptive fault-tolerant cooperative control scheme is proposed to achieve the coordinated tracking control of networked uncertain Lagrange systems on a general directed communication topology, which contains a spanning tree with the root node being the active target system. The proposed algorithm is capable of compensating for the actuator bias fault, the partial loss-of-effectiveness actuation fault, the communication link fault, the model uncertainty, and the external disturbance simultaneously. The control scheme does not use any fault detection and isolation mechanism to detect, separate, and identify the actuator faults online, which largely reduces the online computation and expedites the responsiveness of the controller. To validate the effectiveness of the proposed method, a multi-robot-arm cooperative control test-bed is developed for real-time verification. Experiments on the networked robot-arms are conducted and the results confirm the benefits and the effectiveness of the proposed distributed fault-tolerant control algorithms.

  15. Fault Injection Techniques and Tools

    NASA Technical Reports Server (NTRS)

    Hsueh, Mei-Chen; Tsai, Timothy K.; Iyer, Ravishankar K.

    1997-01-01

    Dependability evaluation involves the study of failures and errors. The destructive nature of a crash and long error latency make it difficult to identify the causes of failures in the operational environment. It is particularly hard to recreate a failure scenario for a large, complex system. To identify and understand potential failures, we use an experiment-based approach for studying the dependability of a system. Such an approach is applied not only during the conception and design phases, but also during the prototype and operational phases. To take an experiment-based approach, we must first understand a system's architecture, structure, and behavior. Specifically, we need to know its tolerance for faults and failures, including its built-in detection and recovery mechanisms, and we need specific instruments and tools to inject faults, create failures or errors, and monitor their effects.

  16. Implementation of a model based fault detection and diagnosis for actuation faults of the Space Shuttle main engine

    NASA Technical Reports Server (NTRS)

    Duyar, A.; Guo, T.-H.; Merrill, W.; Musgrave, J.

    1992-01-01

    In a previous study, Guo, Merrill and Duyar, 1990, reported a conceptual development of a fault detection and diagnosis system for actuation faults of the space shuttle main engine. This study, which is a continuation of the previous work, implements the developed fault detection and diagnosis scheme for the real time actuation fault diagnosis of the space shuttle main engine. The scheme will be used as an integral part of an intelligent control system demonstration experiment at NASA Lewis. The diagnosis system utilizes a model based method with real time identification and hypothesis testing for actuation, sensor, and performance degradation faults.

  17. Advanced information processing system: Fault injection study and results

    NASA Technical Reports Server (NTRS)

    Burkhardt, Laura F.; Masotto, Thomas K.; Lala, Jaynarayan H.

    1992-01-01

    The objective of the AIPS program is to achieve a validated fault tolerant distributed computer system. The goals of the AIPS fault injection study were: (1) to present the fault injection study components addressing the AIPS validation objective; (2) to obtain feedback for fault removal from the design implementation; (3) to obtain statistical data regarding fault detection, isolation, and reconfiguration responses; and (4) to obtain data regarding the effects of faults on system performance. The parameters are described that must be varied to create a comprehensive set of fault injection tests, the subset of test cases selected, the test case measurements, and the test case execution. Both pin level hardware faults using a hardware fault injector and software injected memory mutations were used to test the system. An overview is provided of the hardware fault injector and the associated software used to carry out the experiments. Detailed specifications are given of fault and test results for the I/O Network and the AIPS Fault Tolerant Processor, respectively. The results are summarized and conclusions are given.

  18. Preliminary Results of the GPS Flight Experiment on the High Earth Orbit AMSAT-OSCAR 40 Spacecraft

    NASA Technical Reports Server (NTRS)

    Moreau, Michael C.; Bauer, Frank H.; Carpenter, J. Russell; Davis, Edward P.; Davis, George W.; Jackson, Larry A.

    2002-01-01

    The GPS flight experiment on the High Earth Orbit (HEO) AMSAT-OSCAR 40 (AO-40) spacecraft was activated for a period of approximately six weeks between 25 September and 2 November 2001, and the initial results have exciting implications for using GPS as a low-cost orbit determination sensor for future HEO missions. AO-40, an amateur radio satellite launched November 16, 2000, is currently in a low inclination, 1000 by 58,800 km altitude orbit. Although the GPS receiver was not initialized in any way, it regularly returned GPS observations from points all around the orbit. Raw signal-to-noise levels as high as 9 AMUs (Trimble Amplitude Measurement Units) or approximately 48 dB-Hz have been recorded at apogee, when the spacecraft was close to 60,000 km in altitude. On several occasions when the receiver was below the GPS constellation (below 20,000 km altitude), observations were reported for GPS satellites tracked through side lobe transmissions. Although the receiver has not returned any point solutions, there has been at least one occasion when four satellites were tracked simultaneously, and this short arc of data was used to compute point solutions after the fact. These results are encouraging, especially considering the spacecraft is currently in a spin-stabilized attitude mode that narrows the effective field of view of the receiving antennas and adversely affects GPS tracking. Already AO-40 has demonstrated the feasibility of recording GPS observations in HEO using an unaided receiver. Furthermore, it is providing important information about the characteristics of GPS signals received by a spacecraft in a HEO, which has long been of interest to many in the GPS community. Based on the data returned so far, the tracking performance is expected to improve when the spacecraft is transitioned to a three-axis stabilized, nadir-pointing attitude in summer 2002.
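
    For context on the "point solutions" mentioned above, the following sketch shows the generic single-epoch GPS point solution: receiver position plus clock bias estimated from four or more pseudoranges by iterated linear least squares. The satellite positions and receiver state are hypothetical, and this is the textbook algorithm rather than the AO-40 post-processing.

      import numpy as np

      # Generic single-epoch GPS point solution (standard-method sketch, not the
      # AO-40 post-processing): solve for receiver position and clock bias from
      # four or more pseudoranges by iterated linear least squares.
      C = 299_792_458.0  # speed of light, m/s

      def point_solution(sat_pos, pseudoranges, iters=10):
          """sat_pos: (N, 3) satellite ECEF positions [m]; pseudoranges: (N,) [m]."""
          x = np.zeros(4)                                           # [x, y, z, clock bias * C]
          for _ in range(iters):
              rho = np.linalg.norm(sat_pos - x[:3], axis=1)         # geometric ranges
              predicted = rho + x[3]
              H = np.hstack([(x[:3] - sat_pos) / rho[:, None],      # line-of-sight partials
                             np.ones((len(rho), 1))])               # clock-bias column
              dx, *_ = np.linalg.lstsq(H, pseudoranges - predicted, rcond=None)
              x += dx
          return x[:3], x[3] / C                                    # position [m], clock bias [s]

      # Hypothetical satellite geometry and receiver state, for illustration only.
      sats = np.array([[15600e3, 7540e3, 20140e3],
                       [18760e3, 2750e3, 18610e3],
                       [17610e3, 14630e3, 13480e3],
                       [19170e3, 610e3, 18390e3]])
      truth, clock_bias_s = np.array([-40e3, 1170e3, 6370e3]), 1.0e-4
      pr = np.linalg.norm(sats - truth, axis=1) + clock_bias_s * C
      pos, clk = point_solution(sats, pr)
      print(np.round(pos), clk)                                     # recovers the assumed truth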

  19. The Burst and Transient Source Experiment (BATSE) Earth Occultation Catalog of Low-Energy Gamma-Ray Sources

    NASA Technical Reports Server (NTRS)

    Harmon, B. A.; Wilson, C. A.; Fishman, G. J.; Connaughton, V.; Henze, W.; Paciesas, W. S.; Finger, M. H.; McCollough, M. L.; Sahi, M.; Peterson, B.

    2004-01-01

    The Burst and Transient Source Experiment (BATSE), aboard the Compton Gamma Ray Observatory (CGRO), provided a record of the low-energy gamma-ray sky (approx. 20-1000 keV) between 1991 April and 2000 May (9.1 yr). BATSE monitored the high-energy sky using the Earth occultation technique (EOT) for point sources whose emission extended for times on the order of the CGRO orbital period (approx. 92 min) or greater. Using the EOT to extract flux information, a catalog of sources using data from the BATSE Large Area Detectors has been prepared. The first part of the catalog consists of results from the all-sky monitoring of 58 sources, mostly Galactic, with intrinsic variability on timescales of hours to years. For these sources, we have included tables of flux and spectral data, and outburst times for transients. Light curves (or flux histories) have been placed on the World Wide Web. We then performed a deep sampling of these 58 objects, plus a selection of 121 more objects, combining data from the entire 9.1 yr BATSE data set. Source types considered were primarily accreting binaries, but a small number of representative active galaxies, X-ray-emitting stars, and supernova remnants were also included. The sample represents a compilation of sources monitored and/or discovered with BATSE and other high-energy instruments between 1991 and 2000, as well as known sources taken from the HEAO 1 A-4 and Macomb & Gehrels catalogs. The deep sample results include definite detections of 83 objects and possible detections of 36 additional objects. The definite detections spanned three classes of sources: accreting black hole and neutron star binaries, active galaxies, and supernova remnants. The average fluxes measured for the fourth class, the X-ray-emitting stars, were below the confidence limit for definite detection.

  20. Restoration and Archiving of Data from the Plasma Composition Experiment on the International Sun-Earth Explorer One (ISEE 1)

    NASA Technical Reports Server (NTRS)

    Lennartsson, O. W.

    1997-01-01

    The objective of this project has been to complete the archiving of energetic (10 eV/e - 18 keV/e) ion composition data from the Lockheed Plasma Composition Experiment on the International Sun-Earth Explorer One (ISEE 1) satellite, using a particular data format that had previously been approved by NASA and the NSSDC. That same format, a combination of ion velocity moments and differential flux spectra, had been used in 1991 to archive, at the NSSDC, the first 28 months (the "Prime" period of ISEE investigations) of data from the Lockheed instrument under NASA Contract NAS5-33047. With the completion of this project, the almost 4 1/2-year time span of these unique data is now covered by a very compact set, approximately 1 gigabyte in total, of electronic files with physical quantities, all in ASCII. The files are organized by data type and time of data acquisition, in Universal Time, and named according to year and day of year. Each calendar day has five separate files (five types of data), the lengths of which vary from day to day, depending on the instrument mode of operation. The data format and file structure are described in detail in appendices 1 and 2. The physical medium consists of high-density (6250 cpi) 9-track magnetic tapes, complemented by a set of hardcopy line plots of certain plasma parameters. In this case there are five tapes, to be added to the six previous ones from 1991, and 25 booklets of plots, one per month, to be added to the previous 28. The tapes, including an extra standard-density (1600 cpi) tape with electronic versions of the Data User's Guide and self-guiding VAX/VMS command files, and the hardcopy plots are being boxed for shipment to the NSSDC.

  1. Constraints on Non-Newtonian Gravity From the Experiment on Neutron Quantum States in the Earth's Gravitational Field.

    PubMed

    Nesvizhevsky, V V; Protasov, K V

    2005-01-01

    An upper limit to non-Newtonian attractive forces is obtained from the measurement of quantum states of neutrons in the Earth's gravitational field. This limit improves the existing constraints in the nanometer range.

  2. Fault latency in the memory - An experimental study on VAX 11/780

    NASA Technical Reports Server (NTRS)

    Chillarege, Ram; Iyer, Ravishankar K.

    1986-01-01

    Fault latency is the time between the physical occurrence of a fault and its corruption of data, causing an error. This time is difficult to measure because the time of occurrence of a fault and the exact moment of generation of an error are not known. This paper describes an experiment to accurately study fault latency in the memory subsystem. The experiment employs real memory data from a VAX 11/780 at the University of Illinois. Fault latency distributions are generated for stuck-at-0 (s-a-0) and stuck-at-1 (s-a-1) permanent fault models. Results show that the mean fault latency of an s-a-0 fault is nearly 5 times that of an s-a-1 fault. Large variations in fault latency are found for different regions in memory. An analysis-of-variance model to quantify the relative influence of various workload measures on the evaluated latency is also given.
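
    The latency notion defined above can be illustrated with a toy sketch (a made-up word history, not the VAX 11/780 trace analysis): a stuck-at fault injected at time t0 stays latent until the fault-free contents of the affected bit first differ from the stuck value, at which point the stored data are corrupted.

      # Toy illustration of fault latency for stuck-at faults (hypothetical word
      # history; not the instrumentation of the VAX 11/780 study).
      def fault_latency(word_trace, bit, stuck_at, t0=0):
          """word_trace: fault-free value of one memory word at each sample time."""
          mask = 1 << bit
          for t, word in enumerate(word_trace[t0:], start=t0):
              faulted = (word | mask) if stuck_at == 1 else (word & ~mask)
              if faulted != word:
                  return t - t0              # first time the fault corrupts the data
          return None                        # fault stayed latent for the whole trace

      trace = [0b1010, 0b1010, 0b1110, 0b0110, 0b0111]   # made-up fault-free word history
      print(fault_latency(trace, bit=0, stuck_at=0))      # s-a-0 on bit 0: latent until t = 4
      print(fault_latency(trace, bit=3, stuck_at=1))      # s-a-1 on bit 3: latent until t = 3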

  3. Fault tectonics and earthquake hazards in parts of southern California. [peninsular ranges, Garlock fault, Salton Trough area, and western Mojave Desert

    NASA Technical Reports Server (NTRS)

    Merifield, P. M. (Principal Investigator); Lamar, D. L.; Gazley, C., Jr.; Lamar, J. V.; Stratton, R. H.

    1976-01-01

    The author has identified the following significant results. Four previously unknown faults were discovered in basement terrane of the Peninsular Ranges. These have been named the San Ysidro Creek fault, Thing Valley fault, Canyon City fault, and Warren Canyon fault. In addition, fault gouge and breccia were recognized along the San Diego River fault. Study of features on Skylab imagery and review of geologic and seismic data suggest that the risk of a damaging earthquake is greater along the northwestern portion of the Elsinore fault than along the southeastern portion. Physiographic indicators of active faulting along the Garlock fault identifiable in Skylab imagery include scarps, linear ridges, shutter ridges, faceted ridges, linear valleys, undrained depressions and offset drainage. The following previously unrecognized fault segments are postulated for the Salton Trough Area: (1) An extension of a previously known fault in the San Andreas fault set located southeast of the Salton Sea; (2) An extension of the active San Jacinto fault zone along a tonal change in cultivated fields across Mexicali Valley (the tonal change may represent different soil conditions along opposite sides of a fault). For the Skylab and LANDSAT images studied, pseudocolor transformations offer no advantages over the original images in the recognition of faults in Skylab and LANDSAT images. Alluvial deposits of different ages, a marble unit and iron oxide gossans of the Mojave Mining District are more readily differentiated on images prepared from ratios of individual bands of the S-192 multispectral scanner data. The San Andreas fault was also made more distinct in the 8/2 and 9/2 band ratios by enhancement of vegetation differences on opposite sides of the fault. Preliminary analysis indicates a significant earth resources potential for the discrimination of soil and rock types, including mineral alteration zones. This application should be actively pursued.

  4. Displacement-length relationship of normal faults in Acheron Fossae, Mars: new observations with HRSC.

    NASA Astrophysics Data System (ADS)

    Charalambakis, E.; Hauber, E.; Knapmeyer, M.; Grott, M.; Gwinner, K.

    2007-08-01

    For Earth, data sets and models have shown that, for a fault loaded by a constant remote stress, the maximum displacement on the fault is linearly related to its length by d = γ · l [1]. The scaling and structure are self-similar through time [1]. The displacement-length relationship can provide useful information about the tectonic regime. We intend to use it to estimate the seismic moment released during the formation of Martian fault systems and to improve the seismicity model [2]. Only a few data sets have been measured for extraterrestrial faults. One reason is the limited number of reliable topographic data sets. We used high-resolution Digital Elevation Models (DEM) [3] derived from HRSC image data taken from Mars Express orbit 1437. This orbit covers an area in the Acheron Fossae region, a rift-like graben system north of Olympus Mons with a "banana"-shaped topography [4]. It has a fault trend which runs approximately WNW-ESE. With an interactive IDL-based software tool [5] we measured the fault length and the vertical offset for 34 faults. We evaluated the height profiles by plotting the fault lengths l vs. their observed maximum displacement (dmax-model). Additionally, we computed the maximum displacement of an elliptical fault scarp whose plane has the same area as in the observed case (elliptical model). The integration over the entire fault length necessary for the computation of the area suppresses the "noise" introduced by local topographic effects like erosion or cratering. We should also mention that fault planes dipping 60 degrees are usually assumed for Mars [e.g., 6], and even shallower dips have been found for normal fault planes [7]. This dip angle is used to compute displacement from vertical offset via d = h/sin(α), where h is the observed topographic step height and α is the fault dip angle. If fault dip angles of 30 degrees are considered, the displacement differs by about 40% from that for dip angles of 60 degrees. Depending on the data
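
    The two relations above can be exercised with a short sketch (illustrative numbers; γ and the fault length below are assumptions, not measurements from this study): displacement is recovered from the measured vertical offset h and dip α as d = h/sin(α), and the displacement-length scaling predicts d_max = γ · l.

      import math

      # Small sketch of the two relations above (values assumed for illustration).
      def displacement_from_offset(h_m, dip_deg):
          """Fault-parallel displacement from vertical offset h and dip angle."""
          return h_m / math.sin(math.radians(dip_deg))

      h = 100.0                              # hypothetical topographic step height, metres
      d60 = displacement_from_offset(h, 60.0)
      d30 = displacement_from_offset(h, 30.0)
      print(f"d(60 deg) = {d60:.0f} m, d(30 deg) = {d30:.0f} m "
            f"({(d30 - d60) / d30:.0%} smaller for the steeper dip)")

      gamma = 1.0e-2                         # assumed scaling factor (population specific)
      L = 20_000.0                           # hypothetical fault length, metres
      print(f"d_max predicted by the D-L scaling: {gamma * L:.0f} m")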

  5. Laboratory Evidence of Strength Recovery of Healed Faults

    NASA Astrophysics Data System (ADS)

    Masuda, K.

    2015-12-01

    Fault zones consist of a fault core and a surrounding damage zone. Fault zones are typically characterized by the presence of many healed surfaces, the strength of which is unknown. If a healed fault recovers its strength such that its cohesion is equal to or greater than that of the host rock, repeated cycles of fracture and healing may be one mechanism producing wide fault zones. I present laboratory evidence supporting the strength recovery of a healed fault surface, obtained by AE monitoring, strain measurements, and X-ray CT techniques. The loading experiment was performed with a specimen collected from an exhumed fault zone. Healed surfaces of the rock sample were interpreted to be parallel to slip surfaces. The specimen was a cylinder 50 mm in diameter and 100 mm long. The long axis of the specimen was inclined with respect to the orientation of the healed surfaces. The compression test used a constant loading rate under 50 MPa of confining pressure. Macroscopic failure occurred when the applied differential stress reached 439 MPa. The macro-fracture surface created during the experiment was very close to the preexisting plane. The AE hypocenters closely match the locations of the preexisting healed surface and the new fault plane. The experiment also revealed details of the initial stage of fault development. The new fault zone developed near, but not precisely on, the preexisting healed fault plane. An area of heterogeneous structure, where stress appears to have concentrated, was where the AEs began, and it was also where the fracture started. This means that the healed surface was not a weak surface and that healing strengthened the fault such that its cohesion was equal to or greater than that of the intact host rock. These results suggest that repeated cycles of fracture and healing may be the main mechanism creating wide fault zones with multiple fault cores and damage zones.

  6. Fault Tree Analysis.

    PubMed

    McElroy, Lisa M; Khorzad, Rebeca; Rowe, Theresa A; Abecassis, Zachary A; Apley, Daniel W; Barnard, Cynthia; Holl, Jane L

    The purpose of this study was to use fault tree analysis to evaluate the adequacy of quality reporting programs in identifying root causes of postoperative bloodstream infection (BSI). A systematic review of the literature was used to construct a fault tree to evaluate 3 postoperative BSI reporting programs: National Surgical Quality Improvement Program (NSQIP), Centers for Medicare and Medicaid Services (CMS), and The Joint Commission (JC). The literature review revealed 699 eligible publications, 90 of which were used to create the fault tree containing 105 faults. A total of 14 identified faults are currently mandated for reporting to NSQIP, 5 to CMS, and 3 to JC; 2 or more programs require 4 identified faults. The fault tree identifies numerous contributing faults to postoperative BSI and reveals substantial variation in the requirements and ability of national quality data reporting programs to capture these potential faults. Efforts to prevent postoperative BSI require more comprehensive data collection to identify the root causes and develop high-reliability improvement strategies.

  7. Computer modeling of high-voltage solar array experiment using the NASCAP/LEO (NASA Charging Analyzer Program/Low Earth Orbit) computer code

    NASA Astrophysics Data System (ADS)

    Reichl, Karl O., Jr.

    1987-06-01

    The relationship between the Interactions Measurement Payload for Shuttle (IMPS) flight experiment and the low Earth orbit plasma environment is discussed. Two interactions (parasitic current loss and electrostatic discharge on the array) may be detrimental to mission effectiveness. They result from the spacecraft's electrical potentials floating relative to plasma ground to achieve a charge flow equilibrium into the spacecraft. The floating potentials were driven by external biases applied to a solar array module of the Photovoltaic Array Space Power (PASP) experiment aboard the IMPS test pallet. The modeling was performed using the NASA Charging Analyzer Program/Low Earth Orbit (NASCAP/LEO) computer code which calculates the potentials and current collection of high-voltage objects in low Earth orbit. Models are developed by specifying the spacecraft, environment, and orbital parameters. Eight IMPS models were developed by varying the array's bias voltage and altering its orientation relative to its motion. The code modeled a typical low Earth equatorial orbit. NASCAP/LEO calculated a wide variety of possible floating potential and current collection scenarios. These varied directly with both the array bias voltage and with the vehicle's orbital orientation.

  8. Rainfall simulation experiments to study sediment redistribution using rare earth element oxides as tracers under conventional and conservation agricultural practices

    NASA Astrophysics Data System (ADS)

    Tóth, Adrienn; Jakab, Gergely; Sipos, Péter; Karlik, Máté; Madarász, Balázs; Zacháry, Dóra; Szabó, Judit; Szalai, Zoltán

    2017-04-01

    Rare earth elements (REE) have very favourable characteristics for being ideal sediment tracers as they are characterised by strong binding to soil particles, low mobility, low background concentration in soils, environmental benignity, high analytical sensitivity and they can be detected relatively easily and inexpensively in soils. The group of REEs consists of 16 elements with similar chemical properties, but at the same time, they are clearly distinguishable, enabling multiple tracking of sediment deriving from different parts of the studied area, as well as mapping redistribution processes by appropriate designing of subareas marked by different REEs. In this study, rainfall simulation experiments were carried out to compare the loss and redistribution of soil sediments in two plots under conventional and conservation agricultural practices. Five different rainfall intensities (up to 80 mm/h) were applied to both plots. Sources and pathways of sediments within the two plots were studied using REE-oxides as tracers. Approximately 1,000 mg/kg of Er2O3, Ho2O3 and Sm2O3 (calculated to the upper 1 cm of the soil) were dispersed to the soil surface with banded distribution; each transverse band covered a third of the surface area of the plots. Concentration of the REE-oxides in the sediment leaving the plots, and that of the surface soil before and after the experiment, were analysed by X-ray fluorescence spectrometry. Significant sediment losses were found for both plots after the experiments, with slightly different characteristics between the conventional and conservation ones. The highest difference in loss of added REEs was found in the upper third of the plots, with 81 ± 19% in the conventional and 71 ± 21% in the conservation ones. These values have been equalized downwards with almost complete losses in the lower third of the plots (99 ± 2% and 97 ± 4%, respectively). Only a very small part of the removed sediment accumulated in the lower parts of the

  9. Fault-Tolerant Heat Exchanger

    NASA Technical Reports Server (NTRS)

    Izenson, Michael G.; Crowley, Christopher J.

    2005-01-01

    A compact, lightweight heat exchanger has been designed to be fault-tolerant in the sense that a single-point leak would not cause mixing of heat-transfer fluids. This particular heat exchanger is intended to be part of the temperature-regulation system for habitable modules of the International Space Station and to function with water and ammonia as the heat-transfer fluids. The basic fault-tolerant design is adaptable to other heat-transfer fluids and heat exchangers for applications in which mixing of heat-transfer fluids would pose toxic, explosive, or other hazards: Examples could include fuel/air heat exchangers for thermal management on aircraft, process heat exchangers in the cryogenic industry, and heat exchangers used in chemical processing. The reason this heat exchanger can tolerate a single-point leak is that the heat-transfer fluids are everywhere separated by a vented volume and at least two seals. The combination of fault tolerance, compactness, and light weight is implemented in a unique heat-exchanger core configuration: Each fluid passage is entirely surrounded by a vented region bridged by solid structures through which heat is conducted between the fluids. Precise, proprietary fabrication techniques make it possible to manufacture the vented regions and heat-conducting structures with very small dimensions to obtain a very large coefficient of heat transfer between the two fluids. A large heat-transfer coefficient favors compact design by making it possible to use a relatively small core for a given heat-transfer rate. Calculations and experiments have shown that in most respects, the fault-tolerant heat exchanger can be expected to equal or exceed the performance of the non-fault-tolerant heat exchanger that it is intended to supplant (see table). The only significant disadvantages are a slight weight penalty and a small decrease in the mass-specific heat transfer.

  10. Fault detection and fault tolerance in robotics

    NASA Technical Reports Server (NTRS)

    Visinsky, Monica; Walker, Ian D.; Cavallaro, Joseph R.

    1992-01-01

    Robots are used in inaccessible or hazardous environments in order to alleviate some of the time, cost and risk involved in preparing men to endure these conditions. In order to perform their expected tasks, the robots are often quite complex, thus increasing their potential for failures. If men must be sent into these environments to repair each component failure in the robot, the advantages of using the robot are quickly lost. Fault tolerant robots are needed which can effectively cope with failures and continue their tasks until repairs can be realistically scheduled. Before fault tolerant capabilities can be created, methods of detecting and pinpointing failures must be perfected. This paper develops a basic fault tree analysis of a robot in order to obtain a better understanding of where failures can occur and how they contribute to other failures in the robot. The resulting failure flow chart can also be used to analyze the resiliency of the robot in the presence of specific faults. By simulating robot failures and fault detection schemes, the problems involved in detecting failures for robots are explored in more depth.

  11. Fault Management Guiding Principles

    NASA Technical Reports Server (NTRS)

    Newhouse, Marilyn E.; Friberg, Kenneth H.; Fesq, Lorraine; Barley, Bryan

    2011-01-01

    Regardless of mission type (deep space or low Earth orbit, robotic or human spaceflight), Fault Management (FM) is a critical aspect of NASA space missions. As the complexity of space missions grows, the complexity of supporting FM systems increases in turn. Data on recent NASA missions show that development of FM capabilities is a common driver for significant cost overruns late in the project development cycle. Efforts to understand the drivers behind these cost overruns, spearheaded by NASA's Science Mission Directorate (SMD), indicate that they are primarily caused by the growing complexity of FM systems and the lack of maturity of FM as an engineering discipline. NASA can and does develop FM systems that effectively protect mission functionality and assets. The cost growth results from a lack of FM planning and emphasis by project management, as well as the maturity of FM as an engineering discipline, which lags behind the maturity of other engineering disciplines. As a step towards controlling the cost growth associated with FM development, SMD has commissioned a multi-institution team to develop a practitioner's handbook representing best practices for the end-to-end processes involved in engineering FM systems. While currently concentrating primarily on FM for science missions, the expectation is that this handbook will grow into a NASA-wide handbook, serving as a companion to the NASA Systems Engineering Handbook. This paper presents a snapshot of the principles that have been identified to guide FM development from cradle to grave. The principles range from considerations for integrating FM into the project and SE organizational structure, to the relationship between FM designs and mission risk, to the use of the various tools of FM (e.g., redundancy) to meet the FM goal of protecting mission functionality and assets.

  12. Transfer zones in listric normal fault systems

    NASA Astrophysics Data System (ADS)

    Bose, Shamik

    Listric normal faults are common in passive margin settings where sedimentary units are detached above weaker lithological units, such as evaporites or are driven by basal structural and stratigraphic discontinuities. The geometries and styles of faulting vary with the types of detachment and form landward and basinward dipping fault systems. Complex transfer zones therefore develop along the terminations of adjacent faults where deformation is accommodated by secondary faults, often below seismic resolution. The rollover geometry and secondary faults within the hanging wall of the major faults also vary with the styles of faulting and contribute to the complexity of the transfer zones. This study tries to understand the controlling factors for the formation of the different styles of listric normal faults and the different transfer zones formed within them, by using analog clay experimental models. Detailed analyses with respect to fault orientation, density and connectivity have been performed on the experiments in order to gather insights on the structural controls and the resulting geometries. A new high resolution 3D laser scanning technology has been introduced to scan the surfaces of the clay experiments for accurate measurements and 3D visualizations. Numerous examples from the Gulf of Mexico have been included to demonstrate and geometrically compare the observations in experiments and real structures. A salt cored convergent transfer zone from the South Timbalier Block 54, offshore Louisiana has been analyzed in detail to understand the evolutionary history of the region, which helps in deciphering the kinematic growth of similar structures in the Gulf of Mexico. The dissertation is divided into three chapters, written in a journal article format, that deal with three different aspects in understanding the listric normal fault systems and the transfer zones so formed. The first chapter involves clay experimental models to understand the fault patterns in

  13. Deconvolution and analysis of wide-angle longwave radiation data from Nimbus 6 Earth radiation budget experiment for the first year

    NASA Technical Reports Server (NTRS)

    Bess, T. D.; Green, R. N.; Smith, G. L.

    1980-01-01

    One year of longwave radiation data from July 1975 through June 1976 from the Nimbus 6 satellite Earth radiation budget experiment is analyzed by representing the radiation field by a spherical harmonic expansion. The data are from the wide field of view instrument. Contour maps of the longwave radiation field and spherical harmonic coefficients to degree 12 and order 12 are presented for a 12 month data period.
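
    A minimal sketch of the representation described above (assumed grid and a synthetic, band-limited test field; not the Nimbus 6 processing code): the global field is expanded in spherical harmonics truncated at degree and order 12, with coefficients obtained by linear least squares over the grid.

      import numpy as np
      from scipy.special import sph_harm

      # Sketch of a degree/order-12 spherical harmonic fit to a gridded global field
      # (synthetic data; not the Nimbus 6 processing code).
      NMAX = 12
      lon = np.radians(np.arange(2.5, 360.0, 5.0))       # azimuthal angle (longitude)
      colat = np.radians(np.arange(2.5, 180.0, 5.0))     # polar angle (colatitude)
      LON, COLAT = np.meshgrid(lon, colat)

      def real_basis(lon_r, colat_r):
          """Real-valued spherical harmonic basis functions up to degree/order NMAX."""
          cols = []
          for n in range(NMAX + 1):
              for m in range(n + 1):
                  y = sph_harm(m, n, lon_r, colat_r)
                  cols.append(y.real.ravel())
                  if m > 0:
                      cols.append(y.imag.ravel())
          return np.column_stack(cols)

      # Hypothetical, band-limited "observed" longwave flux field (W/sq m), for illustration.
      field = 240.0 - 60.0 * np.cos(COLAT) ** 2 + 5.0 * np.sin(COLAT) ** 3 * np.cos(3.0 * LON)

      A = real_basis(LON, COLAT)                          # (grid points) x (13**2 = 169) matrix
      coeffs, *_ = np.linalg.lstsq(A, field.ravel(), rcond=None)
      recon = (A @ coeffs).reshape(field.shape)
      print("coefficients:", coeffs.size, " max reconstruction error:", np.abs(recon - field).max())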

  14. Radiometer offsets and count conversion coefficients for the Earth Radiation Budget Experiment (ERBE) spacecraft for the years 1987, 1988, and 1989

    NASA Technical Reports Server (NTRS)

    Paden, Jack; Pandey, Dhirendra K.; Stassi, Joseph C.; Wilson, Robert; Bolden, William; Thomas, Susan; Gibson, M. Alan

    1993-01-01

    This document contains a compendium of the ground and in-flight scanner and non-scanner offsets and count conversion (gain) coefficients used for the Earth Radiation Budget Experiment (ERBE) production processing of data from the ERBS satellite for the period from 1 January 1987 to 31 December 1989; for the NOAA-9 satellite, for the month of January 1987; and for the NOAA-10 satellite, for the period from 1 January 1987 to 31 May 1989.

  15. Earth Science With the Stratospheric Aerosol and Gas Experiment III (SAGE III) on the International Space Station

    NASA Technical Reports Server (NTRS)

    Zawodny, Joe; Vernier, Jean-Paul; Thomason, Larry; Roell, Marilee; Pitts, Mike; Moore, Randy; Hill, Charles; Flittner, David; Damadeo, Rob; Cisewski, Mike

    2015-01-01

    The Stratospheric Aerosol and Gas Experiment (SAGE) III is the fourth generation of solar occultation instruments operated by NASA, the first coming under a different acronym, to investigate the Earth's upper atmosphere. Three flight-ready SAGE III instruments were built by Ball Aerospace in the late 1990s, with one launched aboard the former Russian Aviation and Space Agency (now known as Roskosmos) Meteor-3M platform on 10 December 2001 (continuing until the platform lost power in 2006). Another of the original instruments was manifested for the ISS in the 2004 time frame, but was delayed because of budgetary considerations. Fortunately, that SAGE III/ISS mission was restarted in 2009 with a major focus upon filling an anticipated gap in ozone and aerosol observation in the second half of this decade. Here we discuss the mission architecture, its implementation, and data that will be produced by SAGE III/ISS, including their expected accuracy and coverage. The 52-degree inclined orbit of the ISS is well-suited for solar occultation and provides near-global observations on a monthly basis with excellent coverage of low and mid-latitudes. This is similar to that of the SAGE II mission (1985-2005), whose data set has served the international atmospheric science community as a standard for stratospheric ozone and aerosol measurements. The nominal science products include vertical profiles of trace gases, such as ozone, nitrogen dioxide and water vapor, along with multi-wavelength aerosol extinction. Though in the visible portion of the spectrum the brightness of the Sun is one million times that of the full Moon, the SAGE III instrument is designed to cover this large dynamic range and also perform lunar occultations on a routine basis to augment the solar products. The standard lunar products were demonstrated during the SAGE III/M3M mission and include ozone, nitrogen dioxide & nitrogen trioxide. The operational flexibility of the SAGE III spectrometer accomplishes

  16. Earth System Modeling and Field Experiments in the Arctic-Boreal Zone - Report from a NASA Workshop

    NASA Technical Reports Server (NTRS)

    Sellers, Piers; Rienecker Michele; Randall, David; Frolking, Steve

    2012-01-01

    Early climate modeling studies predicted that the Arctic Ocean and surrounding circumpolar land masses would heat up earlier and faster than other parts of the planet as a result of greenhouse gas-induced climate change, augmented by the sea-ice albedo feedback effect. These predictions have been largely borne out by observations over the last thirty years. However, despite constant improvement, global climate models have greater difficulty in reproducing the current climate in the Arctic than elsewhere and the scatter between projections from different climate models is much larger in the Arctic than for other regions. Biogeochemical cycle (BGC) models indicate that the warming in the Arctic-Boreal Zone (ABZ) could lead to widespread thawing of the permafrost, along with massive releases of CO2 and CH4, and large-scale changes in the vegetation cover in the ABZ. However, the uncertainties associated with these BGC model predictions are even larger than those associated with the physical climate system models used to describe climate change. These deficiencies in climate and BGC models reflect, at least in part, an incomplete understanding of the Arctic climate system and can be related to inadequate observational data or analyses of existing data. A workshop was held at NASA/GSFC, May 22-24 2012, to assess the predictive capability of the models, prioritize the critical science questions; and make recommendations regarding new field experiments needed to improve model subcomponents. This presentation will summarize the findings and recommendations of the workshop, including the need for aircraft and flux tower measurements and extension of existing in-situ measurements to improve process modeling of both the physical climate and biogeochemical cycle systems. Studies should be directly linked to remote sensing investigations with a view to scaling up the improved process models to the Earth System Model scale. Data assimilation and observing system simulation

  17. Solar system fault detection

    DOEpatents

    Farrington, R.B.; Pruett, J.C. Jr.

    1984-05-14

    A fault detecting apparatus and method are provided for use with an active solar system. The apparatus provides an indication as to whether one or more predetermined faults have occurred in the solar system. The apparatus includes a plurality of sensors, each sensor being used in determining whether a predetermined condition is present. The outputs of the sensors are combined in a pre-established manner in accordance with the kind of predetermined faults to be detected. Indicators communicate with the outputs generated by combining the sensor outputs to give the user of the solar system and the apparatus an indication as to whether a predetermined fault has occurred. Upon detection and indication of any predetermined fault, the user can take appropriate corrective action so that the overall reliability and efficiency of the active solar system are increased.

  18. Solar system fault detection

    DOEpatents

    Farrington, Robert B.; Pruett, Jr., James C.

    1986-01-01

    A fault detecting apparatus and method are provided for use with an active solar system. The apparatus provides an indication as to whether one or more predetermined faults have occurred in the solar system. The apparatus includes a plurality of sensors, each sensor being used in determining whether a predetermined condition is present. The outputs of the sensors are combined in a pre-established manner in accordance with the kind of predetermined faults to be detected. Indicators communicate with the outputs generated by combining the sensor outputs to give the user of the solar system and the apparatus an indication as to whether a predetermined fault has occurred. Upon detection and indication of any predetermined fault, the user can take appropriate corrective action so that the overall reliability and efficiency of the active solar system are increased.

  19. Study of fault-tolerant software technology

    NASA Technical Reports Server (NTRS)

    Slivinski, T.; Broglio, C.; Wild, C.; Goldberg, J.; Levitt, K.; Hitt, E.; Webb, J.

    1984-01-01

    Presented is an overview of the current state of the art of fault-tolerant software and an analysis of quantitative techniques and models developed to assess its impact. It examines research efforts as well as experience gained from commercial application of these techniques. The paper also addresses the computer architecture and design implications on hardware, operating systems and programming languages (including Ada) of using fault-tolerant software in real-time aerospace applications. It concludes that fault-tolerant software has progressed beyond the pure research state. The paper also finds that, although not perfectly matched, newer architectural and language capabilities provide many of the notations and functions needed to effectively and efficiently implement software fault-tolerance.

  20. Earth Observations

    2013-06-21

    ISS036-E-011034 (21 June 2013) --- The Salton Trough is featured in this image photographed by an Expedition 36 crew member on the International Space Station. The Imperial and Coachella Valleys of southern California – and the corresponding Mexicali Valley and Colorado River Delta in Mexico – are part of the Salton Trough, a large geologic structure known to geologists as a graben or rift valley that extends into the Gulf of California. The trough is a geologically complex zone formed by interaction of the San Andreas transform fault system that is, broadly speaking, moving southern California towards Alaska; and the northward motion of the Gulf of California segment of the East Pacific Rise that continues to widen the Gulf of California by sea-floor spreading. According to scientists, sediments deposited by the Colorado River have been filling the northern rift valley (the Salton Trough) for the past several million years, excluding the waters of the Gulf of California and providing a fertile environment – together with irrigation—for the development of extensive agriculture in the region (visible as green and yellow-brown fields at center). The Salton Sea, a favorite landmark of astronauts in low Earth orbit, was formed by an irrigation canal rupture in 1905, and today is sustained by agricultural runoff water. A wide array of varying landforms and land uses in the Salton Trough are visible from space. In addition to the agricultural fields and Salton Sea, easily visible metropolitan areas include Yuma, AZ (lower left); Mexicali, Baja California, Mexico (center); and the San Diego-Tijuana conurbation on the Pacific Coast (right). The approximately 72-kilometer-long Algodones Dunefield is visible at lower left.

  1. Validation of Helicopter Gear Condition Indicators Using Seeded Fault Tests

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula; Brandon, E. Bruce

    2013-01-01

    A "seeded fault test" in support of a rotorcraft condition based maintenance program (CBM), is an experiment in which a component is tested with a known fault while health monitoring data is collected. These tests are performed at operating conditions comparable to operating conditions the component would be exposed to while installed on the aircraft. Performance of seeded fault tests is one method used to provide evidence that a Health Usage Monitoring System (HUMS) can replace current maintenance practices required for aircraft airworthiness. Actual in-service experience of the HUMS detecting a component fault is another validation method. This paper will discuss a hybrid validation approach that combines in service-data with seeded fault tests. For this approach, existing in-service HUMS flight data from a naturally occurring component fault will be used to define a component seeded fault test. An example, using spiral bevel gears as the targeted component, will be presented. Since the U.S. Army has begun to develop standards for using seeded fault tests for HUMS validation, the hybrid approach will be mapped to the steps defined within their Aeronautical Design Standard Handbook for CBM. This paper will step through their defined processes, and identify additional steps that may be required when using component test rig fault tests to demonstrate helicopter CI performance. The discussion within this paper will provide the reader with a better appreciation for the challenges faced when defining a seeded fault test for HUMS validation.

  2. Methods to enhance seismic faults and construct fault surfaces

    NASA Astrophysics Data System (ADS)

    Wu, Xinming; Zhu, Zhihui

    2017-10-01

    Faults are often apparent as reflector discontinuities in a seismic volume. Numerous types of fault attributes have been proposed to highlight fault positions from a seismic volume by measuring reflection discontinuities. These attribute volumes, however, can be sensitive to noise and stratigraphic features that are also apparent as discontinuities in a seismic volume. We propose a matched filtering method to enhance a precomputed fault attribute volume, and simultaneously estimate fault strikes and dips. In this method, a set of efficient 2D exponential filters, oriented by all possible combinations of strike and dip angles, is applied to the input attribute volume to find the maximum filtering responses at all samples in the volume. These maximum filtering responses are recorded to obtain the enhanced fault attribute volume, while the corresponding strike and dip angles that yield the maximum filtering responses are recorded to obtain volumes of fault strikes and dips. By doing this, we assume that a fault surface is locally planar, and a 2D smoothing filter will yield a maximum response if the smoothing plane coincides with a local fault plane. With the enhanced fault attribute volume and the estimated fault strike and dip volumes, we then compute oriented fault samples on the ridges of the enhanced fault attribute volume, and each sample is oriented by the estimated fault strike and dip. Fault surfaces can be constructed by directly linking the oriented fault samples with consistent fault strikes and dips. For complicated cases with missing fault samples and noisy samples, we further propose to use a perceptual grouping method to infer fault surfaces that reasonably fit the positions and orientations of the fault samples. We apply these methods to 3D synthetic and real examples and successfully extract multiple intersecting fault surfaces and complete fault surfaces without holes.
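
    A simplified sketch of the orientation scan described above (a planar mean filter over integer offsets stands in for the 2D exponential filters, and the volume and angle ranges are made up): for each candidate strike/dip pair the attribute volume is smoothed along the corresponding plane, and the per-sample maximum response, together with the orientation that produced it, is kept.

      import numpy as np

      # Simplified stand-in for the orientation scan: a planar mean filter over
      # integer offsets replaces the 2D exponential filters of the method.
      def plane_vectors(strike_deg, dip_deg):
          s, d = np.radians(strike_deg), np.radians(dip_deg)
          u = np.array([np.cos(s), np.sin(s), 0.0])                                   # along strike
          v = np.array([-np.sin(s) * np.cos(d), np.cos(s) * np.cos(d), np.sin(d)])    # down dip
          return u, v

      def oriented_scan(attr, strikes, dips, half_width=2):
          best = np.full(attr.shape, -np.inf)
          best_strike = np.zeros(attr.shape)
          best_dip = np.zeros(attr.shape)
          offsets = range(-half_width, half_width + 1)
          for strike in strikes:
              for dip in dips:
                  u, v = plane_vectors(strike, dip)
                  resp = np.zeros(attr.shape)
                  for k in offsets:                        # average the attribute over a small
                      for l in offsets:                    # patch of the candidate fault plane
                          shift = tuple(np.rint(k * u + l * v).astype(int))
                          resp += np.roll(attr, shift, axis=(0, 1, 2))
                  resp /= len(offsets) ** 2
                  better = resp > best                     # keep max response and its orientation
                  best[better] = resp[better]
                  best_strike[better] = strike
                  best_dip[better] = dip
          return best, best_strike, best_dip

      # Tiny synthetic attribute volume containing a dipping planar anomaly (illustrative only).
      rng = np.random.default_rng(0)
      attr = 0.1 * rng.random((32, 32, 32))
      i, j = np.meshgrid(np.arange(32), np.arange(32), indexing="ij")
      attr[i, j, np.clip(8 + j // 2, 0, 31)] = 1.0
      enhanced, strike_vol, dip_vol = oriented_scan(attr, strikes=range(0, 180, 30), dips=range(30, 90, 15))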

  3. Practicing ESD at School: Integration of Formal and Nonformal Education Methods Based on the Earth Charter (Belarusian Experience)

    ERIC Educational Resources Information Center

    Savelava, Sofia; Savelau, Dmitry; Cary, Marina Bakhnova

    2010-01-01

    The Earth Charter represents the philosophy and ethics necessary to create a new period of human civilization. Understanding and adoption of this new vision is the most important mission of education for sustainable development (ESD). This article argues that for successful implementation of ESD principles at school, the school education system…

  4. Urban Fifth Graders' Connections-Making between Formal Earth Science Content and Their Lived Experiences

    ERIC Educational Resources Information Center

    Brkich, Katie Lynn

    2014-01-01

    Earth science education, as it is traditionally taught, involves presenting concepts such as weathering, erosion, and deposition using relatively well-known examples--the Grand Canyon, beach erosion, and others. However, these examples--which resonate well with middle- and upper-class students--ill-serve students of poverty attending urban schools…

  5. Earth Science

    1996-01-31

    The Near Earth Asteroid Rendezvous (NEAR) spacecraft embarks on a journey that will culminate in a close encounter with an asteroid. The launch of NEAR inaugurates NASA's innovative Discovery program of small-scale planetary missions with rapid, lower-cost development cycles and focused science objectives. NEAR will rendezvous in 1999 with the asteroid 433 Eros to begin the first long-term, close-up look at an asteroid's surface composition and physical properties. NEAR's science payload includes an x-ray/gamma ray spectrometer, a near-infrared spectrograph, a laser rangefinder, a magnetometer, a radio science experiment and a multi-spectral imager.

  6. Strike-slip fault propagation and linkage via work optimization with application to the San Jacinto fault, California

    NASA Astrophysics Data System (ADS)

    Madden, E. H.; McBeck, J.; Cooke, M. L.

    2013-12-01

    Over multiple earthquake cycles, strike-slip faults link to form through-going structures, as demonstrated by the continuous nature of the mature San Andreas fault system in California relative to the younger and more segmented San Jacinto fault system nearby. Despite its immaturity, the San Jacinto system accommodates between one third and one half of the slip along the boundary between the North American and Pacific plates. It therefore poses a significant seismic threat to southern California. Better understanding of how the San Jacinto system has evolved over geologic time and of current interactions between faults within the system is critical to assessing this seismic hazard accurately. Numerical models are well suited to simulating kilometer-scale processes, but models of fault system development are challenged by the multiple physical mechanisms involved. For example, laboratory experiments on brittle materials show that faults propagate and eventually join (hard-linkage) by both opening-mode and shear failure. In addition, faults interact prior to linkage through stress transfer (soft-linkage). The new algorithm GROW (GRowth by Optimization of Work) accounts for this complex array of behaviors by taking a global approach to fault propagation while adhering to the principles of linear elastic fracture mechanics. This makes GROW a powerful tool for studying fault interactions and fault system development over geologic time. In GROW, faults evolve to minimize the work (or energy) expended during deformation, thereby maximizing the mechanical efficiency of the entire system. Furthermore, the incorporation of both static and dynamic friction allows GROW models to capture fault slip and fault propagation in single earthquakes as well as over consecutive earthquake cycles. GROW models with idealized faults reveal that the initial fault spacing and the applied stress orientation control fault linkage propensity and linkage patterns. These models allow the gains in

  7. Fault Management Metrics

    NASA Technical Reports Server (NTRS)

    Johnson, Stephen B.; Ghoshal, Sudipto; Haste, Deepak; Moore, Craig

    2017-01-01

    This paper describes the theory and considerations in the application of metrics to measure the effectiveness of fault management. Fault management refers here to the operational aspect of system health management, and as such is considered a meta-control loop that operates to preserve or maximize the system's ability to achieve its goals in the face of current or prospective failure. Because fault management operates as a suite of control loops, the metrics used to estimate and measure its effectiveness are, like those of classical control loops, divided into two major classes: state estimation and state control. State estimation metrics can be classified into lower-level subdivisions for detection coverage, detection effectiveness, fault isolation and fault identification (diagnostics), and failure prognosis. State control metrics can be classified into response determination effectiveness and response effectiveness. These metrics are applied to each and every fault management control loop in the system, for each failure to which they apply, and probabilistically summed to determine how effectively these fault management control loops preserve the relevant system goals that they are intended to protect.
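
    As a notional illustration of the probabilistic summation described above, the snippet below weights per-failure detection, isolation, and response probabilities by each failure mode's probability of occurrence. The numbers and the specific combination rule are illustrative assumptions, not values or formulas from the paper.

    ```python
    # Illustrative only: one plausible way to combine per-failure detection,
    # isolation, and response probabilities into a single fault-management
    # effectiveness number, weighted by each failure mode's occurrence probability.
    failure_modes = [
        # (P(occurrence), P(detect), P(isolate), P(respond effectively))
        (0.010, 0.99, 0.95, 0.90),
        (0.002, 0.90, 0.80, 0.85),
        (0.005, 0.95, 0.99, 0.98),
    ]

    weighted = sum(p * pd * pi * pr for p, pd, pi, pr in failure_modes)
    total = sum(p for p, *_ in failure_modes)
    print(f"Overall fault-management effectiveness: {weighted / total:.3f}")
    ```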

  8. Block rotations, fault domains and crustal deformation in the western US

    NASA Technical Reports Server (NTRS)

    Nur, Amos

    1990-01-01

    The aim of the project was to develop a 3D model of crustal deformation by distributed fault sets and to test the model results in the field. In the first part of the project, Nur's 2D model (1986) was generalized to 3D. In Nur's model, the frictional strength of rocks and faults of a domain provides a tight constraint on the amount of rotation that a fault set can undergo during block rotation. Domains of fault sets are commonly found where deformation is distributed across a region. The interaction of each fault set causes the fault-bounded blocks to rotate. The work that has been done towards quantifying the rotation of fault sets in a 3D stress field is briefly summarized. In the second part of the project, field studies were carried out in Israel, Nevada and China. These studies combined both the paleomagnetic and structural information necessary to test the block rotation model results. In accordance with the model, field studies demonstrate that faults and the attending fault-bounded blocks slip and rotate away from the direction of maximum compression when deformation is distributed across fault sets. Slip and rotation of fault sets may continue as long as the Earth's crustal strength is not exceeded. More optimally oriented faults must form for subsequent deformation to occur. Eventually the block rotation mechanism may create a complex pattern of intersecting generations of faults.

  9. Alaska's Secondary Science Teachers and Students Receive Earth Systems Science Knowledge, GIS Know How and University Technical Support for Pre- College Research Experiences: The EDGE Project

    NASA Astrophysics Data System (ADS)

    Connor, C. L.; Prakash, A.

    2007-12-01

    Alaska's secondary school teachers are increasingly required to provide Earth systems science (ESS) education that integrates student observations of local natural processes related to rapid climate change with geospatial datasets and satellite imagery using Geographic Information Systems (GIS) technology. Such skills are also valued in various employment sectors of the state, where job opportunities requiring Earth science and GIS training are increasing. University of Alaska's EDGE (Experiential Discoveries in Geoscience Education) program has provided training and classroom resources for three cohorts of in-service Alaska science and math teachers in GIS and Earth Systems Science (2005-2007). Summer workshops include geologic field experiences, GIS instruction, computer equipment and technical support for groups of Alaska high school (HS) and middle school (MS) science teachers each June and their students in August. Since 2005, EDGE has increased Alaska science and math teachers' Earth science content knowledge and developed their GIS and computer skills. In addition, EDGE has guided teachers using a follow-up, fall online course that provided more extensive ESS knowledge linked with classroom standards and provided course content that was directly transferable into their MS and HS science classrooms. EDGE teachers were mentored by University faculty and technical staff as they guided their own students through semester-scale, science-fair-style projects using student-collected geospatial data. EDGE program assessment indicates that all teachers have improved their ESS knowledge, GIS knowledge, and the use of technology in their classrooms. More than 230 middle school students have learned GIS from EDGE teachers, and 50 EDGE secondary students have conducted original research related to landscape change and its impacts on their own communities. Longer-term EDGE goals include improving student performance on the newly implemented (spring 2008) 10th grade

  10. Postglacial rebound and fault instability in Fennoscandia

    NASA Astrophysics Data System (ADS)

    Wu, Patrick; Johnston, Paul; Lambeck, Kurt

    1999-12-01

    The best available rebound model is used to investigate the role that postglacial rebound plays in triggering seismicity in Fennoscandia. The salient features of the model include tectonic stress due to spreading at the North Atlantic Ridge, overburden pressure, gravitationally self-consistent ocean loading, and the realistic deglaciation history and compressible earth model which best fits the sea-level and ice data in Fennoscandia. The model predicts the spatio-temporal evolution of the state of stress, the magnitude of fault instability, the timing of the onset of this instability, and the mode of failure of lateglacial and postglacial seismicity. The consistency of the predictions with the observations suggests that postglacial rebound is probably the cause of the large postglacial thrust faults observed in Fennoscandia. The model also predicts a uniform stress field and instability in central Fennoscandia for the present, with thrust faulting as the predicted mode of failure. However, the lack of spatial correlation of the present seismicity with the region of uplift, and the existence of strike-slip and normal modes of current seismicity are inconsistent with this model. Further unmodelled factors such as the presence of high-angle faults in the central region of uplift along the Baltic coast would be required in order to explain the pattern of seismicity today in terms of postglacial rebound stress. The sensitivity of the model predictions to the effects of compressibility, tectonic stress, viscosity and ice model is also investigated. For sites outside the ice margin, it is found that the mode of failure is sensitive to the presence of tectonic stress and that the onset timing is also dependent on compressibility. For sites within the ice margin, the effect of Earth rheology is shown to be small. However, ice load history is shown to have larger effects on the onset time of earthquakes and the magnitude of fault instability.

  11. Fault detection and isolation

    NASA Technical Reports Server (NTRS)

    Bernath, Greg

    1994-01-01

    In order for a current satellite-based navigation system (such as the Global Positioning System, GPS) to meet integrity requirements, there must be a way of detecting erroneous measurements, without help from outside the system. This process is called Fault Detection and Isolation (FDI). Fault detection requires at least one redundant measurement, and can be done with a parity space algorithm. The best way around the fault isolation problem is not necessarily isolating the bad measurement, but finding a new combination of measurements which excludes it.
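
    A common way to implement such a check is the parity space projection sketched below: project the measurement vector onto the left null space of the geometry matrix, so the resulting parity vector depends only on noise and measurement faults. The matrix sizes, noise levels, and detection threshold in this sketch are hypothetical, not taken from the paper.

    ```python
    import numpy as np

    def parity_fault_test(H, y, threshold):
        """Parity-space fault detection sketch.
        H: (m, n) measurement geometry matrix with m > n (redundant measurements).
        y: (m,) measurement vector; the parity projection cancels the H @ x part,
        so the parity vector reflects only noise and measurement faults."""
        U, _, _ = np.linalg.svd(H)
        P = U[:, H.shape[1]:].T        # rows span the left null space of H (P @ H ~ 0)
        p = P @ y                      # parity vector
        stat = float(p @ p)            # test statistic, compared against a threshold
        return stat, stat > threshold

    # Hypothetical example: 6 measurements, 4 unknowns, a bias injected on measurement 2.
    rng = np.random.default_rng(0)
    H = rng.normal(size=(6, 4))
    y = H @ rng.normal(size=4) + rng.normal(scale=0.01, size=6)
    y[2] += 1.0
    print(parity_fault_test(H, y, threshold=0.05))   # threshold is arbitrary here
    ```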

  12. Frictional and hydraulic behaviour of carbonate fault gouge during fault reactivation - An experimental study

    NASA Astrophysics Data System (ADS)

    Delle Piane, Claudio; Giwelli, Ausama; Clennell, M. Ben; Esteban, Lionel; Nogueira Kiewiet, Melissa Cristina D.; Kiewiet, Leigh; Kager, Shane; Raimon, John

    2016-10-01

    We present a novel experimental approach devised to test the hydro-mechanical behaviour of different structural elements of carbonate fault rocks during experimental re-activation. Experimentally faulted core plugs were subjected to triaxial tests under water-saturated conditions simulating depletion processes in reservoirs. Different fault zone structural elements were created by shearing initially intact travertine blocks (nominal size: 240 × 110 × 150 mm) to a maximum displacement of 20 and 120 mm under different normal stresses. Meso- and microstructural features of these samples, and the thickness-to-displacement ratio of their deformation zones, allowed us to classify them as experimentally created damage zones (displacement of 20 mm) and fault cores (displacement of 120 mm). Following direct shear testing, cylindrical plugs with a diameter of 38 mm were drilled across the slip surface to be re-activated in a conventional triaxial configuration monitoring the permeability and frictional behaviour of the samples as a function of applied stress. All re-activation experiments on faulted plugs showed consistent frictional response consisting of an initial fast hardening followed by apparent yield up to a friction coefficient of approximately 0.6 attained at around 2 mm of displacement. Permeability in the re-activation experiments shows exponential decay with increasing mean effective stress. The rate of permeability decline with mean effective stress is higher in the fault core plugs than in the simulated damage zone ones. It can be concluded that the presence of gouge in un-cemented carbonate faults results in their sealing character and that leakage cannot be achieved by renewed movement on the fault plane alone, at least not within the range of slip measurable with our apparatus (i.e. approximately 7 mm of cumulative displacement). Additionally, it is shown that under sub-seismic slip rates re-activated carbonate faults remain strong and no frictional
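
    The reported exponential decay of permeability with mean effective stress can be fit with a simple log-linear regression; the sketch below uses made-up permeability values solely to illustrate the fitting step, not the paper's data.

    ```python
    import numpy as np

    # Hypothetical (made-up) permeability k [m^2] at mean effective stresses sigma [MPa],
    # mimicking the reported exponential decay k = k0 * exp(-c * sigma).
    sigma = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
    k = np.array([2.0e-15, 1.1e-15, 6.5e-16, 3.6e-16, 2.1e-16])

    # Linear fit in log space: ln k = ln k0 - c * sigma
    slope, ln_k0 = np.polyfit(sigma, np.log(k), 1)
    print(f"decay rate c = {-slope:.3f} 1/MPa, k0 = {np.exp(ln_k0):.2e} m^2")
    ```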

  13. Fault detection and diagnosis of photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Wu, Xing

    The rapid growth of the solar industry over the past several years has expanded the significance of photovoltaic (PV) systems. One of the primary aims of research in building-integrated PV systems is to improve the system's efficiency, availability, and reliability. Although much work has been done on technological design to increase a photovoltaic module's efficiency, there is little research so far on fault diagnosis for PV systems. Faults in a PV system, if not detected, may not only reduce power generation, but also threaten the availability and reliability, effectively the "security" of the whole system. In this paper, first a circuit-based simulation baseline model of a PV system with maximum power point tracking (MPPT) is developed using MATLAB software. MATLAB is one of the most popular tools for integrating computation, visualization and programming in an easy-to-use modeling environment. Second, data collection of a PV system at variable surface temperatures and insolation levels under normal operation is acquired. The developed simulation model of the PV system is then calibrated and improved by comparing modeled I-V and P-V characteristics with measured I-V and P-V characteristics to make sure the simulated curves are close to the values measured in the experiments. Finally, based on the circuit-based simulation model, a PV model of various types of faults will be developed by changing conditions or inputs in the MATLAB model, and the I-V and P-V characteristic curves, and the time-dependent voltage and current characteristics of the fault modalities, will be characterized for each type of fault. These will be developed as benchmark I-V or P-V, or prototype transient, curves. If a fault occurs in a PV system, polling and comparing actual measured I-V and P-V characteristic curves with both normal operational curves and these baseline fault curves will aid in fault diagnosis.
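
    The final diagnosis step described above amounts to matching a measured curve against a library of benchmark curves. A hedged sketch of that comparison is shown below; the curve shapes, fault labels, and nearest-match criterion are placeholders rather than the author's MATLAB model.

    ```python
    import numpy as np

    def classify_iv_curve(measured_i, library):
        """Return the library entry whose I-V curve is closest (RMS sense) to the measured
        curve. `library` maps a label to a current array on the same voltage grid."""
        scores = {label: np.sqrt(np.mean((measured_i - ref) ** 2))
                  for label, ref in library.items()}
        best = min(scores, key=scores.get)
        return best, scores

    # Placeholder curves on a common voltage grid (labels and values are illustrative).
    v = np.linspace(0.0, 40.0, 50)
    library = {
        "normal":        8.0 * (1 - (v / 40.0) ** 12),
        "shaded_module": 6.0 * (1 - (v / 40.0) ** 12),
        "open_string":   4.0 * (1 - (v / 38.0) ** 12),
    }
    measured = 5.9 * (1 - (v / 40.0) ** 12) + np.random.default_rng(1).normal(0, 0.05, v.size)
    print(classify_iv_curve(measured, library)[0])   # expected: "shaded_module"
    ```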

  14. Hayward Fault, California Interferogram

    2000-08-17

    This image of California's Hayward fault is an interferogram created using a pair of images taken by ESA's ERS-1 and ERS-2 satellites in June 1992 and September 1997 over the central San Francisco Bay in California.

  15. Hayward Fault, California Interferogram

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This image of California's Hayward fault is an interferogram created using a pair of images taken by Synthetic Aperture Radar (SAR) combined to measure changes in the surface that may have occurred between the time the two images were taken.

    The images were collected by the European Space Agency's Remote Sensing satellites ERS-1 and ERS-2 in June 1992 and September 1997 over the central San Francisco Bay in California.

    The radar image data are shown as a gray-scale image, with the interferometric measurements that show the changes rendered in color. Only the urbanized area could be mapped with these data. The color changes from orange tones to blue tones across the Hayward fault (marked by a thin red line) show about 2-3 centimeters (0.8-1.1 inches) of gradual displacement or movement of the southwest side of the fault. The block west of the fault moved horizontally toward the northwest during the 63 months between the acquisition of the two SAR images. This fault movement is called aseismic creep because the fault moved slowly without generating an earthquake.

    Scientists are using the SAR interferometry along with other data collected on the ground to monitor this fault motion in an attempt to estimate the probability of an earthquake on the Hayward fault, which last had a major earthquake of magnitude 7 in 1868. This analysis indicates that the northern part of the Hayward fault is creeping all the way from the surface to a depth of 12 kilometers (7.5 miles). This suggests that the potential for a large earthquake on the northern Hayward fault might be less than previously thought. The blue area to the west (lower left) of the fault near the center of the image seemed to move upward relative to the yellow and orange areas nearby by about 2 centimeters (0.8 inches). The cause of this apparent motion is not yet confirmed, but the rise of groundwater levels during the time between the images may have caused the reversal of a small portion of the subsidence that

  16. Theoretical constraints on dynamic pulverization of fault zone rocks

    NASA Astrophysics Data System (ADS)

    Xu, Shiqing; Ben-Zion, Yehuda

    2017-04-01

    We discuss dynamic rupture results aiming to elucidate the generation mechanism of pulverized fault zone rocks (PFZR) observed in 100-200 m wide belts distributed asymmetrically across major strike-slip faults separating different crustal blocks. Properties of subshear and supershear ruptures are considered using analytical results of Linear Elastic Fracture Mechanics and numerical simulations of Mode-II ruptures along faults between similar or dissimilar solids. The dynamic fields of bimaterial subshear ruptures are expected to produce off-fault damage primarily on the stiff side of the fault, with tensile cracks having no preferred orientation, in agreement with field observations. Subshear ruptures in a homogeneous solid are expected to produce off-fault damage with high-angle tensile cracks on the extensional side of the fault, while supershear ruptures between similar or dissimilar solids are likely to produce off-fault damage on both sides of the fault with preferred tensile crack orientations. One or more of these features are not consistent with properties of natural samples of PFZR. At a distance of about 100 m from the fault, subshear and supershear ruptures without stress singularities produce strain rates up to 1 s⁻¹. This is less than required for rock pulverization in laboratory experiments with centimetre-scale intact rock samples, but may be sufficient for pulverizing larger samples with pre-existing damage.

  17. Cable-fault locator

    NASA Technical Reports Server (NTRS)

    Cason, R. L.; Mcstay, J. J.; Heymann, A. P., Sr.

    1979-01-01

    Inexpensive system automatically indicates location of short-circuited section of power cable. Monitor does not require that cable be disconnected from its power source or that test signals be applied. Instead, ground-current sensors are installed in manholes or at other selected locations along cable run. When fault occurs, sensors transmit information about fault location to control center. Repair crew can be sent to location and cable can be returned to service with minimum of downtime.

  18. Modeling of a latent fault detector in a digital system

    NASA Technical Reports Server (NTRS)

    Nagel, P. M.

    1978-01-01

    Methods of modeling the detection time or latency period of a hardware fault in a digital system are proposed that explain how a computer detects faults in a computational mode. The objectives were to study how software reacts to a fault, to account for as many variables as possible affecting detection, and to forecast a given program's detecting ability prior to computation. A series of experiments were conducted on a small emulated microprocessor with fault injection capability. Results indicate that the detecting capability of a program largely depends on the instruction subset used during computation and the frequency of its use, and has little direct dependence on such variables as fault mode, number set, degree of branching and program length. A model is discussed which employs an analogy with balls in an urn to explain the rate at which subsequent repetitions of an instruction or instruction set detect a given fault.
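
    The urn analogy can be illustrated with a toy calculation: if each execution of an instruction that exercises the faulty hardware has a fixed chance p of exposing the fault, the probability of detection within k repetitions is 1 - (1 - p)^k. The value of p below is purely illustrative and not taken from the experiments.

    ```python
    # Toy illustration of the urn analogy: each execution of an instruction that
    # "touches" the faulty hardware has a fixed chance p of exposing the fault.
    p = 0.02          # assumed per-execution detection probability (illustrative)
    for k in (10, 100, 1000):
        print(f"P(detected within {k:>4} executions) = {1 - (1 - p) ** k:.3f}")
    ```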

  19. Achieving Agreement in Three Rounds With Bounded-Byzantine Faults

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2015-01-01

    A three-round algorithm is presented that guarantees agreement in a system of K nodes, with K ≥ 3F + 1, provided each faulty node induces no more than F faults and each good node experiences no more than F faults, where F is the maximum number of simultaneous faults in the network. The algorithm is based on the Oral Message algorithm of Lamport et al., is scalable with respect to the number of nodes in the system, and applies equally to the traditional node-fault model and the link-fault model. We also present a mechanical verification of the algorithm, focusing on verifying the correctness of a bounded model of the algorithm as well as confirming claims of determinism.

  20. Creating an automated chiller fault detection and diagnostics tool using a data fault library.

    PubMed

    Bailey, Margaret B; Kreider, Jan F

    2003-07-01

    Reliable, automated detection and diagnosis of abnormal behavior within vapor compression refrigeration cycle (VCRC) equipment is extremely desirable for equipment owners and operators. The specific type of VCRC equipment studied in this paper is a 70-ton helical rotary, air-cooled chiller. The fault detection and diagnostic (FDD) tool developed as part of this research analyzes chiller operating data and detects faults through recognizing trends or patterns existing within the data. The FDD method incorporates a neural network (NN) classifier to infer the current state given a vector of observables. The FDD method therefore relies upon the availability of normal and fault empirical data for training purposes, and a fault library of empirical data is assembled. This paper presents procedures for conducting sophisticated fault experiments on chillers that simulate air-cooled condenser, refrigerant, and oil-related faults. The experimental processes described here are not well documented in the literature and therefore will provide the interested reader with a useful guide. In addition, the authors provide evidence, based on both thermodynamics and empirical data analysis, that chiller performance is significantly degraded during fault operation. The chiller's performance degradation is successfully detected and classified by the NN FDD classifier as discussed in the paper's final section.
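
    A hedged sketch of the classification step is shown below, using a scikit-learn multilayer perceptron as a stand-in for the paper's neural network. The feature vectors, fault labels, and cluster centers are invented placeholders; a real fault library would be populated from chiller experiments like those described above.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # Placeholder fault library: rows are observation vectors (e.g. condenser approach
    # temperature, suction superheat, oil pressure differential), labels name the state.
    rng = np.random.default_rng(0)
    X_normal  = rng.normal([5.0, 8.0, 2.0], 0.3, size=(50, 3))
    X_condfan = rng.normal([9.0, 8.5, 2.0], 0.3, size=(50, 3))   # condenser-side fault
    X_lowref  = rng.normal([5.5, 12.0, 2.0], 0.3, size=(50, 3))  # refrigerant undercharge
    X = np.vstack([X_normal, X_condfan, X_lowref])
    y = ["normal"] * 50 + ["condenser_fault"] * 50 + ["low_refrigerant"] * 50

    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
    clf.fit(X, y)
    print(clf.predict([[9.1, 8.4, 2.1]]))   # lies near the condenser-fault cluster
    ```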

  1. Design of Community Resource Inventories as a Component of Scalable Earth Science Infrastructure: Experience of the Earthcube CINERGI Project

    NASA Astrophysics Data System (ADS)

    Zaslavsky, I.; Richard, S. M.; Valentine, D. W., Jr.; Grethe, J. S.; Hsu, L.; Malik, T.; Bermudez, L. E.; Gupta, A.; Lehnert, K. A.; Whitenack, T.; Ozyurt, I. B.; Condit, C.; Calderon, R.; Musil, L.

    2014-12-01

    EarthCube is envisioned as a cyberinfrastructure that fosters new, transformational geoscience by enabling sharing, understanding and scientifically-sound and efficient re-use of formerly unconnected data resources, software, models, repositories, and computational power. Its purpose is to enable science enterprise and workforce development via an extensible and adaptable collaboration and resource integration framework. A key component of this vision is development of comprehensive inventories supporting resource discovery and re-use across geoscience domains. The goal of the EarthCube CINERGI (Community Inventory of EarthCube Resources for Geoscience Interoperability) project is to create a methodology and assemble a large inventory of high-quality information resources with standard metadata descriptions and traceable provenance. The inventory is compiled from metadata catalogs maintained by geoscience data facilities, as well as from user contributions. The latter mechanism relies on community resource viewers: online applications that support update and curation of metadata records. Once harvested into CINERGI, metadata records from domain catalogs and community resource viewers are loaded into a staging database implemented in MongoDB, and validated for compliance with ISO 19139 metadata schema. Several types of metadata defects detected by the validation engine are automatically corrected with help of several information extractors or flagged for manual curation. The metadata harvesting, validation and processing components generate provenance statements using W3C PROV notation, which are stored in a Neo4J database. Thus curated metadata, along with the provenance information, is re-published and accessed programmatically and via a CINERGI online application. This presentation focuses on the role of resource inventories in a scalable and adaptable information infrastructure, and on the CINERGI metadata pipeline and its implementation challenges. Key project

  2. Fault geometric complexity and how it may cause temporal slip-rate variation within an interacting fault system

    NASA Astrophysics Data System (ADS)

    Zielke, Olaf; Arrowsmith, Ramon

    2010-05-01

    observed in laboratory friction experiments and expressed in an [a-b] term in Rate-State-Friction (RSF) theory. Patches in the seismic zone are incrementally loaded during the interseismic phase. An earthquake initiates if shear stress along at least one (seismic) patch exceeds its static frictional strength and may grow in size due to elastic interaction with other fault patches (static stress transfer). Aside from investigating slip-rate variations due to elastic interactions within a fault system with this tool, we want to show how such modeling results can be useful in exploring the physics underlying the patterns seen in paleoseismology, and how simulation and observation can be merged, with both making important contributions. Using FIMozFric, we generated synthetic seismic records for a large number of fault geometries and structural scenarios to investigate along-fault slip accumulation patterns and the variability of slip at a point. Our simulations show that fault geometric complexity and the accompanying fault interactions and multi-fault ruptures may cause temporal deviations from the average fault slip-rate, in other words phases of earthquake clustering or relative quiescence. Slip-rates along faults within an interacting fault system may change even when the loading function (stressing rate) remains constant, and the magnitude of slip-rate change is suggested to be proportional to the magnitude of fault interaction. Thus, spatially isolated and structurally mature faults are expected to experience smaller slip-rate changes than strongly interacting and less mature faults. The magnitude of slip-rate change may serve as a proxy for the magnitude of fault interaction and vice versa.
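
    The [a-b] term mentioned above controls steady-state rate-and-state friction. The sketch below evaluates the standard steady-state relation mu_ss = mu0 + (a - b) ln(V/V0) for a few slip velocities; the parameter values are illustrative and not taken from the FIMozFric model.

    ```python
    import numpy as np

    def mu_steady_state(v, mu0=0.6, v0=1e-6, a=0.010, b=0.015):
        """Steady-state rate-and-state friction: mu_ss = mu0 + (a - b) * ln(v / v0).
        With a - b < 0 the patch is velocity-weakening (potentially seismic);
        with a - b > 0 it is velocity-strengthening (stable creep).
        Parameter values here are illustrative placeholders."""
        return mu0 + (a - b) * np.log(v / v0)

    for v in (1e-9, 1e-6, 1e-3):   # slip velocities in m/s
        print(f"v = {v:.0e} m/s -> mu_ss = {mu_steady_state(v):.3f}")
    ```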

  3. The large area crop inventory experiment: An experiment to demonstrate how space-age technology can contribute to solving critical problems here on earth

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The large area crop inventory experiment is being developed to predict crop production through satellite photographs. This experiment demonstrates how space age technology can contribute to solving practical problems of agriculture management.

  4. An Intelligent Gear Fault Diagnosis Methodology Using a Complex Wavelet Enhanced Convolutional Neural Network.

    PubMed

    Sun, Weifang; Yao, Bin; Zeng, Nianyin; Chen, Binqiang; He, Yuchao; Cao, Xincheng; He, Wangpeng

    2017-07-12

    As a typical example of large and complex mechanical systems, rotating machinery is prone to diversified sorts of mechanical faults. Among these faults, one of the prominent causes of malfunction arises in gear transmission chains. Although fault signatures can be collected via vibration signals, they are usually submerged in overwhelming interfering content. Therefore, identifying the critical fault's characteristic signal is far from an easy task. In order to improve the recognition accuracy of a fault's characteristic signal, a novel intelligent fault diagnosis method is presented. In this method, a dual-tree complex wavelet transform (DTCWT) is employed to acquire the multiscale signal's features. In addition, a convolutional neural network (CNN) approach is utilized to automatically recognise a fault feature from the multiscale signal features. The experimental results for gear fault recognition show the feasibility and effectiveness of the proposed method, especially for the gear's weak fault features.
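
    A skeletal PyTorch version of the classifier stage is sketched below. The dual-tree complex wavelet front end is not reproduced, so a random tensor stands in for the multiscale features, and the layer sizes and class count are assumptions rather than the authors' architecture.

    ```python
    import torch
    import torch.nn as nn

    # Skeletal 1-D CNN in the spirit of the paper's classifier stage. The input is assumed
    # to be a (batch, channels, length) tensor of multiscale features, e.g. DTCWT sub-band
    # envelopes; a random placeholder stands in for real data here.
    class GearFaultCNN(nn.Module):
        def __init__(self, in_channels=6, n_classes=4):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(in_channels, 16, kernel_size=9, padding=4), nn.ReLU(),
                nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, x):
            return self.classifier(self.features(x).squeeze(-1))

    model = GearFaultCNN()
    dummy = torch.randn(8, 6, 1024)          # 8 signals, 6 sub-bands, 1024 samples each
    print(model(dummy).shape)                # -> torch.Size([8, 4])
    ```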

  5. Application Research of Fault Tree Analysis in Grid Communication System Corrective Maintenance

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Yang, Zhenwei; Kang, Mei

    2018-01-01

    This paper attempts to apply the fault tree analysis method to the corrective maintenance of grid communication systems. Through the establishment of a fault tree model of a typical system, combined with engineering experience, fault tree analysis theory is used to analyze the model, covering structure functions, probability importance measures, and related quantities. The results show that fault tree analysis enables fast fault location and effective repair of the system. Meanwhile, it is found that the fault tree analysis method has guiding significance for reliability research and upgrading of the system.
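
    A minimal fault-tree evaluation, assuming independent basic events and simple AND/OR gates, is sketched below; the example tree and probabilities are invented for illustration and are not drawn from the grid communication system studied in the paper.

    ```python
    # Minimal fault-tree evaluation assuming independent basic events.
    # OR gate: P = 1 - prod(1 - p_i); AND gate: P = prod(p_i).
    def evaluate(node, basic):
        if isinstance(node, str):                 # leaf: basic event name
            return basic[node]
        gate, children = node
        probs = [evaluate(c, basic) for c in children]
        out = 1.0
        if gate == "AND":
            for p in probs:
                out *= p
            return out
        if gate == "OR":
            for p in probs:
                out *= (1.0 - p)
            return 1.0 - out
        raise ValueError(gate)

    # Made-up example: "link down" if the fiber is cut OR both redundant switches fail.
    tree = ("OR", ["fiber_cut", ("AND", ["switch_A_fail", "switch_B_fail"])])
    basic = {"fiber_cut": 0.001, "switch_A_fail": 0.01, "switch_B_fail": 0.01}
    print(f"P(top event) = {evaluate(tree, basic):.6f}")
    ```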

  6. The Bear River Fault Zone, Wyoming and Utah: Complex Ruptures on a Young Normal Fault

    NASA Astrophysics Data System (ADS)

    Schwartz, D. P.; Hecker, S.; Haproff, P.; Beukelman, G.; Erickson, B.

    2012-12-01

    The Bear River fault zone (BRFZ), a set of normal fault scarps located in the Rocky Mountains at the eastern margin of Basin and Range extension, is a rare example of a nascent surface-rupturing fault. Paleoseismic investigations (West, 1994; this study) indicate that the entire neotectonic history of the BRFZ may consist of two large surface-faulting events in the late Holocene. We have estimated a maximum per-event vertical displacement of 6-6.5 m at the south end of the fault where it abuts the north flank of the east-west-trending Uinta Mountains. However, large hanging-wall depressions resulting from back rotation, which front scarps that locally exceed 15 m in height, are prevalent along the main trace, obscuring the net displacement and its along-strike distribution. The modest length (~35 km) of the BRFZ indicates ruptures with a large displacement-to-length ratio, which implies earthquakes with a high static stress drop. The BRFZ is one of several immature (low cumulative displacement) normal faults in the Rocky Mountain region that appear to produce high stress-drop earthquakes. West (1992) interpreted the BRFZ as an extensionally reactivated ramp of the late Cretaceous-early Tertiary Hogsback thrust. LiDAR data on the southern section of the fault and Google Earth imagery show that these young ruptures are more extensive than currently mapped, with newly identified large (>10 m) antithetic scarps and footwall graben. The scarps of the BRFZ extend across a 2.5-5.0 km-wide zone, making this the widest and most complex Holocene surface rupture in the Intermountain West. The broad distribution of Late Holocene scarps is consistent with reactivation of shallow bedrock structures, but the overall geometry of the BRFZ at depth and its extent into the seismogenic zone are uncertain.

  7. NASA Earth Day 2014

    2014-04-22

    NASA's Administrator, Charles Bolden, conducts an experiment using circuits at NASA's Earth Day event. The event took place at Union Station in Washington, DC on April 22, 2014. Photo Credit: (NASA/Aubrey Gemignani)

  8. NASA Earth Day 2014

    2014-04-22

    NASA's Administrator, Charles Bolden, watches as some students conduct an experiment with a balloon at NASA's Earth Day event. The event took place at Union Station in Washington, DC on April 22, 2014. Photo Credit: (NASA/Aubrey Gemignani)

  9. NASA Earth Day 2014

    2014-04-22

    Students listen intently while an exhibitor conducts an experiment at NASA's Earth Day event. The event took place at Union Station in Washington, DC on April 22, 2014. Photo Credit: (NASA/Aubrey Gemignani)

  10. Earth Observation

    2011-07-06

    ISS028-E-014782 (6 July 2011) --- The Shoemaker (formerly Teague) Impact Structure, located in Western Australia in a drainage basin south of the Waldburg Range, presents an other-worldly appearance in this detailed photograph recorded from onboard the International Space Station on July 6. The Shoemaker impact site is approximately 30 kilometers in diameter, and is clearly defined by concentric ring structures formed in sedimentary rocks (brown to dark brown, image center) that were deformed by the impact event approximately 1630 million years ago, according to the Earth Impact Database. Several saline and ephemeral lakes (Nabberu, Teague, Shoemaker, and numerous smaller ponds) occupy the land surface between the concentric ring structures. Differences in color result from both water depth and suspended sediments, with some bright salt crusts visible around the edges of smaller ponds (image center). The Teague Impact Structure was renamed Shoemaker in honor of the late Dr. Eugene M. Shoemaker, a pioneer in the field of impact crater studies and planetary geology, and founder of the Astrogeology Branch of the United States Geological Survey. The image was recorded with a digital still camera using a 200 mm lens, and is provided by the ISS Crew Earth Observations experiment and Image Science & Analysis Laboratory, Johnson Space Center.

  11. 3D features of delayed thermal convection in fault zones: consequences for deep fluid processes in the Tiberias Basin, Jordan Rift Valley

    NASA Astrophysics Data System (ADS)

    Magri, Fabien; Möller, Sebastian; Inbar, Nimrod; Siebert, Christian; Möller, Peter; Rosenthal, Eliyahu; Kühn, Michael

    2015-04-01

    simulations of large-scale hydrogeological processes causing temperature and salinity anomalies in the Tiberias Basin. Journal of Hydrology, 520(0), 342-355. Murphy, H.D., 1979. Convective instabilities in vertical fractures and faults. Journal of Geophysical Research: Solid Earth, 84(B11), 6121-6130. Tournier, C., Genthon, P., Rabinowicz, M., 2000. The onset of natural convection in vertical fault planes: consequences for the thermal regime in crystalline basements and for heat recovery experiments. Geophysical Journal International, 140(3), 500-508.

  12. The Talas-Fergana Fault, Kirghiz and Kazakh, USSR

    Wallace, R.E.

    1976-01-01

    The great Talas-Fergana fault transects the Soviet republic of Kirghiz in Soviet Central Asia and extends southeastward into China and northwestward into Kazakh SSR (figs. 1 and 2). This great rupture in the Earth's crust rivals the San Andreas fault in California; it is long (approximately 900 kilometers), complex, and possibly has a lateral displacement of hundreds of kilometers similar to that on the San Andreas fault. The Soviet geologist V. S. Burtman suggested that right-lateral offset of 250 kilometers has occurred, citing a shift of Devonian rocks as evidence (fig. 3). By no means do all Soviet geologists agree. Some hold the view that there is no lateral displacement along the Talas-Fergana fault and that the anomalous distribution of Paleozoic rocks is a result of the original position of deposition. 

  13. The San Andreas Fault and a Strike-slip Fault on Europa

    NASA Technical Reports Server (NTRS)

    1998-01-01

    The mosaic on the right of the south polar region of Jupiter's moon Europa shows the northern 290 kilometers (180 miles) of a strike-slip fault named Astypalaea Linea. The entire fault is about 810 kilometers (500 miles) long, the size of the California portion of the San Andreas fault on Earth which runs from the California-Mexico border north to the San Francisco Bay.

    The left mosaic shows the portion of the San Andreas fault near California's San Francisco Bay that has been scaled to the same size and resolution as the Europa image. Each covers an area approximately 170 by 193 kilometers (105 by 120 miles). The red line marks the once active central crack of the Europan fault (right) and the line of the San Andreas fault (left).

    A strike-slip fault is one in which two crustal blocks move horizontally past one another, similar to two opposing lanes of traffic. The overall motion along the Europan fault seems to have followed a continuous narrow crack along the entire length of the feature, with a path resembling steps on a staircase crossing zones which have been pulled apart. The images show that about 50 kilometers (30 miles) of displacement have taken place along the fault. Opposite sides of the fault can be reconstructed like a puzzle, matching the shape of the sides as well as older individual cracks and ridges that had been broken by its movements.

    Bends in the Europan fault have allowed the surface to be pulled apart. This pulling-apart along the fault's bends created openings through which warmer, softer ice from below Europa's brittle ice shell surface, or frozen water from a possible subsurface ocean, could reach the surface. This upwelling of material formed large areas of new ice within the boundaries of the original fault. A similar pulling apart phenomenon can be observed in the geological trough surrounding California's Salton Sea, and in Death Valley and the Dead Sea. In those cases, the pulled apart regions can include upwelled

  14. The Denali EarthScope Education Partnership: Creating Opportunities for Learning About Solid Earth Processes in Alaska and Beyond.

    NASA Astrophysics Data System (ADS)

    Roush, J. J.; Hansen, R. A.

    2003-12-01

    The Geophysical Institute of the University of Alaska Fairbanks, in partnership with Denali National Park and Preserve, has begun an education outreach program that will create learning opportunities in solid earth geophysics for a wide sector of the public. We will capitalize upon a unique coincidence of heightened public interest in earthquakes (due to the M 7.9 Denali Fault event of Nov. 3rd, 2002), the startup of the EarthScope experiment, and the construction of the Denali Science & Learning Center, a premiere facility for science education located just 43 miles from the epicenter of the Denali Fault earthquake. Real-time data and current research results from EarthScope installations and science projects in Alaska will be used to engage students and teachers, national park visitors, and the general public in a discovery process that will enhance public understanding of tectonics, seismicity and volcanism along the boundary between the Pacific and North American plates. Activities will take place in five program areas, which are: 1) museum displays and exhibits, 2) outreach via print publications and electronic media, 3) curriculum development to enhance K-12 earth science education, 4) teacher training to develop earth science expertise among K-12 educators, and 5) interaction between scientists and the public. In order to engage the over 1 million annual visitors to Denali, as well as people throughout Alaska, project activities will correspond with the opening of the Denali Science and Learning Center in 2004. An electronic interactive kiosk is being constructed to provide public access to real-time data from seismic and geodetic monitoring networks in Alaska, as well as cutting edge visualizations of solid earth processes. A series of print publications and a website providing access to real-time seismic and geodetic data will be developed for park visitors and the general public, highlighting EarthScope science in Alaska. A suite of curriculum modules

  15. Geotribology - Friction, wear, and lubrication of faults

    NASA Astrophysics Data System (ADS)

    Boneh, Yuval; Reches, Ze'ev

    2018-05-01

    We introduce here the concept of Geotribology as an approach to study friction, wear, and lubrication of geological systems. Methods of geotribology are applied here to characterize the friction and wear associated with slip along experimental faults composed of brittle rocks. The wear in these faults is dominated by brittle fracturing, plucking, scratching and fragmentation at asperities of all scales, including 'effective asperities' that develop and evolve during the slip. We derived a theoretical model for the rate of wear based on the observation that the dynamic strength of brittle materials is proportional to the product of load stress and loading period. In a slipping fault, the loading period of an asperity is inversely proportional to the slip velocity, and our derivations indicate that the wear-rate is proportional to the ratio of [shear-stress/slip-velocity]. By incorporating the rock hardness data into the model, we demonstrate that a single, universal function fits wear data of hundreds of experiments with granitic, carbonate and sandstone faults. In the next step, we demonstrate that the dynamic frictional strength of experimental faults is well explained in terms of the tribological parameter PV factor (= normal-stress · slip-velocity). This factor successfully delineates weakening and strengthening regimes of carbonate and granitic faults. Finally, our analysis revealed a puzzling observation that wear-rate and frictional strength have strikingly different dependencies on the loading conditions of normal-stress and slip-velocity; we discuss sources for this difference. We found that utilization of tribological tools in fault slip analyses leads to effective and insightful results.
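
    The two quantities highlighted above, a wear-rate proxy proportional to shear stress over slip velocity and the PV factor (normal stress times slip velocity), are simple to compute; the sketch below is illustrative only, and the proportionality constant is a placeholder that would have to be calibrated against wear data.

    ```python
    def wear_rate(shear_stress_mpa, slip_velocity_m_s, k=1.0):
        """Wear-rate proxy proportional to shear stress / slip velocity, following the
        relationship described in the abstract; k is a material constant that would be
        calibrated against experimental wear data (placeholder value here)."""
        return k * shear_stress_mpa / slip_velocity_m_s

    def pv_factor(normal_stress_mpa, slip_velocity_m_s):
        """Tribological PV factor: normal stress times slip velocity."""
        return normal_stress_mpa * slip_velocity_m_s

    print(wear_rate(shear_stress_mpa=2.0, slip_velocity_m_s=0.1))
    print(pv_factor(normal_stress_mpa=5.0, slip_velocity_m_s=0.1))
    ```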

  16. Immunity-Based Aircraft Fault Detection System

    NASA Technical Reports Server (NTRS)

    Dasgupta, D.; KrishnaKumar, K.; Wong, D.; Berry, M.

    2004-01-01

    In the study reported in this paper, we have developed and applied an Artificial Immune System (AIS) algorithm for aircraft fault detection, as an extension to a previous work on intelligent flight control (IFC). Though the prior studies had established the benefits of IFC, one area of weakness that needed to be strengthened was the control dead band induced by commanding a failed surface. Since the IFC approach uses fault accommodation with no detection, the dead band, although it reduces over time due to learning, is present and causes degradation in handling qualities. If the failure can be identified, this dead band can be further reduced to ensure rapid fault accommodation and better handling qualities. The paper describes the application of an immunity-based approach that can detect a broad spectrum of known and unforeseen failures. The approach incorporates the knowledge of the normal operational behavior of the aircraft from sensory data, and probabilistically generates a set of pattern detectors that can detect any abnormalities (including faults) in the behavior pattern indicating unsafe in-flight operation. We developed a tool called MILD (Multi-level Immune Learning Detection) based on a real-valued negative selection algorithm that can generate a small number of specialized detectors (as signatures of known failure conditions) and a larger set of generalized detectors for unknown (or possible) fault conditions. Once the fault is detected and identified, an adaptive control system would use this detection information to stabilize the aircraft by utilizing available resources (control surfaces). We experimented with data sets collected under normal and various simulated failure conditions using a piloted motion-base simulation facility. The reported results are from a collection of test cases that reflect the performance of the proposed immunity-based fault detection algorithm.
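
    A compact sketch of real-valued negative selection, the core idea behind such detector generation, is given below. The feature dimensionality, detector count, matching radius, and synthetic "self" data are illustrative assumptions and not the MILD tool itself.

    ```python
    import numpy as np

    def generate_detectors(self_data, n_detectors=200, radius=0.15, seed=0):
        """Real-valued negative selection: keep random candidate detectors that do NOT
        lie within `radius` of any normal ("self") sample."""
        rng = np.random.default_rng(seed)
        detectors = []
        while len(detectors) < n_detectors:
            cand = rng.random(self_data.shape[1])
            if np.min(np.linalg.norm(self_data - cand, axis=1)) > radius:
                detectors.append(cand)
        return np.array(detectors)

    def is_anomalous(sample, detectors, radius=0.15):
        return bool(np.min(np.linalg.norm(detectors - sample, axis=1)) <= radius)

    # Illustrative data: "self" = normal behaviour clustered near the centre of a
    # normalized feature space; a point far from that cluster should be flagged.
    rng = np.random.default_rng(1)
    self_data = 0.5 + 0.05 * rng.standard_normal((500, 2))
    detectors = generate_detectors(self_data)
    print(is_anomalous(np.array([0.5, 0.5]), detectors))   # normal -> likely False
    print(is_anomalous(np.array([0.9, 0.1]), detectors))   # abnormal -> likely True
    ```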

  17. Earth Observation

    2014-06-01

    ISS040-E-006327 (1 June 2014) --- A portion of International Space Station solar array panels and Earth's horizon are featured in this image photographed by an Expedition 40 crew member on the space station.

  18. Geophysical Characterization of the Hilton Creek Fault System

    NASA Astrophysics Data System (ADS)

    Lacy, A. K.; Macy, K. P.; De Cristofaro, J. L.; Polet, J.

    2016-12-01

    The Long Valley Caldera straddles the eastern edge of the Sierra Nevada Batholith and the western edge of the Basin and Range Province, and represents one of the largest caldera complexes on Earth. The caldera is intersected by numerous fault systems, including the Hartley Springs Fault System, the Round Valley Fault System, the Long Valley Ring Fault System, and the Hilton Creek Fault System, which is our main region of interest. The Hilton Creek Fault System appears as a single NW-striking fault, dipping to the NE, from Davis Lake in the south to the southern rim of the Long Valley Caldera. Inside the caldera, it splays into numerous parallel faults that extend toward the resurgent dome. Seismicity in the area increased significantly in May 1980, following a series of large earthquakes in the vicinity of the caldera and a subsequent large earthquake swarm which has been suggested to be the result of magma migration. A large portion of the earthquake swarms in the Long Valley Caldera occurs on or around the Hilton Creek Fault splays. We are conducting an interdisciplinary geophysical study of the Hilton Creek Fault System from just south of the onset of splay faulting, to its extension into the dome of the caldera. Our investigation includes ground-based magnetic field measurements, high-resolution total station elevation profiles, Structure-From-Motion derived topography and an analysis of earthquake focal mechanisms and statistics. Preliminary analysis of topographic profiles, of approximately 1 km in length, reveals the presence of at least three distinct fault splays within the caldera with vertical offsets of 0.5 to 1.0 meters. More detailed topographic mapping is expected to highlight smaller structures. We are also generating maps of the variation in b-value along different portions of the Hilton Creek system to determine whether we can detect any transition to more swarm-like behavior towards the North. We will show maps of magnetic anomalies, topography
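
    The b-value mapping mentioned above is commonly computed with the maximum-likelihood (Aki/Utsu) estimator; the sketch below applies that estimator to a made-up magnitude sample. The catalogue values, completeness magnitude, and binning choice are assumptions rather than data from this study.

    ```python
    import numpy as np

    def b_value_aki(magnitudes, mc, dm=0.1):
        """Maximum-likelihood (Aki/Utsu) b-value estimate for events with M >= mc.
        dm is the magnitude binning width (use 0 for unbinned magnitudes);
        returns b and its standard error b / sqrt(N)."""
        m = np.asarray(magnitudes)
        m = m[m >= mc]
        b = np.log10(np.e) / (m.mean() - (mc - dm / 2.0))
        return b, b / np.sqrt(m.size)

    # Made-up catalogue segment: magnitudes above mc = 1.5 drawn from a
    # Gutenberg-Richter-like exponential distribution with b ~ 1.
    rng = np.random.default_rng(0)
    mags = 1.5 + rng.exponential(scale=np.log10(np.e) / 1.0, size=2000)
    print(b_value_aki(mags, mc=1.5, dm=0.0))   # unbinned synthetic data; recovers b close to 1
    ```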

  19. Illuminating Northern California’s Active Faults

    Prentice, Carol S.; Crosby, Christopher J.; Whitehill, Caroline S.; Arrowsmith, J. Ramon; Furlong, Kevin P.; Philips, David A.

    2009-01-01

    Newly acquired light detection and ranging (lidar) topographic data provide a powerful community resource for the study of landforms associated with the plate boundary faults of northern California (Figure 1). In the spring of 2007, GeoEarthScope, a component of the EarthScope Facility construction project funded by the U.S. National Science Foundation, acquired approximately 2000 square kilometers of airborne lidar topographic data along major active fault zones of northern California. These data are now freely available in point cloud (x, y, z coordinate data for every laser return), digital elevation model (DEM), and KMZ (zipped Keyhole Markup Language, for use in Google Earth™ and other similar software) formats through the GEON OpenTopography Portal (http://www.OpenTopography.org/data). Importantly, vegetation can be digitally removed from lidar data, producing high-resolution images (0.5- or 1.0-meter DEMs) of the ground surface beneath forested regions that reveal landforms typically obscured by vegetation canopy (Figure 2).

  20. Fault Modeling of Extreme Scale Applications Using Machine Learning

    SciT

    Vishnu, Abhinav; Dam, Hubertus van; Tallent, Nathan R.

    Faults are commonplace in large scale systems. These systems experience a variety of faults such as transient, permanent and intermittent. Multi-bit faults are typically not corrected by the hardware resulting in an error. Here, this paper attempts to answer an important question: Given a multi-bit fault in main memory, will it result in an application error — and hence a recovery algorithm should be invoked — or can it be safely ignored? We propose an application fault modeling methodology to answer this question. Given a fault signature (a set of attributes comprising system and application state), we use machine learning to create a model which predicts whether a multi-bit permanent/transient main memory fault will likely result in error. We present the design elements such as the fault injection methodology for covering important data structures, the application and system attributes which should be used for learning the model, the supervised learning algorithms (and potentially ensembles), and important metrics. Lastly, we use three applications — NWChem, LULESH and SVM — as examples for demonstrating the effectiveness of the proposed fault modeling methodology.

  2. Digital release of the Alaska Quaternary fault and fold database

    NASA Astrophysics Data System (ADS)

    Koehler, R. D.; Farrell, R.; Burns, P.; Combellick, R. A.; Weakland, J. R.

    2011-12-01

    The Alaska Division of Geological & Geophysical Surveys (DGGS) has designed a Quaternary fault and fold database for Alaska in conformance with standards defined by the U.S. Geological Survey for the National Quaternary fault and fold database. Alaska is the most seismically active region of the United States, yet little information exists on the location, style of deformation, and slip rates of Quaternary faults. Thus, to provide an accurate, user-friendly, reference-based fault inventory to the public, we are producing a digital GIS shapefile of Quaternary fault traces and compiling summary information on each fault. Here, we present relevant information pertaining to the digital GIS shapefile and online access and availability of the Alaska database. This database will be useful for engineering geologic studies; geologic, geodetic, and seismic research; and policy planning. The data will also contribute to the fault source database being constructed by the Global Earthquake Model (GEM) Faulted Earth project, which is developing tools to better assess earthquake risk. We derived the initial list of Quaternary active structures from The Neotectonic Map of Alaska (Plafker et al., 1994) and supplemented it with more recent data where available. Due to the limited level of knowledge on Quaternary faults in Alaska, pre-Quaternary fault traces from the Plafker map are shown as a layer in our digital database so users may view a more accurate distribution of mapped faults and to suggest the possibility that some older traces may be active yet unstudied. The database will be updated as new information is developed. We selected each fault by reviewing the literature and georegistered the faults from 1:250,000-scale paper maps contained in 1970s-vintage and earlier bedrock maps. However, paper map scales range from 1:20,000 to 1:500,000. Fault parameters in our GIS fault attribute tables include fault name, age, slip rate, slip sense, dip direction, fault line type

  3. DIFFERENTIAL FAULT SENSING CIRCUIT

    DOEpatents

    Roberts, J.H.

    1961-09-01

    A differential fault sensing circuit is designed for detecting arcing in high-voltage vacuum tubes arranged in parallel. A circuit is provided which senses differences in voltages appearing between corresponding elements likely to fault. Sensitivity of the circuit is adjusted to some level above which arcing will cause detectable differences in voltage. For particular corresponding elements, a group of pulse transformers are connected in parallel with diodes connected across the secondaries thereof so that only voltage excursions are transmitted to a thyratron which is biased to the sensitivity level mentioned.

  4. Fault tolerant linear actuator

    DOEpatents

    Tesar, Delbert

    2004-09-14

    In varying embodiments, the fault tolerant linear actuator of the present invention is a new and improved linear actuator with fault tolerance and positional control that may incorporate velocity summing, force summing, or a combination of the two. In one embodiment, the invention offers a velocity summing arrangement with a differential gear between two prime movers driving a cage, which then drives a linear spindle screw transmission. Other embodiments feature two prime movers driving separate linear spindle screw transmissions, one internal and one external, in a totally concentric and compact integrated module.

  5. Computer hardware fault administration

    DOEpatents

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.

  6. NASA ground terminal communication equipment automated fault isolation expert systems

    NASA Technical Reports Server (NTRS)

    Tang, Y. K.; Wetzel, C. R.

    1990-01-01

    The prototype expert systems are described that diagnose the Distribution and Switching System I and II (DSS1 and DSS2), Statistical Multiplexers (SM), and Multiplexer and Demultiplexer systems (MDM) at the NASA Ground Terminal (NGT). A system level fault isolation expert system monitors the activities of a selected data stream, verifies that the fault exists in the NGT and identifies the faulty equipment. Equipment level fault isolation expert systems are invoked to isolate the fault to a Line Replaceable Unit (LRU) level. Input and sometimes output data stream activities for the equipment are available. The system level fault isolation expert system compares the equipment input and output status for a data stream and performs loopback tests (if necessary) to isolate the faulty equipment. The equipment level fault isolation system utilizes the process of elimination and/or the maintenance personnel's fault isolation experience stored in its knowledge base. The DSS1, DSS2 and SM fault isolation systems, using the knowledge of the current equipment configuration and the equipment circuitry, issue a set of test connections according to the predefined rules. The faulty component or board can be identified by the expert system by analyzing the test results. The MDM fault isolation system correlates the failure symptoms with the faulty component based on maintenance personnel experience. The faulty component can be determined by knowing the failure symptoms. The DSS1, DSS2, SM, and MDM equipment simulators are implemented in PASCAL. The DSS1 fault isolation expert system was converted to C language from VP-Expert and integrated into the NGT automation software for offline switch diagnoses. Potentially, the NGT fault isolation algorithms can be used for the DSS1, SM, and MDM located at Goddard Space Flight Center (GSFC).

  7. Fault tree models for fault tolerant hypercube multiprocessors

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Tuazon, Jezus O.

    1991-01-01

    Three candidate fault tolerant hypercube architectures are modeled, their reliability analyses are compared, and the resulting implications of these methods of incorporating fault tolerance into hypercube multiprocessors are discussed. In the course of performing the reliability analyses, the use of HARP and fault trees in modeling sequence dependent system behaviors is demonstrated.
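    The reliability comparisons described in this record rest on HARP and fault-tree models of the specific architectures; the fragment below is only a generic k-of-n reliability calculation (assuming independent, identical nodes) of the kind such comparisons build on, not a reproduction of the paper's models:

    from math import comb

    def k_of_n_reliability(n, k, r):
        """P(at least k of n independent components survive), each with reliability r."""
        return sum(comb(n, m) * r**m * (1 - r)**(n - m) for m in range(k, n + 1))

    # A 16-node hypercube with 2 spare nodes works if any 16 of the 18 nodes work;
    # compare against the unspared 16-node system.
    print(k_of_n_reliability(18, 16, r=0.99))   # spared system
    print(0.99 ** 16)                           # unspared system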

  8. Experiments to be flown in an Earth orbiting laboratory: The US experiments on the first international microgravity laboratory, from concept to flight

    NASA Technical Reports Server (NTRS)

    Winget, C. M.; Callahan, P. X.; Schaefer, R. L.; Lashbrook, J. J.

    1992-01-01

    The current life cycle of NASA ARC-managed flight experiments is presented. The two main purposes are: (1) to bring to the attention of biologists, and in particular cell and plant biologists, some of the requirements for flying a life science experiment in space; and (2) to introduce the subject to biologists embarking on studies in the field and to delineate some of the specific requirements that will be encountered by an ARC-managed microgravity experiment. This is not intended to be an exhaustive encyclopedia of all techniques used to prepare an experiment to evaluate the effect of microgravity on plant and animal cells. However, many of the requirements are the same for all biological systems and for other NASA centers. Emphasis is on the principal investigator's (PI's) involvement in the activities required for successful completion of major reviews. The PI support required for activities other than these reviews is also discussed, as are the interactions between ARC and the PI that will be required as problems or questions arise throughout experiment and payload development. It is impossible to predict the extent of this activity because it varies according to the complexity of the experiment and the flight experience of the PI.

  9. Local precision nets for monitoring movements of faults and large engineering structures

    NASA Technical Reports Server (NTRS)

    Henneberg, H. G.

    1978-01-01

    Local high-precision geodetic nets were installed along the Bocono Fault to observe possible horizontal crustal deformations and movements. A few large structures in the fault area are also included in this investigation. In the near future, measurements shall be extended to other sites of the Bocono Fault and also to the El Pilar Fault. By similar methods, high-precision geodetic nets are also applied in Venezuela to observe the behavior of large structures, such as bridges and large dams, and of earth surface deformations due to industrial activities.

  10. Fault rheology beyond frictional melting.

    PubMed

    Lavallée, Yan; Hirose, Takehiro; Kendrick, Jackie E; Hess, Kai-Uwe; Dingwell, Donald B

    2015-07-28

    During earthquakes, comminution and frictional heating both contribute to the dissipation of stored energy. With sufficient dissipative heating, melting processes can ensue, yielding the production of frictional melts or "pseudotachylytes." It is commonly assumed that the Newtonian viscosities of such melts control subsequent fault slip resistance. Rock melts, however, are viscoelastic bodies, and, at high strain rates, they exhibit evidence of a glass transition. Here, we present the results of high-velocity friction experiments on a well-characterized melt that demonstrate how slip in melt-bearing faults can be governed by brittle fragmentation phenomena encountered at the glass transition. Slip analysis using models that incorporate viscoelastic responses indicates that even in the presence of melt, slip persists in the solid state until sufficient heat is generated to reduce the viscosity and allow remobilization in the liquid state. Where a rock is present next to the melt, we note that wear of the crystalline wall rock by liquid fragmentation and agglutination also contributes to the brittle component of these experimentally generated pseudotachylytes. We conclude that in the case of pseudotachylyte generation during an earthquake, slip even beyond the onset of frictional melting is not controlled merely by viscosity but rather by an interplay of viscoelastic forces around the glass transition, which involves a response in the brittle/solid regime of these rock melts. We warn of the inadequacy of simple Newtonian viscous analyses and call for the application of more realistic rheological interpretation of pseudotachylyte-bearing fault systems in the evaluation and prediction of their slip dynamics.

  11. Measurement of fault latency in a digital avionic miniprocessor

    NASA Technical Reports Server (NTRS)

    Mcgough, J. G.; Swern, F. L.

    1981-01-01

    The results of fault injection experiments utilizing a gate-level emulation of the central processor unit of the Bendix BDX-930 digital computer are presented. The failure detection coverage of comparison-monitoring and a typical avionics CPU self-test program was determined. The specific tasks and experiments included: (1) inject randomly selected gate-level and pin-level faults and emulate six software programs using comparison-monitoring to detect the faults; (2) based upon the derived empirical data develop and validate a model of fault latency that will forecast a software program's detecting ability; (3) given a typical avionics self-test program, inject randomly selected faults at both the gate-level and pin-level and determine the proportion of faults detected; (4) determine why faults were undetected; (5) recommend how the emulation can be extended to multiprocessor systems such as SIFT; and (6) determine the proportion of faults detected by a uniprocessor BIT (built-in-test) irrespective of self-test.

  12. Fault management and systems knowledge

    DOT National Transportation Integrated Search

    2016-12-01

    Pilots are asked to manage faults during flight operations. This leads to the training question of the type and depth of system knowledge required to respond to these faults. Based on discussions with multiple airline operators, there is agreement th...

  13. On-the-fly machine-learning for high-throughput experiments: search for rare-earth-free permanent magnets

    PubMed Central

    Kusne, Aaron Gilad; Gao, Tieren; Mehta, Apurva; Ke, Liqin; Nguyen, Manh Cuong; Ho, Kai-Ming; Antropov, Vladimir; Wang, Cai-Zhuang; Kramer, Matthew J.; Long, Christian; Takeuchi, Ichiro

    2014-01-01

    Advanced materials characterization techniques with ever-growing data acquisition speed and storage capabilities represent a challenge in modern materials science, and new procedures to quickly assess and analyze the data are needed. Machine learning approaches are effective in reducing the complexity of data and rapidly homing in on the underlying trend in multi-dimensional data. Here, we show that by employing an algorithm called the mean shift theory to a large amount of diffraction data in high-throughput experimentation, one can streamline the process of delineating the structural evolution across compositional variations mapped on combinatorial libraries with minimal computational cost. Data collected at a synchrotron beamline are analyzed on the fly, and by integrating experimental data with the inorganic crystal structure database (ICSD), we can substantially enhance the accuracy in classifying the structural phases across ternary phase spaces. We have used this approach to identify a novel magnetic phase with enhanced magnetic anisotropy which is a candidate for rare-earth free permanent magnet. PMID:25220062
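    The record above combines mean-shift clustering of diffraction patterns with ICSD matching; the sketch below shows only the clustering step, applied to synthetic one-dimensional patterns, and is not the authors' on-the-fly pipeline:

    import numpy as np
    from sklearn.cluster import MeanShift, estimate_bandwidth

    rng = np.random.default_rng(0)
    two_theta = np.linspace(20, 80, 200)

    def pattern(peaks):
        """Toy 1-D diffraction pattern: Gaussian peaks plus noise."""
        y = sum(np.exp(-0.5 * ((two_theta - p) / 0.4) ** 2) for p in peaks)
        return y + 0.02 * rng.normal(size=two_theta.size)

    # 60 library points drawn from two structural phases with different peak positions.
    X = np.array([pattern([30, 45, 62]) for _ in range(30)] +
                 [pattern([28, 50, 70]) for _ in range(30)])

    bandwidth = estimate_bandwidth(X, quantile=0.3)
    labels = MeanShift(bandwidth=bandwidth).fit_predict(X)
    print(labels)   # points generated from the same phase should share a cluster label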

  14. Evidence for {100}<011> slip in ferropericlase in Earth's lower mantle from high-pressure/high-temperature experiments

    NASA Astrophysics Data System (ADS)

    Immoor, J.; Marquardt, H.; Miyagi, L.; Lin, F.; Speziale, S.; Merkel, S.; Buchen, J.; Kurnosov, A.; Liermann, H.-P.

    2018-05-01

    Seismic anisotropy in Earth's lowermost mantle, resulting from Crystallographic Preferred Orientation (CPO) of elastically anisotropic minerals, is among the most promising observables to map mantle flow patterns. A quantitative interpretation, however, is hampered by the limited understanding of CPO development in lower mantle minerals at simultaneously high pressures and temperatures. Here, we experimentally determine CPO formation in ferropericlase, one of the elastically most anisotropic deep mantle phases, at pressures of the lower mantle and temperatures of up to 1400 K using a novel experimental setup. Our data reveal a significant contribution of slip on {100} to ferropericlase CPO in the deep lower mantle, contradicting previous inferences based on experimental work at lower mantle pressures but room temperature. We use our results along with a geodynamic model to show that deformed ferropericlase produces strong shear wave anisotropy in the lowermost mantle, where horizontally polarized shear waves are faster than vertically polarized shear waves, consistent with seismic observations. We find that ferropericlase alone can produce the observed seismic shear wave splitting in D″ in regions of downwelling, which may be further enhanced by post-perovskite. Our model further shows that the interplay between ferropericlase (causing VSH > VSV) and bridgmanite (causing VSV > VSH) CPO can produce more complex anisotropy patterns, as observed in regions of upwelling at the margin of the African Large Low Shear Velocity Province.

  15. Understanding Geomorphological Processes on the Earth's Surface from Laboratory Experiments and the Role of Communities of Practice in Generating Reusable Data

    NASA Astrophysics Data System (ADS)

    Hsu, L.

    2016-12-01

    Geomorphological processes move masses of sediment across the face of the Earth, from mountain tops to hillslopes, rivers, flood plains, and coastlines, on a range of temporal and spatial scales that span many orders of magnitude. These processes, sometimes spanning millennia and sometimes occurring catastrophically, affect human communities that live on and near these surface landforms. Experiments conveniently scale these processes to time and space that can be observed and measured in the laboratory. As a result, the research community has produced remarkable experimental datasets for processes such as particle transport, hillslope erosion, channel migration, and coastline evolution. These datasets build a collection that quantifies a wide range of environmental processes and contributes to hazards mitigation and the understanding of long-term effects of climate and tectonics on landscape evolution. However, technology and data acquisition rates are outgrowing capabilities for storing, maintaining, and serving the data. Solutions that improve preservation, reuse, and attribution of geomorphological data from unique experimental set-ups are germinating at different research centers. These solutions allow the cross-disciplinary data integration that is often necessary to achieving a mechanistic and holistic understanding of the processes that shape the Earth's surface. Communities of practice such as the Sediment Experimentalist Network (SEN) and the U.S. Geological Survey's Community for Data Integration (USGS CDI) play a critical role in effectively facilitating information exchange about tools, methods, and results that accelerate experimental success. Through community interactions and a culture change to generate data more fit for reuse, broad challenges in reproducibility, scaling, and integration may be addressed, leading to more rapid progress in Earth surface process research.

  16. Scaling of the critical slip distance for seismic faulting with shear strain in fault zones

    Marone, Chris; Kilgore, Brian D.

    1993-01-01

    THEORETICAL and experimentally based laws for seismic faulting contain a critical slip distance1-5, Dc, which is the slip over which strength breaks down during earthquake nucleation. On an earthquake-generating fault, this distance plays a key role in determining the rupture nucleation dimension6, the amount of premonitory and post-seismic slip7-10, and the maximum seismic ground acceleration1,11. In laboratory friction experiments, Dc has been related to the size of surface contact junctions2,5,12; thus, the discrepancy between laboratory measurements of Dc (~10^-5 m) and values obtained from modelling earthquakes (~10^-2 m) has been attributed to differences in roughness between laboratory surfaces and natural faults5. This interpretation predicts a dependence of Dc on the particle size of fault gouge 2 (breccia and wear material) but not on shear strain. Here we present experimental results showing that Dc scales with shear strain in simulated fault gouge. Our data suggest a new physical interpretation for the critical slip distance, in which Dc is controlled by the thickness of the zone of localized shear strain. As gouge zones of mature faults are commonly 10^2-10^3 m thick13-17, whereas laboratory gouge layers are 1-10 mm thick, our data offer an alternative interpretation of the discrepancy between laboratory and field-based estimates of Dc.
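    The proposed interpretation, that Dc is set by the thickness of the localized shear zone, can be illustrated with order-of-magnitude arithmetic (illustrative numbers only, not calculations from the paper):

    # If Dc ~ gamma_c * w, with gamma_c a critical shear strain and w the thickness
    # of the zone of localized shear, the laboratory values quoted above imply
    Dc_lab = 1e-5        # m, laboratory critical slip distance
    w_lab = 1e-3         # m, ~1 mm laboratory gouge layer
    gamma_c = Dc_lab / w_lab
    print(gamma_c)       # ~1e-2 critical shear strain

    # The same critical strain applied to shear localized over ~1 m of natural gouge
    # gives Dc of order 1e-2 m, the scale inferred from earthquake modelling.
    print(gamma_c * 1.0)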

  17. The Gravity Probe B `Niobium bird' experiment: Verifying the data reduction scheme for estimating the relativistic precession of Earth-orbiting gyroscopes

    NASA Technical Reports Server (NTRS)

    Uemaatsu, Hirohiko; Parkinson, Bradford W.; Lockhart, James M.; Muhlfelder, Barry

    1993-01-01

    Gravity Probe B (GP-B) is a relativity gyroscope experiment begun at Stanford University in 1960 and supported by NASA since 1963. This experiment will check, for the first time, the relativistic precession of an Earth-orbiting gyroscope that was predicted by Einstein's General Theory of Relativity, to an accuracy of 1 milliarcsecond per year or better. A drag-free satellite will carry four gyroscopes in a polar orbit to observe their relativistic precession. The primary sensor for measuring the direction of the gyroscope spin axis is the SQUID (superconducting quantum interference device) magnetometer. The data reduction scheme designed for the GP-B program processes the signal from the SQUID magnetometer and estimates the relativistic precession rates. We formulated the data reduction scheme and designed the Niobium bird experiment to verify the performance of the data reduction scheme experimentally with an actual SQUID magnetometer within the test loop. This paper reports the results from the first phase of the Niobium bird experiment, which used a commercially available SQUID magnetometer as its primary sensor, and addresses the issues they raised. The first phase revealed a large, temperature-dependent bias drift, which motivated a drift-insensitive design and a temperature regulation scheme.

  18. Seismic Velocity and Elastic Properties of Plate Boundary Faults

    NASA Astrophysics Data System (ADS)

    Jeppson, Tamara N.

    The elastic properties of fault zone rock at depth play a key role in rupture nucleation, propagation, and the magnitude of fault slip. Materials that lie within major plate boundary fault zones often have very different material properties than standard crustal rock values. In order to understand the mechanics of faulting at plate boundaries, we need to both measure these properties and understand how they govern the behavior of different types of faults. Mature fault zones tend to be identified in large-scale geophysical field studies as zones with low seismic velocity and/or electrical resistivity. These anomalous properties are related to two important mechanisms: (1) mechanical or diagenetic alteration of the rock materials and/or (2) pore fluid pressure and stress effects. However, in remotely-sensed and large-length-scale data it is difficult to determine which of these mechanisms are affecting the measured properties. The objective of this dissertation research is to characterize the seismic velocity and elastic properties of fault zone rocks at a range of scales, with a focus on understanding why the fault zone properties are different from those of the surrounding rock and the potential effects on earthquake rupture and fault slip. To do this I performed ultrasonic velocity experiments under elevated pressure conditions on drill core and outcrops samples from three plate boundary fault zones: the San Andreas Fault, California, USA; the Alpine Fault, South Island, New Zealand; and the Japan Trench megathrust, Japan. Additionally, I compared laboratory measurements to sonic log and large-scale seismic data to examine the scale-dependence of the measured properties. The results of this study provide the most comprehensive characterization of the seismic velocities and elastic properties of fault zone rocks currently available. My work shows that fault zone rocks at mature plate boundary faults tend to be significantly more compliant than surrounding crustal

  19. Row fault detection system

    DOEpatents

    Archer, Charles Jens [Rochester, MN; Pinnow, Kurt Walter [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian Edward [Rochester, MN

    2008-10-14

    An apparatus, program product and method checks for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.
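    A minimal sketch of the neighbor-exchange check described in this record (illustration only, not the patented implementation; the link_ok predicate stands in for an actual message exchange between nodes):

    def check_row(row, link_ok):
        """row: ordered node ids; link_ok(a, b) -> True if the a-b exchange succeeds."""
        suspects = []
        for a, b in zip(row, row[1:]):      # every node talks to its adjacent neighbor
            if not link_ok(a, b):
                suspects.append((a, b))     # failed exchange flags a faulty node or link
        return suspects

    row = [0, 1, 2, 3, 4]
    broken = {(2, 3)}
    link_ok = lambda a, b: (a, b) not in broken and (b, a) not in broken
    print(check_row(row, link_ok))          # -> [(2, 3)]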

  20. Row fault detection system

    DOEpatents

    Archer, Charles Jens [Rochester, MN; Pinnow, Kurt Walter [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian Edward [Rochester, MN

    2012-02-07

    An apparatus, program product and method check for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.

  1. Row fault detection system

    DOEpatents

    Archer, Charles Jens; Pinnow, Kurt Walter; Ratterman, Joseph D.; Smith, Brian Edward

    2010-02-23

    An apparatus and program product check for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.

  2. Perspective View, Garlock Fault

    2000-04-20

    California's Garlock Fault, marking the northwestern boundary of the Mojave Desert, lies at the foot of the mountains, running from the lower right to the top center of this image, which was created with data from NASA's Shuttle Radar Topography Mission.

  3. Fault-Mechanism Simulator

    ERIC Educational Resources Information Center

    Guyton, J. W.

    1972-01-01

    An inexpensive, simple mechanical model of a fault can be produced to simulate the effects leading to an earthquake. This model has been used successfully with students from elementary to college levels and can be demonstrated to classes as large as thirty students. (DF)

  4. Dynamic Fault Detection Chassis

    SciT

    Mize, Jeffery J

    2007-01-01

    The high frequency switching megawatt-class High Voltage Converter Modulator (HVCM) developed by Los Alamos National Laboratory for the Oak Ridge National Laboratory's Spallation Neutron Source (SNS) is now in operation. One of the major problems with the modulator systems is shoot-thru conditions that can occur in an IGBT H-bridge topology, resulting in large fault currents and device failure in a few microseconds. The Dynamic Fault Detection Chassis (DFDC) is a fault monitoring system; it monitors transformer flux saturation using a window comparator and dV/dt events on the cathode voltage caused by any abnormality such as capacitor breakdown, transformer primary turns shorts, or dielectric breakdown between the transformer primary and secondary. If faults are detected, the DFDC will inhibit the IGBT gate drives and shut the system down, significantly reducing the possibility of a shoot-thru condition or other equipment-damaging events. In this paper, we will present system integration considerations and performance characteristics of the DFDC, and discuss its ability to significantly reduce costly down time for the entire facility.

  5. Fault isolation techniques

    NASA Technical Reports Server (NTRS)

    Dumas, A.

    1981-01-01

    Three major areas that are considered in the development of an overall maintenance scheme of computer equipment are described. The areas of concern related to fault isolation techniques are: the programmer (or user), company and its policies, and the manufacturer of the equipment.

  6. Faults and Flows

    2014-10-20

    Lava flows of Daedalia Planum can be seen at the top and bottom portions of this image from NASA's 2001 Mars Odyssey spacecraft. The ridge and linear depression in the central part of the image are part of Mangala Fossa, a fault-bounded graben.

  7. Linking the EarthScope Data Virtual Catalog to the GEON Portal

    NASA Astrophysics Data System (ADS)

    Lin, K.; Memon, A.; Baru, C.

    2008-12-01

    The EarthScope Data Portal provides a unified, single-point of access to EarthScope data and products from USArray, Plate Boundary Observatory (PBO), and San Andreas Fault Observatory at Depth (SAFOD) experiments. The portal features basic search and data access capabilities to allow users to discover and access EarthScope data using spatial, temporal, and other metadata-based (data type, station specific) search conditions. The portal search module is the user interface implementation of the EarthScope Data Search Web Service. This Web Service acts as a virtual catalog that in turn invokes Web services developed by IRIS (Incorporated Research Institutions for Seismology), UNAVCO (University NAVSTAR Consortium), and GFZ (German Research Center for Geosciences) to search for EarthScope data in the archives at each of these locations. These Web Services provide information about all resources (data) that match the specified search conditions. In this presentation we will describe how the EarthScope Data Search Web service can be integrated into the GEONsearch application in the GEON Portal (see http://portal.geongrid.org). Thus, a search request issued at the GEON Portal will also search the EarthScope virtual catalog, thereby providing users seamless access to data in GEON as well as EarthScope via a common user interface.

  8. The engine fuel system fault analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Song, Hanqiang; Yang, Changsheng; Zhao, Wei

    2017-05-01

    To improve the reliability of the engine fuel system, the typical fault factors of the engine fuel system were analyzed from the point of view of structure and function. The fault characteristics were obtained by building the fuel system fault tree. Using the failure mode and effects analysis (FMEA) method, several factors for the key component, the fuel regulator, were obtained, including the fault modes, fault causes, and fault effects. This lays the foundation for the subsequent development of a fault diagnosis system.
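    To illustrate the fault-tree step referenced above, the fragment below evaluates a small AND/OR tree for a hypothetical fuel-metering top event; the events and probabilities are invented and are not taken from the paper:

    def or_gate(*probs):
        """Probability that at least one independent input event occurs."""
        p_none = 1.0
        for p in probs:
            p_none *= (1.0 - p)
        return 1.0 - p_none

    def and_gate(*probs):
        """Probability that all independent input events occur."""
        p_all = 1.0
        for p in probs:
            p_all *= p
        return p_all

    # Basic events (illustrative probabilities, not values from the paper).
    p_regulator_stuck = 1e-4
    p_pump_wear = 2e-4
    p_sensor_drift = 5e-4
    p_backup_sensor_bad = 5e-4

    # Top event: loss of fuel metering = regulator stuck OR pump wear OR both sensors bad.
    p_top = or_gate(p_regulator_stuck, p_pump_wear,
                    and_gate(p_sensor_drift, p_backup_sensor_bad))
    print(f"P(loss of fuel metering) ~ {p_top:.2e}")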

  9. NASA Spacecraft Fault Management Workshop Results

    NASA Technical Reports Server (NTRS)

    Newhouse, Marilyn; McDougal, John; Barley, Bryan; Fesq, Lorraine; Stephens, Karen

    2010-01-01

    Fault Management is a critical aspect of deep-space missions. For the purposes of this paper, fault management is defined as the ability of a system to detect, isolate, and mitigate events that impact, or have the potential to impact, nominal mission operations. The fault management capabilities are commonly distributed across flight and ground subsystems, impacting hardware, software, and mission operations designs. The National Aeronautics and Space Administration (NASA) Discovery & New Frontiers (D&NF) Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for 5 missions. The goal was to identify the underlying causes for the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that 4 out of the 5 missions studied had significant overruns due to underestimating the complexity and support requirements for fault management. As a result of this and other recent experiences, the NASA Science Mission Directorate (SMD) Planetary Science Division (PSD) commissioned a workshop to bring together invited participants across government, industry, academia to assess the state of the art in fault management practice and research, identify current and potential issues, and make recommendations for addressing these issues. The workshop was held in New Orleans in April of 2008. The workshop concluded that fault management is not being limited by technology, but rather by a lack of emphasis and discipline in both the engineering and programmatic dimensions. Some of the areas cited in the findings include different, conflicting, and changing institutional goals and risk postures; unclear ownership of end-to-end fault management engineering; inadequate understanding of the impact of mission-level requirements on fault management complexity; and practices, processes, and

  10. Dislocation Processes and Frictional Stability of Faults

    NASA Astrophysics Data System (ADS)

    Toy, V. G.; Mitchell, T. M.; Druiventak, A.

    2011-12-01

    surfaces, at least at the slightly sub-seismic deformation rates of these experiments. Furthermore, once sliding initiated on the saw cut surface, an amorphous material was generated. We hypothesise that this could have been due to a breakdown of the crystal structure by a combination of cataclasis and generation of excessive dislocation densities. There would also have been a slight increase in temperature around the sliding surface during and after fault slip, which may have aided the focussing of dislocation processes around the sliding surface.

  11. Moving Heaven and Earth: Administrative Search and Selection Processes and the Experience of an African American Woman Senior Administrator

    ERIC Educational Resources Information Center

    Barnett-Johnson, Kim R.

    2009-01-01

    The purpose of this case/phenomenological study was to examine a collegiate administrative search and selection process and the experience of an African American woman who was selected to the position of chancellor. A case concerning the search process of a regional campus of Ivy Tech Community College of Indiana was identified and chosen.…

  12. An experiment on radio location of objects in near-Earth space with VLBI in 2012

    NASA Astrophysics Data System (ADS)

    Nechaeva, M.; Antipenko, A.; Bezrukovs, V.; Bezrukov, D.; Dementjev, A.; Dugin, N.; Konovalenko, A.; Kulishenko, V.; Liu, X.; Nabatov, A.; Nesteruk, V.; Pupillo, G.; Reznichenko, A.; Salerno, E.; Shmeld, I.; Shulga, O.; Sybiryakova, Y.; Tikhomirov, Yu.; Tkachenko, A.; Volvach, A.; Yang, W.-J.

    An experiment on radar location of space debris objects using the VLBI method was carried out in April 2012. The radar VLBI experiment consisted of irradiating several space debris objects (4 rocket stages and 5 inactive satellites) with a signal from the RT-70 transmitter in Evpatoria, Ukraine. Reflected signals were received by a network of radio telescopes operating in VLBI mode. The following VLBI stations took part in the observations: Ventspils (RT-32), Urumqi (RT-25), Medicina (RT-32) and Simeiz (RT-22). The experiment included measurements of the Doppler frequency shift and the delay for orbit refinement, and measurements of the rotation period and sizes of the objects from the amplitudes of the interferometer output signals. The cross-correlation of the VLBI data was performed on the NIRFI-4 correlator of the Radiophysical Research Institute (Nizhny Novgorod). Preliminary data processing produced a series of Doppler frequency shifts, which carried information on the radial velocities of the objects. Some results of the experiment are presented.

  13. The role of thin, mechanical discontinuities on the propagation of reverse faults: insights from analogue models

    NASA Astrophysics Data System (ADS)

    Bonanno, Emanuele; Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Seno, Silvio

    2016-04-01

    Fault-related folding kinematic models are widely used to explain the accommodation of crustal shortening. These models, however, include simplifications, such as the assumption of a constant growth rate of faults. This rate is not always constant even in isotropic materials, and is even more variable in naturally anisotropic geological systems, which means that these simplifications could lead to incorrect interpretations of reality. In this study, we use analogue models to evaluate how thin, mechanical discontinuities, such as bedding or thin weak layers, influence the propagation of reverse faults and related folds. The experiments are performed with two different settings to simulate initially blind master faults dipping at 30° and 45°. The 30° dip represents one of the Andersonian conjugate faults, and a 45° dip is very frequent in positive reactivation of normal faults. The experimental apparatus consists of a clay layer placed above two plates: one plate, the footwall, is fixed; the other one, the hanging wall, is mobile. Motor-controlled sliding of the hanging-wall plate along an inclined plane reproduces the reverse fault movement. We ran thirty-six experiments: eighteen with a dip of 30° and eighteen with a dip of 45°. For each dip-angle setting, we first ran isotropic experiments that serve as a reference. We then ran the other experiments with one or two discontinuities (horizontal precuts made in the clay layer). We monitored the experiments by collecting side photographs every 1.0 mm of displacement of the master fault. These images were analyzed with the PIVlab software, a tool based on the Digital Image Correlation method. With the "displacement field analysis" (one of the PIVlab tools) we evaluated the variation of the trishear zone shape and how the master-fault tip and newly formed faults propagate into the clay medium. With the "strain distribution analysis", we observed the amount of on-fault and off-fault deformation

  14. Thermal Orbital Environmental Parameter Study on the Propulsive Small Expendable Deployer System (ProSEDS) Using Earth Radiation Budget Experiment (ERBE) Data

    NASA Technical Reports Server (NTRS)

    Sharp, John R.; McConnaughey, Paul K. (Technical Monitor)

    2002-01-01

    The natural thermal environmental parameters used on the Space Station Program (SSP 30425) were generated by the Space Environmental Effects Branch at NASA's Marshall Space Flight Center (MSFC) utilizing extensive data from the Earth Radiation Budget Experiment (ERBE), a series of satellites which measured low earth orbit (LEO) albedo and outgoing long-wave radiation. Later, this temporal data was presented as a function of averaging times and orbital inclination for use by thermal engineers in NASA Technical Memorandum TM 4527. The data was not presented in a fashion readily usable by thermal engineering modeling tools and required knowledge of the thermal time constants and infrared versus solar spectrum sensitivity of the hardware being analyzed to be used properly. Another TM was recently issued as a guideline for utilizing these environments (NASA/TM-2001-211221) with more insight into the utilization by thermal analysts. This paper gives a top-level overview of the environmental parameters presented in the TM and a study of the effects of implementing these environments on an ongoing MSFC project, the Propulsive Small Expendable Deployer System (ProSEDS), compared to conventional orbital parameters that had been historically used.

  15. Prebiotic Synthesis of Methionine and Other Sulfur-Containing Organic Compounds on the Primitive Earth: A Contemporary Reassessment Based on an Unpublished 1958 Stanley Miller Experiment

    NASA Technical Reports Server (NTRS)

    Parker, Eric T.; Cleaves, H. James; Callahan, Michael P.; Dworkin, Jason P.; Glavin, Daniel P.; Lazcano, Antonio

    2010-01-01

    Original extracts from an unpublished 1958 experiment conducted by the late Stanley L. Miller were recently found and analyzed using modern state-of-the-art analytical methods. The extracts were produced by the action of an electric discharge on a mixture of methane (CH4), hydrogen sulfide (H2S), ammonia (NH3), and carbon dioxide (CO2). Racemic methionine was formed in significant yields, together with other sulfur-bearing organic compounds. The formation of methionine and other compounds from a model prebiotic atmosphere that contained H2S suggests that this type of synthesis is robust under reducing conditions, which may have existed either in the global primitive atmosphere or in localized volcanic environments on the early Earth. The presence of a wide array of sulfur-containing organic compounds produced by the decomposition of methionine and cysteine indicates that in addition to abiotic synthetic processes, degradation of organic compounds on the primordial Earth could have been important in diversifying the inventory of molecules of biochemical significance not readily formed from other abiotic reactions, or derived from extraterrestrial delivery.

  16. Calculation of the static in-flight telescope-detector response by deconvolution applied to point-spread function for the geostationary earth radiation budget experiment.

    PubMed

    Matthews, Grant

    2004-12-01

    The Geostationary Earth Radiation Budget (GERB) experiment is a broadband satellite radiometer instrument program intended to resolve remaining uncertainties surrounding the effect of cloud radiative feedback on future climate change. By use of a custom-designed diffraction-aberration telescope model, the GERB detector spatial response is recovered by deconvolution applied to the ground calibration point-spread function (PSF) measurements. An ensemble of randomly generated white-noise test scenes, combined with the measured telescope transfer function results in the effect of noise on the deconvolution being significantly reduced. With the recovered detector response as a base, the same model is applied in construction of the predicted in-flight field-of-view response of each GERB pixel to both short- and long-wave Earth radiance. The results of this study can now be used to simulate and investigate the instantaneous sampling errors incurred by GERB. Also, the developed deconvolution method may be highly applicable in enhancing images or PSF data for any telescope system for which a wave-front error measurement is available.
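    The GERB analysis used a custom diffraction-aberration telescope model and two-dimensional PSF data; the fragment below is only a generic one-dimensional Wiener deconvolution sketch of the same basic operation, recovering a sharp response from a blurred, noisy measurement:

    import numpy as np

    def wiener_deconvolve(measured, kernel, noise_to_signal=1e-2):
        """FFT-based Wiener deconvolution; `kernel` is centred on the array middle."""
        K = np.fft.fft(np.fft.ifftshift(kernel))
        M = np.fft.fft(measured)
        # The filter K*/(|K|^2 + NSR) damps frequencies where the kernel carries no power.
        filt = np.conj(K) / (np.abs(K) ** 2 + noise_to_signal)
        return np.real(np.fft.ifft(M * filt))

    x = np.linspace(-5, 5, 256)
    true_response = (np.abs(x) < 1).astype(float)          # idealised sharp detector footprint
    blur = np.exp(-x ** 2 / 0.2)
    blur /= blur.sum()                                      # normalised point-spread function
    measured_psf = np.convolve(true_response, blur, mode="same")
    measured_psf += 0.005 * np.random.default_rng(1).normal(size=x.size)

    recovered = wiener_deconvolve(measured_psf, blur)       # approximates the sharp footprint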

  17. Internal Structure of Taiwan Chelungpu Fault Zone Gouges

    NASA Astrophysics Data System (ADS)

    Song, Y.; Song, S.; Tang, M.; Chen, F.; Chen, Y.

    2005-12-01

    Gouge formation is found to exist in brittle faults at all scales (1). This fine-grained gouge is thought to control earthquake instability, and thus investigating gouge textures and compositions is very important to an understanding of the earthquake process. Employing transmission electron microscopy (TEM) and a new transmission X-ray microscope (TXM), we study the internal structure of fault zone gouges from the cores of the Taiwan Chelungpu-fault Drilling Project (TCDP), which drilled into the fault zone of the 1999 Chi-Chi earthquake. This X-ray microscope is installed at beamline BL01B of the Taiwan Light Source, National Synchrotron Radiation Research Center (NSRRC). It provides 2D imaging and 3D tomography at energies of 8-11 keV with a spatial resolution of 25-60 nm, and is equipped with Zernike phase-contrast capability for imaging light materials. In this work, we show measurements of gouge texture, particle size distribution, and the 3D structure of the ultracataclasite in fault gouges within a 12 cm interval at about 1111.29 m depth. These characterizations of the transition from the fault core to the damage zone are related to comminution and the fracture energy in earthquake faulting. The TXM data show that the particle size distribution of the ultracataclasite lies between 150 nm and 900 nm in diameter. We will continue analyzing the particle size distribution, porosity, and 3D structure of the fault zone gouges in the transition from the fault core to the damage zone to understand the comminution and fracture surface energy in earthquake faulting (2-5). The results may clarify the implications for the nucleation, growth, transition, structure, and permeability of fault zones (6-8). Furthermore, it may be possible to infer the mechanism of faulting, the physical and chemical properties of the fault, and the nucleation of the earthquake. References 1) B. Wilson, T. Dewerw, Z. Reches and J. Brune, Nature, 434 (2005) 749. 2) S. E. Schulz and J. P. Evans

  18. Ground Surface Deformation in Unconsolidated Sediments Caused by Bedrock Fault Movements: Dip-Slip and Strike-Slip Fault Model Test and Field Survey

    NASA Astrophysics Data System (ADS)

    Ueta, K.; Tani, K.

    2001-12-01

    Sandbox experiments were performed to investigate ground surface deformation in unconsolidated sediments caused by dip-slip and strike-slip motion on bedrock faults. A 332.5 cm long, 200 cm high, and 40 cm wide sandbox was used in a dip-slip fault model test. In the strike-slip fault test, a 600 cm long, 250 cm wide, and 60 cm high sandbox and a 170 cm long, 25 cm wide, 15 cm high sandbox were used. Computerized X-ray tomography applied to the sandbox experiments made it possible to analyze the kinematic evolution, as well as the three-dimensional geometry, of the faults. The fault type, fault dip, fault displacement, thickness and density of sandpack and grain size of the sand were varied for different experiments. Field surveys of active faults in Japan and California were also made to investigate the deformation of unconsolidated sediments overlying bedrock faults. A comparison of the experimental results with natural cases of active faults reveals the following: (1) In the case of dip-slip faulting, the shear bands are not shown as one linear plane but as en echelon pattern. Thicker and finer unconsolidated sediments produce more shear bands and clearer en echelon shear band patterns. (2) In the case of left-lateral strike-slip faulting, the deformation of the sand pack with increasing basement displacement is observed as follows. a) In three dimensions, the right-stepping shears that have a "cirque" / "shell" / "ship body" shape develop on both sides of the basement fault. The shears on one side of the basement fault join those on the other side, resulting in helicoidal shaped shear surfaces. Shears reach the surface of the sand near or above the basement fault and en echelon Riedel shears are observed at the surface of the sand. b) Right-stepping pressure ridges develop within the zone defined by the Riedel shears. c) Lower-angle shears generally branch off from the first Riedel shears. d) Right-stepping helicoidal shaped lower-angle shears offset Riedel

  19. Discover Earth

    NASA Technical Reports Server (NTRS)

    Steele, Colleen

    1998-01-01

    Discover Earth is a NASA-sponsored project for teachers of grades 5-12, designed to: (1) enhance understanding of the Earth as an integrated system; (2) enhance the interdisciplinary approach to science instruction; and (3) provide classroom materials that focus on those goals. Discover Earth is conducted by the Institute for Global Environmental Strategies in collaboration with Dr. Eric Barron, Director, Earth System Science Center, The Pennsylvania State University; and Dr. Robert Hudson, Chair, the Department of Meteorology, University of Maryland at College Park. The enclosed materials: (1) represent only part of the Discover Earth materials; (2) were developed by classroom teachers who are participating in the Discover Earth project; (3) utilize an investigative approach and on-line data; and (4) can be effectively adjusted for classrooms with greater technology access or without technology access. The Discover Earth classroom materials focus on the Earth system and key issues of global climate change including topics such as the greenhouse effect, clouds and Earth's radiation balance, surface hydrology and land cover, and volcanoes and climate change. All the materials developed to date are available online at http://www.strategies.org. You are encouraged to submit comments and recommendations about these materials to the Discover Earth project manager; contact information is listed below. You are welcome to duplicate all these materials.

  20. Fault healing and earthquake spectra from stick slip sequences in the laboratory and on active faults

    NASA Astrophysics Data System (ADS)

    McLaskey, G. C.; Glaser, S. D.; Thomas, A.; Burgmann, R.

    2011-12-01

    Repeating earthquake sequences (RES) are thought to occur on isolated patches of a fault that fail in repeated stick-slip fashion. RES enable researchers to study the effect of variations in earthquake recurrence time and the relationship between fault healing and earthquake generation. Fault healing is thought to be the physical process responsible for the 'state' variable in widely used rate- and state-dependent friction equations. We analyze RES created in laboratory stick slip experiments on a direct shear apparatus instrumented with an array of very high frequency (1 kHz-1 MHz) displacement sensors. Tests are conducted on the model material polymethylmethacrylate (PMMA). While frictional properties of this glassy polymer can be characterized with the rate- and state-dependent friction laws, the rate of healing in PMMA is higher than room temperature rock. Our experiments show that in addition to a modest increase in fault strength and stress drop with increasing healing time, there are distinct spectral changes in the recorded laboratory earthquakes. Using the impact of a tiny sphere on the surface of the test specimen as a known source calibration function, we are able to remove the instrument and apparatus response from recorded signals so that the source spectrum of the laboratory earthquakes can be accurately estimated. The rupture of a fault that was allowed to heal produces a laboratory earthquake with increased high frequency content compared to one produced by a fault which has had less time to heal. These laboratory results are supported by observations of RES on the Calaveras and San Andreas faults, which show similar spectral changes when recurrence time is perturbed by a nearby large earthquake. Healing is typically attributed to a creep-like relaxation of the material which causes the true area of contact of interacting asperity populations to increase with time in a quasi-logarithmic way. The increase in high frequency seismicity shown here

  1. Large-scale, high-performance and cloud-enabled multi-model analytics experiments in the context of the Earth System Grid Federation

    NASA Astrophysics Data System (ADS)

    Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.

    2017-12-01

    The increased model resolution in the development of comprehensive Earth System Models is rapidly leading to very large climate simulation output that poses significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate models intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments. At the

  2. Development of response models for the Earth Radiation Budget Experiment (ERBE) sensors. Part 2: Analysis of the ERBE integrating sphere ground calibration

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Taylor, Deborah B.

    1987-01-01

    An explicit solution of the spectral radiance leaving an arbitrary point on the wall of a spherical cavity with diffuse reflectivity is obtained. The solution is applicable to spheres with an arbitrary number of openings of any size and shape, an arbitrary number of light sources with possible non-diffuse characteristics, a non-uniform sphere wall temperature distribution, non-uniform and non-diffuse sphere wall emissivity and non-uniform but diffuse sphere wall spectral reflectivity. A general measurement equation describing the output of a sensor with a given field of view, angular and spectral response measuring the sphere output is obtained. The results are applied to the Earth Radiation Budget Experiment (ERBE) integrating sphere. The sphere wall radiance uniformity, loading effects and non-uniform wall temperature effects are investigated. It is shown that using appropriate interpretation and processing, a high-accuracy short-wave calibration of the ERBE sensors can be achieved.

  3. Implementation of an experimental fault-tolerant memory system

    NASA Technical Reports Server (NTRS)

    Carter, W. C.; Mccarthy, C. E.

    1976-01-01

    The experimental fault-tolerant memory system described in this paper has been designed to enable the modular addition of spares, to validate the theoretical fault-secure and self-testing properties of the translator/corrector, to provide a basis for experiments using the new testing and correction processes for recovery, and to determine the practicality of such systems. The hardware design and implementation are described, together with methods of fault insertion. The hardware/software interface, including a restricted single error correction/double error detection (SEC/DED) code, is specified. Procedures are carefully described which, (1) test for specified physical faults, (2) ensure that single error corrections are not miscorrections due to triple faults, and (3) enable recovery from double errors.
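    The memory described above used a restricted SEC/DED code over full memory words; as a much smaller illustration of the same idea, the sketch below extends a Hamming(7,4) code with an overall parity bit so that single-bit errors are corrected and double-bit errors are detected:

    def encode(d):
        """Encode 4 data bits [d1, d2, d3, d4] into an 8-bit SEC/DED word."""
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4                 # parity over codeword positions 1, 3, 5, 7
        p2 = d1 ^ d3 ^ d4                 # parity over positions 2, 3, 6, 7
        p3 = d2 ^ d3 ^ d4                 # parity over positions 4, 5, 6, 7
        word = [p1, p2, d1, p3, d2, d3, d4]
        p0 = 0                            # overall parity bit enables double-error detection
        for b in word:
            p0 ^= b
        return word + [p0]

    def decode(word):
        """Return (data bits, status); corrects single errors, flags double errors."""
        bits, p0 = list(word[:7]), word[7]
        syndrome = 0
        for pos, b in enumerate(bits, start=1):
            if b:
                syndrome ^= pos           # XOR of the positions holding a 1
        parity_ok = (sum(bits) + p0) % 2 == 0
        if syndrome == 0 and parity_ok:
            status = "no error"
        elif not parity_ok:
            status = "single error corrected"
            if syndrome:                  # syndrome 0 here means the error hit p0 itself
                bits[syndrome - 1] ^= 1
        else:
            status = "double error detected"
        return [bits[2], bits[4], bits[5], bits[6]], status

    code = encode([1, 0, 1, 1])
    code[5] ^= 1                          # inject a single-bit fault
    print(decode(code))                   # -> ([1, 0, 1, 1], 'single error corrected')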

  4. Achieving Agreement in Three Rounds with Bounded-Byzantine Faults

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar, R.

    2017-01-01

    A three-round algorithm is presented that guarantees agreement in a system of K ≥ 3F+1 nodes, provided each faulty node induces no more than F faults and each good node experiences no more than F faults, where F is the maximum number of simultaneous faults in the network. The algorithm is based on the Oral Message algorithm of Lamport, Shostak, and Pease, is scalable with respect to the number of nodes in the system, and applies equally to the traditional node-fault model and the link-fault model. We also present a mechanical verification of the algorithm, focusing on verifying the correctness of a bounded model of the algorithm as well as confirming claims of determinism.
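    The paper's three-round algorithm builds on the recursive Oral Messages algorithm OM(m) of Lamport, Shostak, and Pease; the sketch below simulates classical OM(m) with one simple, deterministic traitor strategy and is not the paper's algorithm itself:

    from collections import Counter

    def majority(values, default=0):
        """Strict majority of values; falls back to a common default on ties."""
        top, count = Counter(values).most_common(1)[0]
        return top if count * 2 > len(values) else default

    def om(m, commander, lieutenants, value, is_traitor):
        """Simulate OM(m); returns the value each lieutenant decides on."""
        # Step 1: the commander sends a value to every lieutenant. A traitorous
        # commander sends inconsistent values (here simply alternating by node id).
        sent = {l: (l % 2 if is_traitor[commander] else value) for l in lieutenants}
        if m == 0:
            return sent
        # Step 2: each lieutenant relays what it received by acting as the
        # commander of an OM(m-1) round among the remaining lieutenants.
        sub = {i: om(m - 1, i, [l for l in lieutenants if l != i], sent[i], is_traitor)
               for i in lieutenants}
        # Step 3: each lieutenant decides by majority over the value it received
        # directly and the values relayed to it by the other lieutenants.
        return {j: majority([sent[j]] + [sub[i][j] for i in lieutenants if i != j])
                for j in lieutenants}

    # 7 nodes with 2 traitors (one of them the commander): since 7 >= 3*2 + 1,
    # OM(2) lets the loyal lieutenants 1, 2, 3, 5 and 6 agree on a common value.
    is_traitor = {0: True, 1: False, 2: False, 3: False, 4: True, 5: False, 6: False}
    print(om(2, 0, list(range(1, 7)), 1, is_traitor))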

  5. A Fault Alarm and Diagnosis Method Based on Sensitive Parameters and Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Zhang, Jinjie; Yao, Ziyun; Lv, Zhiquan; Zhu, Qunxiong; Xu, Fengtian; Jiang, Zhinong

    2015-08-01

    The extraction of fault features and diagnostic techniques for reciprocating compressors is one of the hot research topics in the field of reciprocating machinery fault diagnosis at present. A large number of feature extraction and classification methods have been widely applied in the related research, but practical fault alarming and diagnostic accuracy have not been effectively improved. Developing feature extraction and classification methods that meet the requirements of typical fault alarms and automatic diagnosis in practical engineering is an urgent task. The typical mechanical faults of reciprocating compressors are presented in the paper, and the existing data of an online monitoring system are used to extract fault feature parameters of 15 types in total; the sensitive connections between faults and the feature parameters have been clarified using the distance evaluation technique, and sensitive characteristic parameters of different faults have been obtained. On this basis, a method based on fault feature parameters and a support vector machine (SVM) is developed and applied to practical fault diagnosis. Better early fault warning capability has been demonstrated by experiments and practical fault cases. Automatic classification of fault alarm data using the SVM has also achieved better diagnostic accuracy.
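    A minimal scikit-learn sketch of the workflow described above, with SelectKBest standing in for the distance-evaluation ranking of sensitive features and synthetic data in place of the monitoring records (all names and numbers are illustrative):

    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_per_class, n_features = 100, 15                # e.g. 15 condition-monitoring features
    X_normal = rng.normal(0.0, 1.0, (n_per_class, n_features))
    X_valve = rng.normal(0.0, 1.0, (n_per_class, n_features)); X_valve[:, 3] += 2.5
    X_rod = rng.normal(0.0, 1.0, (n_per_class, n_features)); X_rod[:, 7] += 2.5
    X = np.vstack([X_normal, X_valve, X_rod])
    y = np.array([0] * n_per_class + [1] * n_per_class + [2] * n_per_class)

    model = make_pipeline(
        StandardScaler(),
        SelectKBest(f_classif, k=5),                 # keep only the most sensitive features
        SVC(kernel="rbf", C=10.0, gamma="scale"),
    )
    model.fit(X, y)
    print(model.score(X, y))                         # training accuracy of the toy model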

  6. Mechanistic study of lead desorption during the leaching process of ion-absorbed rare earths: pH effect and the column experiment

    NASA Astrophysics Data System (ADS)

    Xue, Q.; Tang, J., Sr.; Chen, H.

    2017-12-01

    High concentrations of ammonium sulfate, often used in the in-situ mining process, can result in a decrease of pH in the environment and dissolution of rare earth metals. Ammonium sulfate can also cause desorption of toxic heavy metals, leading to environmental and human health implications. In this study, the desorption behavior and fraction changes of lead in the ion-absorbed rare earth ore were studied using batch desorption experiments and column leaching tests. Results from batch desorption experiments showed that the desorption process of lead included fast and slow stages, and followed an Elovich model well. The desorption rate and the proportion of lead content in the solution to the total lead in the soil were observed to increase with a decrease in the initial pH of the ammonium sulfate solution. The lead in soil included an acid extractable fraction, reducible fraction, oxidizable fraction, and a residual fraction, with the predominant fractions being the reducible and acid extractable fractions. 96% of the extractable fraction in soil was desorbed into solution at pH=3.0, and the content of the reducible fraction was observed to initially increase (when pH>4.0) and then decrease (when pH<4.0) with a decrease in pH. Column leaching tests indicated that the content of lead in the different fractions of soil followed the trend of reducible fraction > oxidizable fraction > acid extractable fraction > residual fraction after simulating the leaching mining process. The change in pH was also found to have a larger influence on the acid extractable and reducible fractions than the other two fractions. The proportion of the extractable fraction being leached was ca. 86%, and the reducible fraction was enriched along the migration direction of the leaching liquid. These results suggest that certain lead fractions may desorb again and contaminate the environment via acid rain, which provides significant information for environmental assessment and remediation
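    The Elovich-model fit mentioned above can be reproduced in outline with a standard least-squares routine; the desorption data points below are synthetic and only illustrate the fitting step:

    import numpy as np
    from scipy.optimize import curve_fit

    def elovich(t, a, b):
        """Elovich-type kinetics: q(t) = (1/b) * ln(1 + a*b*t)."""
        return (1.0 / b) * np.log(1.0 + a * b * t)

    t = np.array([5, 15, 30, 60, 120, 240, 480], dtype=float)   # minutes
    q = np.array([4.1, 6.0, 7.2, 8.5, 9.7, 10.8, 12.0])         # desorbed amount (synthetic)

    (a_fit, b_fit), _ = curve_fit(elovich, t, q, p0=(1.0, 0.5))
    print(a_fit, b_fit)   # fitted initial desorption rate and desorption constant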

  7. First Results from Colorado Student Space Weather Experiment (CSSWE): Differential Flux Measurements of Energetic Particles in a Highly Inclined Low Earth Orbit

    NASA Astrophysics Data System (ADS)

    Li, X.; Palo, S. E.; Kohnert, R.; Gerhardt, D.; Blum, L. W.; Schiller, Q.; Turner, D. L.; Tu, W.

    2012-12-01

    The Colorado Student Space Weather Experiment (CSSWE) is a 3-unit (10cm x 10cm x 30cm) CubeSat mission funded by the National Science Foundation, scheduled for launch into a low-Earth, polar orbit after August 14th, 2012 as a secondary payload under NASA's Educational Launch of Nanosatellites (ELaNa) program. The science objectives of CSSWE are to investigate the relationship of the location, magnitude, and frequency of solar flares to the timing, duration, and energy spectrum of solar energetic particles (SEP) reaching Earth, and to determine the precipitation loss and the evolution of the energy spectrum of radiation belt electrons. CSSWE contains a single science payload, the Relativistic Electron and Proton Telescope integrated little experiment (REPTile), which is a miniaturization of the Relativistic Electron and Proton Telescope (REPT) built at the Laboratory for Atmospheric and Space Physics (LASP). The REPT instrument will fly onboard the NASA/Radiation Belt Storm Probes (RBSP) mission, which consists of two identical spacecraft scheduled to launch after August 23rd, 2012 that will go through the heart of the radiation belts in a low inclination orbit. CSSWE's REPTile is designed to measure the directional differential flux of protons ranging from 10 to 40 MeV and electrons from 0.5 to >3 MeV. Such differential flux measurements have significant science value, and a number of engineering challenges were overcome to enable these clean measurements to be made under the mass and power limits of a CubeSat. The CSSWE is an ideal class project, providing training for the next generation of engineers and scientists over the full life-cycle of a satellite project. We will report the first results from this exciting mission.

  8. ISS COLUMBUS laboratory experiment `GeoFlow I and II' -fluid physics research in microgravity environment to study convection phenomena inside deep Earth and mantle

    NASA Astrophysics Data System (ADS)

    Futterer, Birgit; Egbers, Christoph; Chossat, Pascal; Hollerbach, Rainer; Breuer, Doris; Feudel, Fred; Mutabazi, Innocent; Tuckerman, Laurette

    The overall driving mechanism of flow in the inner Earth is convection in its gravitational buoyancy field. A lot of effort has been invested in theoretical prediction and numerical simulation of both the geodynamo, which is maintained by convection, and mantle convection, which is the main cause of plate tectonics. In particular, resolution of convective patterns and heat transfer mechanisms has been in focus to reach the real, highly turbulent conditions inside Earth. To study specific phenomena experimentally, different approaches have been pursued, against the background of magneto-hydrodynamics but also of the pure hydrodynamic physics of fluids. With the experiment `GeoFlow' (Geophysical Flow Simulation), instability and transition of convection in spherical shells under the influence of a central-symmetry buoyancy force field are traced for a wide range of rotation regimes within the limits between non-rotating and rapidly rotating spheres. The special set-up of a high voltage potential between the inner and outer sphere and the use of a dielectric fluid as working fluid induce an electro-hydrodynamic force, which is comparable to the gravitational buoyancy force inside Earth. To reduce overall gravity in a laboratory, this technique requires microgravity conditions. The `GeoFlow I' experiment was accomplished on the International Space Station's module COLUMBUS inside the Fluid Science Laboratory FSL and supported by EADS Astrium, Friedrichshafen, the User Support and Operations Centre E-USOC in Madrid, the Microgravity Advanced Research and Support Centre MARS in Naples, as well as the COLUMBUS Control Center COL-CC Munich. Running from August 2008 until January 2009, it delivered 100,000 images from FSL's optical diagnostics module; more precisely, Wollaston shearing interferometry was used. Here we present the experimental alignment with numerical prediction for the non-rotating and rapidly rotating cases. The non-rotating case is characterized by a co-existence of several stationary supercritical

  9. Permeability Evolution With Shearing of Simulated Faults in Unconventional Shale Reservoirs

    NASA Astrophysics Data System (ADS)

    Wu, W.; Gensterblum, Y.; Reece, J. S.; Zoback, M. D.

    2016-12-01

    Horizontal drilling and multi-stage hydraulic fracturing can lead to fault reactivation, a process thought to influence production from extremely low-permeability unconventional reservoirs. A fundamental understanding of how permeability changes with shear could help optimize reservoir stimulation strategies. We examined the effects of confining pressure and frictional sliding on fault permeability in Eagle Ford shale samples. We performed shear-flow experiments in a triaxial apparatus on four shale samples: (1) a clay-rich sample with a sawcut fault, (2) a calcite-rich sample with a sawcut fault, (3) a clay-rich sample with a natural fault, and (4) a calcite-rich sample with a natural fault. We used pressure pulse-decay and steady-state flow techniques to measure fault permeability. Initial pore and confining pressures were set to 2.5 MPa and 5.0 MPa, respectively. To investigate the influence of confining pressure on fault permeability, we incrementally raised and lowered the confining pressure and measured permeability at different effective stresses. To examine the effect of frictional sliding on fault permeability, we slid the samples four times at a constant shear displacement rate of 0.043 mm/min for 10 minutes each and measured fault permeability before and after frictional sliding. We used a 3D laser scanner to image fault surface topography before and after the experiment. Our results show that frictional sliding can enhance fault permeability at low confining pressures (e.g., ≤5.0 MPa) and reduce fault permeability at high confining pressures (e.g., ≥7.5 MPa). The permeability of sawcut faults almost fully recovers when confining pressure returns to its initial value, and increases with sliding due to asperity damage and subsequent dilation at low confining pressures. In contrast, the permeability of natural faults does not fully recover. It initially increases with sliding, but then decreases with further sliding, most likely due to fault gouge blocking fluid
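
    Since the abstract relies on the pressure pulse-decay technique, the sketch below shows the usual single-exponential analysis of a pulse-decay test. The apparatus dimensions, fluid properties, and the synthetic decay curve are placeholder assumptions, not values from this study.

```python
import numpy as np

# Illustrative pulse-decay permeability estimate (Brace-type analysis).
# All apparatus parameters below are hypothetical, not values from the study.
A      = np.pi * (0.0254 / 2) ** 2   # sample cross-section, m^2 (1-inch core)
L      = 0.025                        # sample length, m
mu     = 1.0e-3                       # fluid viscosity, Pa*s (water)
beta   = 4.5e-10                      # fluid compressibility, 1/Pa
V_up   = 1.0e-5                       # upstream reservoir volume, m^3
V_down = 1.0e-5                       # downstream reservoir volume, m^3

def permeability_from_pulse_decay(t, dp):
    """Fit ln(dp) = ln(dp0) - alpha*t and convert the decay rate to permeability."""
    slope, _ = np.polyfit(t, np.log(dp), 1)
    alpha = -slope                                   # decay rate, 1/s
    return alpha * mu * beta * L / (A * (1.0 / V_up + 1.0 / V_down))

# Synthetic decay curve standing in for the measured differential pressure.
t  = np.linspace(0, 3600, 200)                       # s
dp = 0.5e6 * np.exp(-2.0e-4 * t)                     # Pa
print(f"estimated permeability: {permeability_from_pulse_decay(t, dp):.3e} m^2")
```

    In practice the same fit would be applied to the measured upstream-downstream differential pressure after each loading or sliding step.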

  10. Estimating Fault Friction From Seismic Signals in the Laboratory

    SciT

    Rouet-Leduc, Bertrand; Hulbert, Claudia; Bolton, David C.

    Nearly all aspects of earthquake rupture are controlled by the friction along the fault that progressively increases with tectonic forcing but in general cannot be directly measured. We show that fault friction can be determined at any time, from the continuous seismic signal. In a classic laboratory experiment of repeating earthquakes, we find that the seismic signal follows a specific pattern with respect to fault friction, allowing us to determine the fault's position within its failure cycle. Using machine learning, we show that instantaneous statistical characteristics of the seismic signal are a fingerprint of the fault zone shear stress and frictional state. Further analysis of this fingerprint leads to a simple equation of state quantitatively relating the seismic signal power and the friction on the fault. Finally, these results show that fault zone frictional characteristics and the state of stress in the surroundings of the fault can be inferred from seismic waves, at least in the laboratory.
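
    To make the machine-learning step above concrete, here is a minimal illustrative sketch: statistical features of windows of a continuous signal are used to regress a surrogate frictional state. The synthetic stick-slip-like labels, the feature choices, and the random-forest model are assumptions for illustration, not the study's actual data or pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Illustrative sketch only: regress a surrogate "friction" from instantaneous
# statistics of a continuous signal. Data are synthetic, not from the experiment.
rng = np.random.default_rng(0)
n_windows, window_len = 2000, 1024

# Sawtooth loading/failure cycles standing in for friction in repeating "earthquakes".
friction = 0.3 + 0.3 * ((np.arange(n_windows) % 200) / 200.0)
# Continuous signal whose amplitude grows as the fault approaches failure.
windows = [rng.normal(0.0, 0.2 + 2.0 * f, window_len) for f in friction]

def features(x):
    """Simple instantaneous statistics of one window: variance, mean |x|, kurtosis."""
    v = x.var()
    return [v, np.abs(x).mean(), ((x - x.mean()) ** 4).mean() / v**2]

X = np.array([features(w) for w in windows])
X_tr, X_te, y_tr, y_te = train_test_split(X, friction, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 3))
```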

  11. An earthquake mechanism based on rapid sealing of faults

    Blanpied, M.L.; Lockner, D.A.; Byerlee, J.D.

    1992-01-01

    Recent seismological, heat flow and stress measurements in active fault zones such as the San Andreas have led to the suggestion [1,2] that such zones can be relatively weak. One explanation for this may be the presence of overpressured fluids along the fault [3-5], which would reduce the shear stress required for sliding by partially 'floating' the rock. Although several mechanisms have been proposed for overpressurizing fault fluids [3,4,6,7], we recall that 'pressure seals' are known to form in both sedimentary [8] and igneous [9] rocks by the redistribution of materials in solution; the formation of such a seal along the boundaries of a fault will prevent the communication of fluids between the porous, deforming fault zone and the surrounding country rock. Compaction of fault gouge, under hydrostatic loading and/or during shear, elevates pore pressure in the sealed fault and allows sliding at low shear stress. We report the results of laboratory sliding experiments on granite, which demonstrate that the sliding resistance of faults can be significantly decreased by sealing and compaction. The weakening that results from shear-induced compaction can be rapid, and may provide an instability mechanism for earthquakes.
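
    The weakening argument in this abstract rests on the standard effective-stress form of Coulomb friction, reproduced here only to make the mechanism explicit:

```latex
% Frictional sliding resistance with pore-fluid pressure p (effective-stress form):
\tau \;=\; \mu\,(\sigma_n - p)
% Sealing and compaction of the gouge raise p inside the fault zone, lowering the
% effective normal stress (\sigma_n - p) and hence the shear stress needed for sliding.
```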

  12. Estimating Fault Friction From Seismic Signals in the Laboratory

    DOE PAGES

    Rouet-Leduc, Bertrand; Hulbert, Claudia; Bolton, David C.; ...

    2018-01-29

    Nearly all aspects of earthquake rupture are controlled by the friction along the fault that progressively increases with tectonic forcing but in general cannot be directly measured. We show that fault friction can be determined at any time, from the continuous seismic signal. In a classic laboratory experiment of repeating earthquakes, we find that the seismic signal follows a specific pattern with respect to fault friction, allowing us to determine the fault's position within its failure cycle. Using machine learning, we show that instantaneous statistical characteristics of the seismic signal are a fingerprint of the fault zone shear stress and frictional state. Further analysis of this fingerprint leads to a simple equation of state quantitatively relating the seismic signal power and the friction on the fault. Finally, these results show that fault zone frictional characteristics and the state of stress in the surroundings of the fault can be inferred from seismic waves, at least in the laboratory.

  13. Real-Time Fault Classification for Plasma Processes

    PubMed Central

    Yang, Ryan; Chen, Rongshun

    2011-01-01

    Plasma process tools, which usually cost several million US dollars, are often used in the semiconductor fabrication etching process. If the plasma process is halted by a process fault, productivity is reduced and cost increases. In order to maximize product/wafer yield and tool productivity, timely and effective fault detection is required in a plasma reactor. Classifying fault events helps users quickly identify faulty processes and thus saves downtime of the plasma tool. In this work, optical emission spectroscopy (OES) is employed as the metrology sensor for in-situ process monitoring. The matching-rate indicator from our previous work (Yang, R.; Chen, R.S. Sensors 2010, 10, 5703–5723), split into twelve match rates by spectrum band, is used to detect faulty processes. Based on the match data, real-time classification of plasma faults is achieved by a novel method developed in this study. Experiments were conducted to validate the proposed fault classification. From the experimental results, we conclude that the proposed method is feasible, with an overall classification accuracy for fault event shifts of 27 out of 28, or about 96.4%. PMID:22164001
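
    As a toy illustration of classifying fault events from per-band match rates, a nearest-centroid rule over hypothetical 12-band signatures might look like the sketch below; the signature values, class names, and the classification rule are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

# Minimal sketch: assign a fault class from twelve per-band OES match rates by
# comparing against hypothetical reference signatures (nearest centroid).
N_BANDS = 12

# Hypothetical mean match-rate signatures (one 12-vector per known class).
signatures = {
    "normal":         np.full(N_BANDS, 0.95),
    "gas_flow_drift": np.array([0.9] * 6 + [0.6] * 6),
    "rf_power_drift": np.array([0.6] * 6 + [0.9] * 6),
}

def classify(match_rates):
    """Return the class whose signature is closest (Euclidean) to the observation."""
    rates = np.asarray(match_rates, dtype=float)
    return min(signatures, key=lambda c: np.linalg.norm(rates - signatures[c]))

print(classify([0.91] * 6 + [0.58] * 6))   # -> "gas_flow_drift"
```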

  14. Automatic Fault Characterization via Abnormality-Enhanced Classification

    SciT

    Bronevetsky, G; Laguna, I; de Supinski, B R

    Enterprise and high-performance computing systems are growing extremely large and complex, employing hundreds to hundreds of thousands of processors and software/hardware stacks built by many people across many organizations. As the growing scale of these machines increases the frequency of faults, system complexity makes these faults difficult to detect and to diagnose. Current system management techniques, which focus primarily on efficient data access and query mechanisms, require system administrators to examine the behavior of various system services manually. Growing system complexity is making this manual process unmanageable: administrators require more effective management tools that can detect faults and help to identify their root causes. System administrators need timely notification when a fault is manifested that includes the type of fault, the time period in which it occurred and the processor on which it originated. Statistical modeling approaches can accurately characterize system behavior. However, the complex effects of system faults make these tools difficult to apply effectively. This paper investigates the application of classification and clustering algorithms to fault detection and characterization. We show experimentally that naively applying these methods achieves poor accuracy. Further, we design novel techniques that combine classification algorithms with information on the abnormality of application behavior to improve detection and characterization accuracy. Our experiments demonstrate that these techniques can detect and characterize faults with 65% accuracy, compared to just 5% accuracy for naive approaches.
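
    A minimal sketch of the abnormality-enhanced idea described above: per-metric abnormality scores (z-scores against fault-free behavior) are fed to a classifier instead of raw metrics. The synthetic metrics, injected fault, and model choice are assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Sketch: classify abnormality scores rather than raw system metrics.
rng = np.random.default_rng(1)
n_normal, n_faulty, n_metrics = 500, 200, 8

baseline = rng.normal(50, 5, (n_normal, n_metrics))          # fault-free runs
mu, sigma = baseline.mean(axis=0), baseline.std(axis=0)

faulty = rng.normal(50, 5, (n_faulty, n_metrics))
faulty[:, 2] += rng.normal(30, 5, n_faulty)                   # one metric drifts under the fault

def abnormality(x):
    """Per-metric z-scores relative to the fault-free baseline."""
    return np.abs((x - mu) / sigma)

X = np.vstack([abnormality(baseline), abnormality(faulty)])
y = np.array([0] * n_normal + [1] * n_faulty)                 # 0 = normal, 1 = fault class

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```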

  15. Five biomedical experiments flown in an Earth orbiting laboratory: Lessons learned from developing these experiments on the first international microgravity mission from concept to landing

    NASA Technical Reports Server (NTRS)

    Winget, C. M.; Lashbrook, J. J.; Callahan, P. X.; Schaefer, R. L.

    1993-01-01

    There are numerous problems associated with accommodating complex biological systems in microgravity in the flexible laboratory systems installed in the Orbiter cargo bay. This presentation focuses on some of the lessons learned along the way from the university laboratory to the IML-1 microgravity laboratory. The First International Microgravity Laboratory (IML-1) mission contained a large number of specimens, including 72 million nematodes (US-1), 3 billion yeast cells (US-2), 32 million mouse limb-bud cells (US-3), and 540 oat seeds, 96 of them planted (FOTRAN). All five experiments had to undergo significant redevelopment in order to accommodate the investigators' ideas and objectives within the constraints of the IML-1 mission. Each of these experiments was proposed as a unique entity rather than as part of the mission, and many procedures had to be modified from laboratory practice to meet IML-1 constraints. After a proposal is accepted by NASA for definition, an interactive process begins between the Principal Investigator and the developer to ensure maximum science return. The success of the five SLSPO-managed experiments was the result of successful completion of all preflight biological testing and hardware verification, finalized at the KSC Life Sciences Support Facility housed in Hangar L. The ESTEC Biorack facility housed three U.S. experiments (US-1, US-2, and US-3). The U.S. Gravitational Plant Physiology Facility housed GTHRES and FOTRAN. The IML-1 mission (launched from KSC on 22 Jan. 1992 and landed at the Dryden Flight Research Facility on 30 Jan. 1992) was an outstanding success: close to 100 percent of the prelaunch anticipated science return was achieved and, in some cases, greater than 100 percent was achieved (because of an extra mission day).

  16. Fault-Tolerant Control of ANPC Three-Level Inverter Based on Order-Reduction Optimal Control Strategy under Multi-Device Open-Circuit Fault.

    PubMed