GOCE gravity field simulation based on actual mission scenario
NASA Astrophysics Data System (ADS)
Pail, R.; Goiginger, H.; Mayrhofer, R.; Höck, E.; Schuh, W.-D.; Brockmann, J. M.; Krasbutter, I.; Fecher, T.; Gruber, T.
2009-04-01
In the framework of the ESA-funded project "GOCE High-level Processing Facility", an operational hardware and software system for the scientific processing (Level 1B to Level 2) of GOCE data has been set up by the European GOCE Gravity Consortium EGG-C. One key component of this software system is the computation of a spherical harmonic model of the Earth's gravity field, together with the corresponding full variance-covariance matrix, from the precise GOCE orbit and from calibrated and corrected satellite gravity gradiometry (SGG) data. Within the time-wise approach, several processing strategies are combined for optimum exploitation of the information content of the GOCE data: the Quick-Look Gravity Field Analysis provides a fast diagnosis of the GOCE system performance and monitors the quality of the input data, while the Core Solver derives a rigorous high-precision solution of the very large normal equation systems by applying parallel processing techniques on a PC cluster. Before real GOCE data become available, the expected GOCE gravity field performance is evaluated by means of a realistic numerical case study based on the actual GOCE orbit and mission scenario and on simulation data from the most recent ESA end-to-end simulation. Results from this simulation as well as recently developed features of the software system are presented. Additionally, some aspects of data combination with complementary data sources are addressed.
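As an illustration of the normal-equation step sketched above (not the HPF Core Solver itself), the following Python fragment assembles and solves a small full normal equation system for a set of hypothetical coefficients; the array sizes, noise level and the name solve_normal_equations are invented for the example.

    # Minimal sketch: least-squares estimation of coefficients x from
    # observations y = A x + e by assembling and solving the full normal
    # equations. In the real time-wise processing, A would hold gradiometry
    # and orbit partials for millions of epochs and the solution is
    # distributed over a PC cluster.
    import numpy as np

    def solve_normal_equations(A, y, sigma=1.0):
        """Return estimated coefficients and their full covariance matrix."""
        N = A.T @ A / sigma**2          # normal matrix
        b = A.T @ y / sigma**2          # right-hand side
        x_hat = np.linalg.solve(N, b)   # coefficient estimates
        cov = np.linalg.inv(N)          # full variance-covariance matrix
        return x_hat, cov

    # Toy data with a random design matrix standing in for the partials
    rng = np.random.default_rng(0)
    A = rng.standard_normal((1000, 50))
    x_true = rng.standard_normal(50)
    y = A @ x_true + 0.01 * rng.standard_normal(1000)
    x_hat, cov = solve_normal_equations(A, y, sigma=0.01)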
Evaluation of GOCE-based Global Geoid Models in Finnish Territory
NASA Astrophysics Data System (ADS)
Saari, Timo; Bilker-Koivula, Mirjam
2015-04-01
The gravity satellite mission GOCE made its final observations in the fall of 2013. By then it had exceeded its expected lifespan of one year by more than three years. The mission thus collected more data on the Earth's gravitational field than expected, and increasingly comprehensive global geoid models have been derived ever since. The GOCE High-level Processing Facility (HPF) of ESA has published GOCE global gravity field models annually. We compared all 12 HPF models as well as 3 additional GOCE, 11 GRACE and 6 combined GOCE+GRACE models with GPS-levelling data and gravity observations in Finland. The most accurate models were compared against the high resolution global geoid models EGM96 and EGM2008. The models were evaluated up to three different degrees and orders: 150 (the common maximum for the GRACE models), 240 (the common maximum for the GOCE models) and the maximum of each model. When coefficients up to degree and order 150 are used, the results of the GOCE models are comparable with those of the latest GRACE models. Generally, all of the latest GOCE and GOCE+GRACE models give standard deviations of the height anomaly differences of around 15 cm and of the gravity anomaly differences of around 10 mGal over Finland. The best solutions were not always achieved with the highest maximum degree and order of the satellite gravity field models, since the highest coefficients (above 240) may be less accurately determined. Over Finland, the latest GOCE and GOCE+GRACE models give results similar to those of the high resolution models EGM96 and EGM2008 when coefficients up to degree and order 240 are used. This is mainly due to the high resolution terrestrial data available in the area of Finland, which were used in the high resolution models.
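The comparison principle assumed in evaluations of this kind can be made concrete with a short sketch: GGM height anomalies are compared with geometric height anomalies from GPS-levelling (ellipsoidal minus normal heights), and the quality measure is the standard deviation of the bias-reduced differences. All values below are synthetic, not the Finnish data.

    import numpy as np

    def evaluate_ggm(h_ellipsoidal, H_normal, zeta_ggm):
        """Std. dev. of height anomaly differences after removing a constant bias."""
        diff = (h_ellipsoidal - H_normal) - zeta_ggm   # height anomaly differences [m]
        diff -= diff.mean()                            # remove the datum bias
        return diff.std(ddof=1)

    # Synthetic benchmark values in metres
    h = np.array([45.12, 38.77, 51.03])      # GNSS ellipsoidal heights
    H = np.array([25.10, 18.80, 31.00])      # levelled normal heights
    zeta = np.array([20.05, 19.93, 20.08])   # GGM height anomalies
    print(f"std of height anomaly differences: {evaluate_ggm(h, H, zeta):.3f} m")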
First look at GOCE-derived thermosphere density and wind measurements
NASA Astrophysics Data System (ADS)
Doornbos, E.; Bruinsma, S. L.; Koppenwallner, G.; Fritsche, B.; Visser, P. N.; van den IJssel, J.; Kern, M.
2011-12-01
Accelerometers carried by low-Earth orbiters such as GOCE have the ability to provide highly detailed data on thermospheric density and winds. Like its predecessor missions CHAMP and GRACE, GOCE was not specifically designed for studies of the thermosphere. Nevertheless, the application of these missions in this domain has produced density and wind data sets with unprecedented coverage and precision, which have led to many scientific papers. The orbit of GOCE is unique. It is nearly sun-synchronous, and thanks to its drag-free control system its altitude can be kept fixed for several years, at about 270 km. This leads to sampling characteristics that are ideal for studying the effect of variations in solar and magnetospheric energy input on thermospheric density and wind. Besides presenting the first GOCE-derived density and wind measurements, this poster will describe the GOCE data processing approach, which differs from that of the earlier missions in the special consideration required for both the handling of the thruster accelerations and the aerodynamic modelling.
GOCE and Its Role in Combined Global High Resolution Gravity Field Determination
NASA Astrophysics Data System (ADS)
Fecher, T.; Pail, R.; Gruber, T.
2013-12-01
Combined high-resolution gravity field models serve as a mandatory basis for describing static and dynamic processes in the Earth system. Ocean dynamics can be modelled with reference to a highly accurate geoid as reference surface, and solid Earth processes are driven by the gravity field. Geodetic disciplines such as height system determination also depend on highly precise gravity field information. To fulfil the various requirements concerning resolution and accuracy, all kinds of gravity field information, i.e. satellite as well as terrestrial and altimetric gravity field observations, have to be included in one combination process. A key role is reserved here for GOCE observations, which contribute their optimal signal content in the long to medium wavelength part and enable a more accurate gravity field determination than ever before, especially in areas where no highly accurate terrestrial gravity field observations are available, such as South America, Asia or Africa. For our contribution we prepare a combined high-resolution gravity field model up to d/o 720 based on full normal equations, including recent GOCE, GRACE and terrestrial/altimetric data. Normal equations are set up separately for all data sets, relatively weighted against each other in the combination step, and solved. This procedure is computationally challenging and can only be performed using supercomputers. We put special emphasis on the combination process, for which we modified our procedure in particular to include GOCE data optimally in the combination. Furthermore, we modified our terrestrial/altimetric data sets, which should result in an improved outcome. With our model, which includes the newest GOCE TIM4 gradiometry results, we can show how GOCE contributes to a combined gravity field solution, especially in areas of poor terrestrial data coverage. The model is validated by independent GPS levelling data in selected regions as well as by computation of the mean dynamic topography over the oceans. Further, we analyse the statistical error estimates derived from full covariance propagation and compare them with the absolute validation against independent data sets.
NASA Astrophysics Data System (ADS)
Rummel, R.
2012-12-01
With the gravity field and steady-state ocean circulation explorer (GOCE) (preferably combined with the gravity field and climate experiment (GRACE)) a new generation of geoid models will become available for use in height determination. These models will be globally consistent, accurate (
What have we gained from GOCE, and what is still to be expected?
NASA Astrophysics Data System (ADS)
Pail, R.; Fecher, T.; Mayer-Gürr, T.; Rieser, D.; Schuh, W. D.; Brockmann, J. M.; Jäggi, A.; Höck, E.
2012-04-01
So far, three releases of GOCE-only gravity field models applying the time-wise method have been computed in the framework of the ESA project "GOCE High-Level Processing Facility". They have been complemented by satellite-only combination models generated by the GOCO ("Gravity Observation Combination") consortium. Since the processing strategy has remained practically unchanged for all releases, the continuous improvement obtained by including more and more GOCE data can be analysed. One of the basic features of the time-wise gravity field models (GOCE_TIM) is that no prior gravity field information is used, neither as a reference model nor for constraining the solution. Therefore, the gain in knowledge of the Earth's gravity field derived purely from the GOCE mission can be evaluated. The idea of the complementary GOCO models is to improve the long to medium wavelengths of the gravity field solutions, which are rather weakly defined by GOCE orbit information, by including additional data from satellite sources such as GRACE, CHAMP and SLR, taking advantage of the individual strengths and favourable features of the different data types. In this contribution, we will review the impact GOCE has had so far on global and regional gravity field modelling. Beyond gravity field modelling itself, the contributions of GOCE to several application fields, such as the computation of the geodetic mean dynamic topography (MDT) and geophysical modelling of the lithosphere, will be highlighted. Special emphasis shall be given to the question of the extent to which the full variance-covariance information, representing very realistic error estimates of the gravity field accuracy, can be utilized. Finally, a GOCE performance prediction shall be given. With the extended mission phase ending in December 2012, several mission scenarios are currently under discussion, such as extending the mission period further as long as possible at the same altitude, or lowering the satellite by 10-20 km for a shorter period. Based on numerical simulation studies, the pros and cons of several scenarios regarding the achievable gravity field accuracy shall be evaluated and quantified.
Improvement of the GPS/A system for extensive observation along subduction zones around Japan
NASA Astrophysics Data System (ADS)
Fujimoto, H.; Kido, M.; Tadokoro, K.; Sato, M.; Ishikawa, T.; Asada, A.; Mochizuki, M.
2011-12-01
NASA Astrophysics Data System (ADS)
Godah, Walyeldeen; Szelachowska, Malgorzata; Krynski, Jan
2014-06-01
The GOCE (Gravity Field and Steady-State Ocean Circulation Explorer) mission has significantly improved our knowledge of the Earth's gravity field. In this contribution, the accuracy of height anomalies determined from Global Geopotential Models (GGMs) based on approximately 27 months of GOCE satellite gravity gradiometry (SGG) data has been assessed over Poland using three sets of precise GNSS/levelling data. The fits of height anomalies obtained from 4th release GOCE-based GGMs to GNSS/levelling data are discussed and compared with the respective fits of 3rd release GOCE-based GGMs and of EGM08. Furthermore, two highly accurate gravimetric quasigeoid models were developed over the area of Poland using high resolution Faye gravity anomalies. In the first, a GOCE-based GGM was used as the reference geopotential model, and in the second, EGM08. They were evaluated with GNSS/levelling data and their accuracy performance was assessed. The use of GOCE-based GGMs for recovering the long-wavelength gravity signal in gravimetric quasigeoid modelling is discussed.
NASA Astrophysics Data System (ADS)
Tscherning, Carl Christian; Arabelos, Dimitrios; Reguzzoni, Mirko
2013-04-01
The GOCE satellite measures gravity gradients, which are filtered and transformed into an Earth-referenced frame by the GOCE High Level Processing Facility. More than 80,000,000 observations with 6 components are available from the period 2009-2011. IAG Arctic gravity data were used north of 83 deg., while data over Antarctica were not used due to bureaucratic restrictions imposed by the data holders. Subsets of the data have been used to produce gridded values, at 10 km altitude, of gravity anomalies and vertical gravity gradients in 20 deg. x 20 deg. blocks with 10' spacing. Various combinations and densities of data were used to obtain values in areas with known gravity anomalies. The (marginally) best choice was vertical gravity gradients selected with an approximately 0.125 deg spacing. Using Least-Squares Collocation, error estimates were computed and compared to the difference between the GOCE grids and grids derived from EGM2008 to degree 512. In general a good agreement was found, however with some inconsistencies in certain areas. The computation time on a typical server with 24 processors was roughly 100 minutes for a block with typically 40,000 GOCE vertical gradients as input. The computations will be updated with newly Wiener-filtered data in the near future.
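The gridding relies on Least-Squares Collocation; a minimal, self-contained sketch of LSC prediction is given below with a toy isotropic Gaussian covariance function. The operational computations use covariance models derived from gravity field degree variances, so the covariance function, its parameters and the data here are purely illustrative.

    import numpy as np

    def covariance(d, c0=100.0, L=0.5):
        """Toy isotropic covariance as a function of distance d (degrees)."""
        return c0 * np.exp(-(d / L) ** 2)

    def lsc_predict(x_obs, y_obs, l_obs, noise_var, x_new, y_new):
        """Predict values at new points and their error estimates by LSC."""
        d_tt = np.hypot(x_obs[:, None] - x_obs[None, :], y_obs[:, None] - y_obs[None, :])
        d_pt = np.hypot(x_new[:, None] - x_obs[None, :], y_new[:, None] - y_obs[None, :])
        C_tt = covariance(d_tt) + np.diag(np.full(l_obs.size, noise_var))
        C_pt = covariance(d_pt)
        s_pred = C_pt @ np.linalg.solve(C_tt, l_obs)   # C_pt (C_tt + D)^-1 l
        err_var = covariance(0.0) - np.einsum('ij,ji->i', C_pt, np.linalg.solve(C_tt, C_pt.T))
        return s_pred, np.sqrt(err_var)

    # Toy usage with three observations and one prediction point
    s, s_err = lsc_predict(np.array([0.1, 0.4, 0.7]), np.array([0.2, 0.6, 0.9]),
                           np.array([12.0, 8.5, 10.1]), 1.0,
                           np.array([0.5]), np.array([0.5]))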
NASA Astrophysics Data System (ADS)
He, Lin; Li, Jiancheng; Chu, Yonghai; Zhang, Tengxu
2017-04-01
National height reference systems have conventionally been linked to the coastal local mean sea level observed at one tide gauge, as in the case of the China national height datum 1985. Due to the effect of the local sea surface topography, the reference level surface of a local datum is inconsistent with the global datum or with other local datums. In order to unify or connect the local datum to the global height datum, it is necessary to obtain the zero-height geopotential value of the local datum, or its height offset with respect to the global datum. The GRACE and GOCE satellite missions are promising for the unification of local vertical datums because, over the past ten years, they have brought a significant improvement in the modelling of the low- to medium-frequency part of the Earth's static gravity field. The focus of this work is the evaluation of the most recent available Global Geopotential Models (GGMs) from GOCE and GRACE, both satellite-only and combined ones. Evaluated against 649 GPS/levelling benchmarks (BMs) in China, the GOCE/GRACE GGMs provide an accuracy at the 42-52 cm level up to their maximum degree and order. The latest release 5 DIR and TIM GGMs improve the accuracies by 6-10 cm compared to the release 1 models. DIR_R1, although based on less GOCE data, performs as well as the DIR_R4 and DIR_R5 models; this is attributed to the a priori information from EIGEN-51C used in its development. The zero-height geopotential value W0LVD for the China Local Vertical Datum (LVD) obtained from the original GOCE/GRACE GGMs is 62636855.1606 m2s-2. The GPS/levelling data contain the full spectral information, whereas the GOCE-only or GRACE-GOCE combined models are limited to the long wavelengths. To improve the accuracy of the GGMs, it is indispensable to account for the remaining signal above this maximum degree, known as the omission error of the GGM. The effect of the GRACE/GOCE omission error is investigated by extending the models with the high-resolution gravity field model EGM2008. In China, the effect of the GRACE/GOCE GGM omission error is at the decimetre level. The combined GGMs (up to degree and order 2160) provide an accuracy at the 20 cm level, which is better than that of EGM2008. Moreover, if an appropriate degree and order is chosen for connecting the GOCE-only or GRACE-GOCE combined GGMs with EGM2008, the extended GGMs provide an accuracy at the 16 cm level. From the extended GGMs, the geopotential value W0LVD determined for the China local vertical datum is 62636853.4351 m2s-2, which indicates a bias of about 2.5649 m2s-2 compared to the conventional value of 62636856.0 m2s-2. This work is supported by the National Key Research and Development Program, No. 2016YFB0501702. Keywords: Global Geopotential Models; GRACE; GOCE; GPS/Levelling; zero-height geopotential
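The relation assumed to underlie the W0LVD estimate can be written down compactly: at each benchmark h - H - ζ_GGM ≈ (W0 - W0_LVD)/γ, so averaging over the GPS/levelling benchmarks yields the local datum potential. The sketch below ignores the omission-error treatment discussed above, and every numerical value is a placeholder rather than the paper's data.

    import numpy as np

    W0 = 62636853.4        # adopted global reference potential [m2 s-2] (example value)
    gamma_mean = 9.80      # mean normal gravity at the benchmarks [m s-2]

    h = np.array([102.314, 87.250, 130.881])       # GNSS ellipsoidal heights [m]
    H = np.array([80.120, 65.030, 108.660])        # levelled normal heights [m]
    zeta_ggm = np.array([22.170, 22.190, 22.200])  # GGM height anomalies [m]

    delta_zeta = h - H - zeta_ggm                  # datum-related discrepancy per benchmark
    W0_lvd = W0 - gamma_mean * delta_zeta.mean()   # zero-height geopotential of the local datum
    print(f"W0_LVD = {W0_lvd:.4f} m2 s-2")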
The IfE Global Gravity Field Model Recovered from GOCE Orbit and Gradiometer Data
NASA Astrophysics Data System (ADS)
Wu, Hu; Müller, Jürgen; Brieden, Phillip
2015-03-01
An independent global gravity field model is computed from GOCE orbit and gradiometer data using our own IfE software. We analysed the same data period that was considered for the first released GOCE models. The acceleration approach is applied to process the orbit data. The gravity gradients are processed in the framework of the remove-restore technique, by which the low-frequency noise of the original gradients is removed. For the combined solution, the normal equations are combined with weights determined by variance component estimation. The result, in terms of the accumulated geoid height error calculated from the coefficient differences w.r.t. EGM2008, is about 11 cm at D/O 200, which corresponds to the accuracy level of the first released TIM and DIR solutions. This indicates that our IfE model has a performance comparable to that of the other official GOCE models.
GOCE Precise Science Orbits for the Entire Mission and their Use for Gravity Field Recovery
NASA Astrophysics Data System (ADS)
Jäggi, Adrian; Bock, Heike; Meyer, Ulrich; Weigelt, Matthias
The Gravity field and steady-state Ocean Circulation Explorer (GOCE), ESA's first Earth Explorer Core Mission, was launched on March 17, 2009 into a sun-synchronous dusk-dawn orbit and re-entered the Earth's atmosphere on November 11, 2013. It was equipped with a three-axis gravity gradiometer for high-resolution recovery of the Earth's gravity field, as well as with a 12-channel, dual-frequency Global Positioning System (GPS) receiver for precise orbit determination (POD), instrument time-tagging, and the determination of the long wavelength part of the Earth's gravity field. A precise science orbit (PSO) product was provided during the entire mission by the GOCE High-level Processing Facility (HPF) from the GPS high-low Satellite-to-Satellite Tracking (hl-SST) data. We present the reduced-dynamic and kinematic PSO results for the entire mission period. Orbit comparisons and validations with independent Satellite Laser Ranging (SLR) measurements demonstrate the high quality of both orbit products, which is close to 2 cm in terms of 1-D RMS, but also reveal a correlation between solar activity, GPS data availability, and the quality of the orbits. We use the 1-sec kinematic positions of the GOCE PSO product for gravity field determination and present GPS-only solutions covering the entire mission period. The generated gravity field solutions reveal severe systematic errors centered along the geomagnetic equator, which may be traced back to the GPS carrier phase observations used for the kinematic orbit determination. The nature of the systematic errors is further investigated and reprocessed orbits free of systematic errors along the geomagnetic equator are derived. Finally, the potential of recovering time variable signals from GOCE kinematic positions is assessed.
Europe's Preparation For GOCE Gravity Field Recovery
NASA Astrophysics Data System (ADS)
Sünkel, H.
2001-12-01
The European Space Agency (ESA) is preparing its first dedicated gravity field mission GOCE (Gravity Field and Steady-state Ocean Circulation Explorer), with a proposed launch in fall 2005. The mission's goal is the mapping of the Earth's static gravity field with very high resolution and utmost accuracy on a global scale. GOCE is a drag-free mission, flown in a circular, sun-synchronous orbit at an altitude between 240 and 250 km. Each of the two operational phases will last for 6 months. GOCE is based on a sensor fusion concept combining high-low satellite-to-satellite tracking (SST) and satellite gravity gradiometry (SGG). The transformation of the GOCE sensor data into a scientific product of utmost quality and reliability requires a well-coordinated effort of experts in satellite geodesy, applied mathematics and computer science. Several research groups in Europe have this expertise and decided to form the European GOCE Gravity Consortium (EGG-C). The EGG-C activities are subdivided into tasks such as standard and product definition, database and data dissemination, precise orbit determination, global gravity field model solutions and regional solutions, solution validation, communication and documentation, and the interfacing to Level 3 product scientific users. The central issue of GOCE data processing is, of course, the determination of the global gravity field model using three independent mathematical-numerical techniques, which were designed and pre-developed in the course of several scientific preparatory studies for ESA: (1) the direct solution, a least-squares adjustment technique based on a preconditioned conjugate gradient method (PCGM), capable of efficiently transforming the calibrated and validated SST and SGG observations, directly or via lumped coefficients, into harmonic coefficients of the gravitational potential; (2) the time-wise approach, which considers both SST and SGG data as a time series; for an idealized repeat mission such a time series can be transformed very efficiently into lumped coefficients using fast Fourier techniques, while for a realistic mission scenario this transformation has to be extended by an iteration process; (3) the space-wise approach, which, after transforming the original observations onto a spatial geographical grid, transforms the pseudo-observations into harmonic coefficients using a fast collocation technique. Assuming a successful mission, GOCE will finally deliver the Earth's gravity field with a resolution of about 70 km half wavelength and a global geoid with an accuracy of about 1 cm.
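To make the PCGM idea behind the direct solution concrete, here is a generic preconditioned conjugate gradient solver for normal equations N x = b with a simple Jacobi (diagonal) preconditioner; this is a textbook sketch and makes no claim to reproduce the operational implementation.

    import numpy as np

    def pcg(N, b, tol=1e-10, max_iter=1000):
        """Solve the symmetric positive-definite system N x = b by preconditioned CG."""
        x = np.zeros_like(b)
        M_inv = 1.0 / np.diag(N)          # Jacobi preconditioner
        r = b - N @ x
        z = M_inv * r
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Np = N @ p
            alpha = rz / (p @ Np)
            x += alpha * p
            r -= alpha * Np
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
            z = M_inv * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # Toy usage on a small well-conditioned system
    A = np.random.default_rng(0).standard_normal((200, 30))
    N = A.T @ A + np.eye(30)
    b = N @ np.ones(30)
    x = pcg(N, b)                         # should recover a vector of ones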
Using the Full Cycle of GOCE Data in the Quasi-Geoid Modelling of Finland
NASA Astrophysics Data System (ADS)
Saari, Timo; Bilker-Koivula, Mirjam; Poutanen, Markku
2016-08-01
In the Dragon 3 project 10519, "Case study on heterogeneous geoid/quasigeoid based on space borne and terrestrial data combination with special consideration of GOCE mission data impact", we combined the latest GOCE models with the terrestrial gravity data of Finland and surrounding areas to calculate a quasi-geoid model for Finland. Altogether 249 geoid models with different modifications were calculated using the GOCE DIR5 models up to spherical harmonic degree and order 240 and 300 and the EIGEN-6C4 model up to degree and order 1000 and 2190. The calculated quasi-geoid models were compared against the ground truth in Finland with two independent GPS-levelling datasets. The best GOCE-only models gave standard deviations of 2.8 cm and 2.6 cm (DIR5 d/o 240) and 2.7 cm and 2.3 cm (DIR5 d/o 300) in Finnish territory for the NLS-FIN and EUVN-DA datasets, respectively. For the high resolution model EIGEN-6C4 (which includes the full cycle of GOCE data), the results were 2.4 cm and 1.8 cm (d/o 1000) and 2.5 cm and 1.7 cm (d/o 2190). The sub-2-centimetre (and near 2 cm with GOCE only) accuracy is an improvement over the previous and current Finnish geoid models, leading to the conclusion that the GOCE mission has a great impact on regional geoid modelling.
NASA Astrophysics Data System (ADS)
Othman, A. H.; Omar, K. M.; Din, A. H. M.; Som, Z. A. M.; Yahaya, N. A. Z.; Pa'suya, M. F.
2016-06-01
The GOCE satellite mission has significantly contributed to various applications such as solid Earth physics, oceanography and geodesy. Substantial applications in geodesy include improving the knowledge of the gravity field and precise geoid modelling towards realising global height unification. This paper aims to evaluate GOCE geoid models based on recent GOCE Global Geopotential Models (GGMs), as well as EGM2008, using GPS levelling data over East Malaysia, i.e. Sabah and Sarawak. The satellite GGMs selected in this study are the GOCE models GOCE04S, TIM_R5 and SPW_R4, and the EGM2008 model. To assess these models, the geoid heights from the GGMs are compared to the local geometric geoid heights. The GGM geoid heights were derived using the EGMLAB1 software, and the geometric geoid heights were computed from available GPS levelling information obtained from the Department of Survey and Mapping Malaysia. Generally, the GOCE models performed better than EGM2008 over East Malaysia, and the best fitting GOCE model for this region is TIM_R5. The TIM_R5 GOCE model demonstrated the comparatively lowest R.M.S. of ±16.5 cm over Sarawak. For further improvement, this model should be combined with local gravity data for optimum geoid modelling over East Malaysia.
Intercontinental height datum connection with GOCE and GPS-levelling data
NASA Astrophysics Data System (ADS)
Gruber, T.; Gerlach, C.; Haagmans, R.
2012-12-01
In this study an attempt is made to establish height system datum connections based on a Gravity field and steady-state Ocean Circulation Explorer (GOCE) gravity field model and a set of Global Positioning System (GPS) and levelling data. The procedure applied is in principle straightforward. First, local geoid heights are obtained point-wise from GPS and levelling data. Then the mean of these geoid heights is computed for regions nominally referring to the same height datum. Subsequently, these local mean geoid heights are compared with a mean global geoid from GOCE for the same region. In this way one can identify an offset of the local geoid relative to the global geoid for each region. This procedure is applied to a number of regions distributed worldwide. Results show that the vertical datum offset estimates strongly depend on the nature of the omission error, i.e. the signal not represented in the GOCE model. For a smooth gravity field, the commission error of GOCE, the quality of the GPS and levelling data, and the averaging control the accuracy of the vertical datum offset estimates. In case the omission error does not cancel out in the mean value computation, because of a sub-optimal point distribution or a characteristic behaviour of the omitted part of the geoid signal, one needs to estimate a correction for the omission error from other sources. For areas with dense and high quality ground observations, the EGM2008 global model is a good choice for estimating the omission error correction in these cases. Relative intercontinental height datum offsets are estimated by applying this procedure between the United States of America (USA), Australia and Germany. These are compared to historical values provided in the literature and computed with the same procedure. The results obtained in this study agree at a level of 10 cm with the historical results. The changes can mainly be attributed to the new global geoid information from GOCE, rather than to the ellipsoidal heights or the levelled heights; the historical levelling data are still in use in many countries. This conclusion is supported by other results on the validation of the GOCE models.
NASA Astrophysics Data System (ADS)
Doornbos, E.; Bruinsma, S.; Conde, M.; Forbes, J. M.
2013-12-01
Observations made by the European Space Agency (ESA) Gravity field and steady-state Ocean Circulation Explorer (GOCE) satellite have enabled the production of a spin-off product of high-resolution, high-accuracy thermosphere density data, derived from aerodynamic analysis of acceleration measurements. In this regard, the mission follows in the footsteps of the earlier accelerometer-carrying gravity missions CHAMP and GRACE. The extremely high accuracy and redundancy of the six accelerometers carried by GOCE in its gravity gradiometer instrument have provided new insights into the performance and calibration of these instruments. Housekeeping data on the activation of the GOCE drag-free control thruster, made available by ESA, have made the production of the thermosphere data possible. The long-duration low altitude of GOCE, enabled by its drag-free control system, has ensured the presence of very large aerodynamic accelerations throughout its lifetime. This has been beneficial for the accurate derivation of data on the wind speed encountered by the satellite. We have compared the GOCE density observations with data from CHAMP and GRACE. The crosswind data have been compared with CHAMP observations, as well as with ground-based observations made using Scanning Doppler Imagers in Alaska. Models of the thermosphere can provide a bigger, global picture, required as a background in the interpretation of the local space- and ground-based measurements. The comparison of these different sources of information on thermosphere density and wind, each with their own strengths and weaknesses, can provide scientific insight, as well as inputs for further refinement of the processing algorithms and models that are part of the various techniques. Figure: density and crosswind data derived from GOCE (dusk-dawn) and CHAMP (midnight-noon) satellite accelerometer data, superimposed on HWM07 modelled horizontal wind vectors.
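The first-order physics behind accelerometer-derived densities is the textbook drag relation ρ = 2 m |a_drag| / (C_D A v_rel²). The sketch below simply evaluates it with order-of-magnitude placeholder values; the actual GOCE processing replaces the single ballistic coefficient with a detailed panel-based aerodynamic model and first separates the thruster accelerations.

    def density_from_drag(a_drag, mass, c_d, area, v_rel):
        """Thermosphere mass density [kg/m^3] from the along-track drag acceleration."""
        return 2.0 * mass * abs(a_drag) / (c_d * area * v_rel**2)

    rho = density_from_drag(a_drag=2.0e-6,   # m/s^2, non-gravitational along-track acceleration
                            mass=1050.0,     # kg, approximate GOCE mass
                            c_d=3.0,         # drag coefficient (placeholder)
                            area=1.1,        # m^2, frontal area (placeholder)
                            v_rel=7700.0)    # m/s, speed relative to the co-rotating atmosphere
    print(f"rho ~ {rho:.2e} kg/m^3")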
Beyond Currents: The Next Phase in GOCE Oceanographic Research
NASA Astrophysics Data System (ADS)
Bingham, Rory J.; Haines, Keith; Hughes, Chris W.
2015-03-01
GOCE has mapped the surface currents of the world's oceans in unprecedented detail. What is now required is a concerted effort by the oceanographic community to go beyond currents and exploit these measurements for societal benefit. The aim of this review paper is to explore the ways in which this may be achieved, particularly in relation to ocean modelling. With the final gravity models now released, we begin by reviewing the progress GOCE has made in measuring the ocean's mean dynamic topography and associated ocean currents. In the light of this progress, we then examine the important oceanographic questions and technical challenges of societal relevance that can potentially be addressed with the help of the observations GOCE has delivered, and outline the benefits their solution could bring. Benefits may either be direct, through, for example, improved ocean modelling and operational forecasting, or indirect, through improved understanding of particular oceanographic processes, such as heat transport by the Atlantic meridional overturning circulation or sea level change. Next we consider the technical challenges that must be overcome in bringing GOCE to bear on these problems. In particular we examine how best to use GOCE error information, this being an especially uncertain, underdeveloped and challenging area of investigation, due largely to the fact that such information has not previously been available to the user community. Finally, we consider measures of success, that is, metrics that can be used to quantify any GOCE-enabled progress that the community makes towards answering these questions. Such metrics are essential for demonstrating progress. Ultimately, with this review paper, we aim to lay out a road map that will act as an impetus to the oceanography community to exploit the as yet untapped potential of GOCE for scientific understanding and societal benefit.
NASA Astrophysics Data System (ADS)
Andritsanos, Vassilios D.; Vergos, George S.; Grigoriadis, Vassilios N.; Pagounis, Vassilios; Tziavos, Ilias N.
2014-05-01
The Elevation project, funded by the action "Archimedes III - Funding of research groups in T.E.I." and co-financed by the E.U. (European Social Fund) and national funds under the Operational Program "Education and Lifelong Learning 2007-2013", aims mainly at the validation of the Hellenic vertical datum. This validation is carried out over two study areas, one in Central and one in Northern Greece. During the first stage of the validation process, satellite-only as well as combined satellite-terrestrial models of the Earth's geopotential are used. GOCE and GRACE satellite information is compared against recently measured GPS/levelling observations at specific benchmarks of the vertical network in Attiki (Central Greece) and Thessaloniki (Northern Greece). A spectral enhancement approach is followed in which, given the GOCE/GRACE GGM truncation degree, EGM2008 is used to fill in the medium and high frequency content, along with RTM effects for the high and ultra-high frequency part. The second stage is based on the localization of possible blunders in the vertical network using the spectral information derived previously. The well-established accuracy of contemporary global models in the low frequency band leads to some initial conclusions about the consistency of the Hellenic vertical datum.
Improving GOCE cross-track gravity gradients
NASA Astrophysics Data System (ADS)
Siemes, Christian
2018-01-01
The GOCE gravity gradiometer measured highly accurate gravity gradients along the orbit during GOCE's mission lifetime from March 17, 2009, to November 11, 2013. These measurements contain unique information on the gravity field at a spatial resolution of 80 km half wavelength, which is not provided at the same accuracy level by any other satellite mission, either now or in the foreseeable future. Unfortunately, the gravity gradient in the cross-track direction is heavily perturbed in the regions around the geomagnetic poles. We show in this paper that the perturbing effect can be modeled accurately as a quadratic function of the non-gravitational acceleration of the satellite in the cross-track direction. Most importantly, we can remove the perturbation from the cross-track gravity gradient to a great extent, which significantly improves its accuracy and offers opportunities for better scientific exploitation of the GOCE gravity gradient data set.
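A minimal sketch of the correction idea as stated in the abstract: fit the cross-track gradient residual with respect to a reference model as a quadratic function of the cross-track non-gravitational acceleration and subtract the fitted model. The data and coefficients below are synthetic, not those of the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    a_y = rng.normal(0.0, 1e-6, 5000)                     # cross-track non-gravitational acceleration [m/s^2]
    V_yy_model = 100.0 + rng.normal(0.0, 0.01, a_y.size)  # reference-model gradient (placeholder)
    perturbation = 3e11 * a_y**2 + 5e4 * a_y              # synthetic quadratic perturbation
    V_yy_obs = V_yy_model + perturbation                  # "measured" cross-track gradient

    # Fit the residual as a quadratic in a_y and remove the modelled perturbation
    coeffs = np.polyfit(a_y, V_yy_obs - V_yy_model, 2)
    V_yy_corrected = V_yy_obs - np.polyval(coeffs, a_y)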
Preprocessing of gravity gradients at the GOCE high-level processing facility
NASA Astrophysics Data System (ADS)
Bouman, Johannes; Rispens, Sietse; Gruber, Thomas; Koop, Radboud; Schrama, Ernst; Visser, Pieter; Tscherning, Carl Christian; Veicherts, Martin
2009-07-01
One of the products derived from the gravity field and steady-state ocean circulation explorer (GOCE) observations are the gravity gradients. These gravity gradients are provided in the gradiometer reference frame (GRF) and are calibrated in-flight using satellite shaking and star sensor data. To use these gravity gradients for applications in Earth sciences and gravity field analysis, additional preprocessing needs to be done, including corrections for temporal gravity field signals to isolate the static gravity field part, screening for outliers, calibration by comparison with existing external gravity field information, and error assessment. The temporal gravity gradient corrections consist of tidal and nontidal corrections. These are all generally below the gravity gradient error level, which is predicted to show a 1/f behaviour for low frequencies. In the outlier detection, the 1/f error is compensated for by subtracting a local median from the data, while the data error is assessed using the median absolute deviation. The local median acts as a high-pass filter and, like the median absolute deviation, is robust. Three different methods have been implemented for the calibration of the gravity gradients. All three methods use a high-pass filter to compensate for the 1/f gravity gradient error. The baseline method uses state-of-the-art global gravity field models, and the most accurate results are obtained if star sensor misalignments are estimated along with the calibration parameters. A second calibration method uses GOCE GPS data to estimate a low-degree gravity field model as well as gravity gradient scale factors. Both methods allow gravity gradient scale factors to be estimated down to the 10^-3 level. The third calibration method uses highly accurate terrestrial gravity data in selected regions to validate the gravity gradient scale factors, focussing on the measurement band. Gravity gradient scale factors may be estimated down to the 10^-2 level with this method.
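The outlier-screening principle (a local median as robust high-pass filter, with the noise level taken from the median absolute deviation) can be sketched in a few lines; the window length and threshold below are illustrative choices, not the operational HPF settings.

    import numpy as np
    from scipy.ndimage import median_filter

    def flag_outliers(gradients, window=101, k=5.0):
        """Return a boolean mask of samples deviating by more than k robust sigmas."""
        local_median = median_filter(gradients, size=window, mode='nearest')
        residual = gradients - local_median                 # high-pass filtered residual
        mad = np.median(np.abs(residual - np.median(residual)))
        sigma = 1.4826 * mad                                # MAD scaled to a Gaussian sigma
        return np.abs(residual) > k * sigma

    # Toy usage: a drifting series with one injected spike
    rng = np.random.default_rng(0)
    series = np.cumsum(rng.normal(0, 1e-3, 10000)) + rng.normal(0, 5e-3, 10000)
    series[1234] += 0.5
    mask = flag_outliers(series)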
Optimised Environmental Test Approaches in the GOCE Project
NASA Astrophysics Data System (ADS)
Ancona, V.; Giordano, P.; Casagrande, C.
2004-08-01
The Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) is dedicated to measuring the Earth's gravity field and modelling the geoid with extremely high accuracy and spatial resolution. It is the first Earth Explorer Core mission to be developed as part of ESA's Living Planet Programme and is scheduled for launch in 2006. The program is managed by a consortium of European companies: Alenia Spazio, the prime contractor; Astrium GmbH, responsible for the platform; and Alcatel Space Industries and Laben, suppliers of the main payloads, respectively the Electrostatic Gravity Gradiometer (EGG) and the Satellite to Satellite Tracking Instrument (SSTI), in essence a precise GPS receiver. The GOCE Assembly, Integration and Verification (AIV) approach is established and implemented in order to demonstrate to the customer that the satellite design meets the applicable requirements and to qualify and accept the hardware from lower level up to system level. The driving keywords "low cost" and "short schedule" call for minimizing the development effort by utilizing off-the-shelf equipment combined with a model philosophy that lowers the number of models to be used. The paper deals with the peculiarities of the optimized environmental test approach in the GOCE project. In particular, it introduces the logic of the AIV approach and describes the foreseen tests at system level within the SM environmental test campaign, outlining the quasi-static test performed in the frame of the SM sine vibration tests, and the PFM environmental test campaign, pinpointing the deletion of the sine vibration test on the PFM model. Furthermore, the paper highlights how the Model and Test Effectiveness Database (MATD) can be utilized for predictions for new space projects like the GOCE satellite.
NASA Astrophysics Data System (ADS)
Grombein, Thomas; Seitz, Kurt; Heck, Bernhard
2017-03-01
National height reference systems have conventionally been linked to the local mean sea level, observed at individual tide gauges. Due to variations in the sea surface topography, the reference levels of these systems are inconsistent, causing height datum offsets of up to ±1-2 m. For the unification of height systems, a satellite-based method is presented that utilizes global geopotential models (GGMs) derived from ESA's satellite mission Gravity field and steady-state Ocean Circulation Explorer (GOCE). In this context, height datum offsets are estimated within a least squares adjustment by comparing the GGM information with measured GNSS/leveling data. While the GNSS/leveling data comprises the full spectral information, GOCE GGMs are restricted to long wavelengths according to the maximum degree of their spherical harmonic representation. To provide accurate height datum offsets, it is indispensable to account for the remaining signal above this maximum degree, known as the omission error of the GGM. Therefore, a combination of the GOCE information with the high-resolution Earth Gravitational Model 2008 (EGM2008) is performed. The main contribution of this paper is to analyze the benefit, when high-frequency topography-implied gravity signals are additionally used to reduce the remaining omission error of EGM2008. In terms of a spectral extension, a new method is proposed that does not rely on an assumed spectral consistency of topographic heights and implied gravity as is the case for the residual terrain modeling (RTM) technique. In the first step of this new approach, gravity forward modeling based on tesseroid mass bodies is performed according to the Rock-Water-Ice (RWI) approach. In a second step, the resulting full spectral RWI-based topographic potential values are reduced by the effect of the topographic gravity field model RWI_TOPO_2015, thus, removing the long to medium wavelengths. By using the latest GOCE GGMs, the impact of topography-implied gravity signals on the estimation of height datum offsets is analyzed in detail for representative GNSS/leveling data sets in Germany, Austria, and Brazil. Besides considerable changes in the estimated offset of up to 3 cm, the conducted analyses show that significant improvements of 30-40% can be achieved in terms of a reduced standard deviation and range of the least squares adjusted residuals.
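A toy least-squares adjustment illustrating the offset estimation described above: per-benchmark observations l = h - H - N_model, where N_model is assumed to already combine the GOCE GGM, EGM2008 and any topography-implied high-frequency signal, and one offset unknown per height-system zone selected by an indicator design matrix. The function name and all numbers are invented for the example.

    import numpy as np

    def adjust_datum_offsets(l, zone_index, n_zones):
        """Estimate one datum offset per zone and the a posteriori residual scatter."""
        A = np.zeros((l.size, n_zones))
        A[np.arange(l.size), zone_index] = 1.0        # indicator column per zone
        offsets, *_ = np.linalg.lstsq(A, l, rcond=None)
        residuals = l - A @ offsets
        return offsets, residuals.std(ddof=n_zones)

    # Example: two zones with synthetic discrepancies of about +0.45 m and -0.30 m
    rng = np.random.default_rng(2)
    zone = np.array([0] * 40 + [1] * 40)
    l = np.where(zone == 0, 0.45, -0.30) + rng.normal(0, 0.03, 80)
    offsets, sigma = adjust_datum_offsets(l, zone, 2)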
Using the GOCE star trackers for validating the calibration of its accelerometers
NASA Astrophysics Data System (ADS)
Visser, P. N. A. M.
2017-12-01
A method for validating the calibration parameters of the six accelerometers on board the Gravity field and steady-state Ocean Circulation Explorer (GOCE) from star tracker observations, originally tested in an end-to-end simulation, has been updated and applied to real GOCE data. It is shown that the method provides estimates of the scale factors for all three axes of the six GOCE accelerometers that are consistent at a level significantly better than 0.01 with the a priori calibrated value of 1. In addition, relative accelerometer biases and drift terms were estimated that are consistent with values obtained by precise orbit determination, where the first GOCE accelerometer served as reference. The calibration results clearly reveal the different behavior of the sensitive and less-sensitive accelerometer axes.
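The validation idea can be caricatured as a linear least-squares fit of a reference acceleration series against one accelerometer axis with scale, bias and drift parameters, a_ref(t) ≈ s·a_meas(t) + b0 + b1·t. The sketch below uses a synthetic once-per-revolution signal in place of the star-tracker/POD-derived reference and does not reproduce the paper's estimation scheme.

    import numpy as np

    def estimate_scale_bias_drift(t, a_meas, a_ref):
        """Least-squares estimates of scale factor, bias and linear drift."""
        A = np.column_stack([a_meas, np.ones_like(t), t])
        (s, b0, b1), *_ = np.linalg.lstsq(A, a_ref, rcond=None)
        return s, b0, b1

    t = np.linspace(0.0, 86400.0, 8640)               # one day of 10 s samples
    a_ref = 1e-6 * np.sin(2 * np.pi * t / 5400.0)     # synthetic once-per-rev reference signal
    a_meas = (a_ref - 2e-8 - 1e-13 * t) / 0.998       # mis-scaled, biased, drifting axis
    s, b0, b1 = estimate_scale_bias_drift(t, a_meas, a_ref)   # s ~ 0.998, b0 ~ 2e-8, b1 ~ 1e-13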
NASA Astrophysics Data System (ADS)
Bobojc, Andrzej; Drozyner, Andrzej
2016-04-01
This work contains a comparative study of the performance of twenty geopotential models in an orbit estimation process for the satellite of the Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) mission. Among the models tested were JYY_GOCE02S, ITG-GOCE02, ULUX_CHAMP2013S, GOGRA02S, ITG-GRACE2010S, EIGEN-51C, EGM2008, EGM96, JGM3, OSU91a and OSU86F. A special software package, called the Orbital Computation System (OCS), based on the classical method of least squares was used. Within OCS, the initial satellite state vector components are corrected in an iterative process, using the given geopotential model and the models describing the remaining gravitational perturbations. An important part of the OCS package is the 8th-order Cowell numerical integration procedure, which enables the computation of a satellite orbit. Different sets of pseudorange simulations along reference GOCE satellite orbital arcs were obtained using real orbits of the Global Positioning System (GPS) satellites. These sets were the basic observation data used in the adjustment. The centimetre-accuracy Precise Science Orbit (PSO) for the GOCE satellite provided by the European Space Agency (ESA) was adopted as the GOCE reference orbit. By comparing various variants of the orbital solutions, the relative accuracy of the geopotential models in an orbital sense is determined. Full geopotential models were used in the adjustment process; however, solutions were also determined using truncated geopotential models, in which case the accuracy of the estimated orbit was slightly enhanced. The obtained solutions refer to orbital arcs with lengths of 90 minutes and 1 day.
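A toy version of the orbit-adjustment principle used in OCS: the six initial state vector components are corrected iteratively by least squares from position observations, with partial derivatives obtained by finite differences of a numerically integrated orbit. Here a pure two-body force model and a general-purpose Runge-Kutta integrator stand in for the full force models and the 8th-order Cowell procedure, and all numbers are illustrative.

    import numpy as np
    from scipy.integrate import solve_ivp

    GM = 3.986004418e14  # m^3/s^2

    def two_body(t, y):
        r = y[:3]
        return np.hstack([y[3:], -GM * r / np.linalg.norm(r) ** 3])

    def propagate(x0, times):
        """Positions at the observation epochs for initial state x0."""
        sol = solve_ivp(two_body, (times[0], times[-1]), x0,
                        t_eval=times, rtol=1e-9, atol=1e-6)
        return sol.y[:3].T

    times = np.linspace(0.0, 5400.0, 30)
    x_true = np.array([6650e3, 0.0, 0.0, 0.0, 7.74e3, 0.5e3])
    obs = propagate(x_true, times)              # simulated position observations

    x_est = x_true + np.array([500.0, -300.0, 200.0, 0.5, -0.3, 0.2])  # perturbed initial guess
    for _ in range(5):                          # iterative least-squares correction
        ref = propagate(x_est, times)
        resid = (obs - ref).ravel()
        H = np.zeros((resid.size, 6))
        for j in range(6):
            dx = np.zeros(6)
            dx[j] = 1.0 if j < 3 else 1e-3      # finite-difference step (m or m/s)
            H[:, j] = ((propagate(x_est + dx, times) - ref) / dx[j]).ravel()
        x_est += np.linalg.lstsq(H, resid, rcond=None)[0]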
NASA Astrophysics Data System (ADS)
Bobojć, Andrzej
2016-12-01
This work contains a comparative study of the performance of six geopotential models in an orbit estimation process for the satellite of the Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) mission. The models tested were ULUX_CHAMP2013S, ITG-GRACE2010S, EIGEN-51C, EIGEN-5S, EGM2008 and EGM96. Different sets of pseudorange simulations along reference GOCE satellite orbital arcs were obtained using real orbits of the Global Positioning System satellites. These sets were the basic observation data used in the adjustment. The centimetre-accuracy Precise Science Orbit (PSO) for the GOCE satellite provided by the European Space Agency (ESA) was adopted as the GOCE reference orbit. By comparing various variants of the orbital solutions, the relative accuracy of the geopotential models in an orbital sense is determined. Full geopotential models were used in the adjustment process; solutions were also determined using truncated geopotential models, in which case the accuracy of the solutions was slightly enhanced. Different arc lengths were taken for the computation.
GOCE gravity gradient data for lithospheric modeling and geophysical exploration research
NASA Astrophysics Data System (ADS)
Bouman, Johannes; Ebbing, Jörg; Meekes, Sjef; Lieb, Verena; Fuchs, Martin; Schmidt, Michael; Fattah, Rader Abdul; Gradmann, Sofie; Haagmans, Roger
2013-04-01
GOCE gravity gradient data can improve modeling of the Earth's lithosphere and upper mantle, contributing to a better understanding of the Earth's dynamic processes. We present a method to compute user-friendly GOCE gravity gradient grids at mean satellite altitude, which are easier to use than the original GOCE gradients given in a rotating instrument frame. In addition, the GOCE gradients are combined with terrestrial gravity data to obtain high resolution grids of gravity field information close to the Earth's surface. We also present a case study for the North-East Atlantic margin, where we analyze the use of satellite gravity gradients by comparison with a well-constrained 3D density model that provides a detailed picture from the upper mantle to the top basement (base of sediments). We demonstrate how gravity gradients can increase confidence in the modeled structures by calculating the sensitivity of the model geometry and applied densities at different observation heights, e.g. satellite height and near the surface. Finally, this sensitivity analysis is used as input to study the Rub' al Khali desert in Saudi Arabia, which is a frontier area in terms of modeling and data availability. Here gravity gradient data help especially to set up the regional crustal structure, which in turn allows sedimentary thickness estimates and the regional heat-flow pattern to be refined. This can have implications for hydrocarbon exploration in the region.
NASA Astrophysics Data System (ADS)
Vergos, Georgios S.; Grebenitcharsky, Rossen S.; Natsiopoulos, Dimitrios A.; Al-Kherayef, Othman; Al-Muslmani, Bandar
2017-04-01
The availability of a unified and well-established national vertical system and frame is of utmost importance in support of everyday geodetic, surveying and engineering applications. Vertical reference system (VRS) modernization and unification has gained increased importance especially during the last years, due to the advent of gravity-field-dedicated missions and GOCE in particular, since for the first time a dataset of gravity field functionals of unprecedented accuracy has become available at a global scale. The vertical reference system of the Kingdom of Saudi Arabia (KSA) is outdated and exhibits significant tilts and biases, so that during the last couple of years an extensive effort has been put forth in order to re-measure the entire network by traditional levelling, establish new benchmarks (BMs), perform high-quality absolute and relative gravity observations, and construct new tide-gauge (TG) stations in both the Arab and Red Seas. The current work focuses on the combined analysis of the existing and recently collected terrestrial observations with satellite altimetry data and the latest GOCE-based Earth Geopotential Models (EGMs) in order to provide a pre-definition of the KSA VRS. To that end, a 30-year satellite altimetry time series is constructed for each TG station in order to derive both the Mean Sea Level (MSL) and the sea level trends. This information is analyzed, through Wavelet (WL) Multi-Resolution Analysis (MRA), together with the TG sea level records in order to determine annual, semi-annual and secular trends of the Red and Arab Sea variations. Finally, the so-derived trends and MSL are combined with local gravity observations at the TG BMs, levelling offsets between the TGs and the network BMs, levelling observations between the network BMs themselves, and GOCE-based EGM-derived geoid heights and potential values. The validation of the GOCE contribution and of the satellite-altimetry-derived MSL and trends is based on a simultaneous adjustment of the entire KSA vertical network, keeping various TG stations fixed and investigating the distortions introduced in the adjusted BM orthometric heights. Finally, a pre-definition of the KSA VRS is detailed in terms of vertical offsets and potential differences δWo relative to the recently adopted conventional zero-level geopotential value of the IAG. Conclusions regarding the contribution of satellite altimetry and GOCE are drawn, along with the necessary information for the definition of the KSA vertical datum and its connection to an International Height Reference System (IHRS).
Gravity gradient preprocessing at the GOCE HPF
NASA Astrophysics Data System (ADS)
Bouman, J.; Rispens, S.; Gruber, T.; Schrama, E.; Visser, P.; Tscherning, C. C.; Veicherts, M.
2009-04-01
One of the products derived from the GOCE observations are the gravity gradients. These gravity gradients are provided in the Gradiometer Reference Frame (GRF) and are calibrated in-flight using satellite shaking and star sensor data. In order to use these gravity gradients for applications in Earth sciences and gravity field analysis, additional pre-processing needs to be done, including corrections for temporal gravity field signals to isolate the static gravity field part, screening for outliers, calibration by comparison with existing external gravity field information, and error assessment. The temporal gravity gradient corrections consist of tidal and non-tidal corrections. These are all generally below the gravity gradient error level, which is predicted to show a 1/f behaviour for low frequencies. In the outlier detection the 1/f error is compensated for by subtracting a local median from the data, while the data error is assessed using the median absolute deviation. The local median acts as a high-pass filter and, like the median absolute deviation, is robust. Three different methods have been implemented for the calibration of the gravity gradients. All three methods use a high-pass filter to compensate for the 1/f gravity gradient error. The baseline method uses state-of-the-art global gravity field models, and the most accurate results are obtained if star sensor misalignments are estimated along with the calibration parameters. A second calibration method uses GOCE GPS data to estimate a low-degree gravity field model as well as gravity gradient scale factors. Both methods allow gravity gradient scale factors to be estimated down to the 10^-3 level. The third calibration method uses highly accurate terrestrial gravity data in selected regions to validate the gravity gradient scale factors, focussing on the measurement band. Gravity gradient scale factors may be estimated down to the 10^-2 level with this method.
Experiences with GOCE models in SONMICAT-BCN calibration site.
NASA Astrophysics Data System (ADS)
Martinez-Benjamin, J. J.; Termens, A.; Pros, F.
2016-12-01
SONMICAT - the integrated sea level observation system of Catalonia - aims at providing high-quality continuous measurements of sea and land levels at the Catalan coast from tide gauges and from modern geodetic techniques, for studies of long-term sea level trends but also, for instance, for the calibration of satellite altimeters. This synergy is indeed the only way to get a clear and unambiguous picture of what is actually going on at the coast of Catalonia. There is currently a gap in sea level data in the coastal area of Catalonia, although several groups have started to do some work. SONMICAT will fill it and, as a goal, will be a regional implementation and densification of GGOS. In the framework of the SONMICAT project, the sea level infrastructure has been improved by providing the harbour of Barcelona with 3 tide gauges and a nearby GPS station. Furthermore, an airborne LiDAR campaign was carried out with two strips along two ICESat target tracks. The work focuses on the comparison of the GOCE gravity field solutions with existing local and regional gravity field models over the area of Barcelona harbour. The study will estimate how GOCE performs at the SONMICAT-BCN calibration site in order to prepare future geomatics applications.
NASA Astrophysics Data System (ADS)
Piretzidis, Dimitrios; Sideris, Michael G.
2017-09-01
Filtering and signal processing techniques have been widely used in the processing of satellite gravity observations to reduce measurement noise and correlation errors. The parameters and types of filters used depend on the statistical and spectral properties of the signal under investigation. Filtering is usually applied in a non-real-time environment. The present work focuses on the implementation of an adaptive filtering technique to process satellite gravity gradiometry data for gravity field modeling. Adaptive filtering algorithms are commonly used in communication systems, noise and echo cancellation, and biomedical applications. Two independent studies have been performed to introduce adaptive signal processing techniques and test the performance of the least-mean-squares (LMS) adaptive algorithm for filtering satellite measurements obtained by the gravity field and steady-state ocean circulation explorer (GOCE) mission. In the first study, a Monte Carlo simulation is performed in order to gain insights into the implementation of the LMS algorithm on data with spectral behavior close to that of real GOCE data. In the second study, the LMS algorithm is applied to real GOCE data. Experiments are also performed to determine suitable filtering parameters. Only the four accurately measured components of the full GOCE gravity gradient tensor of the disturbing potential are used. The characteristics of the filtered gravity gradients are examined in the time and spectral domains. The obtained filtered GOCE gravity gradients show an agreement of 63-84 mEötvös (depending on the gravity gradient component), in terms of RMS error, when compared to the gravity gradients derived from the EGM2008 geopotential model. Spectral-domain analysis of the filtered gradients shows that the adaptive filters slightly suppress frequencies in the bandwidth of approximately 10-30 mHz. The limitations of the adaptive LMS algorithm are also discussed. The tested filtering algorithm can be connected to and employed in the first computational steps of the space-wise approach, where a time-wise Wiener filter is applied at the first stage of GOCE gravity gradient filtering. The results of this work can be extended to other adaptive filtering algorithms, such as the recursive least-squares and recursive least-squares lattice filters.
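To make the algorithm concrete, a generic LMS adaptive filter is sketched below; the number of taps, the step size and the reference-signal configuration are illustrative choices and are not the settings used in the study.

    import numpy as np

    def lms_filter(x, d, n_taps=32, mu=0.005):
        """LMS adaptive filter: x is the reference input, d the desired signal."""
        w = np.zeros(n_taps)
        y = np.zeros_like(d)
        e = np.zeros_like(d)
        for n in range(n_taps, len(d)):
            x_vec = x[n - n_taps:n][::-1]       # most recent samples first
            y[n] = w @ x_vec                    # filter output
            e[n] = d[n] - y[n]                  # instantaneous error
            w += 2.0 * mu * e[n] * x_vec        # LMS weight update
        return y, e

    # Toy usage: a slow signal buried in correlated noise, with a noise reference
    rng = np.random.default_rng(3)
    t = np.arange(20000)
    signal = np.sin(2 * np.pi * t / 4000.0)
    noise = np.convolve(rng.standard_normal(t.size), np.ones(5) / 5, mode='same')
    d = signal + noise
    y, e = lms_filter(x=noise, d=d)             # e approximates the underlying signal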
ESA BRAT (Broadview Radar Altimetry Toolbox) and GUT (GOCE User Toolbox) toolboxes
NASA Astrophysics Data System (ADS)
Benveniste, J.; Ambrozio, A.; Restano, M.
2016-12-01
The Broadview Radar Altimetry Toolbox (BRAT) is a collection of tools designed to facilitate the processing of radar altimetry data from previous and current altimetry missions, including the upcoming Sentinel-3A L1 and L2 products. A tutorial is included, providing plenty of use cases. BRAT's future release (4.0.0) is planned for September 2016. Based on community feedback, the front-end has been further improved and simplified, whereas the capability to use BRAT in conjunction with MATLAB/IDL or C/C++/Python/Fortran, allowing users to obtain the desired data while bypassing the data-formatting hassle, remains unchanged. Several kinds of computations can be done within BRAT involving the combination of data fields, which can be saved for future use, either by using embedded formulas, including those from oceanographic altimetry, or by implementing ad-hoc Python modules created by users to meet their needs. BRAT can also be used to quickly visualise data, or to translate data into other formats, e.g. from NetCDF to raster images. The GOCE User Toolbox (GUT) is a compilation of tools for the use and analysis of GOCE gravity field models. It facilitates using, viewing and post-processing GOCE L2 data and allows gravity field data, in conjunction and consistently with any other auxiliary data set, to be pre-processed by beginners in gravity field processing, for oceanographic and hydrologic as well as for solid earth applications at both regional and global scales. Hence, GUT facilitates the extensive use of data acquired during the GRACE and GOCE missions. In the current 3.0 version, GUT has been outfitted with a graphical user interface allowing users to visually program data processing workflows. Further enhancements aiming at facilitating the use of gradients, the anisotropic diffusive filtering, and the computation of Bouguer and isostatic gravity anomalies have been introduced. Packaged with GUT is also GUT's VCM (Variance-Covariance Matrix) tool for analysing GOCE's variance-covariance matrices. The BRAT and GUT toolboxes can be freely downloaded, along with ancillary material, at https://earth.esa.int/brat and https://earth.esa.int/gut.
EUPOS and SLR Contribution to GOCE Mission
NASA Astrophysics Data System (ADS)
Balodis, J.; Caunite, M.; Janpaule, I.; Kenyeres, A.; Rubans, A.; Silabriedis, G.; Rosenthal, G.; Zarinsjh, A.; Zvirgzds, J.; Abel, M.
2010-12-01
Following the interest of geodesists from several East European countries in the successful use of SAPOS in Germany, the European Position Determination System EUPOS® project was established in 2002 under the leadership of Gerd Rosenthal, Berlin State Department of Urban Development. Currently the ground-based GNSS augmentation system EUPOS® sub-networks have been developed successfully in 17 countries, and several other countries have expressed the wish to join. EUPOS® is widely used in many practical applications. Two proposals - "EUPOS® Contribution to GOCE Mission" (Id 4307) and "GOCE Observations using SLR for LEO satellites" (Id 4333) - were submitted to ESA when, in autumn 2006, ESA invited researchers to submit proposals for GOCE mission applications. This article reports on the work which has been done in the EUPOS® community and at the University of Latvia. During the last 3 years the EUPOS® sub-networks have been completed (Poland, Lithuania, Slovakia, Bulgaria) and tied to the national levelling networks, and detailed system behaviour has been analysed on the basis of the EUPOS®-Riga network. The development of SLR tracking for LEO satellites is presented; initially it was developed for GOCE spacecraft positioning. However, SLR has so far only been able to observe satellites at night.
NASA Astrophysics Data System (ADS)
Bobojć, Andrzej; Drożyner, Andrzej; Rzepecka, Zofia
2017-04-01
This work compares the performance of selected geopotential models in the dynamic orbit estimation of the Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) satellite. This was realized by fitting estimated orbital arcs to the official centimeter-accuracy GOCE kinematic orbit provided by the European Space Agency. The Cartesian coordinates of the kinematic orbit were treated as observations in the orbit estimation. The initial satellite state vector components were corrected in an iterative process with respect to the J2000.0 inertial reference frame using the given geopotential model, the models describing the remaining gravitational perturbations, and the solar radiation pressure. For the obtained solutions, the RMS values of the orbital residuals were computed; these residuals result from the difference between the determined orbit and the reference GOCE kinematic orbit. The performance of the selected gravity models was also assessed for various orbital arc lengths. Additionally, RMS fit values were obtained for some gravity models truncated at a given degree and order of the spherical harmonic expansion. The advantage of using the kinematic orbit is its independence from any a priori dynamical models. For the research, the GOCE-independent gravity models HUST-Grace2016s, ITU_GRACE16, ITSG-Grace2014s, ITSG-Grace2014k, GGM05S, Tongji-GRACE01, ULUX_CHAMP2013S, ITG-GRACE2010S, EIGEN-51C, EIGEN5S, EGM2008 and EGM96 were adopted.
NASA Astrophysics Data System (ADS)
Janpaule, Inese; Haritonova, Diana; Balodis, Janis; Zarins, Ansis; Silabriedis, Gunars; Kaminskis, Janis
2015-03-01
Development of a digital zenith telescope prototype, improved zenith camera construction and analysis of experimental vertical deflection measurements for the improvement of the Latvian geoid model have been performed at the Institute of Geodesy and Geoinformatics (GGI), University of Latvia. GOCE satellite data were used to compute a geoid model for the Riga region, and the European gravimetric geoid model EGG97 together with 102 GNSS/levelling data points were used as input data in the computation of the Latvian geoid model.
New geoid of Greenland - a case study of terrain and ice effects, GOCE and local sea level data
NASA Astrophysics Data System (ADS)
Forsberg, R.; Jensen, T.
2014-12-01
Making an accurate geoid model of Greenland has always been a challenge, due to the ice sheet and glaciers and the rough topography and deep fjords in the ice-free parts. Terrestrial gravity coverage has for the same reasons been relatively sparse, with an older NRL high-level airborne survey being the only gravity field data over the interior, and terrain and ice thickness models being insufficient both in terms of resolution and accuracy. This data situation has in recent years changed substantially, first of all due to GOCE, but also due to new DTU-Space and NASA IceBridge airborne gravity, ice thickness data from IceBridge and European airborne measurements, and new terrain models from ASTER, SPOT-5 and digital photogrammetry. In the paper we use all available data to make a new geoid of Greenland and the surrounding ocean regions, using remove-restore techniques for ice and topography, spherical FFT techniques and downward continuation by least squares collocation. The impact of GOCE and the new terrestrial data yields a much improved geoid. Due to the lack of levelling data connecting the scattered towns, the new geoid is validated by local sea level and dynamic ocean topography data, and by specially collected GPS-tide gauge profile data along fjords. The comparisons show significant improvements over EGM08 and older geoid models, and also highlight the problems of global sea level models, especially in sea-ice covered regions, and the definition of a new consistent vertical datum for Greenland.
NASA Astrophysics Data System (ADS)
Odera, Patroba Achola; Fukuda, Yoichi
2017-09-01
The performance of Gravity field and steady-state Ocean Circulation Explorer (GOCE) global gravity field models (GGMs) at the end of the GOCE mission, covering 42 months of data, is evaluated using geoid undulations and free-air gravity anomalies over Japan, including six sub-regions (Hokkaido, north Honshu, central Honshu, west Honshu, Shikoku and Kyushu). Seventeen GOCE-based GGMs are evaluated and compared with EGM2008. The evaluations are carried out at spherical harmonic degrees 150, 180, 210, 240 and 270. Results show that EGM2008 performs better than GOCE and related GGMs in Japan and in three sub-regions (Hokkaido, central Honshu and Kyushu). However, GOCE and related GGMs perform better than EGM2008 in north Honshu, west Honshu and Shikoku up to degree 240. This means that GOCE data can improve the geoid model over half of Japan. The improvement is only evident between degrees 150 and 240, beyond which EGM2008 performs better than the GOCE GGMs in all six regions. In general, the latest GOCE GGMs (releases 4 and 5) perform better than the earlier GOCE GGMs (releases 1, 2 and 3), indicating the contribution of data collected by GOCE in the last months before the mission ended on 11 November 2013. The results indicate that a more accurate geoid model over Japan is achievable based on a combination of GOCE, EGM2008 and terrestrial gravity data sets. [Figure caption: Standard deviations of the differences between observed and GGM-implied (a) free-air gravity anomalies and (b) geoid undulations over Japan; n denotes the spherical harmonic degree.]
Moho topography, ranges and folds of Tibet by analysis of global gravity models and GOCE data
Shin, Young Hong; Shum, C. K.; Braitenberg, Carla; Lee, Sang Mook; Na, Sung-Ho; Choi, Kwang Sun; Hsu, Houtse; Park, Young-Sue; Lim, Mutaek
2015-01-01
The determination of the crustal structure is essential in geophysics, as it gives insight into the geohistory, tectonic environment, geohazard mitigation, etc. Here we present the latest advance in three-dimensional modeling of the Tibetan Mohorovičić discontinuity (topography and ranges) and its deformation (fold), revealed by analyzing gravity data from the GOCE mission. Our study shows noticeable advances in the estimated Tibetan Moho model, which is superior to results obtained with the gravity models available prior to GOCE. The higher quality gravity field of GOCE is reflected in the Moho solution: we find that the Moho is deeper than 65 km, about twice the thickness of normal continental crust, beneath most of the Qinghai-Tibetan plateau, while the deepest Moho, up to 82 km, is located in western Tibet. The amplitude of the Moho fold is estimated to range from −9 km to 9 km with a standard deviation of ~2 km. The improved GOCE-derived Moho signals reveal a clear directionality of the Moho ranges and Moho fold structure, orthogonal to the deformation rates observed by GPS. This geophysical feature, clearly more evident than in the estimates based on earlier gravity models, indicates that it is the result of the large-scale compressional tectonic process.
NASA Astrophysics Data System (ADS)
Grombein, Thomas; Seitz, Kurt; Heck, Bernhard
2010-05-01
The basic observables of the recently launched satellite gravity gradiometry mission GOCE are the second derivatives of the Earth's gravitational potential (components of the full Marussi tensor). These gravity gradients are highly sensitive to mass anomalies and mass transports in the Earth system. The high- and mid-frequency components of the gradients are mainly affected by the topographic and isostatic masses, and the downward continuation of the gradients is a rather difficult task. In order to stabilize this process the gradients have to be smoothed by applying topographic and isostatic reductions. In the space domain the modelling of topographic effects is based on the evaluation of functionals of the Newton integral; in the case of GOCE the second-order derivatives are required. Practical numerical computations rely on a discretisation of the Earth's topography and a subdivision into different mass elements. Considering geographical grid lines, tesseroids (spherical prisms) are well suited for the modelling of the topographic masses. Since the respective volume integrals cannot be solved in an elementary way in the case of tesseroids, numerical approaches such as a Taylor series expansion, Gauss-Legendre cubature or a point-mass approximation have to be applied. In this paper the topography is represented by the global Digital Terrain Model DTM2006.0, which was also used for the compilation of the Earth Gravitational Model EGM2008. In addition, each grid element of the DTM is classified as land, sea or ice, providing further information on the density within the evaluation of topographic effects. The computation points are located on a GOCE-like circular orbit. The mass elements are arranged on a spherical Earth of constant radius and, in a more realistic configuration, on the surface of an ellipsoid of revolution. The results of each modelling version are presented and compared to each other with regard to computation time and accuracy. Acknowledgements: This research has been financially supported by the German Federal Ministry of Education and Research (BMBF) within the REAL-GOCE project of the GEOTECHNOLOGIEN Programme.
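To illustrate the coarsest of the numerical approaches mentioned above, the sketch below replaces a tesseroid by a point mass at its geometric centre and evaluates the full second-derivative tensor at a GOCE-like altitude; the grid size, thickness, density and constants are illustrative assumptions only.

import numpy as np

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def point_mass_gradient(p, q, mass):
    """Second-derivative (Marussi) tensor of a point mass at q, evaluated at p
    (both given as Cartesian coordinates in metres)."""
    r = np.asarray(p, float) - np.asarray(q, float)
    l = np.linalg.norm(r)
    return G * mass * (3.0 * np.outer(r, r) / l**5 - np.eye(3) / l**3)

def tesseroid_as_point_mass(lat, lon, r_mid, volume, density):
    """Crude point-mass replacement of a tesseroid: all mass at its centre."""
    x = r_mid * np.cos(lat) * np.cos(lon)
    y = r_mid * np.cos(lat) * np.sin(lon)
    z = r_mid * np.sin(lat)
    return np.array([x, y, z]), volume * density

# Illustrative use: one ~1x1 degree, 1 km thick crustal block seen from 255 km altitude
centre, mass = tesseroid_as_point_mass(np.radians(45.0), np.radians(10.0),
                                       6371e3, 1000.0 * 111e3 * 111e3, 2670.0)
p_sat = centre / np.linalg.norm(centre) * (6371e3 + 255e3)
print(point_mass_gradient(p_sat, centre, mass) * 1e9)   # in Eötvös (1 E = 1e-9 s^-2)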
A Least Squares Collocation Approach with GOCE gravity gradients for regional Moho-estimation
NASA Astrophysics Data System (ADS)
Rieser, Daniel; Mayer-Guerr, Torsten
2014-05-01
The depth of the Moho discontinuity is commonly derived from either seismic observations, gravity measurements or combinations of both. In this study, we aim to use the gravity gradient measurements of the GOCE satellite mission in a Least Squares Collocation (LSC) approach for the estimation of the Moho depth on a regional scale. Due to its mission configuration and measurement setup, GOCE is able to contribute valuable information in particular in the medium wavelengths of the gravity field spectrum, which is also of special interest for the crust-mantle boundary. In contrast to other studies we use the full information of the gradient tensor in all three dimensions. The problem is formulated as isostatically compensated topography according to the Airy-Heiskanen model. By using a topography model in spherical harmonics representation the topographic influences can be reduced from the gradient observations. Under the assumption of constant mantle and crustal densities, surface densities are directly derived by LSC on a regional scale, which in turn are converted into Moho depths. First investigations proved the ability of this method to resolve the gravity inversion problem already with a small amount of GOCE data, and comparisons with other seismic and gravimetric Moho models for the European region show promising results. With the recently reprocessed GOCE gradients, an improved data set shall be used for the derivation of the Moho depth. In this contribution the processing strategy will be introduced and the most recent developments and results using the currently available GOCE data will be presented.
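For reference, under the Airy-Heiskanen model invoked above a topographic column of height h is compensated by a crustal root t below the normal crustal thickness; with assumed constant crustal and mantle densities (the numerical values below are typical textbook figures, not those of the study) this reads

t = \frac{\rho_c}{\rho_m - \rho_c}\, h , \qquad \text{e.g. } \rho_c = 2670\ \mathrm{kg\,m^{-3}},\ \rho_m = 3270\ \mathrm{kg\,m^{-3}} \;\Rightarrow\; t \approx 4.45\, h .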
GOCE User Toolbox and Tutorial
NASA Astrophysics Data System (ADS)
Benveniste, Jérôme; Knudsen, Per
2016-07-01
The GOCE User Toolbox GUT is a compilation of tools for the utilisation and analysis of GOCE Level 2 products. GUT supports applications in Geodesy, Oceanography and Solid Earth Physics. The GUT Tutorial provides information and guidance on how to use the toolbox for a variety of applications. GUT consists of a series of advanced computer routines that carry out the required computations. It may be used on Windows PCs, UNIX/Linux workstations, and Mac. The toolbox is supported by the GUT Algorithm Description and User Guide and the GUT Install Guide. A set of a-priori data and models is made available as well. Without any doubt the development of the GOCE User Toolbox has played a major role in paving the way to successful use of the GOCE data for oceanography. GUT version 2.2 was released in April 2014 and, besides some bug fixes, it adds the capability to compute the Simple Bouguer Anomaly (Solid Earth). During this fall the new GUT version 3 has been released. GUTv3 was further developed through a collaborative effort in which the scientific communities participate, aiming at an implementation of the remaining functionalities to facilitate a wider span of research in the fields of Geodesy, Oceanography and Solid Earth studies. Accordingly, GUT version 3 provides: an attractive and easy-to-use Graphical User Interface (GUI) for the toolbox; further software functionalities, such as facilities for the use of gradients, anisotropic diffusive filtering and the computation of Bouguer and isostatic gravity anomalies; and an associated GUT VCM tool for analysing the GOCE variance-covariance matrices.
The BRAT and GUT Couple: Broadview Radar Altimetry and GOCE User Toolboxes
NASA Astrophysics Data System (ADS)
Benveniste, J.; Restano, M.; Ambrózio, A.
2017-12-01
The Broadview Radar Altimetry Toolbox (BRAT) is a collection of tools designed to facilitate the processing of radar altimetry data from previous and current altimetry missions, including Sentinel-3A L1 and L2 products. A tutorial is included, providing plenty of use cases. BRAT's next release (4.2.0) is planned for October 2017. Based on community feedback, the front-end has been further improved and simplified, whereas the capability to use BRAT in conjunction with MATLAB/IDL or C/C++/Python/Fortran, allowing users to obtain the desired data while bypassing the data-formatting hassle, remains unchanged. Several kinds of computations can be done within BRAT involving the combination of data fields, which can be saved for future use, either by using embedded formulas, including those from oceanographic altimetry, or by implementing ad-hoc Python modules created by users to meet their needs. BRAT can also be used to quickly visualise data, or to translate data into other formats, e.g. from NetCDF to raster images. The GOCE User Toolbox (GUT) is a compilation of tools for the use and analysis of GOCE gravity field models. It facilitates using, viewing and post-processing GOCE L2 data and allows gravity field data, in conjunction and consistently with any other auxiliary data set, to be pre-processed by beginners in gravity field processing, for oceanographic and hydrologic as well as for solid earth applications at both regional and global scales. Hence, GUT facilitates the extensive use of data acquired during the GRACE and GOCE missions. In the current 3.1 version, GUT has been outfitted with a graphical user interface allowing users to visually program data processing workflows. Further enhancements aiming at facilitating the use of gradients, the anisotropic diffusive filtering, and the computation of Bouguer and isostatic gravity anomalies have been introduced. Packaged with GUT is also GUT's Variance-Covariance Matrix tool (VCM). The BRAT and GUT toolboxes can be freely downloaded, along with ancillary material, at https://earth.esa.int/brat and https://earth.esa.int/gut.
NASA Astrophysics Data System (ADS)
Alothman, Abdulaziz; Elsaka, Basem
The gravity field models from the GRACE and GOCE missions have increased the knowledge of the Earth's global gravity field. The GOCE mission has provided accuracies of about 1-2 cm and about 1 mGal in the global geoid and gravity anomaly, respectively. However, all wavelength ranges of the gravity field spectrum cannot be determined from satellite gravimetry alone but require, in addition, the available terrestrial gravity data. In this contribution, we use a gravity network of 42 first-order absolute gravity stations, observed with a LaCoste & Romberg gravimeter during the period 1967-1969 by the Ministry of Petroleum and Mineral Resources, to validate the GOCE gravity models in order to gain more detailed regional gravity information. The network stations are distributed all over the country with a spacing of about 200 km. The results show that the geoid heights and gravity anomalies determined from terrestrial gravity data agree with the GOCE-based models and provide additional information to the satellite gravity solutions.
GOCE User Toolbox and Tutorial
NASA Astrophysics Data System (ADS)
Knudsen, Per; Benveniste, Jerome
2017-04-01
The GOCE User Toolbox GUT is a compilation of tools for the utilisation and analysis of GOCE Level 2 products. GUT supports applications in Geodesy, Oceanography and Solid Earth Physics. The GUT Tutorial provides information and guidance on how to use the toolbox for a variety of applications. GUT consists of a series of advanced computer routines that carry out the required computations. It may be used on Windows PCs, UNIX/Linux workstations, and Mac. The toolbox is supported by the GUT Algorithm Description and User Guide and the GUT Install Guide. A set of a-priori data and models is made available as well. Without any doubt the development of the GOCE User Toolbox has played a major role in paving the way to successful use of the GOCE data for oceanography. GUT version 2.2 was released in April 2014 and, besides some bug fixes, it adds the capability to compute the Simple Bouguer Anomaly (Solid Earth). During this fall the new GUT version 3 has been released. GUTv3 was further developed through a collaborative effort in which the scientific communities participate, aiming at an implementation of the remaining functionalities to facilitate a wider span of research in the fields of Geodesy, Oceanography and Solid Earth studies. Accordingly, GUT version 3 provides: an attractive and easy-to-use Graphical User Interface (GUI) for the toolbox; further software functionalities, such as facilities for the use of gradients, anisotropic diffusive filtering and the computation of Bouguer and isostatic gravity anomalies; and an associated GUT VCM tool for analysing the GOCE variance-covariance matrices.
GOCE User Toolbox and Tutorial
NASA Astrophysics Data System (ADS)
Knudsen, Per; Benveniste, Jerome; Team Gut
2016-04-01
The GOCE User Toolbox GUT is a compilation of tools for the utilisation and analysis of GOCE Level 2 products. GUT supports applications in Geodesy, Oceanography and Solid Earth Physics. The GUT Tutorial provides information and guidance on how to use the toolbox for a variety of applications. GUT consists of a series of advanced computer routines that carry out the required computations. It may be used on Windows PCs, UNIX/Linux workstations, and Mac. The toolbox is supported by the GUT Algorithm Description and User Guide and the GUT Install Guide. A set of a-priori data and models is made available as well. Without any doubt the development of the GOCE User Toolbox has played a major role in paving the way to successful use of the GOCE data for oceanography. GUT version 2.2 was released in April 2014 and, besides some bug fixes, it adds the capability to compute the Simple Bouguer Anomaly (Solid Earth). During this fall the new GUT version 3 has been released. GUTv3 was further developed through a collaborative effort in which the scientific communities participate, aiming at an implementation of the remaining functionalities to facilitate a wider span of research in the fields of Geodesy, Oceanography and Solid Earth studies. Accordingly, GUT version 3 provides: an attractive and easy-to-use Graphical User Interface (GUI) for the toolbox; further software functionalities, such as facilities for the use of gradients, anisotropic diffusive filtering and the computation of Bouguer and isostatic gravity anomalies; and an associated GUT VCM tool for analysing the GOCE variance-covariance matrices.
Assessment and Improvement of GOCE based Global Geopotential Models Using Wavelet Decomposition
NASA Astrophysics Data System (ADS)
Erol, Serdar; Erol, Bihter; Serkan Isik, Mustafa
2016-07-01
The contribution of recent Earth gravity field satellite missions, specifically the GOCE mission, leads to significant improvement in the quality of gravity field models in terms of both accuracy and resolution. However, the performance and quality of each released model vary not only with the spatial location on the Earth but also across the different bands of the spectral expansion. Therefore, assessing the global models with validations using in-situ data in various territories on the Earth is essential for clarifying their actual local performance. Besides this, spectral evaluation and quality assessment of the signal in each part of the spherical harmonic spectrum are essential for judging the commission error content of a model and for determining the optimal maximum degree that yields the best results. The latter analyses also provide a perspective on, and comparison of, the global behaviour of the models, and the opportunity to report the sequential improvement of the models with the mission developments and hence the contribution of the new mission data. In this study, a review of spectral assessment results of the recently released GOCE-based global geopotential models DIR-R5 and TIM-R5, enhanced using EGM2008 as the reference model, is provided against terrestrial data in Turkey. Besides reporting the GOCE mission's contribution to the models in Turkish territory, the aim is a possible improvement in the spectral quality of those parts of these models that are highly contaminated by noise, via wavelet decomposition. In the analyses, the motivation is to achieve an optimal amount of improvement by conserving the useful component of the GOCE signal as much as possible while fusing the filtered GOCE-based models with EGM2008 in the appropriate spectral bands. The investigation also contains an assessment of the coherence and correlation between the Earth gravity field parameters (free-air gravity anomalies and geoid undulations) derived from the validated geopotential models and terrestrial data (GPS/levelling, terrestrial gravity observations, DTM, etc.), as well as the WGM2012 products. In conclusion, the numerical results clarify the performance of the assessed models in Turkish territory and verify the potential of wavelet decomposition for the improvement of geopotential models.
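As a rough illustration of the band-wise wavelet filtering idea described above, the sketch below decomposes a one-dimensional signal with PyWavelets, suppresses selected noisy detail bands and reconstructs it; the mother wavelet, decomposition level and dropped bands are illustrative assumptions, not the settings of the study.

import numpy as np
import pywt

def suppress_noisy_bands(signal, wavelet="db4", level=5, drop_levels=(1,)):
    """Decompose a 1-D signal, zero out selected (noisy) detail bands,
    and reconstruct; drop_levels counts from the finest detail band."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # coeffs = [approximation, detail_level, ..., detail_1 (finest)]
    for d in drop_levels:
        coeffs[-d] = np.zeros_like(coeffs[-d])
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Illustrative use: smooth long-wavelength signal plus high-frequency noise
x = np.linspace(0.0, 1.0, 1024)
noisy = np.sin(2 * np.pi * 3 * x) + 0.2 * np.random.randn(x.size)
smoothed = suppress_noisy_bands(noisy)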
GOCE: Mission Overview and Early Results (Invited)
NASA Astrophysics Data System (ADS)
Rummel, R. F.; Muzi, D.; Drinkwater, M. R.; Floberghagen, R.; Fehringer, M.
2009-12-01
The Gravity field and steady-state Ocean Circulation Explorer (GOCE) mission is the first Earth Explorer Core mission of the Living Planet Programme of the European Space Agency (ESA). The primary objective of the GOCE mission is to provide global and regional models of the Earth gravity field and the geoid, its reference equipotential surface, with unprecedented spatial resolution and accuracy. GOCE was launched successfully on 17 March 2009 from the Plesetsk Cosmodrome in northern Russia onboard a Rockot launch vehicle. System commissioning and payload calibration have been completed and the satellite is decaying to its initial measurement operating altitude of 255 km, which is expected to be reached in mid-September 2009. After one week of final payload calibration, GOCE will enter its first 6-month phase of uninterrupted science measurements at that altitude. This presentation will recall GOCE's main goals and its major development milestones. In addition, a description of the generated data products and some highlights of the satellite performance will be given. [Figure: Artist's impression of the GOCE satellite in flight (courtesy AOES Medialab).]
Precise Orbit Determination of the GOCE Re-Entry Phase
NASA Astrophysics Data System (ADS)
Gini, Francesco; Otten, Michiel; Springer, Tim; Enderle, Werner; Lemmens, Stijn; Flohrer, Tim
2015-03-01
During the last days of the GOCE mission, after the GOCE spacecraft ran out of fuel, it slowly decayed before finally re-entering the atmosphere on 11 November 2013. As an integrated part of the AOCS, GOCE carried a GPS receiver that was in operation during the re-entry phase. This feature provided a unique opportunity for Precise Orbit Determination (POD) analysis. As part of the activities carried out by the Navigation Support Office (HSO-GN) at ESOC, precise ephemerides of the GOCE satellite have been reconstructed for the entire re-entry phase based on the available GPS observations of the onboard LAGRANGE receiver. All the data available from the moment the thruster was switched off on 21 October 2013 to the last available telemetry downlink on 10 November 2013 have been processed, for a total of 21 daily arcs. For this period a dedicated processing sequence has been defined and implemented within the ESA/ESOC NAvigation Package for Earth Observation Satellites (NAPEOS) software. The computed results show a post-fit RMS of the GPS undifferenced carrier phase residuals (ionospheric-free linear combination) between 6 and 14 mm for the first 16 days, which then progressively increases up to about 80 mm for the last available days. An orbit comparison with the Precise Science Orbits (PSO) generated at the Astronomical Institute of the University of Bern (AIUB, Bern, Switzerland) shows an average difference of around 9 cm for the first 8 daily arcs, progressively increasing up to 17 cm for the following days. During this re-entry phase (21 October - 10 November 2013) a substantial drop in the GOCE altitude is observed, from about 230 km down to 130 km where the last GPS measurements were taken. During this orbital decay an increase by a factor of 100 in the aerodynamic acceleration profile is observed. In order to limit the mis-modelling of the non-gravitational forces (radiation pressure and aerodynamic effects), the newly developed software ARPA (Aerodynamics and Radiation Pressure Analysis) has been adopted to compute the forces acting on GOCE. An overview of the software techniques and the results of its implementation is presented in this paper. The use of ARPA modelling leads to an average reduction of the carrier phase post-fit RMS of about 2 mm and a decrease of more than 1 cm in the difference with respect to the PSO orbits.
NASA Astrophysics Data System (ADS)
Forsberg, R.; Olesen, A. V.
2013-12-01
DTU-Space has for many years carried out large-area airborne surveys over polar, tropical and temperate regions, especially for geoid determination and global geopotential models. Recently we have started flying two gravimeters (LCR and Chekan-AM) side by side for increased reliability and redundancy. Typical gravity results are at the 2 mGal rms level, translating into 5-10 cm accuracy in the geoid. However, in rough mountainous areas results can be noisier, mainly due to long-period mountain waves and turbulence. In the paper we outline results of recent challenging campaigns in Nepal (2010) and Antarctica (Antarctic Peninsula and East Antarctica, 2010-13). The latest Antarctic campaign 2012/13, carried out in cooperation with the British Antarctic Survey, Norwegian Polar Institute, and the Argentine Antarctic Institute, involved air drops of fuel to a remote field camp in the Recovery Lakes region, one of the least explored regions of the deep interior of Antarctica. The airborne data collected are validated by cross-over comparisons and comparisons to independent data (IceBridge), and serve at the same time as an independent validation of GOCE satellite gravity data, confirming that the satellite data contain information at half-wavelengths down to 80 km. With no bias between the airborne data and GOCE, airborne gravimetry is perfectly suited to cover the GOCE data gap south of 83° S. We recommend that an international, coordinated airborne gravity effort be carried out over the south polar gap as soon as possible, to ensure a uniform global accuracy of future GOCE-heritage geopotential models.
NASA Astrophysics Data System (ADS)
Amjadiparvar, Babak; Sideris, Michael
2015-04-01
Precise gravimetric geoid heights are required when the unification of vertical datums is performed using the Geodetic Boundary Value Problem (GBVP) approach. Five generations of Global Geopotential Models (GGMs) derived from Gravity field and steady-state Ocean Circulation Explorer (GOCE) observations have been computed and released so far (available via IAG's International Centre for Global Earth Models, ICGEM, http://icgem.gfz-potsdam.de/ICGEM/). The performance of many of these models with respect to geoid determination has been studied in order to select the best performing model to be used in height datum unification in North America. More specifically, Release-3, 4 and 5 of the GOCE-based global geopotential models have been evaluated using GNSS-levelling data as independent control values. Comparisons against EGM2008 show that each successive release improves upon the previous one, with Release-5 models showing an improvement over EGM2008 in Canada and CONUS between spherical harmonic degrees 100 and 210. In Alaska and Mexico, a considerable improvement over EGM2008 was brought by the Release-5 models when used up to spherical harmonic degrees of 250 and 280, respectively. The positive impact of the Release-5 models was also felt when a gravimetric geoid was computed using the GOCE-based GGMs together with gravity and topography data in Canada. This geoid model, with appropriately modified Stokes kernel between spherical harmonic degrees 190 and 260, performed better than the official Canadian gravimetric geoid model CGG2013, thus illustrating the advantages of using the latest release GOCE-based models for vertical datum unification in North America.
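The control relation underlying such GNSS-levelling comparisons is the standard one: at each benchmark the GGM-implied geoid height N is confronted with the ellipsoidal height h from GNSS and the levelled orthometric height H, and a constant part of the misfit can be read as a local datum offset. In the schematic notation introduced here for illustration,

\epsilon_i = h_i - H_i - N_i^{\mathrm{GGM}}, \qquad \hat{\delta} = \frac{1}{n}\sum_{i=1}^{n}\epsilon_i, \qquad \sigma = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\bigl(\epsilon_i - \hat{\delta}\bigr)^2},

where the standard deviation of the de-biased residuals is the quantity usually quoted when ranking the models.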
GOCE, Satellite Gravimetry and Antarctic Mass Transports
NASA Astrophysics Data System (ADS)
Rummel, Reiner; Horwath, Martin; Yi, Weiyong; Albertella, Alberta; Bosch, Wolfgang; Haagmans, Roger
2011-09-01
In 2009 the European Space Agency satellite mission GOCE (Gravity Field and Steady-State Ocean Circulation Explorer) was launched. Its objectives are the precise and detailed determination of the Earth's gravity field and geoid. Its core instrument, a three-axis gravitational gradiometer, measures the gravity gradient components V_{xx}, V_{yy}, V_{zz} and V_{xz} (second-order derivatives of the gravity potential V) with high precision and V_{xy}, V_{yz} with low precision, all in the instrument reference frame. The long-wavelength gravity field is recovered from the orbit, measured by GPS (Global Positioning System). Characteristic elements of the mission are precise star tracking, a Sun-synchronous and very low (260 km) orbit, angular control by magnetic torquing and an extremely stiff and thermally stable instrument environment. GOCE is complementary to GRACE (Gravity Recovery and Climate Experiment), another satellite gravity mission, launched in 2002. While GRACE is designed to measure temporal gravity variations, albeit with limited spatial resolution, GOCE is aiming at maximum spatial resolution, at the expense of accuracy at large spatial scales. Thus, GOCE will not provide temporal variations but is tailored to the recovery of the fine scales of the stationary field. GRACE is very successful in delivering time series of large-scale mass changes of the Antarctic ice sheet, among other things. Currently, emphasis of respective GRACE analyses is on regional refinement and on changes of temporal trends. One of the challenges is the separation of ice mass changes from glacial isostatic adjustment. Already from a few months of GOCE data, detailed gravity gradients can be recovered. They are presented here for the area of Antarctica. As one application, GOCE gravity gradients are an important addition to the sparse gravity data of Antarctica. They will help studies of the crustal and lithospheric field. A second area of application is ocean circulation. The geoid surface from the gravity field model GOCO01S allows us now to generate rather detailed maps of the mean dynamic ocean topography and of geostrophic flow velocities in the region of the Antarctic Circumpolar Current.
NASA Astrophysics Data System (ADS)
Lu, Biao; Luo, Zhicai; Zhong, Bo; Zhou, Hao; Flechtner, Frank; Förste, Christoph; Barthelmes, Franz; Zhou, Rui
2017-11-01
Based on tensor theory, three invariants of the gravitational gradient tensor (IGGT) are independent of the gradiometer reference frame (GRF). Compared to traditional methods for the calculation of gravity field models based on the gravity field and steady-state ocean circulation explorer (GOCE) data, which are affected by errors in the attitude indicator, using IGGT and the least squares method avoids the problem of inaccurate rotation matrices. The IGGT approach as studied in this paper is a quadratic function of the gravity field model's spherical harmonic coefficients. The linearized observation equations for the least squares method are obtained using a Taylor expansion, and the weighting equation is derived using the law of error propagation. We also investigate the linearization errors using existing gravity field models and find that this error can be ignored since the a-priori model EIGEN-5C used is sufficiently accurate. One problem when using this approach is that it needs all six independent gravitational gradients (GGs), but the GOCE components V_{xy} and V_{yz} are of lower quality due to the less sensitive axes of the GOCE gradiometer. Therefore, we use synthetic GGs, derived from the a-priori gravity field model EIGEN-5C, for both of these less accurate gravitational gradient components. Another problem is that the GOCE GGs are measured in a band-limited manner. Therefore, a forward and backward finite impulse response band-pass filter is applied to the data, which also eliminates filter-induced phase shifts. The spherical cap regularization approach (SCRA) and the Kaula rule are then applied to solve the polar gap problem caused by GOCE's inclination of 96.7°. With the techniques described above, a degree/order 240 gravity field model called IGGT_R1 is computed. Since the synthetic components V_{xy} and V_{yz} are not band-pass filtered, the signals outside the measurement bandwidth are replaced by the a-priori model EIGEN-5C. Therefore, this model is practically a combined gravity field model which contains GOCE GG signals and long-wavelength signals from the a-priori model EIGEN-5C. Finally, IGGT_R1's accuracy is evaluated by comparison with other gravity field models in terms of difference degree amplitudes, the geostrophic velocity in the Agulhas current area, gravity anomaly differences, as well as by comparison to GNSS/levelling data.
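For context, the three rotation invariants of the symmetric gradient tensor V referred to above are, in one common convention,

I_1 = \operatorname{tr}(\mathbf{V}) = V_{xx}+V_{yy}+V_{zz}, \qquad I_2 = \tfrac{1}{2}\bigl[\operatorname{tr}(\mathbf{V})^2 - \operatorname{tr}(\mathbf{V}^2)\bigr], \qquad I_3 = \det(\mathbf{V}),

and since the potential is harmonic outside the masses, I_1 vanishes, so the gravity field information exploited by such an approach resides in the higher-order invariants.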
Satellite gravity gradient grids for geophysics
Bouman, Johannes; Ebbing, Jörg; Fuchs, Martin; Sebera, Josef; Lieb, Verena; Szwillus, Wolfgang; Haagmans, Roger; Novak, Pavel
2016-01-01
The Gravity field and steady-state Ocean Circulation Explorer (GOCE) satellite aimed at determining the Earth's mean gravity field. GOCE delivered gravity gradients containing directional information, which are complicated to use because of their error characteristics and because they are given in a rotating instrument frame indirectly related to the Earth. We compute gravity gradients in grids at 225 km and 255 km altitude above the reference ellipsoid corresponding to the GOCE nominal and lower orbit phases respectively, and find that the grids may contain additional high-frequency content compared with GOCE-based global models. We discuss the gradient sensitivity for crustal depth slices using a 3D lithospheric model of the North-East Atlantic region, which shows that the depth sensitivity differs from gradient to gradient. In addition, the relative signal power for the individual gradient component changes comparing the 225 km and 255 km grids, implying that using all components at different heights reduces parameter uncertainties in geophysical modelling. Furthermore, since gravity gradients contain complementary information to gravity, we foresee the use of the grids in a wide range of applications from lithospheric modelling to studies on dynamic topography, and glacial isostatic adjustment, to bedrock geometry determination under ice sheets.
Results from the ESA-funded project 'Height System Unification with GOCE'
NASA Astrophysics Data System (ADS)
Sideris, M. G.; Rangelova, E. V.; Gruber, T.; Rummel, R. F.; Woodworth, P. L.; Hughes, C. W.; Ihde, J.; Liebsch, G.; Schäfer, U.; Rülke, A.; Gerlach, C.; Haagmans, R.
2013-12-01
The paper summarizes the main results of a project, supported by the European Space Agency, whose main goal is to identify the impact of GOCE gravity field models on height system unification. In particular, the Technical University Munich, the University of Calgary and the National Oceanography Centre in Liverpool, together with the Bavarian Academy of Sciences, the Federal German Agency for Cartography and Geodesy, and the Geodetic Surveys of Canada, USA and Mexico, have investigated the role of GOCE-derived gravity and geoid models for regional and global height datum connection. GOCE provides three important components of height unification: highly accurate potential differences (geopotential numbers), a global geoid- or quasi-geoid-based reference surface for elevations that is independent of inaccuracies and inconsistencies of local and regional data, and a consistent way to refer all the relevant gravimetric, topographic and oceanographic data to the same datum. We briefly introduce the methodology that has been applied in order to unify the height systems in North America, the North Atlantic Ocean and Europe, and present results obtained using the available GOCE-derived satellite-only geopotential models and their combination with terrestrial data and ocean models. The effects of various factors, such as data noise, omission errors, indirect bias terms, ocean models and temporal variations, on height datum unification are also presented, highlighting their magnitude and importance in the estimation of offsets between vertical datums. Based on the experience gained in this project, a general roadmap has been developed for height datum unification in regions with good, as well as poor, coverage of gravity, geodetic height and tide gauge control stations.
Weathering the Storm - GOCE Flight Operations in 2010
NASA Astrophysics Data System (ADS)
Steiger, C.; Da Costa, A.; Floberghagen, R.; Fehringer, M.; Emanuelli, P. P.
2011-07-01
ESA's Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) was successfully launched on 17 March 2009. The mission is controlled by ESA's European Space Operations Centre (ESOC) in Darmstadt, Germany. Following completion of commissioning, routine operations started in September 2009, keeping the spacecraft in drag-free mode at an altitude of 259.6 km. Operations are driven by the unique aspects of the mission, in particular the very low altitude and the high complexity of GOCE's drag-free control system. Following a general introduction, the main focus is put on the special events of 2010, when science operations were interrupted for several months due to problems with the main platform computer. These anomalies presented a major challenge, requiring the spacecraft to be operated "in the blind" with no status information available, as well as extensive modifications of the on-board software to recover the mission.
GOCE SSTI GNSS Receiver Re-Entry Phase Analysis
NASA Astrophysics Data System (ADS)
Zin, A.; Zago, S.; Scaciga, L.; Marradi, L.; Floberghagen, R.; Fehringer, M.; Bigazzi, A.; Piccolo, A.; Luini, L.
2015-03-01
Gravity field and Ocean Circulation Explorer (GOCE) was an ESA Earth Explorer mission dedicated to measuring the Earth's gravity field. The spacecraft was launched in 2009 and re-entered the atmosphere at the end of 2013 [1]. The mean orbit altitude was set to 260 km to maximize the signal sensed by the ultra-sensitive accelerometers on board. GOCE was equipped with two main payloads: the Electrostatic Gravity Gradiometer (EGG), a set of six 3-axis accelerometers able to measure the gravity field with unrivalled precision and thus to produce the most accurate shape of the 'geoid', and two GPS receivers (nominal and redundant), used as a Satellite-to-Satellite Tracking Instrument (SSTI) to geolocate the gradiometer measurements and to measure the long-wavelength components of the gravity field with an accuracy never reached before. Previous analyses have shown that the Precise Orbit Determination (POD) of the GOCE satellite, derived by processing the dual-frequency SSTI data (carrier phases and pseudoranges), is at the state of the art of GPS-based POD: the average daily 3D-RMS of the kinematic orbits is 2.06 cm [2]. In most cases the overall accuracy is better than 2 cm 3D RMS. Moreover, the "almost continuous" [2] 1 Hz data availability from the SSTI receiver is unique and allows for a time series of kinematic positions with only 0.5% of missing epochs [2]. In October 2013 the GOCE mission was concluded and in November the GOCE spacecraft re-entered the atmosphere. During the re-entry phase the two SSTI receivers were switched on simultaneously in order to maximize data availability. In summer 2013, the SSTI firmware was tailored to sustain the additional dynamic error expected during the re-entry phase (tracking-loop robustness). The software was uploaded on SSTI-B (and purposely not on SSTI-A). This was therefore a unique opportunity to compare the behaviour of a "standard" receiver (SSTI-A) with an improved one (SSTI-B) in the challenging re-entry phase. This paper focuses on the analysis of the data from summer 2013 up to the re-entry in November 2013.
NASA Astrophysics Data System (ADS)
Wu, Y.; Luo, Z.; Zhou, H.; Xu, C.
2017-12-01
Regional gravity field recovery is of great importance for understanding ocean circulation and currents in oceanography and for investigating the structure of the lithosphere in geophysics. Under the framework of the remove-compute-restore methodology (RCR), a regional approach using spherical radial basis functions (SRBFs) is set up for gravity field determination using the GOCE (Gravity Field and Steady-State Ocean Circulation Explorer) gravity gradient tensor, heterogeneous gravimetry and altimetry measurements. The additional value introduced into the regional model by GOCE data is validated and quantified. Numerical experiments in a western European region show that the effects introduced by GOCE data appear as long-wavelength patterns at the centimeter scale in terms of quasi-geoid heights, which may help to highlight and reduce the remaining long-wavelength errors and biases in ground-based data and improve the regional model. The accuracy of the gravimetric quasi-geoid computed with a combination of the three diagonal components is improved by 0.6 cm (0.5 cm) in the Netherlands (Belgium), compared to that derived from gravimetry and altimetry data alone, when GOCO05s is used as the reference model. The performance of the different diagonal components and their combinations is not identical; the solution with vertical gradients shows the highest quality when a single component is used. Incorporation of multiple components further improves the model, and the combination of the three components shows the best fit to GPS/levelling data. Moreover, the contributions introduced by the different components are heterogeneous in terms of spatial coverage and magnitude, although similar structures occur in the spatial domain. Contributions introduced by the vertical components have the most significant effects when a single component is applied. Combination of multiple components further magnifies these effects and improves the solutions, and the incorporation of all three components has the most prominent effects. This work is supported by the State Scholarship Fund from the Chinese Scholarship Council (201306270014), the China Postdoctoral Science Foundation (No. 2016M602301), and the National Natural Science Foundation of China (No. 41374023).
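Schematically, the remove-compute-restore step with SRBFs mentioned above can be summarised as follows (the notation is chosen here for illustration and is not taken from the paper): the long-wavelength part of the disturbing potential is removed using the reference model, the residual is expanded in radial basis functions centred at grid nodes, and the reference part is restored afterwards,

\Delta T(\mathbf{x}) = T(\mathbf{x}) - T_{\mathrm{GGM}}(\mathbf{x}), \qquad \Delta T(\mathbf{x}) \approx \sum_{k=1}^{K} d_k\,\Phi(\mathbf{x},\mathbf{x}_k), \qquad \hat{T}(\mathbf{x}) = T_{\mathrm{GGM}}(\mathbf{x}) + \sum_{k=1}^{K} \hat{d}_k\,\Phi(\mathbf{x},\mathbf{x}_k),

where \Phi is a band-limited SRBF and the coefficients d_k are estimated jointly from the gradiometric, gravimetric and altimetric observations.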
Using the full tensor of GOCE gravity gradients for regional gravity field modelling
NASA Astrophysics Data System (ADS)
Lieb, Verena; Bouman, Johannes; Dettmering, Denise; Fuchs, Martin; Schmidt, Michael
2013-04-01
With its 3-axis gradiometer GOCE delivers 3-dimensional (3D) information on the Earth's gravity field. This essential advantage - e.g. compared with the 1D gravity field information from GRACE - can be used for research on the Earth's interior and for geophysical exploration. To benefit from this multidimensional measurement system, the combination of all 6 GOCE gradients, and additionally the consistent combination with other gravity observations, poses an innovative challenge for regional gravity field modelling. As the individual gravity gradients reflect the gravity field in different spatial directions, observation equations are formulated separately for each of these components. In our approach we use spherical localizing basis functions to represent the gravity field in specified regions. The series expansions based on Legendre polynomials therefore have to be adapted to obtain mathematical expressions for the second derivatives of the gravitational potential, which are observed by GOCE in the Cartesian Gradiometer Reference Frame (GRF). We have to (1) transform the equations from the spherical terrestrial frame into a Cartesian Local North-Oriented Reference Frame (LNOF), (2) set up a 3x3 tensor of observation equations and (3) finally rotate the tensor defined in the terrestrial LNOF into the GRF. Thus we ensure the use of the original non-rotated and unaffected GOCE measurements within the analysis procedure. As output of the synthesis procedure we then obtain the second derivatives of the gravitational potential for all combinations of the xyz Cartesian coordinates in the LNOF. Furthermore, the implementation of variance component estimation provides a flexible tool to differentiate the influence of the input gradiometer observations. On the one hand the less accurate xy and yz measurements are nearly excluded by estimating large variance components. On the other hand the yy measurements, which show systematic errors increasing at high latitudes, can be manually down-weighted in the corresponding regions. We choose different test areas to compute regional gravity field models at mean GOCE altitudes for different spectral resolutions and varying relative weights of the observations. Furthermore, we compare the regional models with the static global GOCO03S model. Especially the flexible handling and combination of the 3D measurements promise a great benefit for geophysical applications of GOCE gravity gradients, as they contain information on radial as well as lateral gravity changes.
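A minimal numpy sketch of the frame handling in step (3) above: a symmetric gradient tensor assembled in the LNOF is mapped into the GRF through the similarity transform V_GRF = R V_LNOF R^T; the rotation matrix and tensor values below are placeholders, not actual GOCE attitude or gradient data.

import numpy as np

def rotate_gradient_tensor(v_lnof, r_lnof_to_grf):
    """Rotate a 3x3 gravity gradient tensor from the Local North-Oriented
    Frame (LNOF) into the Gradiometer Reference Frame (GRF)."""
    return r_lnof_to_grf @ v_lnof @ r_lnof_to_grf.T

# Illustrative use with a placeholder attitude (small rotation about the z-axis)
a = np.radians(5.0)
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [0.0,        0.0,       1.0]])
V_lnof = np.array([[-1.3,  0.1,  0.2],   # symmetric, trace-free tensor in Eötvös
                   [ 0.1, -1.4,  0.3],
                   [ 0.2,  0.3,  2.7]])
V_grf = rotate_gradient_tensor(V_lnof, R)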
NASA Astrophysics Data System (ADS)
Fecher, T.; Pail, R.; Gruber, T.
2017-05-01
GOCO05c is a gravity field model computed as a combined solution of a satellite-only model and a global data set of gravity anomalies. It is resolved up to degree and order 720. It is the first model applying regionally varying weighting. Since this causes strong correlations among all gravity field parameters, the resulting full normal equation system with a size of 2 TB had to be solved rigorously by applying high-performance computing. GOCO05c is the first combined gravity field model independent of EGM2008 that contains GOCE data of the whole mission period. The performance of GOCO05c is externally validated by GNSS-levelling comparisons, orbit tests, and computation of the mean dynamic topography, achieving at least the quality of existing high-resolution models. Results show that the additional GOCE information is highly beneficial in insufficiently observed areas, and that due to the weighting scheme of individual data the spectral and spatial consistency of the model is significantly improved. Due to usage of fill-in data in specific regions, the model cannot be used for physical interpretations in these regions.
Tidal Signals In GOCE Measurements And Time-GCM
NASA Astrophysics Data System (ADS)
Hausler, K.; Hagan, M. E.; Lu, G.; Doornbos, E.; Bruinsma, S.; Forbes, J. M.
2013-12-01
In this paper we investigate tidal signatures in GOCE measurements during 15-24 November 2009 and complementary simulations with the Thermosphere-Ionosphere-Mesosphere-Electrodynamics General Circulation Model (TIME-GCM). The TIME-GCM simulations are driven by inputs that represent the prevailing solar and geomagnetic conditions along with tidal and planetary waves applied at the lower boundary (ca. 30 km). For this pilot study, the resultant TIME-GCM densities are analyzed in two ways: 1) we use results along the GOCE orbital track to calculate ascending/descending orbit longitude-latitude density difference and sum maps for direct comparison with the GOCE diagnostics, and 2) we conduct a complete analysis of TIME-GCM results to unambiguously characterize the simulated atmospheric tides and to attribute the observed longitude variations to specific tidal components. TIME-GCM captures some but not all of the observed longitudinal variability. The good data-model agreement for wave-2, wave-3, and wave-4 suggests that thermospheric impacts can be attributed to the DE1, DE2, DE3, S0, SE1, and SE2 tides. Discrepancies between TIME-GCM and GOCE results are most prominent in the wave-1 variations, and suggest that further refinement of the lower boundary forcing is necessary before we extend our analysis and interpretation to densities associated with the remainder of the GOCE mission.
Effects of space weather on GOCE electrostatic gravity gradiometer measurements
NASA Astrophysics Data System (ADS)
Ince, E. Sinem; Pagiatakis, Spiros D.
2016-12-01
We examine the presence of residual nongravitational signatures in gravitational gradients measured by the GOCE electrostatic gravity gradiometer. These signatures are observed over the magnetic poles during geomagnetically active days and can contaminate the trace of the gravitational gradient tensor by up to three to five times the expected noise level of the instrument (~11 mE). We investigate these anomalies in the gradiometer measurements along many satellite tracks and examine possible causes using external datasets, such as interplanetary electric field measurements from the ACE (Advanced Composition Explorer) and WIND spacecraft, and the Poynting vector (flux) estimated from equivalent ionospheric currents derived from spherical elementary current systems over North America and Greenland. We show that the variations in the east-west and vertical electrical currents and Poynting vector components at the satellite position are highly correlated with the disturbances observed in the gradiometer measurements. The results presented in this paper reveal that the disturbances are due to intense ionospheric current variations that are enhanced by increased solar activity that causes a very dynamic drag environment. Moreover, successful modelling and removal of a high percentage of these disturbances are possible using external geomagnetic field observations.
The Space-Wise Global Gravity Model from GOCE Nominal Mission Data
NASA Astrophysics Data System (ADS)
Gatti, A.; Migliaccio, F.; Reguzzoni, M.; Sampietro, D.; Sanso, F.
2011-12-01
In the framework of the GOCE data analysis, the space-wise approach implements a multi-step collocation solution for the estimation of a global geopotential model in terms of spherical harmonic coefficients and their error covariance matrix. The main idea is to use the collocation technique to exploit the spatial correlation of the gravity field in the GOCE data reduction. In particular, the method consists of an along-track Wiener filter, a collocation gridding at satellite altitude and a spherical harmonic analysis by integration. All these steps are iterated, also to account for the rotation between the local orbital and gradiometer reference frames. Error covariances are computed by Monte Carlo simulations. The first release of the space-wise approach was presented at the ESA Living Planet Symposium in July 2010. This model was based on only two months of GOCE data and partially contained a-priori information coming from other existing gravity models, especially at low degrees and low orders. A second release was distributed after the 4th International GOCE User Workshop in May 2011. In this solution, based on eight months of GOCE data, all dependencies on external gravity information were removed, thus giving rise to a GOCE-only space-wise model. However, this model showed an over-regularization at the highest degrees of the spherical harmonic expansion due to the combination technique of the intermediate solutions (based on about two months of data each). In this work a new space-wise solution is presented. It is based on all nominal mission data from November 2009 to mid April 2011, and its main novelty is that the intermediate solutions are now computed in such a way as to avoid over-regularization in the final solution. Beyond the spherical harmonic coefficients of the global model and their error covariance matrix, the space-wise approach is able to deliver as by-products a set of spherical grids of the potential and of its second derivatives at mean satellite altitude. These grids have an information content that is very similar to the original along-orbit data, but they are much easier to handle. In addition they are estimated by local least-squares collocation and therefore, although computed with a unique global covariance function, they could yield more information at local level than the spherical harmonic coefficients of the global model. For this reason these grids seem to be useful for local geophysical investigations. The estimated grids with their estimated errors are presented in this work together with proposals on possible future improvements. A test to compare the different information content of the along-orbit data, the gridded data and the spherical harmonic coefficients is also shown.
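A minimal frequency-domain sketch of an along-track Wiener filter of the kind used as the first step above; the signal and noise power spectral densities are placeholder analytic shapes, not the GOCE gradiometer error model.

import numpy as np

def wiener_filter_along_track(data, psd_signal, psd_noise, dt=1.0):
    """Apply a Wiener filter W(f) = S_s(f) / (S_s(f) + S_n(f)) to one
    along-track data series; psd_signal and psd_noise are callables of frequency."""
    n = len(data)
    freqs = np.fft.rfftfreq(n, d=dt)
    w = psd_signal(freqs) / (psd_signal(freqs) + psd_noise(freqs))
    return np.fft.irfft(np.fft.rfft(data) * w, n=n)

# Illustrative use: long-wavelength signal, noise rising towards low frequencies
signal_psd = lambda f: 1.0 / (1.0 + (f / 5e-3) ** 4)       # placeholder signal PSD
noise_psd = lambda f: 1e-2 + 1e-4 / np.maximum(f, 1e-4)    # placeholder 1/f + white noise
track = np.random.randn(5400)                               # one dummy revolution at 1 Hz sampling
filtered = wiener_filter_along_track(track, signal_psd, noise_psd)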
Matching Lithosphere velocity changes to the GOCE gravity signal
NASA Astrophysics Data System (ADS)
Braitenberg, Carla
2016-07-01
Authors: Carla Braitenberg, Patrizia Mariani, Alberto Pastorutti (Department of Mathematics and Geosciences, University of Trieste, Via Weiss 1, 34100 Trieste). Seismic tomography models result in 3D velocity models of the lithosphere and sublithospheric mantle, which reflect mineralogic compositional changes and variations in the thermal gradient. The assignment of density is non-unique and can lead to density changes of inverted sign with respect to the velocity changes, depending on composition and temperature. Velocity changes due to temperature result in a proportional density change, whereas compositional changes and the age of the lithosphere can lead to density changes of inverted sign. The relation between velocity and density implies changes in lithosphere rigidity. We analyze the GOCE gradient fields and the velocity models jointly, making simulations of thermal and compositional density changes and using the velocity models as a constraint on lithosphere geometry. The correlations are enhanced by applying geodynamic plate reconstructions to the GOCE gravity field and the tomography models, which places today's observed fields at the Gondwana pre-breakup position. We find that the lithosphere geometry is a controlling factor on the overlying geologic elements, defining the regions where rifting and collision alternate and repeat through time. The study is carried out globally, with focus on the conjugate margins of the African and South American continents. The background for the study can be found in the following publications, where the techniques used are described: Braitenberg, C., Mariani, P. and De Min, A. (2013). The European Alps and nearby orogenic belts sensed by GOCE, Bollettino di Geofisica Teorica ed Applicata, 54(4), 321-334. doi:10.4430/bgta0105; Braitenberg, C. and Mariani, P. (2015). Geological implications from complete Gondwana GOCE-products reconstructions and link to lithospheric roots. Proceedings of the 5th International GOCE User Workshop, 25-28 November 2014; Braitenberg, C. (2015). Exploration of tectonic structures with GOCE in Africa and across-continents. Int. J. Appl. Earth Observ. Geoinf. 35, 88-95. http://dx.doi.org/10.1016/j.jag.2014.01.013; Braitenberg, C. (2015). A grip on geological units with GOCE, IAG Symp. 141.
NASA Astrophysics Data System (ADS)
Douch, Karim; Wu, Hu; Schubert, Christian; Müller, Jürgen; Pereira dos Santos, Franck
2018-03-01
The prospects of future satellite gravimetry missions to sustain a continuous and improved observation of the gravitational field have stimulated studies of new concepts of space inertial sensors with potentially improved precision and stability. This is in particular the case for cold-atom interferometry (CAI) gradiometry, which is the subject of this paper. The performance of a specific CAI gradiometer design is studied here in terms of the quality of the recovered gravity field, through a closed-loop numerical simulation of the measurement and processing workflow. First, we show that mapping the time-variable field on a monthly basis would require a noise level below 5 mE/√Hz. The mission scenarios are therefore focused on the static field, like GOCE. Second, the stringent requirement on the angular velocity of a one-arm gradiometer, which must not exceed 10^-6 rad/s, leads to two possible modes of operation of the CAI gradiometer: the nadir and the quasi-inertial mode. In the nadir mode, which corresponds to the usual Earth-pointing satellite attitude, only the gradient Vyy, along the cross-track direction, is measured. In the quasi-inertial mode, the satellite attitude is approximately constant in the inertial reference frame and the three diagonal gradients Vxx, Vyy and Vzz are measured. Both modes are successively simulated for a 239 km altitude orbit and the errors of the recovered gravity models are eventually compared to GOCE solutions. We conclude that, for the specific CAI gradiometer design assumed in this paper, only the quasi-inertial mode scenario would be able to significantly outperform GOCE results, at the cost of technically challenging requirements on the orbit and attitude control.
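A back-of-the-envelope check (not taken from the paper) of why the quoted 10^-6 rad/s angular-rate limit matters: the in-line output of a rotating one-arm instrument picks up a centrifugal term of order ω², so the limit keeps that term near the mE level, whereas rotating at the orbital rate would swamp the signal; this is consistent with the nadir mode relying on the cross-track component only.

```python
# Rough orders of magnitude only: the in-line output of a one-arm gradiometer
# rotating at angular rate w (perpendicular to the arm) contains a centrifugal
# term of order w**2 on top of the gravity gradient it is meant to measure.
EOTVOS = 1e-9                               # 1 E = 1e-9 1/s^2

cases = {
    "requirement (1e-6 rad/s)": 1e-6,       # angular-rate limit quoted above
    "orbital rate (~250 km)":   1.17e-3,    # rad/s, approximate Earth-pointing pitch rate
}
for label, w in cases.items():
    print(f"{label:>26s}: w^2 = {w**2:.2e} 1/s^2 = {w**2 / EOTVOS * 1e3:.1e} mE")
# ~1 mE at the requirement vs ~1e6 mE at the orbital rate, which is why the
# Earth-pointing (nadir) mode can only exploit the cross-track arm.
```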
Simultaneous Observations of TADs in GOCE, CHAMP and GRACE Density Data Compared with CTIPe
NASA Astrophysics Data System (ADS)
Bruinsma, S. L.; Fedrizzi, M.
2012-12-01
The accelerometers on the CHAMP and GRACE satellites have made it possible to accumulate near-continuous records of thermosphere density between about 300 and 490 km since May 2001 and July 2002, respectively. Since November 2009, a third gravity field satellite mission, ESA's GOCE, has been in a very low, near sun-synchronous dawn-dusk orbit at about 270 km. The spacecraft is actively maintained at that constant altitude using an ion propulsion engine that compensates for the aerodynamic drag in the flight direction. The thrust level, combined with accelerometer and satellite attitude data, is used to compute atmospheric densities and cross-track winds. The response of the thermosphere to geomagnetic disturbances, i.e. space weather, has been extensively studied using the exceptional datasets of CHAMP and GRACE. Thanks to GOCE we now have a third excellent data set for these studies. In this presentation we will show the observed density and its variability for the geomagnetic storm of 5 April 2010, and compare it with predictions along the orbits obtained from a self-consistent physics-based coupled model of the thermosphere, ionosphere, plasmasphere and electrodynamics (CTIPe). For this storm, the CHAMP and GOCE orbit planes were perpendicular (12/24 Local Solar Time and 6/18 LST, respectively) and the altitude difference was only approximately 30 km. The GRACE densities are at a much higher altitude of about 475 km. Wave-like features are revealed or enhanced after filtering of the densities and calculation of relative density variations. Traveling Atmospheric Disturbances are observed in the data, and the model's fidelity in reproducing the waves is evaluated.
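For orientation, the density retrieval described above rests on the drag balance F_drag = ½ρCdAv²; the sketch below inverts it with purely illustrative values for mass, drag coefficient, frontal area and velocity (assumptions for the example, not GOCE calibration values).

```python
# Minimal sketch of turning a drag-compensating thrust (plus any residual
# in-track accelerometer signal) into a neutral density estimate.
def density_from_thrust(F_thrust, m, a_residual, Cd, A_ref, v_rel):
    """rho from F_drag = 0.5 * rho * Cd * A * v^2, with F_drag = F_thrust - m*a_residual."""
    drag_accel = F_thrust / m - a_residual        # acceleration actually balanced by drag
    return 2.0 * m * drag_accel / (Cd * A_ref * v_rel**2)

rho = density_from_thrust(F_thrust=5e-3,    # N, assumed thrust level
                          m=1050.0,         # kg, approximate spacecraft mass
                          a_residual=0.0,   # m/s^2, residual in-track acceleration
                          Cd=3.7,           # assumed drag coefficient
                          A_ref=1.1,        # m^2, assumed frontal area
                          v_rel=7750.0)     # m/s, velocity relative to the atmosphere
print(f"rho ~ {rho:.2e} kg/m^3")            # order 1e-11 kg/m^3 at ~270 km
```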
NASA Astrophysics Data System (ADS)
Vergos, Georgios S.; Erol, Bihter; Natsiopoulos, Dimitrios A.; Grigoriadis, Vassilios N.; Serkan Işık, Mustafa; Tziavos, Ilias N.
2016-04-01
The unification of local vertical datums (LVDs) at a country-wide scale has gained significant attention lately, due to the availability of GOCE-based Global Geopotential Models (GGMs). The latter offer unprecedented geoid height accuracies at the 1-1.5 cm level for spherical harmonic expansions to d/o 225-230. Within a single country several LVDs may be used, especially in the case of island nations; therefore their unification into a single nation-wide LVD is of utmost importance. The same holds for neighboring countries, where the unification of their vertical datums is a necessary tool for engineering, cross-border collaboration and environmental and risk-management projects. These considerations set the main scope of the work carried out in the frame of the present study, which used GOCE and GOCE/GRACE GGMs in order to unify the LVDs of Greece and Turkey. The two countries share common borders and lie on the path of large-scale engineering projects in the energy sector. Therefore, the availability of a common reference for orthometric heights in both countries and/or the determination of the relative offset of their individual zero-level geopotential values is an emerging issue. The determination of the geopotential value Wo(LVD) for the Greek and Turkish LVDs was first carried out separately for each region, with separate estimates performed as well for the marine area of the Aegean Sea and the terrestrial border region along eastern Thrace. From that, possible biases of the Hellenic and Turkish LVDs themselves have been derived and analyzed to determine spatial correlations. Then, the relative offset between the two LVDs was determined employing GPS/levelling data for both areas and the latest GO-DIR-R5, GO-TIM-R5 and GOCO05s models as well as EGM2008. The estimated mean offset was also used to provide a direct link between the Greek and Turkish LVDs and the IAG conventional value recently proposed as Wo for a global WHS.
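A minimal sketch of the kind of offset estimation described above, assuming simple arrays of GPS/levelling benchmarks per datum (all numbers below are placeholders): each LVD bias is taken as the mean of h − H − N over its benchmarks, and the relative offset and an equivalent geopotential offset follow directly.

```python
import numpy as np

def datum_bias(h_ellipsoidal, H_levelled, N_ggm):
    """Mean of (h - H - N) over the benchmarks of one LVD: its apparent offset
    with respect to the GGM-implied reference surface (sign conventions vary)."""
    return np.mean(np.asarray(h_ellipsoidal) - np.asarray(H_levelled) - np.asarray(N_ggm))

# Hypothetical benchmark values for two datums A and B (placeholders, in metres).
bias_A = datum_bias([45.12, 47.30], [5.60, 7.85], [39.48, 39.40])
bias_B = datum_bias([52.05, 50.11], [12.20, 10.31], [39.95, 39.88])

offset_AB = bias_A - bias_B          # relative offset between the two LVDs, m
gamma = 9.80                         # m/s^2, mean normal gravity
dW0_AB = -gamma * offset_AB          # corresponding geopotential offset, m^2/s^2
print(offset_AB, dW0_AB)
```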
GOCE User Toolbox and Tutorial
NASA Astrophysics Data System (ADS)
Knudsen, P.; Benveniste, J.
2011-07-01
The GOCE User Toolbox (GUT) is a compilation of tools for the utilisation and analysis of GOCE Level 2 products. GUT supports applications in geodesy, oceanography and solid Earth physics. The GUT Tutorial provides information and guidance on how to use the toolbox for a variety of applications. GUT consists of a series of advanced computer routines that carry out the required computations. It may be used on Windows PCs, UNIX/Linux workstations, and Mac. The toolbox is supported by the GUT Algorithm Description and User Guide and the GUT Install Guide. A set of a priori data and models is made available as well. GUT has been developed in a collaboration within the GUT Core Group: S. Dinardo, D. Serpe, B.M. Lucas, R. Floberghagen, A. Horvath (ESA), O. Andersen, M. Herceg (DTU), M.-H. Rio, S. Mulet, G. Larnicol (CLS), J. Johannessen, L. Bertino (NERSC), H. Snaith, P. Challenor (NOC), K. Haines, D. Bretherton (NCEO), C. Hughes (POL), R.J. Bingham (NU), G. Balmino, S. Niemeijer, I. Price, L. Cornejo (S&T), M. Diament, I. Panet (IPGP), C.C. Tscherning (KU), D. Stammer, F. Siegismund (UH), T. Gruber (TUM).
The GOCE end-to-end system simulator
NASA Astrophysics Data System (ADS)
Catastini, G.; Cesare, S.; de Sanctis, S.; Detoma, E.; Dumontel, M.; Floberghagen, R.; Parisch, M.; Sechi, G.; Anselmi, A.
2003-04-01
The idea of an end-to-end simulator was conceived in the early stages of the GOCE programme as an essential tool for assessing the satellite system performance, which cannot be fully tested on the ground. The simulator in its present form has been under development at Alenia Spazio for ESA since the beginning of Phase B and is being used for checking the consistency of the spacecraft and payload specifications with the overall system requirements, supporting trade-off, sensitivity and worst-case analyses, and preparing and testing the on-ground and in-flight calibration concepts. The software simulates the GOCE flight along an orbit resulting from the application of the Earth's gravity field, non-conservative environmental disturbances (atmospheric drag, coupling with the Earth's magnetic field, etc.) and control forces/torques. The drag-free control forces as well as the attitude control torques are generated by the current design of the dedicated algorithms. Realistic sensor models (star tracker, GPS receiver and gravity gradiometer) feed the control algorithms, and the commanded forces are applied through realistic thruster models. The output of this stage of the simulator is a time series of Level-0 data, namely the gradiometer raw measurements and spacecraft ancillary data. The next stage of the simulator transforms Level-0 data into Level-1b (gravity gradient tensor) data by implementing the following steps: transformation of the raw measurements of each pair of accelerometers into common and differential accelerations; calibration of the common and differential accelerations; application of the post-facto algorithm to rectify the phase of the accelerations and to estimate the GOCE angular velocity and attitude; computation of the Level-1b gravity gradient tensor from calibrated accelerations and estimated angular velocity in different reference frames (orbital, inertial, Earth-fixed); computation of the spectral density of the error of the tensor diagonal components (measured gravity gradient minus input gravity gradient) in order to verify the requirement on the gravity gradient error of 4 mE/sqrt(Hz) within the gradiometer measurement bandwidth (5 to 100 mHz); computation of the spectral density of the tensor trace in order to verify the requirement of 4 sqrt(3) mE/sqrt(Hz) within the measurement bandwidth; and processing of GPS observations for orbit reconstruction within the required 10 m accuracy and for gradiometer measurement geolocation. The current version of the end-to-end simulator, essentially focusing on the gradiometer payload, is undergoing detailed testing based on a time span of 10 days of simulated flight. This testing phase, ending in January 2003, will verify the current implementation and conclude the assessment of numerical stability and precision. Following that, the exercise will be repeated on a longer-duration simulated flight and the lessons learnt so far will be exploited to further improve the simulator's fidelity. The paper will describe the simulator's current status and will illustrate its capabilities for supporting the assessment of the quality of the scientific products resulting from the current spacecraft and payload design.
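To make the Level-0 to Level-1b step above concrete, here is a strongly simplified sketch of the common/differential-mode decomposition of one accelerometer pair and the corresponding in-line diagonal gradient; calibration, angular-rate and angular-acceleration terms handled by the simulator are deliberately omitted, and the baseline and readings are illustrative.

```python
import numpy as np

L_x = 0.5                      # m, assumed baseline of the accelerometer pair along x

def pair_to_modes(a1, a2):
    """Common and differential accelerations of one accelerometer pair."""
    a1, a2 = np.asarray(a1), np.asarray(a2)
    return 0.5 * (a1 + a2), 0.5 * (a1 - a2)

# Toy 3-axis readings of the two accelerometers on the x arm (m/s^2).
a1 = np.array([1.2e-7, 3.0e-8, -2.0e-8])
a2 = np.array([-1.3e-7, 2.9e-8, -2.1e-8])

a_common, a_diff = pair_to_modes(a1, a2)
# In-line diagonal gradient, ignoring rotation terms and instrument sign
# conventions: V_xx is approximately -2 * a_diff_x / L_x.
V_xx = -2.0 * a_diff[0] / L_x
print(f"V_xx ~ {V_xx / 1e-9:.0f} E")
```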
NASA Astrophysics Data System (ADS)
Braitenberg, Carla; Mariani, Patrizia
2015-04-01
The GOCE gravity field is globally homogeneous at a resolution of about 80 km or better, allowing tectonic structures to be analyzed at continental scale for the first time. Geologic correlation studies based on age determination and mineral composition of rock samples propose continuing the tectonic lineaments across continents to their pre-breakup position. Tectonic events which induce density changes, such as metamorphic and magmatic events, should then show up in the gravity field. Therefore gravity can be used as a globally available supportive tool for the interpolation of isolated samples. Applying geodynamic plate reconstructions to the GOCE gravity field places today's observed field at the pre-breakup position. In order to test the possible deep control of the crustal features, the same reconstruction is applied to the seismic velocity models, and a joint gravity-velocity analysis is performed. The geophysical fields allow an assessment of the likelihood of the hypothesized continuation of lineations based on sparse surface outcrops. Total absence of a signal makes the cross-continental continuation of a lineament improbable, as continent-wide lineaments are controlled by rheologic and compositional differences of the lithospheric mantle. It is found that deep lithospheric roots, such as those found below cratons, control the position of the positive gravity values. The explanation is that the deep lithospheric roots focus asthenospheric upwelling outboard of the root, protecting the overlying craton from magmatic intrusions. The study is carried out over the African and South American continents. The background for the study can be found in the following publications, where the techniques used are described: Braitenberg, C., Mariani, P. and De Min, A. (2013). The European Alps and nearby orogenic belts sensed by GOCE, Bollettino di Geofisica Teorica ed Applicata, 54(4), 321-334. doi:10.4430/bgta0105; Braitenberg, C. and Mariani, P. (2015). Geological implications from complete Gondwana GOCE-products reconstructions and link to lithospheric roots. Proceedings of the 5th International GOCE User Workshop, 25-28 November 2014; Braitenberg, C. (2015). Exploration of tectonic structures with GOCE in Africa and across-continents. Int. J. Appl. Earth Observ. Geoinf. 35, 88-95. http://dx.doi.org/10.1016/j.jag.2014.01.013; Braitenberg, C. (2015). A grip on geological units with GOCE, IAG Symp. 141, in press.
Optimal Geoid Modelling to determine the Mean Ocean Circulation - Project Overview and early Results
NASA Astrophysics Data System (ADS)
Fecher, Thomas; Knudsen, Per; Bettadpur, Srinivas; Gruber, Thomas; Maximenko, Nikolai; Pie, Nadege; Siegismund, Frank; Stammer, Detlef
2017-04-01
The ESA project GOCE-OGMOC (Optimal Geoid Modelling based on GOCE and GRACE third-party mission data and merging with altimetric sea surface data to optimally determine Ocean Circulation) examines the influence of the satellite missions GRACE and in particular GOCE in ocean modelling applications. The project goal is an improved processing of satellite and ground data for the preparation and combination of gravity and altimetry data on the way to an optimal MDT solution. Explicitly, the two main objectives are (i) to enhance the GRACE error modelling and optimally combine GOCE and GRACE [and optionally terrestrial/altimetric data] and (ii) to integrate the optimal Earth gravity field model with MSS and drifter information to derive a state-of-the-art MDT including an error assessment. The main work packages referring to (i) are the characterization of geoid model errors, the identification of GRACE error sources, the revision of GRACE error models, the optimization of weighting schemes for the participating data sets and finally the estimation of an optimally combined gravity field model. In this context, also the leakage of terrestrial data into coastal regions shall be investigated, as leakage is not only a problem for the gravity field model itself, but is also mirrored in a derived MDT solution. Related to (ii), the tasks are the revision of MSS error covariances, the assessment of the mean circulation using drifter data sets and the computation of an optimal geodetic MDT as well as a so-called state-of-the-art MDT, which combines the geodetic MDT with drifter mean circulation data. This paper presents an overview of the project results with a focus on the geodetic results.
Torus Approach in Gravity Field Determination from Simulated GOCE Gravity Gradients
NASA Astrophysics Data System (ADS)
Liu, Huanling; Wen, Hanjiang; Xu, Xinyu; Zhu, Guangbin
2016-08-01
In the Torus approach, observations are projected onto a nominal orbit with constant radius and inclination, and lumped coefficients provide a linear relationship between the observations and the spherical harmonic coefficients. Based on this relationship, a two-dimensional FFT and a block-diagonal least-squares adjustment are used to recover the Earth's gravity field model. An Earth's gravity field model complete to degree and order 200 is recovered using simulated satellite gravity gradients on a torus grid, and the median degree error is smaller than 10^-18, which shows the effectiveness of the Torus approach. EGM2008 is employed as the reference model, and the gravity field model is resolved using simulated, noise-free observations given on GOCE orbits of 61 days. The error from reduction and interpolation can be mitigated by iterations. Due to the polar gap, the precision of the low-order coefficients is lower. Without considering these coefficients, the maximum geoid degree error and cumulative error are 0.022 mm and 0.099 mm, respectively. The Earth's gravity field model is also recovered from simulated observations with white noise of 5 mE/Hz^(1/2), and compared to that from the direct method. In conclusion, it is demonstrated that the Torus approach is a valid method for processing the massive amount of GOCE gravity gradients.
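The structure of the Torus approach described above can be schematised as follows: a 2-D FFT over the two orbital angles yields lumped coefficients, which are then related to spherical harmonic coefficients order by order in small, independent least-squares systems. The transfer matrix involves inclination functions; a random placeholder is used here, so the sketch illustrates the data flow only and is not a working recovery.

```python
# Schematic of the Torus-approach data flow: gridded gradients on the torus ->
# 2-D FFT (lumped coefficients) -> one small least-squares system per order.
import numpy as np

rng = np.random.default_rng(1)
n_u, n_lam, n_deg = 360, 360, 60            # grid along the two orbital angles; max degree

grid = rng.standard_normal((n_u, n_lam))    # stand-in for gridded Vzz on the torus
lumped = np.fft.fft2(grid) / grid.size      # lumped (2-D Fourier) coefficients

for m in range(n_deg + 1):                  # block-diagonal structure: one system per order m
    y = lumped[: n_deg + 1, m]              # lumped coefficients involving order m (schematic)
    H = rng.standard_normal((y.size, n_deg + 1 - m))   # placeholder for the inclination-function matrix
    x_m, *_ = np.linalg.lstsq(H, y.real, rcond=None)   # small per-order least-squares solve
```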
NASA Astrophysics Data System (ADS)
Alothman, Abdulaziz; Elsaka, Basem
2015-03-01
The free-air gravity anomalies over Saudi Arabia (KSA) have been estimated from the final releases of GOCE-based global geopotential models (GGMs) and compared with the terrestrial gravity anomalies of 3554 sites. Two GGMs, EGM08 and EIGEN-6C3, have been applied. The free-air anomalies from the GOCE-based GGMs, ΔgGGM, have been calculated over the 3554 stations in the medium and short spectrum of gravity wavelengths of d/o 100, …, 250 (in steps of 10). The short spectrum has been compensated from d/o 101, …, 251 up to 2190 and 1949 using EGM08 and EIGEN-6C3 (i.e. ΔgGGM), respectively. The very short component was determined using a residual terrain modelling approach. Our findings show, first, that EGM08 is more reliable than EIGEN-6C3. Second, the GOCE-based GGMs provide similar results within the spectral wavelength band from d/o 100 to d/o 180. Beyond d/o 180 and up to d/o 250, we found that the GOCE-based TIM model releases provide substantial improvements within the spectral band from d/o 220 to d/o 250 with respect to the DIR releases. Third, the TIM_r5 model provides the smallest standard deviations (st. dev.) in terms of gravity anomalies.
NASA Astrophysics Data System (ADS)
Eshagh, Mehdi; Hussain, Matloob
2016-10-01
In this research, a modified form of the Vening Meinesz-Moritz (VMM) theory of isostasy for the second-order radial derivative of the gravitational potential, measured by the Gravity field and steady-state Ocean Circulation Explorer (GOCE), is developed for local Moho depth recovery. An integral equation is organised for inverting the GOCE data to compute a Moho model in combination with topographic/bathymetric heights of SRTM30, sediment and consolidated crystalline basement data, and the laterally varying density contrast model of CRUST1.0. A Moho model from EGM2008 to degree and order 180 is also computed based on the same principle for the purpose of comparison. In addition, we compare both of them with the three available seismic Moho models, two global and one regional, over the Indo-Pak region. Numerical results show that our GOCE-based Moho model is closer to all the seismic models than that of EGM2008. The model is closest to the regional one, with a standard deviation of 5.5 km and a root mean square error of 7.8 km, which is 2.3 km smaller than the corresponding value based on EGM2008.
Adolescents' Computer Mediated Learning and Influences on Inter-Personal Relationships
ERIC Educational Resources Information Center
Miloseva, Lence; Page, Tom; Lehtonen, Miika; Marelja, Jozefina; Thorsteinsson, Gisli
2010-01-01
This study reports the findings of several projects initiated at the Faculty of Education, Goce Delcev University, Stip, to investigate motivation skills, and more specifically the inter-personal relationships and resources that influence the learner's participation in the teaching/learning process in the context of online learning…
Evolution of the Power Processing Units Architecture for Electric Propulsion at CRISA
NASA Astrophysics Data System (ADS)
Palencia, J.; de la Cruz, F.; Wallace, N.
2008-09-01
Since 2002, the team formed by EADS Astrium CRISA, Astrium GmbH Friedrichshafen, and QinetiQ has participated in several flight programs where Electric Propulsion based on Kaufman-type ion thrusters is the baseline concept. In 2002, CRISA won the contract for the development of the Ion Propulsion Control Unit (IPCU) for GOCE. This unit, together with the T5 thruster by QinetiQ, provides near-perfect atmospheric drag compensation, offering thrust levels in the range of 1 to 20 mN. By the end of 2003, CRISA started the adaptation of the IPCU concept to the QinetiQ T6 ion thruster for the Alphabus program. This paper shows how the Power Processing Unit design evolved over time, including the current developments.
Geoid Determination Using GOCE-Based Models in Turkey
NASA Astrophysics Data System (ADS)
Serkan Işık, Mustafa; Erol, Bihter
2016-04-01
The maintenance of the vertical datum in tectonically active regions such as Turkey has become more of an issue. The distortions in the vertical datum due to geodynamic phenomena necessitate the realization of a geoid-based vertical datum. Height modernization studies for the transition to a geoid-based vertical datum definition, which would allow practical use of GNSS technologies to obtain orthometric heights in Turkey, have accelerated rapidly in recent years; within these efforts, on-going projects contribute to improving the quality and quantity of the terrestrial gravity dataset as well as to selecting the optimal computation algorithm to reach a precise geoid model in the territory. The assessment of different methodologies with varying input parameters and reference models is therefore essential in order to clarify the advantages of the algorithms in providing an optimal combination of different data sets in regional geoid modelling. Recently published GOCE/GRACE gravity field models show significant improvements in the medium frequencies. This study investigates the contribution of the recently released geopotential models, which include GOCE and GRACE data, to gravimetric geoid modelling in Turkish territory, specifically from the point of view of the least-squares modification of Stokes' formula (LSMS). The algorithm developed by the Royal Institute of Technology (KTH) adopts the least-squares modification of Stokes' kernel in order to provide an optimum combination of a spherical harmonic expansion model and terrestrial gravity data, and hence aims to mitigate the drawbacks that may stem from the handicaps (such as low accuracy, sparse distribution, etc.) of the terrestrial gravity data. The additive corrective terms accounting for the downward continuation, atmospheric and ellipsoidal effects are put forward as advantages of this algorithm over the conventional remove-restore method. The geoid models are assessed at thirty homogeneously distributed national network points in Turkey. The positional accuracy of the GNSS/levelling points (belonging to the Turkish National Fundamental GNSS Network, TUTGA) is reported as ±1.0 cm in the horizontal and ±1.5 cm in the vertical components. The orthometric heights of these benchmarks are computed via adjustment of the Turkish National Vertical Control Network (TUDKA). All releases of the direct (DIR), time-wise (TIM), space-wise (SPW) and Gravity Observation Combination (GOCO) models are evaluated using the spectral enhancement method (SEM). The DIR R5, TIM R5 and GOCO05S models, which show the best agreement with the GNSS/levelling data, are included in the study and their performance is compared with the EGM2008 model. In conclusion, the GOCE gravity field models perform at a level very close to that of EGM2008 when the same truncation degree is considered. The overall results reveal that the gravimetric geoid model computed using the DIR R5 model provides the best performance, at ±24.1 cm (without de-trending), though there is no significant improvement from the GOCE gravity field models in regional geoid determination based on the LSMS approach in Turkish territory.
NASA Astrophysics Data System (ADS)
Brockmann, J. M.; Schuh, W.-D.
2011-07-01
The estimation of the global Earth's gravity field, parametrized as a finite spherical harmonic series, is computationally demanding. The computational effort depends on the one hand on the maximal resolution of the spherical harmonic expansion (i.e. the number of parameters to be estimated) and on the other hand on the number of observations (which amounts to several millions for, e.g., observations from the GOCE satellite mission). To circumvent these restrictions, massively parallel software based on high-performance computing (HPC) libraries such as ScaLAPACK, PBLAS and BLACS was designed in the context of GOCE HPF WP6000 and the GOCO consortium. A prerequisite for the use of these libraries is that all matrices are block-cyclically distributed on a processor grid composed of a large number of (distributed-memory) computers. Using this set of standard HPC libraries has the benefit that, once the matrices are distributed across the computer cluster, a huge set of efficient and highly scalable linear algebra operations can be used.
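For reference, the block-cyclic distribution required by ScaLAPACK/PBLAS maps each global row or column index to an owning process and a local index; a minimal 1-D version of that map (applied independently to rows and columns in the 2-D case) looks like this.

```python
# Minimal sketch of the 1-D block-cyclic map underlying the 2-D ScaLAPACK
# distribution: blocks of size nb are dealt to processes in round-robin order.
def block_cyclic(g, nb, nprocs):
    """Return (owning process, local index) for 0-based global index g."""
    proc = (g // nb) % nprocs                        # which process owns this block
    local = (g // (nb * nprocs)) * nb + g % nb       # position within that process's storage
    return proc, local

# Example: 10 global indices, block size 2, 3 processes along this dimension.
print([block_cyclic(g, nb=2, nprocs=3) for g in range(10)])
```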
NASA Astrophysics Data System (ADS)
Zheng, Wei; Hsu, Hou-Tse; Zhong, Min; Yun, Mei-Juan
2012-10-01
The accuracy of the Earth's gravitational field measured by the Gravity field and steady-state Ocean Circulation Explorer (GOCE), up to degree 250, as influenced by the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij from satellite gravity gradiometry (SGG), is compared based on an analytical error model and on numerical simulation, respectively. First, new analytical error models of the cumulative geoid height, as influenced by the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij, are established. Up to degree 250, the GOCE cumulative geoid height error obtained from the radial gravity gradient Vzz is about 2.5 times higher than that obtained from the three-dimensional gravity gradient Vij. Second, the Earth's gravitational field from GOCE, complete up to degree 250, is recovered by numerical simulation using the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij, respectively. The results show that when the measurement error of the gravity gradient is 3 × 10^-12 s^-2, the cumulative geoid height errors using the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij are 12.319 cm and 9.295 cm at degree 250, respectively. The accuracy of the cumulative geoid height using the three-dimensional gravity gradient Vij is improved by 30%-40% on average compared with that using the radial gravity gradient Vzz up to degree 250. Finally, by mutual verification of the analytical error model and the numerical simulation, the accuracies of the recovered Earth's gravitational field based on the radial and the three-dimensional gravity gradients agree in order of magnitude. Therefore, it is feasible to develop in advance a radial cold-atom interferometric gradiometer with a measurement accuracy of 10^-13 to 10^-15 s^-2 for precisely producing a next-generation GOCE Follow-On Earth gravity field model with high spatial resolution.
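The cumulative geoid height errors quoted above follow from the per-degree error variances of the recovered coefficients; a minimal sketch of that accumulation, with placeholder error degree variances, is given below.

```python
import numpy as np

R = 6378136.3                                # m, reference radius

def cumulative_geoid_error(degree_error_variances, n_max):
    """Cumulative geoid height error up to n_max from per-degree error variances
    (dimensionless potential coefficients, degrees 2..n_max): sigma_N = R * sqrt(sum)."""
    d = np.asarray(degree_error_variances)[: n_max - 1]
    return R * np.sqrt(d.sum())

# Placeholder error degree variances for degrees 2..250 (illustrative only).
sigma2_n = 1e-22 * np.ones(249)
print(f"{cumulative_geoid_error(sigma2_n, 250) * 100:.2f} cm")
```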
Contribution of the GOCE gradiometer components to regional gravity solutions
NASA Astrophysics Data System (ADS)
Naeimi, Majid; Bouman, Johannes
2017-05-01
The contribution of the GOCE gravity gradients to regional gravity field solutions is investigated in this study. We employ radial basis functions to recover the gravity field on regional scales over the Amazon and the Himalayas as our test regions. In the first step, four individual solutions based on the more accurate gravity gradient components Txx, Tyy, Tzz and Txz are derived. The Tzz component gives a better solution than the other single-component solutions, despite Tzz being less accurate than Txx and Tyy. Furthermore, we determine five more solutions based on several selected combinations of the gravity gradient components, including a combined solution using all four gradient components. The Tzz and Tyy components are shown to be the main contributors in all combined solutions, whereas Txz adds the least value to the regional gravity solutions. We also investigate the contribution of the regularization term. We show that the contribution of the regularization decreases significantly as more gravity gradients are included. For the solution using all gravity gradients, the regularization term contributes about 5 per cent of the total solution. Finally, we demonstrate that, in our test areas, regional gravity modelling based on GOCE data provides a more reliable gravity signal at medium wavelengths than pre-GOCE global gravity field models such as EGM2008.
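One common way to quantify the kind of component and regularization contributions discussed above is a trace-based measure on the combined, regularized normal equations; the sketch below uses random toy design matrices and may differ from the authors' exact definition.

```python
# Regularized multi-component least-squares and a trace-based contribution
# measure: the contributions of all components plus the regularization sum to 1.
import numpy as np

rng = np.random.default_rng(2)
n_par, n_obs = 50, 400
components = {c: rng.standard_normal((n_obs, n_par)) for c in ("Txx", "Tyy", "Tzz", "Txz")}
lam = 10.0                                           # Tikhonov regularization parameter

N_total = lam * np.eye(n_par)                        # start with the regularization term
for A in components.values():
    N_total += A.T @ A                               # add each component's normal matrix

N_inv = np.linalg.inv(N_total)
contrib = {c: np.trace(N_inv @ (A.T @ A)) / n_par for c, A in components.items()}
contrib["regularization"] = lam * np.trace(N_inv) / n_par
print({k: round(v, 3) for k, v in contrib.items()})  # values sum to 1
```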
Acceleration Noise Considerations for Drag-free Satellite Geodesy Missions
NASA Astrophysics Data System (ADS)
Hong, S. H.; Conklin, J. W.
2016-12-01
The GRACE mission, which launched in 2002, opened a new era of satellite geodesy by providing monthly mass variation solutions with a spatial resolution of less than 200 km. GRACE proved the usefulness of a low-low satellite-to-satellite tracking formation. Analysis of the GRACE data showed that the K-band ranging system, which is used to measure the range between the two satellites, is the limiting factor for the precision of the solution. Consequently, the GRACE-FO mission, scheduled for launch in 2017, will continue the work of GRACE, but will also test a new laser ranging interferometer with higher precision than the K-band ranging system. Beyond GRACE-FO, drag-free systems are being considered for satellite geodesy missions. GOCE tested a drag-free attitude control system with a gravity gradiometer and showed improvements in acceleration noise compensation compared to the electrostatic accelerometers used in GRACE. However, a full drag-free control system with a gravitational reference sensor has not yet been applied to satellite geodesy missions. More recently, this type of drag-free system was used in LISA Pathfinder, launched in 2016, with an acceleration noise performance two orders of magnitude better than that of GOCE. We explore the effects of drag-free performance in satellite geodesy missions similar to GRACE-FO by applying three different residual acceleration noises from actual space missions: GRACE, GOCE and LISA Pathfinder. Our solutions are limited to degree-60 spherical harmonic coefficients with biweekly time resolution. Our analysis shows that a drag-free system with acceleration noise performance comparable to GOCE and LISA Pathfinder would greatly improve the accuracy of gravity solutions. In addition to these results, we also present the covariance shaping process used in the estimation. In the future, we plan to use actual acceleration noise data measured using the UF torsion pendulum. This apparatus is a ground facility at the University of Florida used to test the performance of precision inertial sensors. We also plan to evaluate the importance of acceleration noise when a second inclined pair of satellites is included in the analysis, following the work of Weise in 2012, which showed that two satellite pairs decrease aliasing errors.
Akerdi, Abdollah Gholami; Bahrami, S Hajir; Arami, Mokhtar; Pajootan, Elmira
2016-09-01
The textile industry consumes remarkable amounts of water during various operations. A significant portion of the water discharged to the environment is in the form of colored contaminants. The present research reports the photocatalytic degradation of an anionic dye effluent using TiO2 nanoparticles immobilized on graphene oxide (GO)-fabricated carbon electrodes. Acid Red 14 (AR 14) was used as the model compound. Graphene oxide nanosheets were synthesized from graphite powder using a modified Hummer's method. The nanosheets were characterized with field emission scanning electron microscope (FESEM) images, X-ray diffraction (XRD) and FTIR spectra. The GO nanosheets were deposited on a carbon electrode (GO-CE) by the electrochemical deposition (ECD) method and used as the catalyst bed. TiO2 nanoparticles were fixed on the bed (GO-CE-TiO2) by a thermal process. Photocatalytic processes were carried out using a 500 ml solution containing the dye in batch mode. Each photocatalytic treatment was carried out for 120 min. The effects of dye concentration (mg/L), solution pH, time (min) and TiO2 content (g/L) on the photocatalytic decolorization were investigated. Copyright © 2016 Elsevier Ltd. All rights reserved.
Space-Wise approach for airborne gravity data modelling
NASA Astrophysics Data System (ADS)
Sampietro, D.; Capponi, M.; Mansi, A. H.; Gatti, A.; Marchetti, P.; Sansò, F.
2017-05-01
Regional gravity field modelling by means of the remove-compute-restore procedure is nowadays widely applied in different contexts: it is the most used technique for regional gravimetric geoid determination, and it is also used in exploration geophysics to predict grids of gravity anomalies (Bouguer, free-air, isostatic, etc.), which are useful to understand and map geological structures in a specific region. Considering this last application, due to the required accuracy and resolution, airborne gravity observations are usually adopted. However, due to the relatively high acquisition velocity, presence of atmospheric turbulence, aircraft vibration, instrumental drift, etc., airborne data are usually contaminated by a very high observation error. For this reason, a proper procedure to filter the raw observations in both the low and high frequencies should be applied to recover valuable information. In this work, a software package to filter and grid raw airborne observations is presented: the proposed solution consists of a combination of an along-track Wiener filter and a classical least-squares collocation technique. Basically, the proposed procedure is an adaptation to airborne gravimetry of the space-wise approach developed by Politecnico di Milano to process data from the ESA satellite mission GOCE. Among the main differences with respect to the satellite application of this approach is the fact that, while in processing GOCE data the stochastic characteristics of the observation error can be considered a priori well known, in airborne gravimetry, due to the complex environment in which the observations are acquired, these characteristics are unknown and must be retrieved from the dataset itself. The presented solution is suited for airborne data analysis, allowing gravity observations to be filtered and gridded quickly and easily. Some innovative theoretical aspects, focusing in particular on the covariance modelling, are presented too. In the end, the goodness of the procedure is evaluated by means of a test on real data, recovering the gravitational signal with a predicted accuracy of about 0.4 mGal.
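A minimal along-track Wiener filter sketch in the spirit of the first processing step above: each frequency is attenuated by S_signal/(S_signal + S_noise). The PSD models, sampling rate and test signal are placeholders, not the mission or survey values.

```python
import numpy as np

def wiener_filter(track, fs, signal_psd, noise_psd):
    """Filter one along-track series with a frequency-domain Wiener gain."""
    f = np.fft.rfftfreq(track.size, d=1.0 / fs)
    gain = signal_psd(f) / (signal_psd(f) + noise_psd(f))   # S_s / (S_s + S_n)
    return np.fft.irfft(np.fft.rfft(track) * gain, n=track.size)

fs = 1.0                                         # Hz, assumed along-track sampling rate
t = np.arange(4096) / fs
signal = np.sin(2 * np.pi * 0.002 * t)           # slowly varying "gravity" signal
noisy = signal + 0.5 * np.random.default_rng(3).standard_normal(t.size)

filtered = wiener_filter(noisy, fs,
                         signal_psd=lambda f: 1.0 / (1.0 + (f / 0.005) ** 4),
                         noise_psd=lambda f: 0.25 * np.ones_like(f))
print(np.std(noisy - signal), np.std(filtered - signal))   # error before vs after filtering
```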
NASA Astrophysics Data System (ADS)
Erol, Serdar; Serkan Isık, Mustafa; Erol, Bihter
2016-04-01
Data from the recent Earth gravity field satellite missions have led to significant improvements in Global Geopotential Models in terms of both accuracy and resolution. However, the improvement in accuracy is not the same everywhere on Earth, and therefore quantifying the level of improvement locally using independent data is necessary. The validation of the level-3 products from the gravity field satellite missions, independently from the estimation procedures of these products, is possible using various data sets, such as terrestrial gravity observations, astrogeodetic vertical deflections, GPS/levelling data and the stationary sea surface topography. Quantifying the quality of the gravity field functionals from recent products is important for regional geoid modelling based on the fusion of satellite and terrestrial data with an optimal algorithm, besides statistically reporting the improvement rates as a function of location. In the validations, the errors and systematic differences between the data and the varying spectral content of the compared signals should be considered in order to have comparable results. In this manner, this study compares the performance of wavelet decomposition and spectral enhancement techniques in the validation of GOCE/GRACE-based Earth gravity field models using GPS/levelling and terrestrial gravity data in Turkey. The terrestrial validation data are filtered using the wavelet decomposition technique, and the numerical results from varying levels of decomposition are compared with results derived using the spectral enhancement approach with the contribution of an ultra-high-resolution Earth gravity field model. The tests include the GO-DIR-R5, GO-TIM-R5, GOCO05S, EIGEN-6C4 and EGM2008 global models. The conclusions discuss the strengths and drawbacks of both concepts, as well as reporting the performance of the tested gravity field models with an estimate of their contribution to modelling the geoid in Turkish territory.
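To illustrate the wavelet-decomposition filtering compared above, the sketch below (using the PyWavelets package) decomposes a signal, zeroes the finest detail coefficients and reconstructs a smoothed version; the wavelet family, decomposition level and number of discarded scales are arbitrary choices for illustration, not the study's settings.

```python
import numpy as np
import pywt

rng = np.random.default_rng(4)
# Synthetic "terrestrial" profile: a smooth trend plus fine-scale noise.
x = np.cumsum(rng.standard_normal(2048)) * 0.01 + rng.standard_normal(2048) * 0.2

coeffs = pywt.wavedec(x, "db4", level=6)                 # [cA6, cD6, cD5, ..., cD1]
for k in range(len(coeffs) - 3, len(coeffs)):            # zero the three finest detail levels
    coeffs[k] = np.zeros_like(coeffs[k])
x_smooth = pywt.waverec(coeffs, "db4")[: x.size]         # band-limited reconstruction

print(np.std(x), np.std(x - x_smooth))                   # variance carried by the fine scales
```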
NASA Astrophysics Data System (ADS)
Doornbos, E.; Ridley, A. J.; Cnossen, I.; Aruliah, A. L.; Foerster, M.
2015-12-01
Thermospheric neutral winds play an important part in the coupled thermosphere-ionosphere system at high latitudes. Neutral wind speeds have been derived from the CHAMP and GOCE satellites, which carried precise accelerometers in low Earth orbits. Due to the need to simultaneously determine thermosphere neutral density from the accelerometer in-track measurements, only information on the wind component in the cross-track direction, perpendicular to the flight direction, can be derived. However, contrary to ground-based Fabry-Perot interferometer and scanning Doppler imager observations of the thermosphere wind, these satellite-based measurements provide equally distributed coverage over both hemispheres. The sampling of seasonal and local time variations depends on the precession rate of the satellite's orbital plane, with CHAMP covering about 28 cycles of 24-hour local solar time coverage during its 10-year mission (2000-2010), while the near sun-synchronous orbit of GOCE resulted in a much more limited local time coverage ranging from 6:20 to 8:00 (am and pm) during a science mission duration of 4 years (2009-2013). For this study, the wind data from both CHAMP and GOCE have been analysed in terms of seasonal variations and in geographic and geomagnetic local solar time and latitude coordinates, in order to make statistical comparisons for both the Northern and Southern polar areas. The wind data from both satellites were studied independently and in combination, in order to investigate how the strengths and weaknesses of the instruments and orbit parameters of these missions affect investigations of interhemispheric differences. Finally, the data have been compared with results from coupled ionosphere-thermosphere models and from ground-based FPI and SDI measurements.
NASA Astrophysics Data System (ADS)
Alothman, A. O.; Elsaka, B.
2015-12-01
A new gravimetric quasi-geoid, known as KSAG01, has been developed recently by the Remove-Compute-Restore (RCR) technique, as provided by the GRAVSOFT software, using gravimetric free-air anomalies. The terrestrial gravity data used in these computations are 1145 gravity anomalies observed by ARAMCO (Saudi Arabian Oil Company) and 2470 gravity measurements from BGI (Bureau Gravimétrique International). The computations were carried out implementing the least-squares collocation method through the RCR technique. In addition to the terrestrial gravity observations, the GOCE-based combined model EIGEN-6C4 and the global gravity model EGM2008 have been utilized in the computations. The long-, medium- and short-wavelength spectrum of the height anomalies was compensated from the EIGEN-6C4 and EGM2008 geoid models truncated at degree and order (d/o) 2190. The KSAG01 geoid covers 100 per cent of the Kingdom, with geoid heights ranging from -37.513 m in the southeast to 23.183 m in the northwest of the country. The accuracy of the geoid is governed by the accuracy, distribution, and spacing of the observations. The standard deviation of the predicted geoid heights is 0.115 m, with maximum errors of about 0.612 m. The RMS of the geoid noise ranges from 0.019 m to 0.04 m. Comparison of the predicted gravimetric geoid with EGM, GOCE, and GPS/levelling geoids reveals a considerable improvement of the quasi-geoid heights over Saudi Arabia.
Modelling airborne gravity data by means of adapted Space-Wise approach
NASA Astrophysics Data System (ADS)
Sampietro, Daniele; Capponi, Martina; Hamdi Mansi, Ahmed; Gatti, Andrea
2017-04-01
Regional gravity field modelling by means of the remove-restore procedure is nowadays widely applied to predict grids of gravity anomalies (Bouguer, free-air, isostatic, etc.) in gravimetric geoid determination as well as in exploration geophysics. Considering this last application, due to the required accuracy and resolution, airborne gravity observations are generally adopted. However, due to the relatively high acquisition velocity, presence of atmospheric turbulence, aircraft vibration, instrumental drift, etc., airborne data are contaminated by a very high observation error. For this reason, a proper procedure to filter the raw observations in both the low and high frequencies should be applied to recover valuable information. In this work, a procedure to predict a grid or a set of filtered along-track gravity anomalies by merging a GGM and an airborne dataset is presented. The proposed algorithm, like the space-wise approach developed by Politecnico di Milano in the framework of the GOCE data analysis, is based on a combination of an along-track Wiener filter and a least-squares collocation adjustment, and properly considers the different altitudes of the gravity observations. Among the main differences with respect to the satellite application of the space-wise approach is the fact that, while in processing GOCE data the stochastic characteristics of the observation error can be considered a priori well known, in airborne gravimetry, due to the complex environment in which the observations are acquired, these characteristics are unknown and must be retrieved from the dataset itself. Some innovative theoretical aspects, focusing in particular on the covariance modelling, are presented too. In the end, the goodness of the procedure is evaluated by means of a test on real data, recovering the gravitational signal with a predicted accuracy of about 0.25 mGal.
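A minimal least-squares collocation sketch for the gridding step mentioned above: scattered noisy values are propagated to grid nodes through covariance matrices built from an isotropic covariance model. The Gaussian covariance and its parameters are illustrative and not the adapted space-wise covariances of the paper.

```python
import numpy as np

def gaussian_cov(d, variance=100.0, corr_len=20.0):        # mGal^2, km (illustrative)
    return variance * np.exp(-(d / corr_len) ** 2)

rng = np.random.default_rng(6)
xy_obs = rng.uniform(0.0, 100.0, size=(200, 2))            # km, scattered along-track points
y_obs = np.sin(xy_obs[:, 0] / 15.0) * 10.0 + rng.standard_normal(200)   # noisy anomalies

# Regular 21 x 21 output grid covering the same area.
xy_grd = np.stack(np.meshgrid(np.linspace(0, 100, 21),
                              np.linspace(0, 100, 21)), axis=-1).reshape(-1, 2)

d_oo = np.linalg.norm(xy_obs[:, None] - xy_obs[None, :], axis=-1)   # obs-obs distances
d_go = np.linalg.norm(xy_grd[:, None] - xy_obs[None, :], axis=-1)   # grid-obs distances

C_oo = gaussian_cov(d_oo) + np.eye(200) * 1.0               # signal + observation noise variance
C_go = gaussian_cov(d_go)
y_grid = C_go @ np.linalg.solve(C_oo, y_obs)                 # LSC prediction at the grid nodes
print(y_grid.shape)
```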
GOCE observations for Mineral exploration in Africa and across continents
NASA Astrophysics Data System (ADS)
Braitenberg, Carla
2014-05-01
The gravity anomaly field over the whole Earth obtained by the GOCE satellite is a revolutionary tool to reveal geologic information on a continental scale for the large areas where conventional gravity measurements have yet to be made (e.g. Alvarez et al., 2012). It is, however, necessary to isolate the near-surface geologic signal from the contributions of thickness variations in the crust and lithosphere and the isostatic compensation of surface relief (e.g. Mariani et al., 2013). Here Africa is studied with particular emphasis on selected geological features which are expected to appear as density inhomogeneities. These include cratons and fold belts in the Precambrian basement, the overlying sedimentary basins and magmatism, as well as the continental margins. Regression analysis between gravity and topography shows coefficients that are consistently positive for the free-air gravity anomaly and negative for the Bouguer gravity anomaly (Braitenberg et al., 2013; 2014). The error and scatter of the regression are smallest in oceanic areas, where it is a possible tool for identifying changes in crustal type. The regression analysis allows the large gradient in the Bouguer anomaly signal across continental margins to be removed. After subtracting the predicted effect of known topography from the original Bouguer anomaly field, the residual field shows a continent-wide pattern of anomalies that can be attributed to regional geological structures. A few of these are highlighted, such as those representing Karoo magmatism, the Kibalian foldbelt, the Zimbabwe Craton, the Cameroon and Tibesti volcanic deposits, the Benue Trough and the Luangwa Rift. A reconstruction of the pre-breakup position of Africa, South and North America is made for the residual GOCE gravity field, obtaining today's gravity field of the plates forming West Gondwana. The reconstruction allows the positive and negative anomalies to be compared across the continental fragments, and so helps identify common geologic units that extend across both the now-separate continents. Tracing the geologic units is important for mineral exploration, which is demonstrated with the analysis of correlations of the gravity signal with selected classes of mineral occurrences, for instance those associated with greenstone belts. References: Alvarez, O., Gimenez, M., Braitenberg, C., Folguera, A. (2012). GOCE satellite derived gravity and gravity gradient corrected for topographic effect in the South Central Andes region. Geophysical Journal International, 190, 941-959, doi:10.1111/j.1365-246X.2012.05556.x; Braitenberg, C., Mariani, P., De Min, A. (2013). The European Alps and nearby orogenic belts sensed by GOCE, Bollettino di Geofisica Teorica ed Applicata, doi:10.4430/bgta0105; Braitenberg, C. (2014). Exploration of tectonic structures with GOCE in Africa and across-continents, J. of Applied Earth Observation and Geoinformation (in review); Mariani, P., Braitenberg, C., Ussami, N. (2013). Explaining the thick crust in the Paraná basin, Brazil, with satellite GOCE gravity observations. Journal of South American Earth Sciences, 45, 209-223, doi:10.1016/j.jsames.2013.03.008.
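The regression step described above can be sketched as a simple linear fit of the Bouguer anomaly against topographic height, whose prediction is then removed to leave the geology-related residual; the arrays below are synthetic placeholders for collocated grids.

```python
import numpy as np

rng = np.random.default_rng(5)
h = rng.uniform(0.0, 3000.0, size=5000)                    # topographic height, m (placeholder)
bouguer = -0.1 * h + 20.0 * rng.standard_normal(h.size)    # mGal, toy Bouguer anomaly

slope, intercept = np.polyfit(h, bouguer, deg=1)           # regression coefficients
residual = bouguer - (slope * h + intercept)               # residual attributed to geology
print(f"slope = {slope:.3f} mGal/m, residual std = {residual.std():.1f} mGal")
```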
Thermosphere Response to Geomagnetic Variability during Solar Minimum Conditions
NASA Astrophysics Data System (ADS)
Forbes, Jeffrey; Gasperini, Federico; Zhang, Xiaoli; Doornbos, Eelco; Bruinsma, Sean; Haeusler, Kathrin; Hagan, Maura
2015-04-01
The response of thermosphere mass density to variable geomagnetic activity at solar minimum is revealed as a function of height utilizing accelerometer data from GRACE near 480 km, CHAMP near 320 km, and GOCE near 260 km during the period October-December, 2009. The GOCE data at 260 km, and to some degree the CHAMP measurements at 320 km, reveal the interesting feature that the response maximum occurs at low latitudes, rather than at high latitudes where the geomagnetic energy input is presumed to be deposited. The latitude distribution of the response is opposite to what one might expect based on thermal expansion and/or increase in mean molecular weight due to vertical transport of N2 at high latitudes. We speculate that what is observed reflects the consequences of an equatorward meridional circulation with downward motion and compressional heating at low latitudes. A numerical simulation using the National Center for Atmospheric Research (NCAR) Thermosphere-Ionosphere-Mesosphere Electrodynamics General Circulation Model (TIME-GCM) is used to assist with this diagnosis. At 480 km GRACE reveals maximum density responses at high southern (winter) latitudes, consistent with recent interpretations in terms of compositional versus temperature effects near the oxygen-helium transition altitude during low solar activity.
Daily variation of diurnal thermal tides from CHAMP and GOCE accelerometer measurements
NASA Astrophysics Data System (ADS)
Gasperini, Federico; Doornbos, Eelco; Forbes, Jeffrey M.; Bruinsma, Sean; Haeusler, Kathrin; Hagan, Maura
Daily migrating and non-migrating diurnal tides in exospheric temperature derived from simultaneous accelerometer measurements on CHAMP (near 300 km) and GOCE (near 260 km) are studied for the intervals November-December 2009 and March-April 2010. Neutral densities are converted to exospheric temperatures using the NRLMSISE-00 empirical model and by iterating on a convenient parameter (e.g. F10.7 solar flux). This methodology is validated using NCAR TIME-GCM simulations for this period as a mock data set, and results are compared to an approach where differences between ascending and descending orbital measurements are used to estimate diurnal tides for CHAMP and GOCE separately. The tidal components analyzed are the westward-propagating components with zonal wave numbers s=1 and s=2 (DW1 and DW2) and the eastward-propagating components with s=-2 and s=-3 (DE2 and DE3). Spectral analyses are used to reveal potential planetary wave modulations of the daily tidal amplitudes.
GOCE: The first seismometer in orbit around the Earth
NASA Astrophysics Data System (ADS)
Garcia, Raphael F.; Bruinsma, Sean; Lognonné, Philippe; Doornbos, Eelco; Cachoux, Florian
2013-03-01
The first in situ sounding of a post-seismic infrasound wavefront is presented, using data from the GOCE mission. The atmospheric infrasounds following the great Tohoku earthquake (on 11 March 2011) induce variations of air density and of the vertical acceleration of the GOCE platform. These signals are detected at two positions along the GOCE orbit corresponding to a crossing and a doubling of the infrasonic wavefront created by seismic surface waves. Perturbations of up to 11% in air density and 1.35 × 10^-7 m/s^2 in vertical acceleration are observed and modeled with two different solid-atmosphere coupling codes. These perturbations are due to acoustic waves creating vertical velocities of up to 130 m/s. Amplitudes and arrival times of these perturbations are reproduced within a factor of 2 and within a 60 s time window, respectively. The waveforms are in good agreement with the observed data. The ratio of vertical acceleration to air density perturbation is higher for these acoustic waves than for gravity waves. Combining these two pieces of information offers a new way to distinguish between these two wave types. This new type of data is a benchmark for models of solid-atmosphere coupling. The amplitude and frequency content constrain the infrasound attenuation related to atmospheric viscosity and thermal conductivity. Observed time shifts between data and synthetics are ascribed to lateral variations of the seismic and atmospheric sound velocities and to the influence of atmospheric winds. These effects should be included in future modeling. This validation of our modeling tools allows future observation projects to be specified more precisely.
Ground Based Investigation of Electrostatic Accelerometer in HUST
NASA Astrophysics Data System (ADS)
Bai, Y.; Zhou, Z.
2013-12-01
High-precision electrostatic accelerometers with six degrees of freedom (DOF) acceleration measurement were successfully used in the CHAMP, GRACE and GOCE missions to measure the Earth's gravity field. In our group, a space inertial sensor based on a capacitance transducer and electrostatic control techniques has been investigated for tests of the equivalence principle (TEPO), searching for non-Newtonian forces in the micrometer range, and satellite Earth's gravity field recovery. The key techniques, a capacitive position sensor with a noise level of 2×10^-7 pF/Hz^(1/2) and a μV/Hz^(1/2)-level electrostatic actuator, have been developed, and all six servo loop controls are realized with a discrete PID algorithm in an FPGA device. For testing on the ground, in order to compensate for the 1 g Earth's gravity, a fiber torsion pendulum facility is adopted to measure parameters of the electrostatically controlled inertial sensor such as the resolution, the electrostatic stiffness and the cross-coupling between different DOFs. A simple short-distance double-capsule drop facility with a valid free-fall duration of about 0.5 s has been set up in our lab for free-fall tests of the engineering model, which can directly verify the function of the six-DOF control. Meanwhile, a high-voltage suspension method is also realized, and preliminary results show that the acceleration noise of the horizontal axis is at the level of 10^-8 m/s^2/Hz^(1/2), limited mainly by the seismic noise. References: [1] Fen Gao, Ze-Bing Zhou, Jun Luo, Feasibility for Testing the Equivalence Principle with Optical Readout in Space, Chin. Phys. Lett. 28(8) (2011) 080401. [2] Z. Zhu, Z. B. Zhou, L. Cai, Y. Z. Bai, J. Luo, Electrostatic gravity gradiometer design for the advanced GOCE mission, Adv. Sp. Res. 51 (2013) 2269-2276. [3] Z. B. Zhou, L. Liu, H. B. Tu, Y. Z. Bai, J. Luo, Seismic noise limit for ground-based performance measurements of an inertial sensor using a torsion balance, Class. Quantum Grav. 27 (2010) 175012. [4] H. B. Tu, Y. Z. Bai, Z. B. Zhou, L. Liu, L. Cai, and J. Luo, Performance measurements of an inertial sensor with a two-stage controlled torsion pendulum, Class. Quantum Grav. 27 (2010) 205016.
Global Gravity Field Determination by Combination of terrestrial and Satellite Gravity Data
NASA Astrophysics Data System (ADS)
Fecher, T.; Pail, R.; Gruber, T.
2011-12-01
A multitude of impressive results document the success of the satellite gravity field mission GOCE, with a wide field of applications in geodesy, geophysics and oceanography. The high performance of GOCE gravity field models can be further improved by combination with GRACE data, which contribute the long-wavelength signal content of the gravity field with very high accuracy. Examples of such a consistent combination of satellite gravity data are the satellite-only models GOCO01S and GOCO02S. However, only the further combination with terrestrial and altimetric gravity data makes it possible to expand gravity field models up to very high spherical harmonic degrees and thus to achieve a spatial resolution down to 20-30 km. First numerical studies for high-resolution global gravity field models combining GOCE, GRACE and terrestrial/altimetric data on the basis of the DTU10 model have already been presented. Computations up to degree/order 600, based on full normal equation systems to preserve the full variance-covariance information, which results mainly from different weights of the individual terrestrial/altimetric data sets, have been successfully performed. We could show that such large normal equation systems (degree/order 600 corresponds to a memory demand of almost 1 TByte), which represent an immense computational challenge since computation time and memory requirements put high demands on computational resources, can be handled. The DTU10 model includes gravity anomalies computed from the global model EGM08 in continental areas. Therefore, the main focus of this presentation lies on the computation of high-resolution combined gravity field models based on real terrestrial gravity anomaly data sets. This is a challenge due to the inconsistency of these data sets, which also include systematic error components, but it is a further step towards a truly independent gravity field model. This contribution will present our recent developments and progress in using independent data sets in certain land areas, which are combined with DTU10 in the ocean areas as well as with satellite gravity data. Investigations have been made concerning the preparation and optimum weighting of the different data sources. The results, which should be a major step towards a GOCO-C model, will be validated using external gravity field data and by applying different validation methods.
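The quoted memory figure can be checked quickly: a degree/order 600 expansion has roughly 361,000 coefficients, and a full double-precision normal equation matrix of that size occupies close to 1 TByte (storing only one triangle would halve this).

```python
# Quick consistency check of the memory demand for a full normal equation
# matrix of a degree/order 600 spherical harmonic expansion.
n_max = 600
n_par = (n_max + 1) ** 2 - 4          # coefficients from degree 2 upwards
bytes_full = n_par ** 2 * 8           # dense square matrix, 8 bytes per double
print(n_par, bytes_full / 1024 ** 4)  # ~361,197 parameters, ~0.95 TiB
```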
NASA Astrophysics Data System (ADS)
Melachroinos, S. A.; Biancale, R.; Menard, Y.; Sarrailh, M.
2008-12-01
The Drake campaign, which took place from 14 January to 8 February 2006, was a very successful mission in collecting a wide range of GPS and marine gravity data all along JASON altimetry ground track no. 104. The same campaign will be repeated in 2009 along JASON-2 ground tracks 028 and 104. The Drake Passage (DP) chokepoint is not only well suited geographically, as the Antarctic Circumpolar Current (ACC) is constricted to its narrowest extent of 700 km, but observations and models also suggest that dynamical balances are particularly effective in this area. Furthermore, the space geodesy observations and products provided by several altimetry missions (the currently operating ENVISAT, JASON-1 and -2, GFO and ERS, and others planned for the future such as AltiKa and SWOT) require cross comparison with independent geodetic techniques at the DP. The current experiment comprises a kinematic GPS and marine gravimetry Cal/Val geodetic approach, and it aims to: validate, with respect to altimetry data and surface models, a kinematic high-frequency GPS technique for measuring sea state and sea surface height (SSH); compare the GPS SSH profiles with altimetry mean dynamic topography (MDT) and mean sea surface (MSS) models; and give recommendations for future "offshore" Cal/Val activities on the ground tracks of altimeter satellites such as JASON-2, GFO and AltiKa using GNSS technology. The GPS observations are collected from GPS antennas installed on a wave-rider buoy, aboard the R/V "Polarstern", and from continuous geodetic reference stations in the proximity. We also analyse problems related to the ship's attitude variations in roll, pitch and yaw and a way to correct them. We also give emphasis to the impact of the ship's acceleration profiles on the so-called "squat effect" and ways to deal with it. The project will in particular benefit the GOCE mission by proposing to integrate GOCE in the ocean circulation study and by validating GOCE products with our independent geodetic data set. The high-rate GPS SSH solutions are derived using two different GPS kinematic software packages, GINS (CNES) and TRACK (MIT).
NASA Astrophysics Data System (ADS)
Papanikolaou, T. D.; Papadopoulos, N.
2015-06-01
The present study aims at the validation of global gravity field models through numerical investigation of gravity field functionals based on spherical harmonic synthesis of the geopotential models and the analysis of terrestrial data. We examine gravity models produced according to the latest approaches for gravity field recovery based on the principles of the Gravity field and steady-state Ocean Circulation Explorer (GOCE) and Gravity Recovery And Climate Experiment (GRACE) satellite missions. Furthermore, we evaluate the overall spectrum of the ultra-high-degree combined gravity models EGM2008 and EIGEN-6C3stat. The terrestrial data consist of gravity and collocated GPS/levelling data in the overall Hellenic region. The software presented here implements the algorithm of spherical harmonic synthesis in a degree-wise cumulative sense. This approach can quantify the band-limited performance of the individual models by monitoring the degree-wise computed functionals against the terrestrial data. The degree-wise analysis performed yields insight into the short wavelengths of the Earth's gravity field as these are expressed by the high-degree harmonics.
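The degree-wise cumulative idea can be illustrated with a short sketch: truncate the synthesized functional at successive maximum degrees and compare each truncated version against the terrestrial values. In the sketch below the per-degree contributions are random stand-ins for a real spherical harmonic synthesis, and the chosen degrees only mirror the typical evaluation caps.

```python
import numpy as np

# Minimal sketch of a degree-wise cumulative evaluation: a functional (e.g. height
# anomaly) truncated at each degree n is compared against terrestrial values at the
# same points. `contribution_per_degree` is a synthetic stand-in for the real
# spherical harmonic synthesis performed by the software described in the abstract.

rng = np.random.default_rng(0)
n_points, n_max = 50, 240
scales = (1.0 / np.arange(2, n_max + 1))[:, None]          # decaying per-degree power
contribution_per_degree = rng.normal(scale=scales, size=(n_max - 1, n_points))
terrestrial = contribution_per_degree.sum(axis=0) + rng.normal(scale=0.05, size=n_points)

cumulative = np.cumsum(contribution_per_degree, axis=0)    # synthesis up to degree n
for n in (150, 240):
    diff = terrestrial - cumulative[n - 2]                 # row 0 corresponds to degree 2
    print(f"truncation degree {n}: std of differences = {diff.std():.3f}")
```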
Hierarchical matrices implemented into the boundary integral approaches for gravity field modelling
NASA Astrophysics Data System (ADS)
Čunderlík, Róbert; Vipiana, Francesca
2017-04-01
Boundary integral approaches applied to gravity field modelling have recently been developed to solve the geodetic boundary value problems numerically, or to process satellite observations, e.g. from the GOCE satellite mission. In order to obtain numerical solutions of "cm-level" accuracy, such approaches require a very refined level of discretization or resolution. This leads to enormous memory requirements that need to be reduced. An implementation of Hierarchical Matrices (H-matrices) can significantly reduce the numerical complexity of these approaches. The main idea of the H-matrices is an approximation of the entire system matrix, which is split into a family of submatrices. Large submatrices are stored in factorized representation, while small submatrices are stored in standard representation. This allows memory requirements to be reduced significantly while improving efficiency. The poster presents our preliminary results of implementing the H-matrices into the existing boundary integral approaches based on the boundary element method or the method of fundamental solutions.
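The core saving behind the factorized representation can be demonstrated on a single off-diagonal block: a smooth kernel between well-separated point clusters admits a low-rank approximation that stores far fewer numbers than the dense block. The kernel and rank below are illustrative assumptions, not the authors' boundary-element formulation.

```python
import numpy as np

# Minimal sketch of the idea behind H-matrices: store a large, smooth off-diagonal
# block in factorized (low-rank) form instead of as a dense matrix.

def low_rank_factor(block, rank):
    """Truncated SVD factorization: block ~ U @ V, storing far fewer numbers."""
    u, s, vt = np.linalg.svd(block, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank]

# A smooth 1/distance kernel between two well-separated point clusters.
x = np.linspace(0.0, 1.0, 400)
y = np.linspace(10.0, 11.0, 400)
block = 1.0 / np.abs(x[:, None] - y[None, :])

U, V = low_rank_factor(block, rank=6)
err = np.linalg.norm(block - U @ V) / np.linalg.norm(block)
print(f"relative error: {err:.1e}")
print(f"storage: dense {block.size:,} values vs factorized {U.size + V.size:,}")
```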
Cold Atom Interferometers Used In Space (CAIUS) for Measuring the Earth's Gravity Field
NASA Astrophysics Data System (ADS)
Carraz, O.; Luca, M.; Siemes, C.; Haagmans, R.; Silvestrin, P.
2016-12-01
In the past decades, it has been shown that atomic quantum sensors are a newly emerging technology that can be used for measuring the Earth's gravity field. There are two ways of making use of this technology: one is a gravity gradiometer concept and the other is a low-low satellite-to-satellite ranging concept. Whereas classical accelerometers typically suffer from high noise at low frequencies, cold atom interferometers are highly accurate over the entire frequency range. We recently proposed a concept using cold atom interferometers to measure all diagonal elements of the gravity gradient tensor and the full spacecraft angular velocity in order to achieve better performance than the GOCE gradiometer over a larger part of the spectrum, with the ultimate goal of determining the fine structures of the gravity field better than today. This concept relies on a high common-mode rejection, which relaxes the drag-free control compared to the GOCE mission, and benefits from a long interaction time with the free-falling clouds of atoms due to the microgravity environment in space, as opposed to the 1-g environment on the ground. Another concept is also being studied in the frame of NGGM, which relies on the hybridization of quantum and classical techniques to improve the performance of accelerometers. This could be achieved as it is realized in frequency measurements, where quartz oscillators are phase-locked to atomic or optical clocks. This technique could correct the spectrally coloured noise of the electrostatic accelerometers at the lower frequencies. In both cases, the estimation of the Earth gravity field model from the instruments has to be evaluated taking into account different system parameters such as attitude control, altitude of the satellite, time duration of the mission, etc. Miniaturization, lower power consumption and raising the Technology Readiness Level are the key engineering challenges that have to be faced for these space quantum technologies.
NASA Astrophysics Data System (ADS)
Douch, Karim; Müller, Jürgen; Heinzel, Gerhard; Wu, Hu
2017-04-01
The successful GRACE mission and its far-reaching benefits have highlighted the interest in continuing and extending the mapping of the Earth's time-variable gravitational field with follow-on missions and, ideally, a higher spatio-temporal resolution. Here, we would like to put forward satellite gravitational gradiometry as an alternative solution to satellite-to-satellite tracking for future missions. Besides the higher sensitivity to smaller scales compared to GRACE-like missions, a gradiometry mission would only require one satellite and would provide a direct estimation of a functional of the gravitational field. GOCE, the only gradiometry mission launched so far, was not sensitive enough to map the time-variable part of the gravity field. However, the unprecedented precision of the state-of-the-art optical metrology system on board the LISA Pathfinder satellite has opened the way to higher-performance space inertial sensors. We will therefore examine whether it is technically possible to go beyond the GOCE performance and quantify to what extent the time-variable gravitational field could be determined. First, we derive the requirements on the knowledge of the attitude and position of the satellite and on the measured gradients in terms of sensitivity and calibration accuracy for a typical low repeat orbit. We conclude in particular that a noise level smaller than 0.1 mE/√Hz is required in the measurement bandwidth [5×10-4, 10-2] Hz in order to be sensitive to the time-variable gravity signal. We then introduce the design and characteristics of the new gradiometer concept and give an assessment of its noise budget. Contrary to the GOCE electrostatic gradiometer, the position of the test mass in the accelerometer is measured here by laser interferometry rather than by a capacitive readout system, which improves the overall measurement chain. Finally, the first results of a performance analysis carried out with an end-to-end simulator are discussed and compared to the previously defined requirements.
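For orientation, the quoted requirement can be translated into an in-band RMS gradient noise by integrating the amplitude spectral density over the measurement bandwidth; the sketch below assumes a flat (white) spectrum in band, which is a simplification of the actual noise budget discussed above.

```python
import numpy as np

# Quick, hedged illustration of the quoted requirement: a white noise floor of
# 0.1 mE/sqrt(Hz) integrated over the measurement band [5e-4, 1e-2] Hz gives the
# RMS gradient noise in that band (1 E = 1 Eotvos = 1e-9 s^-2).
asd = 0.1e-3 * 1e-9                 # 0.1 mE/sqrt(Hz) expressed in s^-2/sqrt(Hz)
f_lo, f_hi = 5e-4, 1e-2             # measurement bandwidth in Hz
rms = asd * np.sqrt(f_hi - f_lo)    # assumes a flat (white) spectrum in band
print(f"in-band RMS gradient noise ~ {rms:.2e} s^-2  ({rms / 1e-9 * 1e3:.3f} mE)")
```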
GOCE Re-Entry Predictions for the Italian Civil Protection Authorities
NASA Astrophysics Data System (ADS)
Pardini, Carmen; Anselmo, Luciano
2015-03-01
The uncommon nature of the GOCE reentry campaign, combining an uncontrolled orbital decay with a finely controlled attitude along the atmospheric drag direction, made the reentry predictions for this satellite an interesting case study, especially because nobody was able to say a priori if and when the attitude control would fail, leading to unrestrained tumbling and a sudden variation of the orbital decay rate. As in previous cases, ISTI/CNR was in charge of reentry predictions for the Italian civil protection authorities, also monitoring the satellite decay in the frame of an international reentry campaign promoted by the Inter-Agency Space Debris Coordination Committee (IADC). Due to the peculiar nature of the GOCE reentry, the definition of reliable uncertainty windows was not easy, especially considering the critical use of this information for civil protection evaluations. However, after an initial period of testing and analysis, reasonable and conservative criteria were elaborated and applied, with good and consistent results through the end of the reentry campaign. In the last three days of flight, reentries were simulated over Italy to obtain quite accurate ground tracks, debris swaths and airspace crossing time windows associated with the critical passes over the national territory still included in the global uncertainty windows.
The Use of GOCE/GRACE Information in the Latest NGS xGeoid15 Model for the USA
NASA Astrophysics Data System (ADS)
Holmes, S. A.; Li, X.; Youngman, M.
2015-12-01
The U.S. National Geodetic Survey [NGS], through its Gravity for the Redefinition of the American Vertical Datum [GRAV-D] program, is flying airborne gravity surveys over the USA and its territories. By 2022, NGS intends that all orthometric heights in the USA will be determined in the field using a reliable national gravimetric geoid model to transform from geodetic heights obtained from GPS. Towards this end, all available airborne data have been incorporated into a new NGS experimental geoid model - xGEOID15. The xGEOID15 model is the second in a series of annual experimental geoid models that incorporate NGS GRAV-D airborne data. This series provides a useful benchmark for assessing and improving current techniques, with the aim of ultimately computing a geoid model that can support a national physical height system by 2022. Here, we focus on the combination of the latest GOCE/GRACE models with the terrestrial gravimetry (land/airborne) that was applied for xGEOID15. Comparisons against existing combination gravitational solutions, such as EGM2008 and EIGEN6C4, as well as recent geoid models, such as xGEOID14 and CGG2013, are interesting for what they reveal about the respective use of the GOCE/GRACE satgrav information.
NASA Astrophysics Data System (ADS)
Braitenberg, Carla; Mariani, Patrizia
2015-03-01
The GOCE gravity field is globally homogeneous at a resolution of about 80 km or better, allowing tectonic structures to be analyzed at continental scale for the first time. Geologic correlation studies propose continuing tectonic lineaments across continents to their pre-breakup position. Tectonic events that induce density changes, such as metamorphic and magmatic events, should then show up in the gravity field. Applying geodynamic plate reconstructions to the GOCE gravity field places today's observed field at the pre-breakup position. The same reconstruction can be applied to seismic velocity models, allowing a joint gravity-velocity analysis. The geophysical fields make it possible to assess the likelihood of the hypothesized continuation of lineations based on sparse surface outcrops. Total absence of a signal makes the cross-continental continuation of a lineament improbable, as continent-wide lineaments are controlled by rheologic and compositional differences of the lithospheric mantle. It is found that deep lithospheric roots such as those found below cratons control the position of the positive gravity values. The explanation is that the deep lithospheric roots focus asthenospheric upwelling outboard of the root, protecting the overlying craton from magmatic intrusions. The study is carried out over the African and South American continents.
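The "reconstruction" step amounts to rotating present-day grid points (carrying their gravity values) about an Euler pole by a finite angle. A minimal sketch of such a rotation is given below; the pole and angle are placeholders and do not correspond to any published plate-reconstruction model.

```python
import numpy as np

# Minimal sketch of moving a present-day gravity grid point to a reconstructed
# (pre-breakup) position: rotate the point about an Euler pole by a finite angle.
# Pole location and rotation angle are illustrative placeholders only.

def rotate_point(lon, lat, pole_lon, pole_lat, angle_deg):
    """Rotate a geographic point (degrees) about an Euler pole by angle_deg."""
    def to_xyz(lo, la):
        lo, la = np.radians(lo), np.radians(la)
        return np.array([np.cos(la) * np.cos(lo), np.cos(la) * np.sin(lo), np.sin(la)])

    k = to_xyz(pole_lon, pole_lat)          # rotation axis (unit vector)
    a = np.radians(angle_deg)
    v = to_xyz(lon, lat)
    # Rodrigues' rotation formula
    q = v * np.cos(a) + np.cross(k, v) * np.sin(a) + k * np.dot(k, v) * (1 - np.cos(a))
    return np.degrees(np.arctan2(q[1], q[0])), np.degrees(np.arcsin(q[2]))

# Example: one South Atlantic point carried back by a hypothetical finite rotation.
print(rotate_point(lon=-40.0, lat=-10.0, pole_lon=-32.0, pole_lat=62.0, angle_deg=55.0))
```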
NASA Astrophysics Data System (ADS)
Lu, G.; Hagan, M. E.; Häusler, K.; Doornbos, E.; Bruinsma, S.; Anderson, B. J.; Korth, H.
2014-12-01
We present a case study of the 5 April 2010 geomagnetic storm using observations and numerical simulations. The event was driven by a fast-moving coronal mass ejection and, despite being a moderate storm with a minimum Dst near -50 nT, it exhibited elevated thermospheric density and surges of traveling atmospheric disturbances (TADs) more typically seen during major storms. The Thermosphere-Ionosphere-Mesosphere-Electrodynamics General Circulation Model (TIMEGCM) was used to assess how these features were generated and developed during the storm. The model simulations gave rise to TADs that were highly nonuniform, with strong latitude and longitude/local time dependence. The TAD phase speeds ranged from 640 m/s to 780 m/s at 400 km and were about 5% lower at 300 km and approximately 10-15% lower at 200 km. In the lower thermosphere around 100 km, the TAD signatures were nearly unrecognizable due to the much stronger influence of upward propagating atmospheric tides. The thermosphere simulation results were compared to observations available from the Gravity Field and Steady-State Ocean Circulation Explorer (GOCE), CHAllenging Minisatellite Payload (CHAMP) and Gravity Recovery and Climate Experiment (GRACE) satellites. Comparison with GOCE data shows that the TIMEGCM reproduced the cross-track winds over the polar region very well. The model-data comparison also revealed some differences; specifically, the simulations underestimated the neutral mass density in the upper thermosphere above ~300 km and overestimated the storm recovery time by 6 h. These discrepancies indicate that some heating or circulation dynamics, and potentially cooling processes, are not fully represented in the simulations, and also that updates to some parameterization schemes in the TIMEGCM are warranted.
NASA Astrophysics Data System (ADS)
Baur, Oliver; Weigelt, Matthias; Zehentner, Norbert; Mayer-Gürr, Torsten; Jäggi, Adrian
2014-05-01
In the last decade, temporal variations of the gravity field from GRACE observations have become one of the most ubiquitous and valuable sources of information for geophysical and environmental studies. In the context of global climate change, the mass balance of the Arctic and Antarctic ice sheets has gained particular attention. Because GRACE has already outlived its predicted lifetime by several years, it is very likely that a gap will occur between GRACE and its successor GRACE Follow-On (supposed to be launched in 2017, at the earliest). The Swarm mission - launched on November 22, 2013 - is the most promising candidate to bridge this potential gap, i.e., to directly acquire large-scale mass variation information on the Earth's surface in case of a gap between the present GRACE and the upcoming GRACE Follow-On projects. Although the magnetometry mission Swarm has not been designed for gravity field purposes, its three satellites have the characteristics for such an endeavor: (i) low, near-circular and near-polar orbits, (ii) precise positioning with high-quality GNSS receivers, (iii) on-board accelerometers to measure the influence of non-gravitational forces. Hence, from an orbit analysis point of view the Swarm satellites are comparable to the CHAMP, GRACE and GOCE spacecraft. Indeed, as data analysis from CHAMP has shown, the detection of annual signals and trends from orbit analysis is possible for long-wavelength features of the gravity field, although the accuracy associated with the inter-satellite GRACE measurements cannot be reached. We assess the capability of the (non-dedicated) mission Swarm for mass variation detection in a real-case environment (as opposed to simulation studies). For this purpose, we "approximate" the Swarm scenario by the GRACE+CHAMP and GRACE+GOCE constellations. In a first step, kinematic orbits of the individual satellites are derived from GNSS observations. From these orbits, we compute monthly combined GRACE+CHAMP and GRACE+GOCE time-variable gravity fields; sophisticated techniques based on Kalman filtering are applied to reduce noise in the time series. Finally, we infer mass variations in selected areas from the gravity signal. These results are compared to the findings obtained from mass variation detection exploiting CSR-RL05 gravity fields; due to their superior quality (which stems from the fact that they are derived from inter-satellite GRACE measurements), the CSR-RL05 solutions serve as benchmark. Our quantitative assessment shows the potential and limitations of what can be expected from Swarm with regard to surface mass variation monitoring.
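The noise reduction step mentioned above relies on Kalman filtering of the monthly time series. As a minimal illustration only, the sketch below filters a synthetic monthly mass-variation series with a scalar random-walk Kalman filter; the series, state model and noise variances are assumptions and have nothing to do with the actual GRACE/Swarm processing.

```python
import numpy as np

# Minimal sketch of Kalman filtering a noisy monthly mass-variation time series
# (random-walk state model plus white observation noise). The data and noise
# levels are synthetic stand-ins, not Swarm or GRACE products.

rng = np.random.default_rng(1)
months = np.arange(36)
truth = 2.0 * np.sin(2 * np.pi * months / 12) + 0.05 * months    # annual signal + trend
obs = truth + rng.normal(scale=1.0, size=months.size)            # noisy "orbit-only" series

q, r = 0.5**2, 1.0**2          # process and observation noise variances (assumed)
x, p = obs[0], r               # initial state estimate and its variance
filtered = []
for z in obs:
    p += q                     # predict (random-walk state)
    k = p / (p + r)            # Kalman gain
    x += k * (z - x)           # update with the new month
    p *= (1 - k)
    filtered.append(x)

print("std of raw residuals     :", np.std(obs - truth).round(2))
print("std of filtered residuals:", np.std(np.array(filtered) - truth).round(2))
```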
NASA Astrophysics Data System (ADS)
Gutknecht, B. D.; Götze, H.-J.; Jahr, T.; Jentzsch, G.; Mahatsente, R.; Zeumann, St.
2014-11-01
It is well known that the quality of gravity modelling of the Earth's lithosphere is heavily dependent on the limited number of available terrestrial gravity data. More recently, however, interest has grown within the geoscientific community to utilise the homogeneously measured satellite gravity and gravity gradient data for lithospheric scale modelling. Here, we present an interdisciplinary approach to determine the state of stress and rate of deformation in the Central Andean subduction system. We employed gravity data from terrestrial, satellite-based and combined sources using multiple methods to constrain stress, strain and gravitational potential energy (GPE). Well-constrained 3D density models, which were partly optimised using the combined regional gravity model IMOSAGA01C (Hosse et al. in Surv Geophys, 2014, this issue), were used as bases for the computation of stress anomalies on the top of the subducting oceanic Nazca plate and GPE relative to the base of the lithosphere. The geometries and physical parameters of the 3D density models were used for the computation of stresses and uplift rates in the dynamic modelling. The stress distributions, as derived from the static and dynamic modelling, reveal distinct positive anomalies of up to 80 MPa along the coastal Jurassic batholith belt. The anomalies correlate well with major seismicity in the shallow parts of the subduction system. Moreover, the pattern of stress distributions in the Andean convergent zone varies both along the north-south and west-east directions, suggesting that the continental fore-arc is highly segmented. Estimates of GPE show that the high Central Andes might be in a state of horizontal deviatoric tension. Models of gravity gradients from the Gravity field and steady-state Ocean Circulation Explorer (GOCE) satellite mission were used to compute Bouguer-like gradient anomalies at 8 km above sea level. The analysis suggests that data from GOCE add significant value to the interpretation of lithospheric structures, given that the appropriate topographic correction is applied.
Kelvin wave coupling from TIMED and GOCE: Inter/intra-annual variability and solar activity effects
NASA Astrophysics Data System (ADS)
Gasperini, Federico; Forbes, Jeffrey M.; Doornbos, Eelco N.; Bruinsma, Sean L.
2018-06-01
The primary mechanism through which energy and momentum are transferred from the lower atmosphere to the thermosphere is through the generation and propagation of atmospheric waves. It is becoming increasingly evident that a few waves from the tropical wave spectrum preferentially propagate into the thermosphere and contribute to modify satellite drag. Two of the more prominent and well-established tropical waves are Kelvin waves: the eastward-propagating 3-day ultra-fast Kelvin wave (UFKW) and the eastward-propagating diurnal tide with zonal wave number 3 (DE3). In this work, Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) temperatures at 110 km and Gravity field and steady-state Ocean Circulation Explorer (GOCE) neutral densities and cross-track winds near 260 km are used to demonstrate vertical coupling in this height regime due to the UFKW and DE3. Significant inter- and intra-annual variability is found in DE3 and the UFKW, with evidence of latitudinal broadening and filtering of the latitude structures with height due to the effect of dissipation and mean winds. Additionally, anti-correlation between the vertical penetration of these waves to the middle thermosphere and solar activity level is established and explained through the effect of molecular dissipation.
Basic Radar Altimetry Toolbox: Tools to Use Radar Altimetry for Geodesy
NASA Astrophysics Data System (ADS)
Rosmorduc, V.; Benveniste, J. J.; Bronner, E.; Niejmeier, S.
2010-12-01
Radar altimetry is very much a technique with expanding applications and uses. While considerable effort has been made for oceanography users (including easy-to-use data), the use of those data for geodesy, especially combined with ESA GOCE mission data, is still somewhat hard. ESA and CNES therefore had the Basic Radar Altimetry Toolbox developed (as well as, on the ESA side, the GOCE User Toolbox, the two being linked). The Basic Radar Altimetry Toolbox is an "all-altimeter" collection of tools, tutorials and documents designed to facilitate the use of radar altimetry data. The software is able: - to read most distributed radar altimetry data, from ERS-1 & 2, Topex/Poseidon, Geosat Follow-on, Jason-1, Envisat, Jason-2, CryoSat and the future Saral missions, - to perform some processing, data editing and statistics, - and to visualize the results. It can be used at several levels and in several ways: - as a data reading tool, with APIs for C, Fortran, Matlab and IDL, - as processing/extraction routines, through the on-line command mode, - as an educational and quick-look tool, with the graphical user interface. As part of the Toolbox, a Radar Altimetry Tutorial gives general information about altimetry, the technique involved and its applications, as well as an overview of past, present and future missions, including information on how to access data and additional software and documentation. It also presents a series of data use cases, covering all uses of altimetry over ocean, cryosphere and land, showing the basic methods for some of the most frequent ways of using altimetry data. It is an opportunity to teach remote sensing with practical training. It has been available since April 2007 and has been demonstrated during training courses and scientific meetings. About 1200 people have downloaded it (as of summer 2010), with many "newcomers" to altimetry among them. Users' feedback, developments in altimetry, and practice showed that new interesting features could be added. Some have been added and/or improved in version 2. Others are ongoing, and some are under discussion. Examples and data use cases on geodesy will be presented. BRAT is developed under contract with ESA and CNES.
Seasonal Dependence of Geomagnetic Active-Time Northern High-Latitude Upper Thermospheric Winds
NASA Astrophysics Data System (ADS)
Dhadly, Manbharat S.; Emmert, John T.; Drob, Douglas P.; Conde, Mark G.; Doornbos, Eelco; Shepherd, Gordon G.; Makela, Jonathan J.; Wu, Qian; Nieciejewski, Richard J.; Ridley, Aaron J.
2018-01-01
This study is focused on improving the poorly understood seasonal dependence of northern high-latitude F region thermospheric winds under active geomagnetic conditions. The gaps in our understanding of the dynamic high-latitude thermosphere are largely due to the sparseness of thermospheric wind measurements. With current observational facilities, it is infeasible to construct a synoptic picture of thermospheric winds, but enough data with wide spatial and temporal coverage have accumulated to construct a meaningful statistical analysis. We use long-term data from eight ground-based and two space-based instruments to derive climatological wind patterns as a function of magnetic local time, magnetic latitude, and season. These diverse data sets possess different geometries and different spatial and solar activity coverage. The major challenge is to combine these disparate data sets into a coherent picture while overcoming the sampling limitations and biases among them. In our previous study (focused on quiet-time winds), we found bias in the Gravity Field and Steady State Ocean Circulation Explorer (GOCE) cross-track winds. Here we empirically quantify the GOCE bias and use it as a correction profile for removing the apparent bias before empirical wind formulation. The assimilated wind patterns exhibit all major characteristics of high-latitude neutral circulation. The latitudinal extent of the duskside circulation expands almost 10° from winter to summer. The dawnside circulation subsides from winter to summer. Disturbance winds derived from geomagnetically active and quiet winds show strong seasonal and latitudinal variability. Comparisons between the wind patterns derived here and the Disturbance Wind Model (DWM07) (which has no seasonal dependence) suggest that DWM07 is skewed toward summertime conditions.
Global mean dynamic topography based on GOCE data and Wiener filters
NASA Astrophysics Data System (ADS)
Gilardoni, Maddalena; Reguzzoni, Mirko; Albertella, Alberta
2015-04-01
A mean dynamic ocean topography (MDT) has been computed by using a GOCE-only gravity model and a given mean sea surface (MSS) obtained from satellite altimetry. Since the gravity model used, i.e. the fifth release of the time-wise solution covering the full mission lifetime, is truncated at a maximum harmonic degree of 280, the obtained MDT has to be consistently filtered. This has been done globally by using the spherical harmonic representation and following a Wiener minimization principle. This global filtering approach is convenient from the computational point of view, but it requires MDT values all over the Earth's surface and therefore the continents have to be filled with fictitious data. The main improvements with respect to the previously presented results are in the MDT filling procedure (to guarantee that the global signal has the same covariance as the one over the oceans), in the error modelling of the input MSS, and in the error estimation of the filtered MDT and of the corresponding geostrophic velocities. The impact of GOCE data on global ocean circulation modelling has been assessed by comparing the pattern of the obtained geostrophic currents with those computed by using EGM2008. Comparisons with independent circulation data based on drifters and with other MDT models have also been performed with the aim of evaluating the accuracy of the obtained results.
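In the spherical harmonic domain, a Wiener-type filter reduces to degree-dependent weights of the form w_n = c_n / (c_n + e_n), with c_n a signal degree variance and e_n an error degree variance. The sketch below only shows the shape of such weights; the two variance models are placeholders, not the error or signal models used in the study above.

```python
import numpy as np

# Minimal sketch of degree-dependent Wiener weights for a spherical harmonic MDT:
# w_n = c_n / (c_n + e_n). The signal and error degree-variance models below are
# hypothetical placeholders chosen only to illustrate the roll-off behaviour.

n = np.arange(2, 281)                         # degrees up to the model truncation
signal_dv = 1.0e-2 * n**-3.0                  # hypothetical MDT signal degree variances
error_dv = 1.0e-10 * np.exp(0.05 * n)         # hypothetical commission-error growth
w = signal_dv / (signal_dv + error_dv)        # Wiener weights in [0, 1]

for deg in (10, 100, 200, 280):
    print(f"degree {deg:3d}: weight = {w[deg - 2]:.3f}")
```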
Investigation of Electrostatic Accelerometer in HUST for Space Science Missions
NASA Astrophysics Data System (ADS)
Bai, Yanzheng; Hu, Ming; Li, Gui; Liu, Li; Qu, Shaobo; Wu, Shuchao; Zhou, Zebing
2014-05-01
High-precision electrostatic accelerometers are significant payloads of the CHAMP, GRACE and GOCE gravity missions, measuring the non-gravitational forces. In our group, a space electrostatic accelerometer and inertial sensor based on capacitive sensors and the electrostatic control technique have been investigated for space science research in China, such as testing of the equivalence principle (TEPO), searching for non-Newtonian forces in the micrometer range, satellite recovery of the Earth's gravity field, and so on. A capacitive position sensor with a resolution of 10-7 pF/Hz1/2 and a μV/Hz1/2-level electrostatic actuator have been developed. A fiber torsion pendulum facility is adopted to measure the parameters of the electrostatically controlled inertial sensor, such as the resolution, the electrostatic stiffness, and the cross coupling between different DOFs. Meanwhile, high-voltage suspension and free-fall methods are applied to verify the function of the electrostatic accelerometer. Finally, the engineering model of the electrostatic accelerometer has been developed and tested successfully in space, and preliminary results are presented.
Liu, Gang; Qin, Hongmei; Amano, Tsukuru; Murakami, Takashi; Komatsu, Naoki
2015-10-28
We report on the application of pristine graphene as a drug carrier for phototherapy (PT). The loading of a photosensitizer, chlorin e6 (Ce6), was achieved simply by sonication of Ce6 and graphite in an aqueous solution. During the loading process, graphite was gradually exfoliated to graphene to give its composite with Ce6 (G-Ce6). This one-step approach is considered to be superior to the graphene oxide (GO)-based composites, which required pretreatment of graphite by strong oxidation. Additionally, the directly exfoliated graphene ensured a high drug loading capacity, 160 wt %, which is about 10 times larger than that of the functionalized GO. Furthermore, the Ce6 concentration for killing cells by G-Ce6 is 6-75 times less than that of the other Ce6 composites including GO-Ce6.
NASA Astrophysics Data System (ADS)
Wang, H. B.; Zhao, C. Y.; Zhang, W.; Zhan, J. W.; Yu, S. X.
2015-09-01
The Earth gravitational field model is an important dynamic model in satellite orbit computation. In recent years, several space gravity missions have been very successful, prompting many gravitational field models to be published. In this paper, 2 classical models (JGM3, EGM96) and 4 recent models, including EIGEN-CHAMP05S, GGM03S, GOCE02S and EGM2008, are evaluated by employing them in precise orbit determination (POD) and prediction, based on laser ranging observations of four low Earth orbit (LEO) satellites: CHAMP, GFZ-1, GRACE-A and SWARM-A. The residual error of the observations in POD is adopted to describe the accuracy of the six gravitational field models. The main results are as follows: (1) for LEO POD, the accuracies of the 4 recent models (EIGEN-CHAMP05S, GGM03S, GOCE02S and EGM2008) are at the same level, and better than those of the 2 classical models (JGM3, EGM96); (2) taking JGM3 as reference, the EGM96 model's accuracy is better in most situations, and the accuracies of the 4 recent models are improved by 12%-47% in POD and by 63% in prediction, respectively. We also confirm that the models' accuracy in POD improves with increasing degree and order up to 70, beyond which the accuracy remains stable and is unrelated to further increases in degree and order, meaning that truncating the models to degree and order 70 is sufficient to meet the requirement of LEO orbit computation with centimeter-level precision.
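The practical consequence of the last finding is simply to truncate the coefficient set before orbit integration. The sketch below shows such a truncation on a toy coefficient container; the dictionary layout and the sample values are purely illustrative.

```python
# Minimal sketch of truncating a spherical harmonic gravity model to degree and
# order 70 before orbit integration, as the abstract concludes is sufficient for
# centimetre-level LEO orbit computation. The coefficient container is a plain
# dictionary used purely for illustration.

def truncate_model(coeffs, n_max):
    """Keep only coefficients with degree <= n_max (and hence order <= n_max)."""
    return {(n, m): c for (n, m), c in coeffs.items() if n <= n_max}

# Tiny fake model: {(degree, order): (C_nm, S_nm)}
full_model = {(2, 0): (-4.84e-4, 0.0), (70, 70): (1.2e-9, -3.4e-9), (71, 0): (5.0e-10, 0.0)}
reduced = truncate_model(full_model, n_max=70)
print(sorted(reduced))   # (71, 0) is dropped; (2, 0) and (70, 70) are kept
```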
Suggestions for Improvement of User Access to GOCE L2 Data
NASA Astrophysics Data System (ADS)
Tscherning, C. C.
2011-07-01
ESA has required that most GOCE L2 products are delivered in XML format. This creates difficulties for the users, because a parser written in Perl is needed to convert the files to files without XML tags. However, several products, such as the spherical harmonic coefficients, are made available in standard form through the International Center for Global Gravity Field Models. The variance-covariance information for the gravity field models is only available without XML tags. It is suggested that all XML products are made available in the Virtual Data Archive as files without tags. Besides making the data directly usable by a FORTRAN program, this would also reduce the size (storage requirements) of the products to about 30%. A further reduction of the storage used could be achieved by tuning the number of digits for the individual quantities in the products, so that it corresponds to the actual number of significant digits.
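The kind of conversion at issue is easy to picture: strip the XML markup from a record and write plain fixed-width columns, which also shrinks the file. The sketch below uses an invented placeholder schema, not the actual GOCE L2 product format.

```python
# Minimal sketch of stripping XML tags from a GOCE-L2-style record and writing
# plain columns. The element names are invented placeholders, not the real schema.
import xml.etree.ElementTree as ET

xml_record = """<coefficients>
  <row><degree>2</degree><order>0</order><C>-4.841652e-04</C><S>0.000000e+00</S></row>
  <row><degree>2</degree><order>1</order><C>-2.066e-10</C><S>1.384e-09</S></row>
</coefficients>"""

root = ET.fromstring(xml_record)
plain_lines = []
for row in root.findall("row"):
    n, m = row.findtext("degree"), row.findtext("order")
    c, s = row.findtext("C"), row.findtext("S")
    plain_lines.append(f"{n:>4} {m:>4} {c:>15} {s:>15}")

plain = "\n".join(plain_lines)
print(plain)
print(f"size ratio (plain/XML): {len(plain) / len(xml_record):.2f}")
```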
Moho depth model for the Central Asian Orogenic Belt from satellite gravity gradients
NASA Astrophysics Data System (ADS)
Guy, Alexandra; Holzrichter, Nils; Ebbing, Jörg
2017-09-01
The main purpose of this study is to construct a new 3-D model of the Central Asian Orogenic Belt (CAOB) crust, which can be used as a starting point for future lithospheric studies. The CAOB is a Paleozoic accretionary orogen surrounded by the Siberian Craton to the north and the North China and Tarim Cratons to the south. This area is of great interest due to its enigmatic and still not completely understood geodynamic evolution. First, we estimate an initial crustal thickness by inversion of the vertical gravity component of the Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) and DTU10 models. Second, 3-D forward modeling of the GOCE gravity gradients is performed, which determines the topography of the Moho, the geometry, and the density distribution of the deeper parts of the CAOB and its surroundings, taking into account the lateral and vertical density variations of the crust. The model is constrained by seismic refraction, reflection, and receiver function studies and geological studies. In addition, we discuss the isostatic implications of the differences between the seismic Moho and the resulting 3-D gravity Moho, complemented by the analysis of the lithostatic load distribution at the upper mantle level. Finally, the correlation between the contrasting tectonic domains and the thickness of the crust reveals the inheritance of Paleozoic and Mesozoic geodynamics, particularly the magmatic provinces and the orocline which preserve their crustal features.
Sediment basin modeling through GOCE gradients controlled by thermo-isostatic constraints
NASA Astrophysics Data System (ADS)
Pivetta, Tommaso; Braitenberg, Carla
2015-04-01
Exploration of geodynamic and tectonic structures through gravity methods has experienced increased interest in recent years thanks to the possibilities offered by satellite gravimetry (e.g. GOCE). The main problem with potential field methods is the non-uniqueness of the underground density distributions that satisfy the observed gravity field. In terrestrial areas with scarce geological and geophysical information, valid constraints on the density model can be obtained from the application of geodynamic models. In this contribution we present a study of the gravity signals associated with the thermo-isostatic McKenzie model (McKenzie, 1978), which predicts the development of sedimentary basins from the stretching of the lithosphere. This model is particularly intriguing for gravity studies, as we can obtain estimates of the densities and thicknesses of crust and mantle before and after a rifting event and gain important information about the time evolution of the sedimentary basin. The McKenzie model divides the rifting process into two distinct phases: a syn-rift phase that occurs instantly and is responsible for the basin formation, the thinning of the lithosphere and the upwelling of hot asthenosphere, followed by a second, time-dependent post-rift phase of further subsidence caused by the cooling of the mantle and asthenosphere and the consequent increase in rock density. From the application of the McKenzie model we have derived underground density distributions for two scenarios: the first involves the lithosphere density distribution immediately after the stretching event; the second refers to the density model when thermal equilibrium between stretched and unstretched lithosphere is achieved. Calculations of gravity anomalies and gravity gradient anomalies are performed at 5 km height and at the GOCE mean orbital altitude (250 km). We have found different gravity signals for the syn-rift (gravimetric maximum) and post-rift (gravimetric minimum) scenarios, and the satellite measurements are sufficiently precise to discriminate between them. The McKenzie model is then applied to a real basin in Africa, the Benue Trough, an aborted rift that appears particularly well suited to study with satellite gravity techniques. Reference: McKenzie D., 1978, Some remarks on the development of sedimentary basins, Earth and Planetary Science Letters, 40, 25-32.
Coastal Sea Level along the North Eastern Atlantic Shelf from Delay Doppler Altimetry
NASA Astrophysics Data System (ADS)
Fenoglio-Marc, L.; Benveniste, J.; Andersen, O. B.; Gravelle, M.; Dinardo, S.; Uebbing, B.; Scharroo, R.; Kusche, J.; Kern, M.; Buchhaupt, C.
2017-12-01
Satellite altimetry data of the CryoSat-2 and Sentinel-3 missions processed with the Delay Doppler methodology (DDA) provide improved coastal sea level measurements up to 2-4 km from the coast, thanks to an along-track resolution of about 300 m and a higher signal-to-noise ratio. We investigate the 10-kilometre strip along the North-Eastern Atlantic shelf from Lisbon to Bergen to detect the possible impact of this enhanced dataset on sea level change studies. We consider SAR CryoSat-2 and Sentinel-3 altimetry products from the ESA GPOD processor and in-house reduced SAR altimetry (RDSAR) products. For RDSAR, the improved processing includes the application of enhanced retrackers to the RDSAR waveform. For SAR, the improved processing includes modifications both in the generation of the SAR waveforms (such as a Hamming weighting window on the burst data prior to the azimuth FFT, zero-padding prior to the range FFT, and doubling of the extension of the radar range swath) and in the SAMOSA2 retracker. Data cover the full lifetime of CryoSat-2 (6 years) and Sentinel-3 (1 year). Conventional altimetry data are from the Sea Level CCI database. First we analyse the impact of these SAR altimeter data on the sea level trend and on the estimation of vertical land motion (VLM) from the altimeter minus tide gauge differences. VLM along the North-Eastern Atlantic shelf is generally small compared to the North-Western Atlantic coast VLM, with a smaller signal-to-noise ratio. Second, we investigate the impact on the coastal mean sea surface and the mean dynamic topography. We evaluate a mean surface from the new altimeter data, to be combined with state-of-the-art geoid models to derive the mean dynamic topography. We compare the results to existing oceanographic and geodetic mean dynamic topography solutions, both on grids and pointwise at the tide gauge stations. This study is supported by ESA through the Sea Level CCI and the GOCE++DYCOT projects.
NASA Astrophysics Data System (ADS)
LI, Honglei; Fang, Jian; Braitenberg, Carla; Wang, Xinsheng
2015-04-01
As the highest, largest and most active plateau on Earth, the Qinghai-Tibet Plateau has a complex crust-mantle structure, especially in its eastern part. In response to the subduction of the lithospheric mantle of the Indian plate, large-scale crustal motion occurs in this area. Despite many previous studies, the geodynamic processes at depth remain unclear. Knowledge of the crust and upper mantle density distribution allows a better definition of the deeper geological structure and thus provides critically needed information for understanding the underlying geodynamic processes. With an unprecedented precision of 1-2 mGal and a spatial resolution better than 100 km, GOCE (Gravity field and steady-state Ocean Circulation Explorer) mission products can be used to constrain the crust-mantle density distribution. Here we used GOCE gravitational gradients at an altitude of 10 km, after reducing the effects of terrain, sediment thickness variations and Moho undulations, to image the density structures of eastern Tibet down to 200 km depth. We inverted the residual satellite gravitational gradients using a least squares approach. The initial density model for the inversion is based on seismic velocities from tomography. The model is composed of rectangular blocks of uniform density, with widths of about 100 km and variable thickness and depth. The thickness of the rectangular cells changes from 10 to 60 km in accordance with the seismic model. Our results reveal some large-scale, structurally controlled density variations at depth. The lithospheric root defined by higher-density contrasts extends from southwest to northeast, with shallowing in the central part: the base of the lithosphere reaches depths of 180 km, less than 100 km, and 200 km underneath the Lhasa, Songpan-Ganzi and Ordos crustal blocks, respectively. However, these depth values only represent a first-order parameterization because they depend on the model discretization inherited from the original seismic tomography model. For example, the thickness of the uniform density blocks centered at 140 km depth is as large as 60 km. Low-density crustal anomalies beneath the southern Lhasa and Songpan-Ganzi blocks in our model support the idea of a weak lower crust and possible crustal flow, as a result of thermal anomalies caused by the upwelling of hot deep materials. The weak lower crust may cause decoupling of the upper crust and the mantle. These results are consistent with many other geophysical studies, confirming the effectiveness of the GOCE gravitational gradient data. Using these data in combination with other geodynamic constraints (e.g., gravity and seismic structure and the preliminary reference Earth model), an improved dynamic model can be derived.
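The least-squares step described above amounts to solving a linear system relating block density contrasts to the observed residual gradients. The sketch below shows such a solve with a damped (Tikhonov-regularized) formulation; the design matrix, data and regularization are random placeholders standing in for a proper forward operator, and the regularization itself is an added assumption rather than a detail stated in the abstract.

```python
import numpy as np

# Minimal sketch of a least-squares inversion  g = A @ drho  for block density
# contrasts, where A would hold the gravitational-gradient response of each block
# at each observation point (here replaced by random numbers).

rng = np.random.default_rng(2)
n_obs, n_blocks = 200, 40
A = rng.normal(size=(n_obs, n_blocks))                    # design (sensitivity) matrix
true_drho = rng.normal(scale=30.0, size=n_blocks)          # kg/m^3, synthetic truth
g = A @ true_drho + rng.normal(scale=1.0, size=n_obs)      # noisy "observed" gradients

# Damped least squares: (A^T A + alpha*I) x = A^T g  (regularization is assumed here)
alpha = 1.0
x = np.linalg.solve(A.T @ A + alpha * np.eye(n_blocks), A.T @ g)
print(f"rms recovery error: {np.sqrt(np.mean((x - true_drho) ** 2)):.2f} kg/m^3")
```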
Software Analysis of New Space Gravity Data for Geophysics and Climate Research
NASA Technical Reports Server (NTRS)
Deese, Rupert; Ivins, Erik R.; Fielding, Eric J.
2012-01-01
Both the Gravity Recovery and Climate Experiment (GRACE) and Gravity field and steady-state Ocean Circulation Explorer (GOCE) satellites are returning rich data for the study of the solid Earth, the oceans, and the climate. Current software analysis tools do not provide researchers with the ease and flexibility required to make full use of these data. We evaluate the capabilities and shortcomings of existing software tools, including Mathematica, the GOCE User Toolbox, the ICGEM's (International Center for Global Earth Models) web server, and Tesseroids. Using existing tools as necessary, we design and implement software with the capability to produce gridded data and publication-quality renderings from raw gravity data. The straightforward software interface marks an improvement over previously existing tools and makes new space gravity data more useful to researchers. Using the software we calculate Bouguer anomalies of the gravity tensor's vertical component in the Gulf of Mexico, Antarctica, and the 2010 Maule earthquake region. These maps identify promising areas of future research.
NASA Astrophysics Data System (ADS)
Hagan, Maura; Häusler, Kathrin; Lu, Gang; Forbes, Jeffrey; Zhang, Xiaoli; Doornbos, Eelco; Bruinsma, Sean
2014-05-01
We present the results of an investigation of the upper atmosphere during April 2010, when it was disturbed by a fast-moving coronal mass ejection. Our study is based on a comparative analysis of observations made by the Gravity field and steady-state Ocean Circulation Explorer (GOCE), Challenging Minisatellite Payload (CHAMP), and Gravity Recovery And Climate Experiment (GRACE) satellites and a set of simulations with the National Center for Atmospheric Research (NCAR) thermosphere-ionosphere-mesosphere-electrodynamics general circulation model (TIME-GCM). We compare and contrast the satellite observations with TIME-GCM results from a realistic simulation based on prevailing meteorological and solar geomagnetic conditions. We diagnose the comparative importance of the upper atmospheric signatures attributable to meteorological forcing and those attributable to storm effects by analyzing a series of complementary control TIME-GCM simulations. These results also quantify the extent to which lower and middle atmospheric sources of upper atmospheric variability precondition its response to the solar geomagnetic storm.
Nonlinear diffusion filtering of the GOCE-based satellite-only MDT
NASA Astrophysics Data System (ADS)
Čunderlík, Róbert; Mikula, Karol
2015-04-01
A combination of the GRACE/GOCE-based geoid models and the mean sea surface models provided by satellite altimetry allows modelling of the satellite-only mean dynamic topography (MDT). Such MDT models are significantly affected by striping noise due to omission errors of the spherical harmonic approach. Appropriate filtering of this kind of noise is crucial for obtaining reliable results. In our study we use nonlinear diffusion filtering based on a numerical solution of the nonlinear diffusion equation on closed surfaces (e.g. on a sphere, an ellipsoid or the discretized Earth's surface), namely the regularized surface Perona-Malik model. A key idea is that the diffusivity coefficient depends on an edge detector. This allows the noise to be reduced effectively while preserving important gradients in the filtered data. Numerical experiments present nonlinear filtering of the satellite-only MDT obtained as a combination of the DTU13 mean sea surface model and the GO_CONS_GCF_2_DIR_R5 geopotential model. They emphasize an adaptive smoothing effect as a principal advantage of nonlinear diffusion filtering. Consequently, the derived velocities of the geostrophic surface ocean currents contain a stronger signal.
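The adaptive smoothing behaviour of Perona-Malik diffusion is easy to demonstrate on a flat toy grid: the diffusivity g(|grad u|) drops across strong gradients (fronts are preserved) and stays near 1 in smooth areas (noise is diffused away). The planar, explicit-time-stepping sketch below ignores the closed-surface formulation and regularization used by the authors.

```python
import numpy as np

# Minimal sketch of Perona-Malik-type nonlinear diffusion on a flat 2D grid
# (explicit time stepping, periodic boundaries for simplicity).

def perona_malik(u, n_steps=50, dt=0.1, k=0.5):
    u = u.copy()
    g = lambda d: 1.0 / (1.0 + (d / k) ** 2)        # Perona-Malik diffusivity
    for _ in range(n_steps):
        dn = np.roll(u, 1, axis=0) - u               # differences to the 4 neighbours
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, 1, axis=1) - u
        dw = np.roll(u, -1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Synthetic "MDT" with a sharp front plus noise, then filter it.
rng = np.random.default_rng(3)
field = np.zeros((64, 64))
field[:, 32:] = 1.0                                  # sharp front (kept by the filter)
noisy = field + rng.normal(scale=0.2, size=field.shape)
smoothed = perona_malik(noisy)
print("noise std before:", np.std((noisy - field)[:, 2:30]).round(3))
print("noise std after :", np.std((smoothed - field)[:, 2:30]).round(3))
```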
Enhanced Arctic Mean Sea Surface and Mean Dynamic Topography including retracked CryoSat-2 Data
NASA Astrophysics Data System (ADS)
Andersen, O. B.; Jain, M.; Stenseng, L.; Knudsen, P.
2014-12-01
A reliable mean sea surface (MSS) is essential to derive a good mean dynamic topography (MDT) and for the estimation of short- and long-term changes in the sea surface. The lack of satellite radar altimetry observations above 82 degrees latitude means that existing mean sea surface models have been unreliable in the Arctic Ocean. We here present the latest DTU mean sea surface and mean dynamic topography models, combining conventional altimetry with retracked CryoSat-2 data to improve the reliability in the Arctic Ocean. For the derivation of the mean dynamic topography, the ESA GOCE-derived geoid model has been used to constrain the longer wavelengths. We present the retracking of CryoSat-2 SAR data using various retrackers and how we have been able to combine data from the various retrackers under various sea ice conditions. DTU13MSS and DTU13MDT are the newest state-of-the-art global high-resolution models, including CryoSat-2 data to extend the satellite radar altimetry coverage up to 88 degrees latitude and, through combination with a GOCE geoid model, completing coverage all the way to the North Pole. Furthermore, the SAR and SARIn capability of CryoSat-2 dramatically increases the amount of usable sea surface returns in sea-ice-covered areas compared to conventional radar altimeters like ENVISAT and ERS-1/2. With the inclusion of CryoSat-2 data, the new mean sea surface is improved by more than 20 cm above 82 degrees latitude compared with the previous generation of mean sea surfaces.
GOCE and Future Gravity Missions for Geothermal Energy Exploitation
NASA Astrophysics Data System (ADS)
Pastorutti, Alberto; Braitenberg, Carla; Pivetta, Tommaso; Mariani, Patrizia
2016-08-01
Geothermal energy is a valuable renewable energy source whose exploitation contributes to the worldwide reduction of the consumption of fossil fuels such as oil and gas. The exploitation of geothermal energy is facilitated where the thermal gradient is higher than average, leading to increased surface heat flow. Apart from the hydrologic circulation properties, which depend on rock fractures and are important for the heat transport from the hotter layers to the surface, essential properties that increase the thermal gradient are crustal thinning and radiogenic heat-producing rocks. Crustal thickness and rock composition form the link to exploration with the satellite-derived gravity field, because both induce subsurface mass changes that generate observable gravity anomalies. The recognition of gravity as a useful investigation tool for geothermal energy led to a cooperation with ESA and the International Renewable Energy Agency (IRENA) that included the GOCE-derived gravity field in the online geothermal energy investigation tool of the IRENA database. The relation between the gravity field products, such as the free-air gravity anomaly and the Bouguer and isostatic anomalies, and the heat flow values is, however, not straightforward and not unique. It is complicated by the fact that it depends on the geodynamical context, the geologic context and the age of the crustal rocks. Globally, the geological context and geodynamical history of an area are known almost everywhere, so that a specific known relationship between gravity and geothermal potential can be applied. In this study we show the results of a systematic analysis of the problem, including simulations of the key factors. The study relies on the data of GOCE and the resolution and accuracy of this satellite. We also draw conclusions on the improved exploration power of a gravity mission with higher spatial resolution and reduced data error, as could be achieved in principle by flying an atom interferometer sensor on board a satellite.
Isostatic GOCE Moho model for Iran
NASA Astrophysics Data System (ADS)
Eshagh, Mehdi; Ebadi, Sahar; Tenzer, Robert
2017-05-01
One of the major issues associated with a regional Moho recovery from the gravity or gravity-gradient data is the optimal choice of the mean compensation depth (i.e., the mean Moho depth) for a certain area of study, typically for orogens characterised by large Moho depth variations. In case of selecting a small value of the mean compensation depth, the pattern of deep Moho structure might not be reproduced realistically. Moreover, the definition of the mean compensation depth in existing isostatic models affects only low-degrees of the Moho spectrum. To overcome this problem, in this study we reformulate the Sjöberg and Jeffrey's methods of solving the Vening-Meinesz isostatic problem so that the mean compensation depth contributes to the whole Moho spectrum. Both solutions are then defined for the vertical gravity gradient, allowing estimating the Moho depth from the GOCE satellite gravity-gradiometry data. Moreover, gravimetric solutions provide realistic results only when a priori information on the crust and upper mantle structure is known (usually from seismic surveys) with a relatively good accuracy. To investigate this aspect, we formulate our gravimetric solutions for a variable Moho density contrast to account for variable density of the uppermost mantle below the Moho interface, while taking into consideration also density variations within the sediments and consolidated crust down to the Moho interface. The developed theoretical models are applied to estimate the Moho depth from GOCE data at the regional study area of the Iranian tectonic block, including also parts of surrounding tectonic features. Our results indicate that the regional Moho depth differences between Sjöberg and Jeffrey's solutions, reaching up to about 3 km, are caused by a smoothing effect of Sjöberg's method. The validation of our results further shows a relatively good agreement with regional seismic studies over most of the continental crust, but large discrepancies are detected under the Oman Sea and the Makran subduction zone. We explain these discrepancies by a low quality of seismic data offshore.
NASA Astrophysics Data System (ADS)
Fukuda, Y.; Nogi, Y.; Matsuzaki, K.
2012-12-01
Syowa is the Japanese Antarctic wintering station in Lützow-Holm Bay, East Antarctica. The area around the station is considered to be key for investigating the formation of Gondwana, because reconstruction models suggest that a junction of the continents is located in the area. It is also important from a glaciological point of view, because the Shirase Glacier, one of the major glaciers in Antarctica, is located near the station. Therefore the Japanese Antarctic Research Expedition (JARE) has been conducting in-situ gravity measurements in the area for a long period. The data sets accumulated are land gravity data since 1967, surface ship data since 1985, and airborne gravity data from 2006. However, because these in-situ gravity data usually suffer from the effects of instrumental drift and a lack of reference points, their accuracy decreases towards wavelengths longer than several tens of km. In particular in Antarctica, where very few gravity reference points are available, the long-wavelength accuracy and/or consistency among the data sets is quite limited. The GOCE (Gravity field and steady-state Ocean Circulation Explorer) satellite, launched in March 2009 by ESA (European Space Agency), aims at improving static gravity fields, in particular at short wavelengths. In addition to its low-altitude orbit (250 km), the sensitive gravity gradiometer installed is expected to reveal 1 mgal gravity anomalies at a spatial resolution of 100 km (half wavelength). Indeed, the recently released GOCE EGMs (Earth Gravity Models) have improved the accuracy of the static gravity field tremendously. These EGMs are expected to serve as the long-wavelength reference for the in-situ gravity data. Thus, we first aim at determining an improved gravity field around Syowa by combining the JARE gravity data and the recent EGMs. Then, using the gravity anomalies, we determine the subsurface density structures. We also evaluate the impact of the EGMs on estimating the density structures.
Krochmal, Magdalena; Cisek, Katryna; Markoska, Katerina; Spasovski, Goce; Vlahou, Antonia
2015-01-01
A Workshop and Regular Meeting of the Marie Curie Training and Research Programs iMODE-CKD (Identification of the Molecular Determinants of established Chronic Kidney Disease) and BCMolMed (Molecular Medicine for Bladder Cancer) was held from 20-22 March at the Macedonian Academy of Science and Arts (MASA). The meeting was hosted by the participating centers University of Skopje (SKO) - Goce Spasovski - and MASA - Momir Polenakovic (R. Macedonia). The representative from the MASA proteomic research center, Katerina Davalieva (R. Macedonia), gave a presentation on proteomic research in prostate cancer (PCa). 40 researchers from 13 different countries participated in the meeting. The Workshop was devoted to "Chronic Kidney Disease: Clinical Management Issues" and consisted of 15 oral presentations given by nephrologists and experts in the field of CKD. Raymond Vanholder (Belgium), past president of ERA-EDTA, gave a keynote lecture on "CKD: Questions that need to be answered and are not (or at least not entirely)". The workshop continued in four sessions with lectures from Alberto Ortiz (Spain), Olivera Stojceva-Taneva (R. Macedonia), Dimitrios Goumenos (Greece), Joachim Beige (Germany), Marian Klinger (Poland), Goce Spasovski (R. Macedonia), Joachim Jankowski (Germany), Adalbert Schiller (Romania), Robert Johnson (USA), Franco Ferrario (Italy), Ivan Rychlik (Czech Republic), Fulvio Magni (Italy) and Giovambattista Capasso (Italy), all covering a training theme. Within the meeting there were two lectures on complementary skills, covering ethics in science and career advancement, from two principal investigators, Goce Spasovski (R. Macedonia) and Joost Schanstra (France). During the Regular Meeting, 13 PhD students, i.e. Early Stage Researchers, and one Experienced Researcher from both Programs presented their work and progress within the iMODE-CKD and BCMolMed projects. This meeting was a great opportunity to exchange experience and ideas in the field of systems biology approaches and translational medicine and to plan future collaboration.
Tsunami Ionospheric warning and Ionospheric seismology
NASA Astrophysics Data System (ADS)
Lognonne, Philippe; Rolland, Lucie; Rakoto, Virgile; Coisson, Pierdavide; Occhipinti, Giovanni; Larmat, Carene; Walwer, Damien; Astafyeva, Elvira; Hebert, Helene; Okal, Emile; Makela, Jonathan
2014-05-01
The last decade has demonstrated that seismic waves and tsunamis are coupled to the ionosphere. Observations of Total Electron Content (TEC) and airglow perturbations of unique quality and amplitude were made during the giant Tohoku, Japan quake of 2011, and observations of much smaller tsunamis, down to a few cm of sea uplift, are now routinely made, including for the Kuril 2006, Samoa 2009, Chile 2010 and Haida Gwaii 2012 tsunamis. This new branch of seismology is now mature enough to tackle the new challenges associated with the inversion of these data, with the goal either to provide from these data maps or profiles of the Earth's surface vertical displacement (and therefore crucial information for tsunami warning systems), or to invert, with ground and ionospheric data sets, the various parameters (atmospheric sound speed, viscosity, collision frequencies) controlling the coupling between the surface, the lower atmosphere and the ionosphere. We first present the state of the art in the modeling of the tsunami-atmospheric coupling, including the slight perturbation in the tsunami phase and group velocity and the dependence of the coupling strength on local time, ocean depth and season. We then show the comparison of modelled signals with observations. For tsunamis, this is made with the different types of measurement that have proven ionospheric tsunami detection over the last 5 years (ground and space GPS, airglow), while we focus on GPS and GOCE observations for seismic waves. These observation systems allowed tracking of the propagation of the signal from the ground (with GPS and seismometers) to the neutral atmosphere (with infrasound sensors and GOCE drag measurements) to the ionosphere (with GPS TEC and airglow among other ionospheric sounding techniques). Modelling with different techniques (normal modes, spectral element methods, finite differences) is used and shown. While the fits of the waveforms are generally very good, we analyse the differences and draw directions for future studies and improvements, enabling the integration of lateral variations of the solid Earth, bathymetry or atmosphere, finite source models, non-linearity of the waves, and better attenuation and coupling processes. All these effects are revealed by phase or amplitude discrepancies in selected observations. We then present the goals and first results of source inversions, with a focus on estimations of the sea level uplift location and amplitude, either by using GPS networks close to the epicentre or, for tsunamis, GPS of the Hawaiian Islands.
Hybrid Atom Electrostatic System for Satellite Geodesy
NASA Astrophysics Data System (ADS)
Zahzam, Nassim; Bidel, Yannick; Bresson, Alexandre; Huynh, Phuong-Anh; Liorzou, Françoise; Lebat, Vincent; Foulon, Bernard; Christophe, Bruno
2017-04-01
The subject of this poster falls within the framework of the identification and development of new concepts for future satellite gravity missions, in continuation of the previously launched space missions CHAMP, GRACE and GOCE and of ongoing and prospective studies like NGGM, GRACE 2 or E-GRASP. Here we focus on the inertial sensors that complete the payload of such satellites. The clearly identified instruments for space accelerometry are based on the electrostatic technology developed for many years by ONERA, which offers a high level of performance and a high degree of maturity for space applications. On the other hand, a new generation of sensors based on cold atom interferometry (AI) is emerging and seems very promising in this context. These atomic instruments have already demonstrated impressive results on the ground, especially with the development of state-of-the-art gravimeters, and should reach their full potential only in space, where the microgravity environment allows long interaction times. Each of these two types of instrument presents its own advantages, which are, for the electrostatic sensors (ES), their demonstrated short-term sensitivity and their high TRL, and for the AI, amongst others, the absolute nature of the measurement and therefore no need for calibration. These two technologies seem in some respects very complementary, and a hybrid sensor bringing together all their assets could be the opportunity to take a big step in this context of gravity space missions. We present here the first experimental association on the ground of an electrostatic accelerometer and an atomic accelerometer and underline the interest of calibrating the ES instrument with the AI. Technical methods using the ES proof mass as the Raman mirror seem very promising for removing the effects of satellite rotation on the AI signal. We propose a roadmap to explore this attractive hybridization scheme further, in more detail and more rigorously, with theoretical and experimental work, in order to assess its potential for a future geodesy space mission.
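A minimal sketch of the calibration idea just mentioned, assuming the AI delivers absolute but noisier accelerations alongside the ES output; the sampling rate, noise levels and bias/scale-factor values are illustrative assumptions, not ONERA's actual instrument parameters:

```python
import numpy as np

# Minimal sketch: use the absolute (bias-free) atom-interferometer accelerations
# to estimate the bias and scale factor of the electrostatic accelerometer by
# linear least squares. All numbers below are made up for illustration.

rng = np.random.default_rng(9)
t = np.arange(0.0, 600.0, 1.0)                    # 10 minutes at 1 Hz (assumed)
a_true = 1e-6 * np.sin(2 * np.pi * t / 300.0)     # common acceleration [m/s^2]

bias, scale = 2e-7, 1.02                          # assumed ES instrument errors
a_es = scale * a_true + bias + rng.normal(0, 5e-9, t.size)   # ES output: low noise, biased
a_ai = a_true + rng.normal(0, 5e-8, t.size)                  # AI output: absolute, noisier

# Model a_es = scale * a_ai + bias and solve for (scale, bias).
A = np.column_stack([a_ai, np.ones_like(a_ai)])
(scale_est, bias_est), *_ = np.linalg.lstsq(A, a_es, rcond=None)
a_es_calibrated = (a_es - bias_est) / scale_est
print(f"estimated scale = {scale_est:.4f}, bias = {bias_est:.2e} m/s^2")
```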
Assessing New GRAV-D Airborne Gravimetry Collected over the United States
NASA Astrophysics Data System (ADS)
Holmes, S. A.; Li, X.; Roman, D. R.
2013-12-01
The U.S. National Geodetic Survey [NGS], through its Gravity for the Redefinition of the American Vertical Datum [GRAV-D] program, is updating its terrestrial gravimetry holdings by flying new airborne gravity surveys over a large fraction of the USA and its territories. By 2020, NGS intends that all orthometric heights in the USA will be determined in the field by using a reliable national gravimetric geoid model to transform from geodetic heights obtained from GPS. Towards this end, the newly collected airborne gravimetry is repeatedly evaluated by using it to support experimental gravitational models and gravimetric geoids, and then comparing these against independent data sets, such as 'satgrav' models (GRACE/GOCE), GPS/levelling, astronomical vertical deflections, and others. Here we show some results from these tests for GRAV-D airborne gravimetry collected over 2012/2013.
INSIGHT (interaction of low-orbiting satellites with the surrounding ionosphere and thermosphere)
NASA Astrophysics Data System (ADS)
Schlicht, Anja; Reussner, Elisabeth; Lühr, Hermann; Stolle, Claudia; Xiong, Chao; Schmidt, Michael; Blossfeld, Mathis; Erdogan, Eren; Pancetta, Francesca; Flury, Jakob
2016-04-01
In the framework of the DFG special program "Dynamic Earth", the project INSIGHT, started in September 2015, is studying the interactions between the ionosphere and thermosphere as well as the role of the satellites and their instruments in observing the space environment. Accelerometers on low-Earth orbiters (LEOs) are flown to separate the non-gravitational forces acting on the satellite from gravitational effects. Amongst others, these instruments provide valuable information for improving our understanding of thermospheric properties like densities and winds. An unexpected result, for example, is the clear evidence of geomagnetic field control on the neutral upper atmosphere. The charged particles of the ionosphere act as mediators between the magnetic field and the thermosphere. In the framework of INSIGHT the climatology of the thermosphere will be established and the coupling between the ionosphere and thermosphere will be studied. There are indications that the accelerometers are influenced by systematic errors not identified up to now. For GRACE, this is one of the reasons discussed for why the mission has so far not reached its baseline accuracy. Beutler et al. 2010 discussed the limited use of the GRACE accelerometer measurements in comparison to stochastic pulses in gravity field recovery. Analysis of the accelerometer measurements shows many structures in the high-frequency region which can be traced back to switching processes of electric circuits in the spacecraft, like heater and magnetic torquer switching, or to so-called twangs, which can be associated with discharging of non-conducting surfaces of the satellite. As all observed signals have the same time dependency, a common origin is very likely, namely the coupling of time-variable electric currents into the accelerometer signal. In the GOCE gravity field gradients, non-gravitational signatures are found around the magnetic poles, indicating that problems occur even at lower frequencies. INSIGHT will identify systematic errors in the accelerometer measurements and establish an algorithm to separate these errors from real accelerations through the analysis of satellite rotations on GOCE. A transfer to other accelerometer missions will be studied. Accelerometer missions are characterized by satellites of a complex geometry and surface structure, making it necessary to take their shape and surface interactions into account. On the other hand, accelerometers have to be calibrated in space as biases and bias drifts are inherent. These two facts make it difficult to scale thermospheric densities. To overcome this problem, high-precision orbit determination of satellites of simpler structure is more suitable. In the framework of INSIGHT a multi-satellite solution of satellite laser ranging (SLR) measurements is aimed at for absolute density determination of the thermosphere. In addition, due to the coupling processes between the ionosphere and thermosphere, it will be studied how ionospheric target quantities such as the electron density can be used to improve thermospheric density modeling. This presentation provides the overall structure of the project INSIGHT as well as first results.
Mean Dynamic Topography of the Arctic Ocean
NASA Technical Reports Server (NTRS)
Farrell, Sinead Louise; Mcadoo, David C.; Laxon, Seymour W.; Zwally, H. Jay; Yi, Donghui; Ridout, Andy; Giles, Katherine
2012-01-01
ICESat and Envisat altimetry data provide measurements of the instantaneous sea surface height (SSH) across the Arctic Ocean, using lead and open water elevation within the sea ice pack. First, these data were used to derive two independent mean sea surface (MSS) models by stacking and averaging along-track SSH profiles gathered between 2003 and 2009. The ICESat and Envisat MSS data were combined to construct the high-resolution ICEn MSS. Second, we estimate the 5.5-year mean dynamic topography (MDT) of the Arctic Ocean by differencing the ICEn MSS with the new GOCO02S geoid model, derived from GRACE and GOCE gravity. Using these satellite-only data we map the major features of Arctic Ocean dynamical height that are consistent with in situ observations, including the topographical highs and lows of the Beaufort and Greenland Gyres, respectively. Smaller-scale MDT structures remain largely unresolved due to uncertainties in the geoid at short wavelengths.
Rigorous covariance propagation of geoid errors to geodetic MDT estimates
NASA Astrophysics Data System (ADS)
Pail, R.; Albertella, A.; Fecher, T.; Savcenko, R.
2012-04-01
The mean dynamic topography (MDT) is defined as the difference between the mean sea surface (MSS) derived from satellite altimetry, averaged over several years, and the static geoid. Assuming geostrophic conditions, the ocean surface velocities, an important component of the global ocean circulation, can be derived from the MDT. Due to the availability of GOCE gravity field models, for the very first time the MDT can now be derived solely from satellite observations (altimetry and gravity) down to spatial length scales of 100 km and even below. Global gravity field models, parameterized in terms of spherical harmonic coefficients, are complemented by the full variance-covariance matrix (VCM). Therefore, for the geoid component a realistic statistical error estimate is available, while the error description of the altimetric component is still an open issue and is, if at all, attacked empirically. In this study we attempt to perform, based on the full gravity VCM, a rigorous error propagation to the derived geostrophic surface velocities, thus also considering all correlations. For the definition of the static geoid we use the third release of the time-wise GOCE model, as well as the satellite-only combination model GOCO03S. In detail, we will investigate the velocity errors resulting from the geoid component as a function of the harmonic degree, and the impact of using/not using covariances on the MDT errors and their correlations. When deriving an MDT, it is spectrally filtered to a certain maximum degree, which is usually driven by the signal content of the geoid model, by applying isotropic or non-isotropic filters. Since this filtering also acts on the geoid component, the consistent integration of this filter process into the covariance propagation shall be performed, and its impact shall be quantified. The study will be performed for MDT estimates in specific test areas of particular oceanographic interest.
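As a rough illustration of the two steps involved (geostrophic velocities from a gridded MDT, and propagation of a geoid error covariance onto them), the following sketch uses a synthetic MDT grid and a placeholder diagonal covariance; the grid spacing, latitude and covariance values are assumptions, not the GOCE/GOCO03S products used in the study:

```python
import numpy as np

# Minimal sketch: geostrophic surface velocities from a gridded MDT and
# first-order propagation of geoid errors onto them.

g = 9.81                      # gravity [m/s^2]
omega = 7.292115e-5           # Earth rotation rate [rad/s]
lat = 45.0                    # evaluation latitude [deg] (assumed)
f = 2.0 * omega * np.sin(np.radians(lat))   # Coriolis parameter

dx = dy = 50e3                # grid spacing [m] (assumed)
ny, nx = 20, 20
rng = np.random.default_rng(0)
mdt = 0.1 * rng.standard_normal((ny, nx))   # placeholder MDT field [m]

# Geostrophic balance: u = -(g/f) dMDT/dy,  v = (g/f) dMDT/dx
dmdt_dy, dmdt_dx = np.gradient(mdt, dy, dx)
u = -(g / f) * dmdt_dy
v = (g / f) * dmdt_dx

# Error propagation for one velocity value: the central difference is a linear
# functional of the MDT grid values, so its variance is a^T C a, where C is the
# (geoid-induced) error covariance of the gridded MDT.
n = ny * nx
C = 1e-4 * np.eye(n)          # placeholder error covariance [m^2] (assumed)

def du_row(j, i):
    """Jacobian row of u(j,i) w.r.t. the flattened MDT grid (central differences)."""
    a = np.zeros(n)
    a[(j + 1) * nx + i] = -(g / f) / (2.0 * dy)
    a[(j - 1) * nx + i] = +(g / f) / (2.0 * dy)
    return a

a = du_row(10, 10)
sigma_u = np.sqrt(a @ C @ a)  # 1-sigma error of u at that grid node [m/s]
print(f"u(10,10) = {u[10, 10]:+.3f} m/s, sigma_u = {sigma_u:.3f} m/s")
```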
GOCE gravity gradient data for lithospheric modeling - From well surveyed to frontier areas
NASA Astrophysics Data System (ADS)
Bouman, J.; Ebbing, J.; Gradmann, S.; Fuchs, M.; Fattah, R. Abdul; Meekes, S.; Schmidt, M.; Lieb, V.; Haagmans, R.
2012-04-01
We explore how GOCE gravity gradient data can improve modeling of the Earth's lithosphere and thereby contribute to a better understanding of the Earth's dynamic processes. The idea is to invert satellite gravity gradients and terrestrial gravity data in the well explored and understood North-East Atlantic Margin and to compare the results of this inversion, providing improved information about the lithosphere and upper mantle, with results obtained by means of models based upon other sources like seismics and magnetic field information. Transfer of the obtained knowledge to the less explored Rub' al Khali desert is foreseen. We present a case study for the North-East Atlantic margin, where we analyze the use of satellite gravity gradients by comparison with a well-constrained 3D density model that provides a detailed picture from the upper mantle to the top basement (base of sediments). The latter horizon is well resolved from gravity and especially magnetic data, whereas sedimentary layers are mainly constrained from seismic studies but in general do not show a prominent effect in the gravity and magnetic field. We analyze how gravity gradients can increase confidence in the modeled structures by calculating a sensitivity matrix for the existing 3D model. This sensitivity matrix describes the relation between calculated gravity gradient data and geological structures with respect to their depth, extent and relative density contrast. As the sensitivity of the modeled bodies varies for the different tensor components, we can use this matrix for a weighted inversion of gradient data to optimize the model. This sensitivity analysis will be used as input to study the Rub' al Khali desert in Saudi Arabia, which in terms of modeling and data availability is a frontier area. Here gravity gradient data will be used to better identify the extent of anomalous structures within the basin, with the goal of improving the modeling for hydrocarbon exploration purposes.
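The weighted inversion step can be illustrated with a small damped least-squares sketch, assuming the sensitivity matrix has already been computed; the matrix, weights and damping parameter below are synthetic stand-ins, not the actual North-East Atlantic model:

```python
import numpy as np

# Minimal sketch of a weighted, damped least-squares update of density contrasts
# from gravity gradient residuals, given a precomputed sensitivity matrix.
# Dimensions, weights and damping are illustrative assumptions.

rng = np.random.default_rng(1)

n_obs, n_bodies = 200, 12
S = rng.standard_normal((n_obs, n_bodies))      # sensitivity: d(gradient)/d(density)
true_drho = rng.normal(0.0, 30.0, n_bodies)      # "true" density adjustments [kg/m^3]
residuals = S @ true_drho + rng.normal(0.0, 5.0, n_obs)   # observed-minus-modelled gradients

# Weight each observation by its inverse variance (tensor components could be
# weighted differently according to their sensitivity to the modelled bodies).
w = np.full(n_obs, 1.0 / 5.0**2)
W = np.diag(w)

alpha = 1e-2                                     # damping (regularization) parameter
lhs = S.T @ W @ S + alpha * np.eye(n_bodies)
rhs = S.T @ W @ residuals
drho_est = np.linalg.solve(lhs, rhs)

print("estimated density adjustments [kg/m^3]:", np.round(drho_est, 1))
```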
Local recovery of lithospheric stress tensor from GOCE gravitational tensor
NASA Astrophysics Data System (ADS)
Eshagh, Mehdi
2017-04-01
The sublithospheric stress due to mantle convection can be computed from gravity data and propagated through the lithosphere by solving the boundary-value problem of elasticity for the Earth's lithosphere. In this case, a full stress tensor can be computed at any point inside this elastic layer. Here, we present the mathematical foundations for recovering such a tensor from the gravitational tensor measured at satellite altitude. The mathematical relations are much simpler in this way than in the case of using gravity data, as no derivative of spherical harmonics (SHs) or Legendre polynomials is involved in the expressions. New relations between the SH coefficients of the stress and the gravitational tensor elements are presented. Thereafter, integral equations are established from them to recover the elements of the stress tensor from those of the gravitational tensor. The integrals have no closed-form kernels, but they are easy to invert and their spatial truncation errors are reducible. The integral equations are used to invert real data of the Gravity field and steady-state Ocean Circulation Explorer (GOCE) mission, from November 2009, over the South American plate and its surroundings to recover the stress tensor at a depth of 35 km. The recovered stress fields are in good agreement with the tectonic and geological features of the area.
Time-lapse Inversion of Electrical Resistivity Data
NASA Astrophysics Data System (ADS)
Nguyen, F.; Kemna, A.
2005-12-01
Time-lapse geophysical measurements (also known as monitoring, repeat or multi-frame surveys) now play a critical role in monitoring, non-destructively, changes induced by human activity, such as reservoir compaction, and in studying natural processes, such as flow and transport in porous media. To invert such data sets into time-varying subsurface properties, several strategies are found in different engineering or scientific fields (e.g., in biomedical, process tomography, or geophysical applications). Indeed, for time-lapse surveys, the data sets and the models at each time frame have the particularity of being closely related to their "neighbors", provided the process does not induce chaotic or very large variations. Therefore, the information contained in the different frames can be used to constrain the inversion of the others. A first strategy consists in imposing constraints on the model based on a prior estimate, an a priori spatiotemporal or temporal behavior (arbitrary or based on a law describing the monitored process), a restriction of changes to certain areas, or data-change reproducibility. A second strategy aims to invert the model changes directly, where the objective function penalizes those models whose spatial, temporal, or spatiotemporal behavior differs from a prior assumption or from a computed prior. Clearly, the incorporation of time-lapse a priori information, determined from the data sets or assumed, into the inversion process has been proven to significantly improve the resolving capability, mainly by removing artifacts. However, there is a lack of comparison of these methods. In this paper, we focus on Tikhonov-like inversion approaches for electrical tomography imaging to evaluate the capability of the different existing strategies and to propose new ones. To evaluate the bias inevitably introduced by time-lapse regularization, we quantified the relative contribution of the different approaches to the resolving power of the method. Furthermore, we incorporated different noise levels and types (random and/or systematic) to determine the strategies' ability to cope with real data. Introducing additional regularization terms also yields more regularization parameters to compute; since this is a difficult and computationally costly task, we propose that this parameter be set proportional to the velocity of the monitored process. To achieve these objectives, we tested the different methods using synthetic models and experimental data, taking noise and error propagation into account. Our study shows that the choice of the inversion strategy highly depends on the nature and magnitude of the noise, whereas the choice of the regularization term strongly influences the resulting image according to the a priori assumption. This study was developed under the scope of the European project ALERT (GOCE-CT-2004-505329).
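A toy linearized sketch of the two strategies (constraining the model towards a reference, and inverting the data difference directly) is given below; the forward operator, noise levels and regularization weights are illustrative assumptions, and real ERT inversion is nonlinear and uses smoothness operators rather than the identity used here:

```python
import numpy as np

# Minimal linearized sketch of two time-lapse inversion strategies with a toy
# linear forward operator G. Sizes, noise and regularization are made up.

rng = np.random.default_rng(2)
n_data, n_model = 60, 30
G = rng.standard_normal((n_data, n_model))

m_ref = np.ones(n_model)                       # background (time frame 0) model
m_true = m_ref.copy()
m_true[10:15] += 0.5                           # localized change at time frame 1
d0 = G @ m_ref + rng.normal(0, 0.05, n_data)
d1 = G @ m_true + rng.normal(0, 0.05, n_data)

def tikhonov(A, b, L, beta):
    """Solve min ||A m - b||^2 + beta ||L m||^2."""
    return np.linalg.solve(A.T @ A + beta * L.T @ L, A.T @ b)

I = np.eye(n_model)
beta = 1.0

# Strategy 1: invert frame 1 while penalizing deviations from the reference model.
m1 = m_ref + tikhonov(G, d1 - G @ m_ref, I, beta)

# Strategy 2: invert the data *difference* directly for the model change.
dm = tikhonov(G, d1 - d0, I, beta)
m1_diff = m_ref + dm

print("max recovered change, strategy 1:", m1.max() - 1)
print("max recovered change, strategy 2:", m1_diff.max() - 1)
```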
Swarm- Validation of Star Tracker and Accelerometer Data
NASA Astrophysics Data System (ADS)
Schack, Peter; Schlicht, Anja; Pail, Roland; Gruber, Thomas
2016-08-01
The ESA Swarm mission is designed to advance studies of the magnetosphere, thermosphere and gravity field. To succeed in this task, precise knowledge of the orientation of the Swarm satellites is required, together with knowledge about the external forces acting on the satellites. The key sensors providing this information are the star trackers and the accelerometers. Based on star tracker studies conducted by the Technical University of Denmark (DTU), we found interesting patterns in the inter-boresight angles on all three satellites, which are partly induced by temperature variations. Additionally, structures of horizontal stripes seem to be caused by the unique distribution of observed stars on the charge-coupled devices of the star trackers. Our accelerometer analyses focus on spikes and pulses in the observations. These short-term events on Swarm might originate from electrical processes introduced by sunlight illuminating the nadir foil. Comparisons to GOCE and GRACE are included.
Geocenter Coordinates from a Combined Processing of LEO and Ground-based GPS Observations
NASA Astrophysics Data System (ADS)
Männel, Benjamin; Rothacher, Markus
2017-04-01
The GPS observations provided by the global IGS (International GNSS Service) tracking network play an important role in the realization of a unique terrestrial reference frame that is accurate enough to allow the monitoring of the Earth's system. Combining these ground-based data with GPS observations tracked by high-quality dual-frequency receivers on board Low Earth Orbiters (LEOs) might help to further improve the realization of the terrestrial reference frame and the estimation of geocenter coordinates, GPS satellite orbits and Earth rotation parameters (ERPs). To assess the scope for improvement, we processed a network of 50 globally distributed and stable IGS stations together with four LEOs (GRACE-A, GRACE-B, OSTM/Jason-2 and GOCE) over a time interval of three years (2010-2012). To ensure fully consistent solutions, the zero-difference phase observations of the ground stations and LEOs were processed in a common least-squares adjustment, estimating GPS orbits, LEO orbits, station coordinates, ERPs, site-specific tropospheric delays, satellite and receiver clocks and ambiguities. We present the significant impact of the individual LEOs and of a combination of all four LEOs on geocenter coordinates derived using a translational approach (also called the network shift approach). In addition, we present geocenter coordinates derived from the same set of GPS observations using a unified approach, which combines the translational and the degree-one approach by estimating translations and surface deformations simultaneously. Based on comparisons against each other and against geocenter time series derived by other techniques, the effect of the selected approach is assessed.
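A minimal sketch of the translational (network shift) idea, in which a common 3-D offset between the daily station positions and their long-term reference positions is estimated and interpreted as geocenter motion; the station geometry, noise and offset values are synthetic assumptions:

```python
import numpy as np

# Minimal sketch of the network shift approach: estimate one 3-D translation
# between a daily station solution and the reference positions. All values
# are made up for illustration.

rng = np.random.default_rng(3)
n_sta = 50

# Reference positions roughly distributed over a sphere of Earth radius [m].
lon = rng.uniform(0, 2 * np.pi, n_sta)
lat = np.arcsin(rng.uniform(-1, 1, n_sta))
R = 6.371e6
X_ref = np.column_stack([R * np.cos(lat) * np.cos(lon),
                         R * np.cos(lat) * np.sin(lon),
                         R * np.sin(lat)])

t_true = np.array([0.003, -0.002, 0.005])      # "true" geocenter offset [m] (assumed)
X_day = X_ref + t_true + rng.normal(0, 0.002, (n_sta, 3))   # daily solution + noise

# Translation-only least squares: each coordinate difference observes t directly,
# so the estimate reduces to the mean offset over the network.
dX = X_day - X_ref
t_est = dX.mean(axis=0)
print("estimated geocenter offset [mm]:", np.round(1e3 * t_est, 2))
```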
Thermospheric recovery during the 5 April 2010 geomagnetic storm
NASA Astrophysics Data System (ADS)
Sheng, Cheng; Lu, Gang; Solomon, Stanley C.; Wang, Wenbin; Doornbos, Eelco; Hunt, Linda A.; Mlynczak, Martin G.
2017-04-01
Thermospheric temperature and density recovery during the 5 April 2010 geomagnetic storm has been investigated in this study. The neutral density recovery in Thermosphere-Ionosphere-Electrodynamics General Circulation Model (TIEGCM) simulations was slower than that observed by the GOCE, CHAMP, and GRACE satellites, suggesting that the cooling processes may not be fully represented in the model. The NO radiative cooling rate in TIEGCM was also compared with TIMED/SABER measurements along satellite orbits during this storm period. It was found that the model overestimated the NO cooling rate at low latitudes and underestimated it at high latitudes. The effects of particle precipitation on NO number density and NO cooling rate at high latitudes were examined in detail. Model experiments showed that while the NO number density and NO cooling rate do change with different specifications of the characteristic energy of auroral precipitating electrons, the neutral temperature and density recovery remain more or less the same. The reaction rates of key NO chemistry were tested as well, and the NO number density between 110 and 150 km was found to be very sensitive to the rate of the reaction N(2D) + O2 → NO + O. A temperature-dependent rate for this reaction proposed by Duff et al. (2003) brought the TIEGCM NO cooling rate at high latitudes closer to the SABER observations. With the temperature-dependent reaction rate, the neutral density recovery time became quite close to the observations in the high-latitude Southern Hemisphere, but model-data discrepancies still exist at low latitudes and in the Northern Hemisphere, which calls for further investigation.
Orbital Gravity Gradiometry Beyond GOCE: Mission Concepts
NASA Technical Reports Server (NTRS)
Shirron, Peter J.; DiPirro, Michael J.; Canavan, Edgar R.; Paik, Ho Jung; Moody, M. Vol; Venkateswara, Krishna Y.; Han, Shin-Chan; Ditmar, Pavel; Klees, Roland; Jekeli, Christopher;
2010-01-01
Significant advances in the technologies needed for space-based cryogenic instruments have been made in the last decade, including cryocoolers, spacecraft architectures and cryogenic amplifiers. These enable considerably more complex instruments to be put into orbit for long-duration missions. One such instrument is the Superconducting Gravity Gradiometer (SGG) developed by Paik et al. A magnetically levitated version is under consideration for a follow-on mission to GRACE (Gravity Recovery and Climate Experiment) and GOCE (Gravity field and steady-state Ocean Circulation Explorer). With its inherently greater rejection of common-mode accelerations and its ability to cancel the coupling of angular accelerations into the gradient signal, the SGG can achieve an accuracy of 0.01 milli-Eötvös per square root of frequency in hertz for the Earth's gravitational gradient, with attitude-control requirements that can be met with existing spacecraft. In addition, the use of a cryocooler for cooling the instrument will alleviate the previously severe constraint on mission lifetime imposed by the use of superfluid helium, enabling mission durations in the 5-10 year range. Studies are underway to determine requirements for orbit (polar versus sun-synchronous), altitude (which affects spacecraft drag), instrument temperature and stability, cryocooler vibration control, and control and readout electronics. These will be used to determine the SGG's sensitivity and ultimate resolution for gravity recovery. This paper will discuss the preliminary instrument and spacecraft design, and top-level mission requirements.
Analysis of the vibration environment induced on spacecraft components by hypervelocity impact
NASA Astrophysics Data System (ADS)
Pavarin, Daniele
2009-06-01
This paper reports the results achieved within the study "Spacecraft Disturbances from Hypervelocity Impact", performed by CISAS and Thales Alenia Space Italia under a European Space Agency contract. The research project investigated the perturbations produced on spacecraft internal components as a consequence of hypervelocity impacts of micrometeoroids and orbital debris on the external walls of the vehicle. The objectives of the study were: (i) to set up a general numerical/experimental procedure to investigate the vibration induced by hypervelocity impact; (ii) to analyze the GOCE mission in order to assess whether the vibration environment induced by the impact of orbital debris and micrometeoroids could jeopardize the mission. The research project was conducted both experimentally and numerically, performing a large number of impact tests on GOCE-like structural configurations and extrapolating the experimental results via numerical simulations based on hydrocode calculations, finite elements and statistical energy analysis. As a result, a database was established which correlates the impact conditions in the experimental range (0.6 to 2.3 mm projectiles at 2.5 to 5 km/s) with the shock spectra at selected locations on various types of structural models. The main outcomes of the study are: (i) a wide database reporting acceleration values over a wide range of impact conditions; (ii) a general numerical methodology to investigate disturbances induced by space debris and micrometeoroids on general satellite structures.
NASA Astrophysics Data System (ADS)
Barantseva, Olga; Artemieva, Irina; Thybo, Hans; Herceg, Matija
2015-04-01
We present results from modelling the gravity and density structure of the upper mantle for the offshore area of the North Atlantic region. The crust and upper mantle of the region are expected to be anomalous: the part of the region affected by the Icelandic plume has an anomalously shallow bathymetry, whereas the northern part of the region is characterized by ultraslow spreading. In order to understand the links between the deep geodynamical processes that control the spreading rate, on the one hand, and their manifestations such as ocean floor bathymetry and heat flow, on the other hand, we model the gravity and density structure of the upper mantle from satellite gravity data. The calculations are based on the interpretation of GOCE gravity satellite data for the North Atlantic. To separate the gravity signal caused by density anomalies within the crust and upper mantle, we subtract the lower harmonics caused by the deep density structure of the Earth (the core and the lower mantle). The gravity effect of the upper mantle is calculated by subtracting the gravity effect of the crust for two crustal models. We use a recent regional seismic model for the crustal structure (Artemieva and Thybo, 2013) based on seismic data together with borehole data for sediments. For comparison, similar results are presented for the global CRUST 1.0 model as well (Laske, 2013). The conversion of seismic velocity data for the crustal structure to crustal density structure is crucial for the final results. We use a combination of Vp-to-density conversions based on published laboratory measurements for the crystalline basement (Ludwig, Nafe, Drake, 1970; Christensen and Mooney, 1995) and, for oceanic sediments and oceanic crust, on laboratory measurements for serpentinites and gabbros from the Mid-Atlantic Ridge (Kelemen et al., 2004). Also, to overcome the high degree of uncertainty in the Vp-to-density conversion, we account for regional tectonic variations in the North Atlantic as constrained by numerous published seismic profiles and potential-field models across the Norwegian offshore crust (e.g. Breivik et al., 2005, 2007). The results demonstrate the presence of strong gravity and density heterogeneity of the upper mantle in the North Atlantic region. In particular, there is a sharp contrast at the continent-ocean transition, which also allows for recognising mantle gravity anomalies associated with continental fragments and with anomalous oceanic lithosphere.
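As an illustration of what a Vp-to-density conversion step can look like, the sketch below uses the polynomial fit to the Nafe-Drake curve given by Brocher (2005) as a stand-in for the laboratory-based relations cited in the abstract; the layer velocities are assumed values:

```python
import numpy as np

# Minimal sketch of a Vp-to-density conversion for the crystalline crust,
# using the Brocher (2005) polynomial fit to the Nafe-Drake curve
# (valid roughly for 1.5 < Vp < 8.5 km/s). Input velocities are illustrative.

def nafe_drake_density(vp_km_s):
    """Density [g/cm^3] from P-wave velocity [km/s] (Brocher 2005 fit)."""
    vp = np.asarray(vp_km_s, dtype=float)
    return (1.6612 * vp - 0.4721 * vp**2 + 0.0671 * vp**3
            - 0.0043 * vp**4 + 0.000106 * vp**5)

vp_layers = np.array([5.8, 6.4, 7.1])          # upper/middle/lower crust [km/s] (assumed)
rho = nafe_drake_density(vp_layers)
print("densities [g/cm^3]:", np.round(rho, 3))
print("densities [kg/m^3]:", np.round(1000 * rho, 0))
```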
SAPS effects on thermospheric winds during the 17 March 2013 storm
NASA Astrophysics Data System (ADS)
Sheng, C.; Lu, G.; Wang, W.; Doornbos, E.; Talaat, E. R.
2017-12-01
Strong subauroral polarization streams (SAPS) were observed by DMSP satellites during the main phase of the 17 March 2013 geomagnetic storm. Both the DMSP F18 and GOCE satellites sampled at 19 MLT during this period, providing near-simultaneous measurements of ion drifts and neutral winds near dusk. The fortuitous satellite conjunction allows us to directly examine the SAPS effects on thermospheric winds. In addition, two sets of model runs were carried out for this event: (1) the standard TIEGCM run with high-latitude forcing; (2) the SAPS-TIEGCM run incorporating an empirical model of SAPS in the subauroral zone. The difference between these two runs represents the influence of SAPS forcing. In particular, we examine ion-neutral coupling at subauroral latitudes through a detailed forcing-term analysis to determine how the SAPS-related strong westward ion drifts alter thermospheric winds.
Nepal and Papua Airborne Gravity Surveys
NASA Astrophysics Data System (ADS)
Olesen, A. V.; Forsberg, R.; Kasenda, F.; Einarsson, I.; Manandhar, N.
2011-12-01
Airborne gravimetry offers a fast and economic way to cover vast areas, and it allows access to areas that are otherwise difficult to reach, like mountains, jungles and the near-coastal zone. It has the potential to deliver high-resolution, bias-free data that may bridge the spectral gap between global satellite gravity models and the high-resolution gravity information embedded in digital terrain models. DTU Space has carried out airborne gravity surveys in many parts of the world for more than a decade. Most surveys were done with a LaCoste & Romberg S-meter updated for airborne use. This instrument has proven to deliver near bias-free data when properly processed. A Chekan AM gravimeter was recently added to the airborne gravity mapping system and will potentially enhance the spatial resolution and the robustness of the system. This paper focuses on results from two recent surveys over Nepal, flown in December 2010, and over Papua (eastern Indonesia), flown in May and June 2011. Both surveys were flown with the new double-gravimeter setup, and an initial assessment of system performance indicates improved spatial resolution compared to the single-gravimeter system. Comparison to EGM08 and to the most recent GOCE models highlights the impact of the new airborne gravity data in both cases. A newly computed geoid model for Nepal based on the airborne data allows for a more precise definition of the height of Mt. Everest in a global height system; this geoid model suggests that the height of Mt. Everest should be increased by approximately 1 meter. The paper also briefly discusses the system setup and highlights a few essential processing steps that ensure that bias problems are minimized and spatial resolution is enhanced.
Geocenter variations derived from a combined processing of LEO- and ground-based GPS observations
NASA Astrophysics Data System (ADS)
Männel, Benjamin; Rothacher, Markus
2017-08-01
GNSS observations provided by the global tracking network of the International GNSS Service (IGS, Dow et al. in J Geod 83(3):191-198, 2009) play an important role in the realization of a unique terrestrial reference frame that is accurate enough to allow a detailed monitoring of the Earth's system. Combining these ground-based data with GPS observations tracked by high-quality dual-frequency receivers on board low Earth orbiters (LEOs) is a promising way to further improve the realization of the terrestrial reference frame and the estimation of geocenter coordinates, GPS satellite orbits and Earth rotation parameters. To assess the scope of the improvement on the geocenter coordinates, we processed a network of 53 globally distributed and stable IGS stations together with four LEOs (GRACE-A, GRACE-B, OSTM/Jason-2 and GOCE) over a time interval of 3 years (2010-2012). To ensure fully consistent solutions, the zero-difference phase observations of the ground stations and LEOs were processed in a common least-squares adjustment, estimating all the relevant parameters such as GPS and LEO orbits, station coordinates, Earth rotation parameters and geocenter motion. We present the significant impact of the individual LEOs and of a combination of all four LEOs on the geocenter coordinates. The formal errors are reduced by around 20% due to the inclusion of one LEO into the ground-only solution, while in a solution with four LEOs the LEO-specific characteristics are significantly reduced. We compare the derived geocenter coordinates with LAGEOS results and with external solutions based on GPS and SLR data. We find good agreement in the amplitudes of all components; however, the phases in the x- and z-directions do not agree well.
Estimating Gravity Biases with Wavelets in Support of a 1-cm Accurate Geoid Model
NASA Astrophysics Data System (ADS)
Ahlgren, K.; Li, X.
2017-12-01
Systematic errors that reside in surface gravity datasets are one of the major hurdles in constructing a high-accuracy geoid model at high resolution. The National Oceanic and Atmospheric Administration's (NOAA) National Geodetic Survey (NGS) has an extensive historical surface gravity dataset consisting of approximately 10 million gravity points that are known to have systematic biases at the mGal level (Saleh et al. 2013). As most relevant metadata are absent, estimating and removing these errors so as to be consistent with a global geopotential model and airborne data in the corresponding wavelengths is quite a difficult endeavor. However, this is crucial to support a 1-cm accurate geoid model for the United States. With recently available independent gravity information from GRACE/GOCE and airborne gravity from the NGS Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project, several different methods of bias estimation are investigated which utilize radial basis functions and wavelet decomposition. We estimate a surface gravity value by incorporating a satellite gravity model, airborne gravity data, and forward-modeled topography at wavelet levels corresponding to each dataset's spatial wavelength. Considering the estimated gravity values over an entire gravity survey, an estimate of the bias and/or correction for the entire survey can be found and applied. In order to assess the accuracy of each bias-estimation method, two techniques are used. First, each bias-estimation method is used to predict the bias for two high-quality (unbiased and high-accuracy) geoid slope validation surveys (GSVS) (Smith et al. 2013 & Wang et al. 2017). Since these surveys are unbiased, the various bias-estimation methods should reflect that, providing an absolute accuracy metric for each method. Secondly, the corrected gravity datasets from each of the bias-estimation methods are used to build a geoid model. The accuracy of each geoid model provides an additional metric to assess the performance of each bias-estimation method. The geoid model accuracies are assessed using the two GSVS lines and GPS-levelling data across the United States.
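One way such a wavelet-based bias estimate could look in practice is sketched below: the difference between a (synthetically biased) terrestrial survey profile and a satellite/airborne-based reference is decomposed with PyWavelets, and the bias is read from the long-wavelength approximation part; the profile, wavelet, decomposition level and bias value are assumptions, not NGS's actual procedure:

```python
import numpy as np
import pywt

# Minimal sketch: estimate a constant survey bias from the long-wavelength part
# of the difference between a biased survey profile and a trusted reference.

rng = np.random.default_rng(4)
n = 256
x = np.linspace(0, 1, n)

signal = 10 * np.sin(2 * np.pi * 3 * x)            # common gravity signal [mGal]
reference = signal + rng.normal(0, 0.5, n)         # satellite/airborne-based reference
survey = signal + 2.3 + rng.normal(0, 0.5, n)      # terrestrial survey with assumed 2.3 mGal bias

diff = survey - reference
coeffs = pywt.wavedec(diff, 'db4', level=4)

# Keep only the approximation coefficients (long wavelengths), zero the details,
# reconstruct, and take the mean as the bias estimate for this survey.
coeffs_lp = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
long_wavelength = pywt.waverec(coeffs_lp, 'db4')[:n]
bias_estimate = long_wavelength.mean()
print(f"estimated survey bias: {bias_estimate:.2f} mGal")
```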
NASA Astrophysics Data System (ADS)
Grombein, T.; Seitz, K.; Heck, B.
2013-12-01
In general, national height reference systems are related to individual vertical datums defined by specific tide gauges. The discrepancies between these vertical datums cause height system biases on the order of 1-2 m at a global scale. Continental height systems can be connected by spirit levelling and gravity measurements along the levelling lines, as performed for the definition of the European Vertical Reference Frame. In order to unify intercontinental height systems, an indirect connection is needed. For this purpose, global geopotential models derived from recent satellite missions like GOCE provide an important contribution. However, to achieve a highly precise solution, a combination with local terrestrial gravity data is indispensable. Such combinations result in the solution of a Geodetic Boundary Value Problem (GBVP). In contrast to previous studies, mostly related to the traditional (scalar) free GBVP, the present paper discusses the use of the fixed GBVP for height system unification, where gravity disturbances instead of gravity anomalies are applied as boundary values. The basic idea of our approach is a conversion of measured gravity anomalies to gravity disturbances, in which unknown datum parameters occur that can be associated with height system biases. In this way, the fixed GBVP can be extended by datum parameters for each datum zone. By evaluating the GBVP at GNSS/levelling benchmarks, the unknown datum parameters can be estimated in a least-squares adjustment. Besides the developed theory, we present numerical results of a case study based on the spherical fixed GBVP and boundary values simulated using the global geopotential model EGM2008. In a further step, the impact of approximations such as linearization as well as topographic and ellipsoidal effects is taken into account by suitable reduction and correction terms.
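The final adjustment step can be illustrated with a small least-squares sketch that estimates one offset per datum zone from GNSS/levelling benchmark residuals (ellipsoidal height minus levelled height minus GBVP-based height); the zone sizes, noise level and offset values are assumed for illustration:

```python
import numpy as np

# Minimal sketch: estimate one height-system bias per datum zone from
# GNSS/levelling benchmarks. At each benchmark the residual h - H - N is
# modelled as the bias of that benchmark's datum zone plus noise.

rng = np.random.default_rng(5)
zones = np.array([0] * 40 + [1] * 35 + [2] * 25)      # zone index per benchmark (assumed)
true_bias = np.array([0.00, 0.62, -0.35])              # datum offsets [m] (assumed)

residuals = true_bias[zones] + rng.normal(0, 0.03, zones.size)   # h - H - N [m]

# Design matrix: one indicator column per datum zone.
A = np.zeros((zones.size, true_bias.size))
A[np.arange(zones.size), zones] = 1.0

x_hat, *_ = np.linalg.lstsq(A, residuals, rcond=None)
print("estimated datum offsets [m]:", np.round(x_hat, 3))
```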
2017 Updates: Earth Gravitational Model 2020
NASA Astrophysics Data System (ADS)
Barnes, D. E.; Holmes, S. A.; Ingalls, S.; Beale, J.; Presicci, M. R.; Minter, C.
2017-12-01
The National Geospatial-Intelligence Agency [NGA], in conjunction with its U.S. and international partners, has begun preliminary work on its next Earth Gravitational Model, to replace EGM2008. The new 'Earth Gravitational Model 2020' [EGM2020] has an expected public release date of 2020, and will retain the same harmonic basis and resolution as EGM2008. As such, EGM2020 will be essentially an ellipsoidal harmonic model up to degree (n) and order (m) 2159, but will be released as a spherical harmonic model to degree 2190 and order 2159. EGM2020 will benefit from new data sources and procedures. Updated satellite gravity information from the GOCE and GRACE missions will better support the lower harmonics globally. Multiple new acquisitions (terrestrial, airborne and shipborne) of gravimetric data over specific geographical areas (Antarctica, Greenland …) will provide improved global coverage and resolution over the land, as well as for coastal and some ocean areas. Ongoing accumulation of satellite altimetry data, as well as improvements in the treatment of these data, will better define the marine gravity field, most notably in polar and near-coastal regions. NGA and partners are evaluating different approaches for optimally combining the new GOCE/GRACE satellite gravity models with the terrestrial data, including the latest methods employing a full covariance adjustment. NGA is also working to systematically assess the quality of its entire gravimetry database, towards correcting biases and other egregious errors. Public release number 15-564.
NASA Astrophysics Data System (ADS)
Barzaghi, Riccardo; Vergos, Georgios S.; Albertella, Alberta; Carrion, Daniela; Cazzaniga, Noemi; Tziavos, Ilias N.; Grigoriadis, Vassilios N.; Natsiopoulos, Dimitrios A.; Bruinsma, Sean; Bonvalot, Sylvain; Lequentrec-Lalancette, Marie-Françoise; Bonnefond, Pascal; Knudsen, Per; Andersen, Ole; Simav, Mehmet; Yildiz, Hasan; Basic, Tomislav; Gil, Antonio J.
2016-04-01
The unique features of the Mediterranean Sea, with its large gravity variations, complex circulation, and geodynamic peculiarities, have always made this semi-enclosed sea a unique geodetic, geodynamic and oceanographic laboratory. The main scope of the GEOMED 2 project is the collection of all available gravity, topography/bathymetry and satellite altimetry data in order to improve the representation of the marine geoid and to estimate the Mean Dynamic sea surface Topography (MDT) and the circulation with higher accuracy and resolution. Within GEOMED 2, the data employed are land and marine gravity data, GOCE/GRACE-based Global Geopotential Models, and a combination, after proper validation, of the MISTRAL, HOMONIM and SRTM/bathymetry terrain models. In this work we present the results achieved for an inner test region spanning the Adriatic Sea, bounded by 36° < φ < 48° and 10° < λ < 22°. Within this test region, the available terrain/bathymetry models have been evaluated in terms of their contribution to geoid modeling, the processing methodologies have been tested in terms of the provided geoid accuracy, and some preliminary results on the MDT determination have been compiled. These will serve as the guide for the Mediterranean-wide marine geoid estimation. The processing methodology was based on the well-known remove-compute-restore method, following both stochastic and spectral approaches. Classic least-squares collocation (LSC) with errors has been employed, along with fast Fourier transform (FFT)-based techniques, the Least-Squares Modification of Stokes' Formula (KTH) method and windowed LSC. All methods have been evaluated against in-situ collocated GPS/levelling geoid heights, using EGM2008 as a reference, in order to decide which one(s) will be used for the basin-wide geoid evaluation.
NASA Astrophysics Data System (ADS)
Bruinsma, Sean L.; Forbes, Jeffrey M.
2010-08-01
Densities derived from accelerometer measurements on the GRACE, CHAMP, and Air Force/SETA satellites near 490, 390, and 220 km, respectively, are used to elucidate global-scale characteristics of traveling atmospheric disturbances (TADs). Several characteristics elucidated in numerical simulations are confirmed in this study, namely: (1) propagation speeds increase from the lower thermosphere to the upper thermosphere; (2) propagation to the equator and even into the opposite hemisphere can occur; (3) greater attenuation of TADs occurs during daytime and at higher levels of solar activity (i.e., more wave activity during nighttime and solar minimum), presumably due to the greater influence of ion drag. In addition, we find that the occurrence of significant TAD activity emanating from the auroral regions does not reflect a clear relation with the level of planetary magnetic activity as measured by Kp. There is also evidence of waves originating in the tropics, presumably due to convective sources; to some extent this may contribute to the Kp and solar flux relationships noted above. Further elucidation of local time, season, and altitude dependences of TAD propagation characteristics may be forthcoming from density measurements from the GOCE and Swarm missions.
A combined mean dynamic topography model - DTU17cMDT
NASA Astrophysics Data System (ADS)
Knudsen, P.; Andersen, O. B.; Nielsen, K.; Maximenko, N. A.
2017-12-01
Within the ESA-supported Optimal Geoid for Modelling Ocean Circulation (OGMOC) project a new geoid model has been derived. It is based on the GOCO05C setup, though the newer DTU15GRA altimetric surface gravity has been used in the combination. Subsequently the model has been augmented using the EIGEN-6C4 coefficients to d/o 2160. Compared to DTU13MSS, the DTU15MSS has been derived by also including re-tracked CryoSat-2 altimetry, hence increasing its resolution, and some issues in the polar regions have been solved. The new DTU17MDT has been derived using this new geoid model and the DTU15MSS mean sea surface. Compared to other geoid models, the new OGMOC geoid model has been optimized to avoid striations and orange-skin-like features. The filtering was re-evaluated by adjusting the quasi-Gaussian filter width to optimize the fit to drifter velocities. The results show that the new MDT improves the resolution of the details of the ocean circulation. Subsequently, the drifter velocities were integrated to enhance the resolution of the MDT. As a contribution to the ESA-supported GOCE++ project DYCOT, special attention was devoted to the coastal areas in order to optimize the extrapolation towards the coast and to integrate mean sea levels at tide gauges into that process. The presentation will focus on the coastal zone when assessing the methodology, the data and the final model DTU17cMDT.
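The filter-width tuning can be illustrated with the following sketch, in which Gaussian filters of different widths are applied to a noisy synthetic MDT and the width minimizing the slope misfit to a stand-in 'truth' (playing the role of the independent drifter velocities) is selected; the grid, noise level and candidate widths are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Minimal sketch of selecting a smoothing width for a raw geodetic MDT by
# minimizing the misfit of its slopes (proportional to geostrophic velocities)
# against an independent field. Everything below is synthetic.

rng = np.random.default_rng(6)
ny, nx = 80, 80
y, x = np.mgrid[0:ny, 0:nx]

mdt_true = 0.5 * np.sin(2 * np.pi * x / 40) * np.cos(2 * np.pi * y / 40)   # [m]
mdt_raw = mdt_true + rng.normal(0, 0.05, (ny, nx))   # raw MDT with geoid-like noise

def misfit(width):
    smoothed = gaussian_filter(mdt_raw, sigma=width)
    # Stand-in for the drifter comparison: RMS of slope differences w.r.t. truth.
    gy_s, gx_s = np.gradient(smoothed)
    gy_t, gx_t = np.gradient(mdt_true)
    return np.sqrt(np.mean((gx_s - gx_t) ** 2 + (gy_s - gy_t) ** 2))

widths = np.arange(0.5, 6.0, 0.5)
scores = [misfit(w) for w in widths]
best = widths[int(np.argmin(scores))]
print(f"best filter width (grid cells): {best}")
```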
Forcing of the Coupled Ionosphere-Thermosphere (IT) System During Magnetic Storms
NASA Technical Reports Server (NTRS)
Huang, Cheryl; Huang, Yanshi; Su, Yi-Jiun; Sutton, Eric; Hairston, Marc; Coley, W. Robin; Doornbos, Eelco; Zhang, Yongliang
2014-01-01
Poynting flux shows peaks around the auroral zone and inside the polar cap. Energy enters the IT system at all local times in the polar cap. Track-integrated flux at DMSP often peaks at polar latitudes, probably due to the increased area of the polar cap during storm main phases. Ion temperatures at DMSP show large increases in the polar region at all local times; the cusp and auroral zones do not show distinctively high Ti. Ion temperatures in the polar cap are higher than in the auroral zones during quiet times. Neutral densities at GRACE and GOCE show maxima at polar latitudes without clear auroral signatures; the response is fast, minutes from onset to density peaks. GUVI observations of the O/N2 ratio during storms show a similar response to the direct measurements of ion and neutral densities, i.e. high temperatures in the polar cap during the pre-storm quiet period and heating proceeding from the polar cap to lower latitudes during the storm main phase. The discrepancy between maps of Poynting flux and of ion temperatures/neutral densities suggests that the connection between Poynting flux and Joule heating is not simple.
Adaptive topographic mass correction for satellite gravity and gravity gradient data
NASA Astrophysics Data System (ADS)
Holzrichter, Nils; Szwillus, Wolfgang; Götze, Hans-Jürgen
2014-05-01
Subsurface modelling with gravity data requires a reliable topographic mass correction. For decades, this mandatory step has been a standard procedure. However, the methods were originally developed for local terrestrial surveys. Therefore, these methods often include defaults like a limited correction area of 167 km around an observation point, resampling of the topography depending on the distance to the station, or disregard of the curvature of the Earth. New satellite gravity data (e.g. GOCE) can be used for large-scale lithospheric modelling with gravity data. The investigation areas can span thousands of kilometres and, in addition, the measurements are located at the flight height of the satellite (e.g. ~250 km for GOCE). The standard definition of the correction area and of the specific grid spacing around an observation point was not developed for stations located at these heights or for areas of these dimensions. This calls for a re-evaluation of the defaults used for the topographic correction. We developed an algorithm which resamples the topography based on an adaptive approach: instead of resampling the topography depending on the distance to the station, the grids are resampled depending on their influence at the station. Therefore, the only value the user has to define is the desired accuracy of the topographic correction; it is not necessary to define the grid spacing or a limited correction area. Furthermore, the algorithm calculates the topographic mass response with spherically shaped polyhedral bodies. We show examples for local and global gravity datasets and compare the results of the topographic mass correction to existing approaches. We provide suggestions on how satellite gravity and gradient data should be corrected.
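The adaptive idea can be sketched in a simplified flat-Earth, point-mass setting: a topographic column is subdivided only as long as the refinement changes its contribution at the station by more than the requested accuracy. Real implementations use spherically shaped polyhedra or tesseroids, and the grid, density and tolerance below are assumed values:

```python
import numpy as np

G = 6.674e-11           # gravitational constant [m^3 kg^-1 s^-2]
rho = 2670.0            # rock density [kg/m^3] (assumed)
tol = 1e-9              # requested accuracy per cell [m/s^2] (assumed)
station = np.array([0.0, 0.0, 250e3])   # station at satellite-like altitude [m]

def gz_point_mass(mass, center, station):
    """Downward attraction of a point mass at the station [m/s^2] (positive down)."""
    d = station - center
    r = np.linalg.norm(d)
    return G * mass * d[2] / r**3

def gz_cell(x0, x1, y0, y1, height, station, tol):
    """Attraction of one topographic column, refined adaptively in the horizontal."""
    mass = rho * (x1 - x0) * (y1 - y0) * height
    center = np.array([(x0 + x1) / 2, (y0 + y1) / 2, height / 2])
    coarse = gz_point_mass(mass, center, station)
    # Refined estimate: split the column into 4 sub-columns.
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
    quads = [(x0, xm, y0, ym), (xm, x1, y0, ym), (x0, xm, ym, y1), (xm, x1, ym, y1)]
    fine = sum(gz_point_mass(rho * (a1 - a0) * (b1 - b0) * height,
                             np.array([(a0 + a1) / 2, (b0 + b1) / 2, height / 2]),
                             station)
               for a0, a1, b0, b1 in quads)
    if abs(fine - coarse) < tol:
        return fine
    return sum(gz_cell(a0, a1, b0, b1, height, station, tol)
               for a0, a1, b0, b1 in quads)

# Toy 10x10 topography grid of 10 km cells with random heights.
rng = np.random.default_rng(7)
cell = 10e3
total = 0.0
for i in range(10):
    for j in range(10):
        h = rng.uniform(0, 2000.0)
        total += gz_cell(i * cell, (i + 1) * cell, j * cell, (j + 1) * cell, h, station, tol)
print(f"topographic attraction at station: {total:.3e} m/s^2")
```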
Are higher degree even zonals really harmful for the LARES/LAGEOS frame-dragging experiment?
NASA Astrophysics Data System (ADS)
Renzetti, G.
2012-08-01
The effects of the low altitude of LARES are examined to determine how they can impact the outcome of the hoped-for 1% frame-dragging measurement in the LARES-LAGEOS experiment. This analysis, based on a different approach than other studies recently appearing in the literature, shows that the spherical harmonics of the Earth's gravity field with degree ℓ > 60 may represent a threat because their errors map significantly into LARES orbital disturbances compared to frame-dragging. The GIF48 model was used. It is questionable whether future Earth gravity models from GRACE and GOCE will be of sufficient accuracy.
Precise Orbit Determination Of Low Earth Satellites At AIUB Using GPS And SLR Data
NASA Astrophysics Data System (ADS)
Jaggi, A.; Bock, H.; Thaller, D.; Sosnica, K.; Meyer, U.; Baumann, C.; Dach, R.
2013-12-01
An ever-increasing number of low Earth orbiting (LEO) satellites is, or will be, equipped with retro-reflectors for Satellite Laser Ranging (SLR) and with on-board receivers to collect observations from Global Navigation Satellite Systems (GNSS), such as the Global Positioning System (GPS) and, in the future, the Russian GLONASS and European Galileo systems. At the Astronomical Institute of the University of Bern (AIUB), LEO precise orbit determination (POD) using either GPS or SLR data is performed for a wide range of applications for satellites at different altitudes. For this purpose, the classical numerical integration techniques, as also used for dynamic orbit determination of satellites at high altitudes, are extended by pseudo-stochastic orbit modeling techniques to efficiently cope with potential force model deficiencies for satellites at low altitudes. Accuracies of better than 2 cm may be achieved by pseudo-stochastic orbit modeling for satellites at very low altitudes, such as for the GPS-based POD of the Gravity field and steady-state Ocean Circulation Explorer (GOCE).
Regional gravity field modelling from GOCE observables
NASA Astrophysics Data System (ADS)
Pitoňák, Martin; Šprlák, Michal; Novák, Pavel; Tenzer, Robert
2017-01-01
In this article we discuss a regional recovery of gravity disturbances at the mean geocentric sphere approximating the Earth over the area of Central Europe from satellite gravitational gradients. For this purpose, we derive integral formulas which allow converting the gravity disturbances into the disturbing gravitational gradients in the local north-oriented frame (LNOF). The derived formulas are free of singularities provided that r ≠ R. We then investigate three numerical approaches for solving their inverses. In the first approach, the integral formulas are modified so that the near- and distant-zone contributions are solved individually. While the effect of the near-zone gravitational gradients is solved as an inverse problem, the effect of the distant-zone gravitational gradients is computed by numerical integration from the global gravitational model (GGM) TIM-r4. In the second approach, we further elaborate the first scenario by reducing the measured gravitational gradients for the gravitational effects of topographic masses. In the third approach, we apply an additional modification by reducing the gravitational gradients for the reference GGM. In all approaches we determine the gravity disturbances from each of the four accurately measured gravitational gradients separately as well as from their combination. Our regional gravitational field solutions are based on the GOCE EGG_TRF_2 gravitational gradients collected within the period from 1 November 2009 until 11 January 2010. The obtained results are compared with EGM2008, DIR-r1, TIM-r1 and SPW-r1. The best fit, in terms of RMS (2.9 mGal), is achieved for EGM2008 when using the third approach with all four well-measured gravitational gradients combined. This is explained by the fact that a priori information about the Earth's gravitational field up to degree and order 180 was used.
Earth Gravitational Model 2020
NASA Astrophysics Data System (ADS)
Barnes, Daniel; Holmes, Simon; Factor, John; Ingalls, Sarah; Presicci, Manny; Beale, James
2017-04-01
The National Geospatial-Intelligence Agency [NGA], in conjunction with its U.S. and international partners, has begun preliminary work on its next Earth Gravitational Model, to replace EGM2008. The new 'Earth Gravitational Model 2020' [EGM2020] has an expected public release date of 2020, and will likely retain the same harmonic basis and resolution as EGM2008. As such, EGM2020 will be essentially an ellipsoidal harmonic model up to degree (n) and order (m) 2159, but will be released as a spherical harmonic model to degree 2190 and order 2159. EGM2020 will benefit from new data sources and procedures. Updated satellite gravity information from the GOCE and GRACE missions will better support the lower harmonics globally. Multiple new acquisitions (terrestrial, airborne and shipborne) of gravimetric data over specific geographical areas will provide improved global coverage and resolution over the land, as well as for coastal and some ocean areas. Ongoing accumulation of satellite altimetry data, as well as improvements in the treatment of these data, will better define the marine gravity field, most notably in polar and near-coastal regions. NGA and partners are evaluating different approaches for optimally combining the new GOCE/GRACE satellite gravity models with the terrestrial data, including the latest methods employing a full covariance adjustment. NGA is also working to systematically assess the quality of its entire gravimetry database, towards correcting biases and other egregious errors where possible, and generating improved error models that will inform the final combination with the latest satellite gravity models. Outdated data gridding procedures have been replaced with improved approaches. For EGM2020, NGA intends to extract maximum value from the proprietary data that overlaps geographically with unrestricted data, whilst also making sure to respect and honor its proprietary agreements with its data-sharing partners. Approved for Public Release, 15-564.
Earth Gravitational Model 2020
NASA Astrophysics Data System (ADS)
Barnes, D.; Factor, J. K.; Holmes, S. A.; Ingalls, S.; Presicci, M. R.; Beale, J.; Fecher, T.
2015-12-01
The National Geospatial-Intelligence Agency [NGA], in conjunction with its U.S. and international partners, has begun preliminary work on its next Earth Gravitational Model, to replace EGM2008. The new 'Earth Gravitational Model 2020' [EGM2020] has an expected public release date of 2020, and will likely retain the same harmonic basis and resolution as EGM2008. As such, EGM2020 will be essentially an ellipsoidal harmonic model up to degree (n) and order (m) 2159, but will be released as a spherical harmonic model to degree 2190 and order 2159. EGM2020 will benefit from new data sources and procedures. Updated satellite gravity information from the GOCE and GRACE missions will better support the lower harmonics globally. Multiple new acquisitions (terrestrial, airborne and shipborne) of gravimetric data over specific geographical areas will provide improved global coverage and resolution over the land, as well as for coastal and some ocean areas. Ongoing accumulation of satellite altimetry data, as well as improvements in the treatment of these data, will better define the marine gravity field, most notably in polar and near-coastal regions. NGA and partners are evaluating different approaches for optimally combining the new GOCE/GRACE satellite gravity models with the terrestrial data, including the latest methods employing a full covariance adjustment. NGA is also working to systematically assess the quality of its entire gravimetry database, towards correcting biases and other egregious errors where possible, and generating improved error models that will inform the final combination with the latest satellite gravity models. Outdated data gridding procedures have been replaced with improved approaches. For EGM2020, NGA intends to extract maximum value from the proprietary data that overlaps geographically with unrestricted data, whilst also making sure to respect and honor its proprietary agreements with its data-sharing partners.
NASA Astrophysics Data System (ADS)
Blank, B.; van der Wal, W.; Pappa, F.; Ebbing, J.
2017-12-01
B. Blank (1), H. Hu (1), W. van der Wal (1), F. Pappa (2), J. Ebbing (2); (1) Delft University of Technology, (2) Christian-Albrechts-University of Kiel. Since the beginning of the 2000s, time-variable gravity data from GRACE has proved to be an effective means for mapping ice mass loss in Antarctica. However, Glacial Isostatic Adjustment (GIA) models are required to correct for GIA-induced mass changes. While most GIA models have adopted an Earth model whose parameters vary only radially, it has long been clear that the Earth structure also varies with longitude and latitude. For this study a new global 3D GIA model has been developed within the finite element software package ABAQUS, which can be adapted to operate at a spatial resolution down to 50 km locally. The model is being benchmarked against normal-mode models for surface loading. It will be used to investigate the effects of a 3D varying lithosphere and upper asthenosphere in Antarctica. The viscosity will be computed from temperature estimates with laboratory-based flow laws. A new 3D temperature map of the Antarctic lithosphere has been developed within ESA's GOCE+ project based on seismic data as well as on GOCE and GRACE inferred gravity gradients. Output from the GIA model with these new temperature estimates will be compared to that of 1D viscosity profiles and other recent 3D viscosity models based on seismic data. From these side-by-side comparisons we want to investigate the influence of the viscosity map on uplift rates and horizontal movement. Finally, the results can be compared to GPS measurements to investigate the validity of all models.
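The temperature-to-viscosity step can be sketched with a simplified Arrhenius-type relation; the reference viscosity, activation energy and temperatures below are illustrative assumptions, whereas the study itself uses laboratory-based flow laws with additional dependencies (grain size, stress, water content):

```python
import numpy as np

# Minimal sketch: map temperature to viscosity with the simplified relation
# eta(T) = eta_ref * exp(E/R * (1/T - 1/T_ref)). All parameter values are assumed.

R = 8.314            # gas constant [J/(mol K)]
E = 300e3            # activation energy [J/mol] (assumed)
eta_ref = 1e21       # reference viscosity [Pa s] at T_ref (assumed)
T_ref = 1600.0       # reference temperature [K] (assumed)

def viscosity(T_kelvin):
    return eta_ref * np.exp(E / R * (1.0 / T_kelvin - 1.0 / T_ref))

temps = np.array([1300.0, 1450.0, 1600.0, 1750.0])   # sample upper-mantle temperatures [K]
for T, eta in zip(temps, viscosity(temps)):
    print(f"T = {T:6.0f} K  ->  eta = {eta:9.2e} Pa s")
```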
NASA Astrophysics Data System (ADS)
Metivier, L.; Greff-Lefftz, M.; Panet, I.; Pajot-Métivier, G.; Caron, L.
2014-12-01
Joint inversion of the observed geoid and seismic velocities has commonly been used to constrain the viscosity profile within the mantle as well as the lateral density variations. Recent satellite measurements of the second-order derivatives of the Earth's gravity potential give new possibilities to understand these mantle properties. We use lateral density variations in the Earth's mantle based on slab history or deduced from seismic tomography. The main uncertainties are the relationship between seismic velocity and density (the so-called density/velocity scaling factor) and the variation with depth of the density contrast between the cold slabs and the surrounding mantle, introduced here as a scaling factor with respect to a constant value. The geoid, gravity and gravity gradients at the altitude of the GOCE satellite (about 255 km) are derived using geoid kernels for given viscosity depth profiles. We assume a layered mantle model with viscosity and conversion factor constant in each layer, and we fix the viscosity of the lithosphere. We perform a Monte Carlo search for the viscosity and density/velocity scaling factor profiles within the mantle that allow us to fit the observed geoid, gravity and gravity gradients. We test a 2-layer, a 3-layer and a 4-layer mantle. For each model, we compute the posterior probability distribution of the unknown parameters, and we discuss the respective contributions of the geoid, gravity and gravity gradients in the inversion. Finally, for the best fit, we present the viscosity and scaling factor profiles obtained for the lateral density variations derived from seismic velocities and for slabs sinking into the mantle.
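A Monte Carlo search of this kind can be sketched as follows: draw candidate viscosity and density/velocity scaling profiles from broad priors, evaluate the misfit to the observed geoid, gravity and gradients, and accumulate likelihood-weighted samples into posterior statistics. The forward model below is a placeholder function standing in for the geoid-kernel computation of the study; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(log_visc, scaling):
    """Hypothetical stand-in for the geoid/gravity/gradient prediction of a layered mantle."""
    return np.concatenate([np.tanh(log_visc) * scaling, np.cumsum(scaling)])

# Synthetic "observations" generated from one particular parameter set.
obs = forward(np.array([0.0, 1.0, -0.5]), np.array([0.20, 0.15, 0.10]))
sigma = 0.05

samples, misfits = [], []
for _ in range(20000):
    log_visc = rng.uniform(-2.0, 2.0, size=3)   # relative layer viscosities (log10)
    scaling = rng.uniform(0.0, 0.4, size=3)     # density/velocity scaling factor per layer
    r = (forward(log_visc, scaling) - obs) / sigma
    misfits.append(0.5 * np.sum(r**2))
    samples.append(np.concatenate([log_visc, scaling]))

samples = np.array(samples)
w = np.exp(-np.array(misfits))                  # likelihood weights -> posterior
w /= w.sum()
posterior_mean = w @ samples                    # weighted posterior mean of all parameters
print(posterior_mean)
```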
NASA Astrophysics Data System (ADS)
Claessens, S. J.
2016-12-01
Mass density contrasts in the Earth's crust can be detected using an inversion of terrestrial or airborne gravity data. This contribution shows a technique to detect short-scale density contrasts using in-situ gravity observations in combination with a high-resolution global gravity model that includes variations in the gravity field due to topography. The technique is exemplified at various test sites using the Global Gravity Model Plus (GGMplus), which is a 7.2 arcsec resolution model of the Earth's gravitational field, covering all land masses and near-coastal areas within +/- 60° latitude. The model is a composite of GRACE and GOCE satellite observations, the EGM2008 global gravity model, and short-scale topographic gravity effects. Since variations in the Earth's gravity field due to topography are successfully modelled by GGMplus, any remaining differences with in-situ gravity observations are primarily due to mass density variations. It is shown that this technique effectively filters out large-scale density variations, and highlights short-scale near-surface density contrasts in the Earth's crust. Numerical results using recent high-density gravity surveys are presented, which indicate a strong correlation between density contrasts found and known lines of geological significance.
NASA Astrophysics Data System (ADS)
Jewess, Mike
2009-05-01
Your news article "New probe plots Earth's gravity field" (March p11) reports on the European Space Agency's Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) - a satellite that will measure the Earth's gravitational field. It describes the way that g, the acceleration of free fall at the Earth's surface, varies with latitude; this variation is great enough to require the adjustment of pendulum clocks moved between latitudes, and also the recalibration of all balances that do not directly compare one mass with a reference mass. The article also notes that the spin of the (effectively fluid) Earth causes it to bulge at the equator, a realization that goes back to Newton's Principia.
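The latitude dependence of g described here is captured, to good approximation, by the closed-form Somigliana formula for normal gravity on a reference ellipsoid. The short sketch below uses the standard WGS84 constants (the choice of ellipsoid and constants is ours, not the article's) and shows the roughly 0.5% increase from equator to pole:

```python
import numpy as np

def normal_gravity(lat_deg):
    """Somigliana formula for normal gravity on the WGS84 ellipsoid [m/s^2]."""
    gamma_e = 9.7803253359        # normal gravity at the equator
    k = 0.00193185265241          # Somigliana constant
    e2 = 0.00669437999014         # first eccentricity squared
    s2 = np.sin(np.radians(lat_deg)) ** 2
    return gamma_e * (1.0 + k * s2) / np.sqrt(1.0 - e2 * s2)

for lat in (0, 30, 45, 60, 90):
    print(f"{lat:3d} deg: g = {normal_gravity(lat):.5f} m/s^2")
# Equator-to-pole difference of about 0.052 m/s^2 (~0.5 %), enough to require
# recalibration of pendulum clocks and spring balances moved between latitudes.
```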
NASA Astrophysics Data System (ADS)
Magg, Manfred; Grillenbeck, Anton, Dr.
2004-08-01
Several samples of thermal control blankets were subjected to transient thermal loads in a thermal vacuum chamber in order to study their ability to excite micro-vibrations on a carrier structure and to cause tiny centre-of-gravity shifts. This investigation was driven by the GOCE project's need to minimize micro-vibrations on board the spacecraft while in orbit. The objectives were to better understand the mechanisms by which thermal control blankets may produce micro-vibrations, and to identify thermal control blanket lay-ups with minimum micro-vibration activity.
NASA Astrophysics Data System (ADS)
Mulet, Sandrine; Rio, Marie-Hélène; Etienne, Hélène
2017-04-01
Strong improvements have been made in our knowledge of the surface ocean geostrophic circulation thanks to satellite observations. For instance, the use of the latest GOCE (Gravity field and steady-state Ocean Circulation Explorer) geoid model together with altimetry data gives a good estimate of the mean oceanic circulation at spatial scales down to 125 km. However, surface drifters are essential to resolve smaller scales; it is thus mandatory to carefully process drifter data and then to combine these different data sources. In this framework, the global 1/4° CNES-CLS13 Mean Dynamic Topography (MDT) and associated mean geostrophic currents have been computed (Rio et al., 2014). First, a satellite-only MDT was computed from altimetric and gravimetric data. Then, an important step was to pre-process the drifter data to extract only the geostrophic component, in order to be consistent with the physical content of the satellite-only MDT. This step includes estimating and removing the Ekman current and wind slippage. Finally, the drifters and the satellite-only MDT were combined. Similar approaches are used regionally to move toward higher resolution, for instance in the Agulhas Current or along the Brazilian coast. Also, a case study in the Gulf of Mexico intends to use drifters in the same way to improve weekly geostrophic current estimates.
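Once an MDT grid is available, the associated mean geostrophic currents follow from its horizontal gradients via u = -(g/f) ∂η/∂y and v = (g/f) ∂η/∂x. The sketch below applies these relations to a synthetic 1/4° MDT field; the real input would be the CNES-CLS13 grid, and the equatorial band is masked because geostrophy breaks down where f vanishes.

```python
import numpy as np

g = 9.81                      # gravity [m/s^2]
omega = 7.2921e-5             # Earth rotation rate [rad/s]
R = 6371e3                    # mean Earth radius [m]

lat = np.arange(-60, 60.25, 0.25)                # 1/4 degree grid
lon = np.arange(0, 360, 0.25)
LON, LAT = np.meshgrid(lon, lat)
mdt = 0.5 * np.sin(np.radians(LAT)) * np.cos(np.radians(2 * LON))   # synthetic MDT [m]

f = 2 * omega * np.sin(np.radians(LAT))          # Coriolis parameter
dlat = np.radians(0.25) * R                      # meridional grid spacing [m]
dlon = np.radians(0.25) * R * np.cos(np.radians(LAT))               # zonal spacing [m]

deta_dy = np.gradient(mdt, axis=0) / dlat
deta_dx = np.gradient(mdt, axis=1) / dlon
with np.errstate(divide="ignore", invalid="ignore"):
    u = -g / f * deta_dy                         # zonal geostrophic current [m/s]
    v = g / f * deta_dx                          # meridional geostrophic current [m/s]

# Mask the equatorial band where f ~ 0 and geostrophy breaks down.
u[np.abs(LAT) < 5] = np.nan
v[np.abs(LAT) < 5] = np.nan
print(np.nanmax(np.hypot(u, v)))
```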
Airborne geoid mapping of land and sea areas of East Malaysia
NASA Astrophysics Data System (ADS)
Jamil, H.; Kadir, M.; Forsberg, R.; Olesen, A.; Isa, M. N.; Rasidi, S.; Mohamed, A.; Chihat, Z.; Nielsen, E.; Majid, F.; Talib, K.; Aman, S.
2017-02-01
This paper describes the development of a new geoid-based vertical datum from airborne gravity data, by the Department of Survey and Mapping Malaysia, on land and in the South China Sea off the coast of the East Malaysia region, covering an area of about 610,000 square kilometres. More than 107,000 km of airborne gravity flight lines over the land and marine areas of East Malaysia have been combined to provide seamless land-to-sea gravity field coverage, with an estimated accuracy of better than 2.0 mGal. The iMAR-IMU processed gravity anomaly data from the 2014-2016 airborne survey have been used to extend a composite gravity solution across a number of minor gaps on selected lines, using a draping technique. The geoid computations were all done with the GRAVSOFT suite of programs from DTU-Space. EGM2008, augmented with a GOCE spherical harmonic model, was used up to spherical harmonic degree N = 720. The gravimetric geoid was first tied at one tide gauge (in Kota Kinabalu, KK2019) to produce a fitted geoid, my_geoid2017_fit_kk. The fitted geoid was offset from the gravimetric geoid by +0.852 m, based on the comparison at the tide-gauge benchmark KK2019. Consequently, the orthometric heights at the six other tide gauge stations were computed as H_GPS-lev = h_GPS - N_my_geoid2017_fit_kk. Comparison of the conventional levelled heights (H_lev) and the GPS-levelling heights (H_GPS-lev) at the six tide gauge locations indicates an RMS height difference of 2.6 cm. The final gravimetric geoid was fitted to the seven tide gauge stations and is known as my_geoid2017_fit_east. The accuracy of the gravimetric geoid is estimated to be better than 5 cm across most of the East Malaysia land and marine areas.
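The tide-gauge fitting step boils down to the relation N = h - H between geoid heights, ellipsoidal heights and orthometric heights: the gravimetric geoid is shifted by the mean offset found at the benchmark(s), and the fitted geoid is then checked against GPS/levelling at the remaining stations. A minimal sketch with invented station values (not the actual Malaysian data):

```python
import numpy as np

# Invented benchmark values: GPS ellipsoidal heights, levelled heights, gravimetric geoid.
h_gps  = np.array([4.812, 3.105, 5.230, 2.944, 6.011, 3.777])  # [m]
H_lev  = np.array([2.130, 0.448, 2.601, 0.302, 3.350, 1.122])  # [m]
N_grav = np.array([1.825, 1.804, 1.771, 1.790, 1.806, 1.812])  # [m]

# One-parameter fit: mean offset between the gravimetric geoid and GPS/levelling.
offset = np.mean(h_gps - H_lev - N_grav)
N_fit = N_grav + offset                      # fitted geoid at the stations

# GPS-levelling heights from the fitted geoid, and their agreement with levelling.
H_gps_lev = h_gps - N_fit
rms = np.sqrt(np.mean((H_gps_lev - H_lev) ** 2))
print(f"offset = {offset:+.3f} m, RMS of height differences = {rms*100:.1f} cm")
```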
NASA Astrophysics Data System (ADS)
Novak, P.; Pitonak, M.; Sprlak, M.
2015-12-01
Recently realized gravity-dedicated satellite missions allow for measuring values of scalar, vectorial (Gravity Recovery And Climate Experiment - GRACE) and second-order tensorial (Gravity field and steady-state Ocean Circulation Explorer - GOCE) parameters of the Earth's gravitational potential. Theoretical aspects related to using moving sensors for measuring elements of a third-order gravitational tensor are currently under investigation; e.g., the gravity-dedicated satellite mission OPTIMA (OPTical Interferometry for global Mass change detection from space) should measure third-order derivatives of the Earth's gravitational potential. This contribution investigates regional recovery of the disturbing gravitational potential on the Earth's surface from satellite observations of first-, second- and third-order radial derivatives of the disturbing gravitational potential. Synthetic measurements along a satellite orbit at the altitude of 250 km are synthesized from the global gravitational model EGM2008 and polluted by Gaussian noise. The process of downward continuation is stabilized by the Tikhonov regularization. Estimated values of the disturbing gravitational potential are compared with the same quantity synthesized directly from EGM2008. Finally, this contribution also discusses merging a regional solution into a global field as a patchwork.
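Downward continuation of satellite-level derivatives to the Earth's surface is an ill-posed linear problem, and Tikhonov regularization stabilizes it by penalizing the solution norm. The sketch below uses a generic ill-conditioned matrix as a stand-in for the continuation operator, so only the regularization mechanics carry over; the operator, noise level and dimensions are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60
u, _, vt = np.linalg.svd(rng.normal(size=(n, n)))
A = u @ np.diag(np.logspace(0, -8, n)) @ vt      # rapidly decaying singular values
x_true = np.sin(np.linspace(0, 4 * np.pi, n))    # "disturbing potential" on the surface
y = A @ x_true + 1e-6 * rng.normal(size=n)       # noisy satellite-level observations

def tikhonov(A, y, lam):
    """Solve min ||A x - y||^2 + lam ||x||^2 via regularized normal equations."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

for lam in (0.0, 1e-12, 1e-8, 1e-4):
    x_hat = tikhonov(A, y, lam) if lam > 0 else np.linalg.lstsq(A, y, rcond=None)[0]
    print(f"lambda = {lam:.0e}: rms error = {np.sqrt(np.mean((x_hat - x_true)**2)):.3e}")
```

Too small a regularization parameter lets the noise blow up through the small singular values, too large a value over-smooths the recovered field; in practice the parameter is tuned to the noise level of the observations.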
NASA Astrophysics Data System (ADS)
Pitonak, Martin; Sprlak, Michal; Novak, Pavel; Tenzer, Robert
2016-04-01
Recently realized gravity-dedicated satellite missions allow for measuring values of scalar, vectorial (Gravity Recovery And Climate Experiment - GRACE) and second-order tensorial (Gravity field and steady-state Ocean Circulation Explorer - GOCE) parameters of the Earth's gravitational potential. Theoretical aspects related to using moving sensors for measuring elements of the third-order gravitational tensor are currently under investigation; e.g., the gravity field-dedicated satellite mission OPTIMA (OPTical Interferometry for global Mass change detection from space) should measure third-order derivatives of the Earth's gravitational potential. This contribution investigates regional recovery of the disturbing gravitational potential on the Earth's surface from satellite and aerial observations of the first-, second- and third-order radial derivatives of the disturbing gravitational potential. Synthetic measurements along a satellite orbit at the altitude of 250 km and along an aircraft track at the altitude of 10 km are synthesized from the global gravitational model EGM2008 and polluted by Gaussian noise. The process of downward continuation is stabilized by the Tikhonov regularization. Estimated values of the disturbing gravitational potential are compared with the same quantity synthesized directly from EGM2008.
NASA Astrophysics Data System (ADS)
Foerster, M.; Doornbos, E.; Haaland, S.
2016-12-01
Solar wind and IMF interaction with the geomagnetic field sets up a large-scale plasma circulation in the Earth's magnetosphere and the magnetically tightly connected ionosphere. The ionospheric ExB ion drift at polar latitudes accelerates the neutral gas as a nondivergent momentum source primarily in force balance with pressure gradients, while the neutral upper thermosphere circulation is essentially modified by apparent forces due to Earth's rotation (Coriolis and centrifugal forces) as well as by advection and viscous forces. The apparent forces affect the dawn and dusk sides asymmetrically, favouring a large dusk-side neutral wind vortex, while the non-dipolar portions of the Earth's magnetic field introduce significant hemispheric differences in magnetic flux and field configuration that lead to substantial interhemispheric differences in the ion-neutral interaction. We present statistical studies of both the high-latitude ionospheric convection and the upper thermospheric circulation patterns, based on measurements of the electron drift instrument (EDI) on board the Cluster satellites and of the accelerometers on board the CHAMP, GOCE, and Swarm spacecraft, respectively.
Space Geodesy: The Cross-Disciplinary Earth science (Vening Meinesz Medal Lecture)
NASA Astrophysics Data System (ADS)
Shum, C. K.
2012-04-01
Geodesy during the onset of the 21st century is evolving into a transformative cross-disciplinary Earth science field. The pioneers from before and after geodesy was defined as a discipline include Galileo, Descartes, Kepler, Newton, Euler, Bernoulli, Kant, Laplace, Airy, Kelvin, Jeffreys, Chandler, Meinesz, Kaula, and others. The complicated dynamic processes of the Earth system, manifested by interactions between the solid Earth and its fluid layers, including ocean, atmosphere, cryosphere and hydrosphere, and their feedbacks, are linked with scientific problems such as global sea-level rise resulting from natural and anthropogenic climate change. Advances in the precision and stability of geodetic and fundamental instrumentation, including clocks, satellite or quasar tracking sensors, altimetry and lidars, synthetic aperture radar interferometry (InSAR), InSAR altimetry, gravimetry and gradiometry, have enabled substantial and transformative progress in cross-disciplinary Earth sciences. In particular, advances in the measurement of gravity with modern free-fall methods have reached accuracies of 10^-9 g (~1 μGal or 10 nm/s^2) or better, allowing accurate measurements of height changes at the ~3 mm level relative to the Earth's center of mass, and of mass transport within the Earth's interior or its geophysical fluids, enabling global quantification of climate-change signals. These contemporary space geodetic and in situ sensors include, but are not limited to, satellite radar and laser altimetry/lidars, GNSS/SLR/VLBI/DORIS, InSAR, spaceborne gravimetry from GRACE (Gravity Recovery And Climate Experiment twin-satellite mission) and gradiometry from GOCE (Gravity field and steady-state Ocean Circulation Explorer), tide gauges, and hydrographic data (XBT/MBT/Argo). The 2007 Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) substantially narrowed the discrepancy between observation and the known geophysical causes of sea-level rise, but significant uncertainties remain, notably in the contributions from the ice reservoirs (ice sheets and mountain glaciers/ice caps) and in our knowledge of solid Earth glacial isostatic adjustment (GIA), to present-day and 20th-century global sea-level rise. Here we report our use of contemporary space geodetic observations and novel methodologies to address a few of the open Earth science questions, including the quantification of the major geophysical contributions to present-day global sea-level rise, and the subsequent narrowing of the current sea-level budget discrepancy.
NASA Astrophysics Data System (ADS)
Martinec, Zdeněk; Fullea, Javier
2015-03-01
We aim to interpret the vertical gravity and vertical gravity gradient of the GOCE-GRACE combined gravity model over the southeastern part of the Congo basin to refine the published model of sedimentary rock cover. We use the GOCO03S gravity model and evaluate its spherical harmonic representation at or near the Earth's surface. In this case, the gradiometry signals are enhanced as compared to the original measured GOCE gradients at satellite height and better emphasize the spatial pattern of sedimentary geology. To avoid aliasing, the omission error of the modelled gravity induced by the sedimentary rocks is adjusted to that of the GOCO03S gravity model. The mass-density Green's functions derived for the a priori structure of the sediments show a slightly greater sensitivity to the GOCO03S vertical gravity gradient than to the vertical gravity. Hence, the refinement of the sedimentary model is carried out for the vertical gravity gradient over the basin, such that a few anomalous values of the GOCO03S-derived vertical gravity gradient are adjusted by refining the model. We apply a 5-parameter Helmert transformation, defined by 2 translations, 1 rotation and 2 scale parameters, which are searched for by the steepest descent method. The refined sedimentary model is only slightly changed with respect to the original map, but it significantly improves the fit of the vertical gravity and vertical gravity gradient over the basin. However, there are still spatial features in the gravity and gradiometric data that remain unfitted by the refined model. These may be due to lateral density variations that are not contained in the model, a density contrast at the Moho discontinuity, lithospheric density stratification or mantle convection. In a second step, the refined sedimentary model is used to find the vertical density stratification of the sedimentary rocks. Although the gravity data can be interpreted by a constant sedimentary density, such a model does not correspond to the gravitational compaction of sedimentary rocks. Therefore, the density model is extended by including a linear increase in density with depth. Subsequent L2 and L∞ norm minimization procedures are applied to find the density parameters by adjusting both the vertical gravity and the vertical gravity gradient. We found that including the vertical gravity gradient in the interpretation of the GOCO03S-derived data reduces the non-uniqueness of the inverse gradiometric problem for density determination. The density structure of the sedimentary formations that provides the optimum predictions of the GOCO03S-derived gravity and vertical gradient of gravity consists of a surface density contrast with respect to the surrounding rocks of 0.24-0.28 g/cm3 and its decrease with depth of 0.05-0.25 g/cm3 per 10 km. Moreover, the case where the sedimentary rocks are gravitationally completely compacted in the deepest parts of the basin is supported by the L∞ norm minimization. However, this minimization also allows a remaining density contrast at the deepest parts of the sedimentary basin of about 0.1 g/cm3.
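The compaction parametrization used here (a surface density contrast that decreases linearly with depth) can be turned into a quick order-of-magnitude gravity estimate with an infinite Bouguer-slab approximation, Δg ≈ 2πG ∫ Δρ(z) dz. The sketch below does exactly that; the numbers and the slab geometry are illustrative assumptions, not the Green's-function modelling of the study.

```python
import numpy as np

G = 6.674e-11                      # gravitational constant [m^3 kg^-1 s^-2]

def slab_effect(depth_m, drho0=-260.0, gradient=150.0 / 10e3):
    """Delta g [mGal] of a sediment slab of thickness depth_m.

    drho0    : surface density contrast [kg/m^3] (e.g. -0.26 g/cm^3 for sediments)
    gradient : decrease of the contrast magnitude with depth [kg/m^3 per m]
    """
    z = np.linspace(0.0, depth_m, 1001)
    dz = z[1] - z[0]
    drho = np.clip(drho0 + gradient * z, None, 0.0)       # contrast vanishes once compacted
    integral = np.sum(0.5 * (drho[1:] + drho[:-1])) * dz  # trapezoidal rule over depth
    return 2.0 * np.pi * G * integral * 1e5               # m/s^2 -> mGal

for d in (1e3, 3e3, 5e3, 9e3):
    print(f"sediment thickness {d/1e3:.0f} km: {slab_effect(d):6.1f} mGal")
```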
NASA Astrophysics Data System (ADS)
Bain, V.; Milan, D.; Preciso, E.; Gaume, E.
2009-04-01
On the 17th, 19th and 23rd of July 2007, a series of local thunderstorms induced flash floods in the upper part of the South Tyne river in Northumberland, a rural area located near the border between England and Scotland. These events led to moderate damage in the villages and losses of livestock on local farms. They were overshadowed by the widespread lowland floods that occurred throughout the UK during the same period, but were nevertheless extreme events for the region. One of the affected streams, the Thinhope Burn, has been surveyed by the University of Gloucestershire in recent years. It is an active river from a geomorphological point of view. A survey conducted after the 2007 flood revealed that many of the boulders along the banks of the river, which had been deposited 50 to 100 years before, had been displaced, indicating a high return period for the flood (see EGU abstract EGU2008-A-04713). A complementary survey was conducted in July 2008 with the objective of gathering information on the discharges, the rainfall amounts and the active runoff processes. 14 cross-sections were surveyed; pictures were collected, enabling validation of peak discharge estimates; 5 witnesses were interviewed; and additional rainfall data and geomorphological evidence were collected. This survey revealed that the peak discharges exceeded 5 m3/s/km2 in the most affected areas. Unfortunately, no rainfall measurements are available that would enable further analysis, including the computation of runoff rates. Nevertheless, witness accounts and field observations give good insight into the hydrological processes, indicating a significant initial storage capacity of the peat layer covering the affected watersheds. Concerning the boulders, the field observations suggest surprising and unexplained transport processes. Blocks of up to one metre in diameter were displaced over short distances and deposited on the river banks without any sign of an established debris flow, as if short debris pulses occurred along the river course. This work is conducted within the European research project HYDRATE (Contract GOCE 037024).
Toward an integrated storm surge application: ESA Storm Surge project
NASA Astrophysics Data System (ADS)
Lee, Boram; Donlon, Craig; Arino, Olivier
2010-05-01
Storm surges and their associated coastal inundation are major coastal marine hazards, both in tropical and extra-tropical areas. As sea level rises due to climate change, the impact of storm surges and associated extreme flooding may increase in low-lying countries and harbour cities. Of the 33 world cities predicted to have at least 8 million people by 2015, at least 21 are coastal, including 8 of the 10 largest. They are highly vulnerable to coastal hazards, including storm surges. Coastal inundation forecasting and warning systems depend on the crosscutting cooperation of different scientific disciplines and user communities. An integrated approach to storm surge, wave, sea-level and flood forecasting offers an optimal strategy for building improved operational forecast and warning capability for coastal inundation. Earth Observation (EO) information from satellites has demonstrated high potential to enhance coastal hazard monitoring, analysis, and forecasting; the GOCE geoid data can help in calculating accurate positions of tide gauge stations within the GLOSS network. ASAR images have demonstrated their usefulness in analysing the hydrological situation in coastal zones in a timely manner when hazardous events occur. Wind speed and direction, which are key parameters for storm surge forecasting and hindcasting, can be derived using scatterometer data. The current issue is that, although a great deal of useful EO information and application tools exists, sufficient user information on EO data availability is missing, and easy access supported by user applications and documentation is highly required. Clear documentation of the user requirements in support of improved storm surge forecasting and risk assessment is also needed at present. The paper primarily addresses the requirements for data, models/technologies, and operational skills, based on the results from the recent Scientific and Technical Symposium on Storm Surges (www.surgesymposium.org, organized by the WMO-IOC Joint Technical Commission for Oceanography and Marine Meteorology, JCOMM) and follow-on activities that have been supported by the Intergovernmental Oceanographic Commission (IOC) of UNESCO through JCOMM. The paper also reviews the capabilities of storm surge models and the current status of using Earth Observation (EO) information for advancing storm surge application tools and, further, for improving operational forecast and warning capability for coastal inundation. In this context, the plans and expected results of the ESA Storm Surge Project (2010-2011) will be introduced.
On-ground casualty risk reduction by structural design for demise
NASA Astrophysics Data System (ADS)
Lemmens, Stijn; Funke, Quirin; Krag, Holger
2015-06-01
In recent years, awareness concerning the on-ground risk posed by uncontrolled re-entering space systems has increased. On average over the past decade, an object with mass above 800 kg has re-entered every week, of which only a few, e.g. ESA's GOCE in 2013 and NASA's UARS in 2011, appeared prominently in international media. Space agencies and nations have discussed requirements to limit the on-ground risk for future missions. To meet the requirements, the amount of debris falling back on Earth has to be limited in number, mass and size. Design for demise (D4D) refers to all measures taken in the design of a space object to increase the potential for demise of the object and its components during re-entry. SCARAB (Spacecraft Atmospheric Re-entry and Break-Up) is ESA's high-fidelity tool, which analyses the thermal and structural effects of atmospheric re-entry on spacecraft with a finite-element approach. For this study, a model of a representative satellite is developed in SCARAB to serve as a test-bed for D4D analyses on a structural level. The model is used as a starting point for different D4D approaches based on increasing the exposure of the satellite components to the aero-thermal environment, as a way to speed up the demise. Statistical bootstrapping is applied to the resulting on-ground fragment lists in order to compare the different re-entry scenarios and to determine the uncertainties of the results. Moreover, the bootstrap results can be used to analyse the casualty risk estimator from a theoretical point of view. The risk reductions for the analysed D4D techniques are presented with respect to the reference scenario for the modelled representative satellite.
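Statistical bootstrapping of a fragment list works by resampling the surviving fragments with replacement and recomputing the risk figure for each resample, which yields a confidence interval on the total casualty area (and, with an assumed population density, on the casualty expectation). The sketch below uses an invented fragment list and the common (√A_frag + √A_human)² casualty-area convention; it is not the SCARAB output or ESA's risk tool.

```python
import numpy as np

rng = np.random.default_rng(3)
frag_area = rng.lognormal(mean=-1.5, sigma=1.0, size=120)     # fragment cross sections [m^2]
a_human = 0.36                                                # assumed human cross section [m^2]

casualty_area = (np.sqrt(frag_area) + np.sqrt(a_human)) ** 2  # per-fragment casualty area

n_boot = 5000
totals = np.empty(n_boot)
for i in range(n_boot):
    # Resample the fragment list with replacement and recompute the total casualty area.
    sample = rng.choice(casualty_area, size=casualty_area.size, replace=True)
    totals[i] = sample.sum()

lo, med, hi = np.percentile(totals, [2.5, 50, 97.5])
print(f"total casualty area: {med:.1f} m^2 (95% interval {lo:.1f}-{hi:.1f} m^2)")

# With an assumed average population density rho_pop [1/m^2], the casualty
# expectation is E = rho_pop * total casualty area.
rho_pop = 1.2e-5
print(f"casualty expectation ~ {rho_pop * med:.2e}")
```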
On-Ground Casualty Risk Reduction by Structural Design for Demise
NASA Astrophysics Data System (ADS)
Lemmens, Stijn; Krag, Holger; Funke, Quirin
In recent years, awareness concerning the risk posed by uncontrolled re-entering spacecraft on the ground has increased. Some re-entry events, such as ESA's GOCE in 2013 and NASA's UARS, appeared prominently in international media. Space agencies and nations, in cooperation within the Inter-Agency Space Debris Coordination Committee (IADC), have established requirements to limit the on-ground risk for future missions. To meet the requirements, the amount of debris falling back on Earth has to be limited in number, mass and size. Design for demise (D4D) refers to all measures taken in the design of a space object to increase the potential for demise of the object and its components during re-entry. SCARAB (Spacecraft Atmospheric Re-entry and Break-Up) is ESA's high-fidelity tool, which analyses the thermal and structural effects of atmospheric re-entry on spacecraft with a finite-element approach. For this study, a model of a representative satellite is developed in SCARAB to serve as a test-bed for D4D analyses on a structural level. The model is used as a starting point for different D4D approaches based on increasing the exposure of the satellite components to the aero-thermal environment, as a way to speed up the demise. Statistical bootstrapping is applied to the resulting on-ground fragment lists in order to compare the different re-entry scenarios and to determine the uncertainties of the results. Moreover, the bootstrap results can be used to analyse the casualty risk estimator from a theoretical point of view. The risk reductions for the analysed D4D techniques are presented with respect to the reference scenario for the modelled representative satellite.
Surface topography estimated by inversion of satellite gravity gradiometry observations
NASA Astrophysics Data System (ADS)
Ramillien, Guillaume
2015-04-01
An integration of mass elements is presented for evaluating the six components of the second-order gravity tensor (i.e., second derivatives of the Newtonian mass integral for the gravitational potential) created by an uneven spherical topography consisting of juxtaposed vertical prisms. The method is based on Legendre polynomial series and takes elastic compensation of the topography by the Earth's surface into account. The speed of computation of the polynomial series increases, as expected, with the observing altitude above the source of the anomaly. Such forward modelling can easily be used for the reduction of observed gravity gradient anomalies by the effects of any spherical density interface. Moreover, an iterative least-squares inversion of the observed gravity tensor values Γαβ is proposed to estimate a regional set of topographic heights. Several recovery tests have been made by considering simulated gradiometry anomaly data, for varying satellite altitudes and a priori levels of accuracy. In the case of GOCE-type gradiometry anomalies measured at an altitude of ~300 km, the search converges to a stable and smooth topography after 20-30 iterations, while the final r.m.s. error is ~100 m. The possibility of cumulating satellite information from different orbit geometries is also examined for improving the prediction.
High stability laser for next generation gravity missions
NASA Astrophysics Data System (ADS)
Nicklaus, K.; Herding, M.; Wang, X.; Beller, N.; Fitzau, O.; Giesberts, M.; Herper, M.; Barwood, G. P.; Williams, R. A.; Gill, P.; Koegel, H.; Webster, S. A.; Gohlke, M.
2017-11-01
With GRACE (launched 2002) and GOCE (launched 2009), two very successful missions to measure the Earth's gravity field have been in orbit, both leading to a large number of publications. For a potential Next Generation Gravity Mission (NGGM) from ESA, a satellite-to-satellite tracking (SST) scheme similar to GRACE is under discussion, with a laser ranging interferometer instead of a Ka-band link to enable much lower measurement noise. Of key importance for such a laser interferometer is a single-frequency laser source with a linewidth <10 kHz and extremely low frequency noise, down to 40 Hz/√Hz in the measurement frequency band of 0.1 mHz to 1 Hz, which is about one order of magnitude more demanding than LISA. On GRACE-FO a laser ranging interferometer (LRI) will fly as a demonstrator. The LRI is a joint development between the USA (JPL, NASA) and Germany (GFZ, DLR). In this collaboration the JPL contributions are the instrument electronics, the reference cavity and the single-frequency laser, while STI, as the German industry prime, is responsible for the optical bench and the retroreflector. In preparation for NGGM, an all-European instrument development is the goal.
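The quoted frequency-noise requirement can be put into perspective with the standard relation between laser frequency noise and equivalent one-way ranging noise in an inter-satellite interferometer, δL ≈ (δν/ν)·L. The sketch below evaluates it for the 40 Hz/√Hz figure; the wavelength and inter-satellite distances are illustrative assumptions, not mission specifications.

```python
# Back-of-envelope conversion of laser frequency noise into equivalent ranging noise:
# delta_L ~ (delta_nu / nu) * L for an inter-satellite distance L.
c = 299_792_458.0          # speed of light [m/s]
wavelength = 1064e-9       # assumed Nd:YAG laser wavelength [m]
nu = c / wavelength        # optical frequency ~ 282 THz

dnu = 40.0                 # frequency noise ASD [Hz/sqrt(Hz)] in the measurement band
for L in (100e3, 200e3):   # assumed inter-satellite distances [m]
    dL = dnu / nu * L
    print(f"L = {L/1e3:.0f} km: ranging noise ~ {dL*1e9:.0f} nm/sqrt(Hz)")
```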
SAPS simulation with GITM/UCLA-RCM coupled model
NASA Astrophysics Data System (ADS)
Lu, Y.; Deng, Y.; Guo, J.; Zhang, D.; Wang, C. P.; Sheng, C.
2017-12-01
Ion velocities in the sub-auroral region observed by satellites during storm time often show a significant westward component. These high-speed westward streams are distinct from the convection pattern and are called Sub-Auroral Polarization Streams (SAPS). During the 17 March 2013 storm, the DMSP F18 satellite observed several SAPS cases when crossing the sub-auroral region. In this study, the Global Ionosphere Thermosphere Model (GITM) has been coupled to the UCLA-RCM model to simulate the impact of SAPS during the March 2013 event on the ionosphere/thermosphere. The particle precipitation and electric field from RCM have been used to drive GITM. The conductance calculated from GITM is fed back to RCM to make the coupling self-consistent. A comparison of GITM simulations with different SAPS specifications will be conducted. The neutral wind from the simulation will be compared with GOCE satellite observations. The comparison between runs with and without SAPS will separate the effect of SAPS from other effects and illustrate the impact on TIDs/TADs propagating in both poleward and equatorward directions.
NASA Astrophysics Data System (ADS)
2010-10-01
The troubles flowing from BP's Macondo oil well in the Gulf of Mexico have focused attention on the technological demands of safe deep-water drilling. European Space Agency research presented at a Space and Energy Seminar in August offers spin-off technologies that could support oil exploration and production in extreme environments, from corrosion control to better robotics. NASA and the European Space Agency have embarked on a joint programme to study the chemical composition of the atmosphere of Mars from 2016. They have just announced the providers of five scientific instruments for the first mission, including two consortia in which the Open University has a major role.
CryoSat Plus For Oceans: an ESA Project for CryoSat-2 Data Exploitation Over Ocean
NASA Astrophysics Data System (ADS)
Benveniste, J.; Cotton, D.; Clarizia, M.; Roca, M.; Gommenginger, C. P.; Naeije, M. C.; Labroue, S.; Picot, N.; Fernandes, J.; Andersen, O. B.; Cancet, M.; Dinardo, S.; Lucas, B. M.
2012-12-01
The ESA CryoSat-2 mission is the first space mission to carry a space-borne radar altimeter that is able to operate in the conventional pulse-width-limited (LRM) mode and in the novel Synthetic Aperture Radar (SAR) mode. Although the prime objective of the CryoSat-2 mission is dedicated to monitoring land and marine ice, the SAR mode capability of the CryoSat-2 SIRAL altimeter also presents the possibility of demonstrating significant potential benefits of SAR altimetry for ocean applications, based on expected performance enhancements which include improved range precision and finer along-track spatial resolution. With this scope in mind, the "CryoSat Plus for Oceans" (CP4O) project, dedicated to the exploitation of CryoSat-2 data over the ocean and supported by the ESA STSE (Support To Science Element) programme, brings together an expert European consortium comprising DTU Space, isardSAT, the National Oceanography Centre, Noveltis, SatOC, Starlab, TU Delft, the University of Porto and CLS (supported by CNES). The objectives of CP4O are: to build a sound scientific basis for new scientific and operational applications of CryoSat-2 data over the open ocean, polar ocean, coastal seas and for sea-floor mapping; to generate and evaluate new methods and products that will enable the full exploitation of the capabilities of the CryoSat-2 SIRAL altimeter, and extend their application beyond the initial mission objectives; and to ensure that the scientific return of the CryoSat-2 mission is maximised. In particular, four themes will be addressed. (1) Open Ocean Altimetry: combining the GOCE geoid model with CryoSat oceanographic LRM products for the retrieval of a CryoSat MSS/MDT model over open ocean surfaces and for analysis of mesoscale and large-scale prominent open ocean features; under this priority the project will also foster the exploitation of the finer resolution and higher SNR of the novel CryoSat SAR data to detect short-spatial-scale open ocean features. (2) High Resolution Polar Ocean Altimetry: combination of the GOCE geoid model with CryoSat oceanographic SAR products over polar oceans for the retrieval of CryoSat MSS/MDT and current circulation systems, improving polar tide models and studying the coupling between wind forcing and current patterns. (3) High Resolution Coastal Zone Altimetry: exploitation of the finer resolution and higher SNR of the novel CryoSat SAR data to bring radar altimetry closer to the shore, exploiting the SARIn mode for the discrimination of off-nadir land targets (e.g. steep cliffs) in the radar footprint from the nadir sea return. (4) High Resolution Sea-Floor Altimetry: exploitation of the finer resolution and higher SNR of the novel CryoSat SAR data to resolve the weak short-wavelength sea surface signals caused by sea-floor topography and to map uncharted sea-mounts/trenches. One of the first project activities is the consolidation of preliminary scientific requirements for the four themes under investigation. This paper will present the CP4O project content and objectives and will address the first results from the ongoing work to define the scientific requirements.
Update of the DTM thermosphere model in the framework of the H2020 project 'SWAMI'
NASA Astrophysics Data System (ADS)
Bruinsma, S.; Jackson, D.; Stolle, C.; Negrin, S.
2017-12-01
In the framework of the H2020 project SWAMI (Space Weather Atmosphere Model and Indices), which is expected to start in January 2018, the CIRA thermosphere specification model DTM2013 will be improved through the assimilation of additional density data, to drive down remaining biases, and through a new high-cadence Kp geomagnetic index, to improve storm-time performance. Five more years of GRACE high-resolution densities from 2012-2016, densities from the last year of the GOCE mission, Swarm mean densities, and mean densities from 2010-2017 inferred from the geodetic satellites at about 800 km are now available. The DTM2013 model will be compared with the new density data in order to detect possible systematic errors or other kinds of deficiencies, and a first analysis will be presented. Also, a more detailed analysis of model performance under storm conditions will be provided, which will then be the benchmark to quantify the model improvement expected with the higher-cadence Kp indices. In the SWAMI project, the DTM model will be coupled in the 120-160 km altitude region to the Met Office Unified Model in order to create a whole atmosphere model. It can be used for launch operations, re-entry computations, orbit prediction, and aeronomy and space weather studies. The project objectives and timeline will be given.
NASA Astrophysics Data System (ADS)
Forsberg, R.; Olesen, A. V.; Ferraccioli, F.; Jordan, T. A.; Matsuoka, K.
2016-12-01
Major airborne geophysical surveys have recently mapped large unexplored regions in the interior of East Antarctica, in a Danish-UK-Norwegian cooperation. Long-range aerogeophysical data have been collected both over the Recovery Lakes region (2012/13) and around the Pole (2015/16). The primary purpose of these campaigns was to map gravity in order to fill in data voids in global gravity field models and to augment results from the European Space Agency GOCE gravity field satellite mission. Additionally, magnetic, ice-penetrating radar and lidar data are used to explore and understand the subglacial topography and geological setting, providing an improved foundation for ice sheet modelling. The most recent ESA-sponsored Polar Gap project used a BAS Twin Otter aircraft equipped with both spring gravimeter and IMU gravity sensors, magnetometers and ice-penetrating radar over the essentially unmapped regions of the GOCE polar gap. Additional detailed flights over the subglacial Recovery Lakes region followed up on earlier 2013 flights over this region. The operations took place from two field camps (near the Recovery Lakes and the Thiel Mountains), as well as from the Amundsen-Scott South Pole station, thanks to a special arrangement with NSF. In addition to the airborne geophysics programme, data with an ESA Ku-band radar were also acquired in support of the CryoSat-2 mission, and scanning lidar data were collected across the polar gap, beyond the coverage of ICESat. In the talk we outline the Antarctic field operations and show first results of the campaign, including the performance of the gravity sensors, with comparison to the limited existing data in the region (e.g., AGAP, IceBridge), as well as examples of lidar, magnetic and radar data. Significant new features detected from the geophysical data include an extensive subglacial valley system between the Pole and the Filchner-Ronne ice shelf region, as well as extensive subglacial mountains, both consistent with observed ice stream patterns in the region. New data over the Recovery Lakes confirm the tectonic constraints on the lake system and also highlight the importance of relatively dense flight tracks for constraining local subglacial hydrology.
NASA Astrophysics Data System (ADS)
Álvarez, Orlando; Gimenez, Mario; Folguera, Andres; Spagnotto, Silvana; Bustos, Emilce; Baez, Walter; Braitenberg, Carla
2015-11-01
Satellite-only gravity measurements and those integrated with terrestrial observations provide global gravity field models of unprecedented precision and spatial resolution, allowing the analysis of the lithospheric structure. We used the model EGM2008 (Earth Gravitational Model) to calculate the gravity anomaly and the vertical gravity gradient in the South Central Andes region, correcting these quantities for the topographic effect. Both quantities show a spatial relationship between the projected subduction of the Copiapó aseismic ridge (located at about 27°-30° S), its potential deformational effects in the overriding plate, and the Ojos del Salado-San Buenaventura volcanic lineament. This volcanic lineament constitutes a projection of the volcanic arc toward the retroarc zone, whose origin and development were not clearly understood. The analysis of the gravity anomalies, at the extrapolated zone of the Copiapó ridge beneath the continent, shows a change in the general NNE trend of the Andean structures to an ENE direction coincident with the area of the Ojos del Salado-San Buenaventura volcanic lineament. This anomalous pattern over the upper plate is interpreted to be linked with the subduction of the Copiapó ridge. We explore the relation between deformational effects and volcanism at the northern Chilean-Pampean flat slab and the collision of the Copiapó ridge, on the basis of the Moho geometry and elastic thicknesses calculated from the new satellite GOCE data. Neotectonic deformation identified in previous works and associated with volcanic eruptions along the Ojos del Salado-San Buenaventura volcanic lineament is interpreted here as being caused by crustal doming imprinted by the subduction of the Copiapó ridge, evidenced by crustal thickening at the sites of ridge inception along the trench. Finally, we propose that the Copiapó ridge could have controlled the northern edge of the Chilean-Pampean flat slab, due to its higher buoyancy, similarly to the control that the Juan Fernandez ridge exerts on the geometry of the flat slab further south.
NASA Astrophysics Data System (ADS)
Barantsrva, O.; Artemieva, I. M.; Thybo, H.
2015-12-01
We present the results of gravity modelling for the North Atlantic region based on interpretation of GOCE satellite gravity data. First, to separate the gravity signal caused by density anomalies within the crust and the upper mantle, we subtract the lower harmonics of the gravity field, which are presumably caused by the deep density structure of the Earth (the core and the lower mantle). Next, the gravity effect of the upper mantle is calculated by subtracting the gravity effect of the crustal model. Our "basic model" is constrained by a recent regional seismic model, EUNAseis, for the crustal structure (Artemieva and Thybo, 2013); for bathymetry and topography we use the global ETOPO1 model by NOAA. We test the sensitivity of the results to different input parameters, such as bathymetry, crustal structure, and gravity field. For bathymetry, we additionally use GEBCO data; for the crustal correction, the global model CRUST 1.0 (Laske, 2013); for gravity, EGM2008 (Pavlis, 2012). The sensitivity analysis shows that uncertainty in the crustal structure produces the largest deviation from the basic model. Use of different bathymetry data has little effect on the final results, comparable to the interpolation error. The difference between mantle residual gravity models based on GOCE and EGM2008 gravity data is 5-10 mGal. The results based on the two crustal models have a similar pattern, but differ significantly in amplitude (ca. 250 mGal) for the Greenland-Faroe Ridge. The results demonstrate the presence of a strong gravity and density heterogeneity in the upper mantle in the North Atlantic region. A number of mantle residual gravity anomalies are robust features, independent of the choice of model parameters. These include (i) a sharp contrast at the continent-ocean transition, (ii) positive mantle gravity anomalies associated with continental fragments (microcontinents) in the North Atlantic Ocean, and (iii) negative mantle gravity anomalies which mark regions with anomalous oceanic mantle and the Mid-Atlantic Ridge. To better understand the complex geodynamic mosaic of the region, we compare our results with regional geochemical data (Korenaga and Kelemen, 2000) and find that the residual mantle gravity anomalies correlate well with anomalies in epsilon-Nd and iron depletion.
e.motion - European Initiatives for a Future Gravity Field Mission
NASA Astrophysics Data System (ADS)
Gruber, T.
2017-12-01
Since 2010 a large team of European scientists, with the support of technological and industrial partners, has been preparing proposals for new gravity field missions as follow-ups to GRACE, GOCE and GRACE-FO. The main goal of the proposed mission concepts is the long-term observation of the time-variable gravity field with significantly increased spatial and temporal resolution compared to what can be achieved nowadays with GRACE or in the near future with GRACE Follow-On. These observations are crucial for long-term monitoring of mass variations in the Earth system, in order to improve our knowledge about the global and regional water cycle as well as about processes of the solid Earth. Starting from the existing concepts of single-pair missions like GRACE and GRACE-FO, sensitivity and spatial and temporal resolution shall be increased, such that smaller-scale time-variable signals, which cannot be detected with the current techniques, can also be resolved. For such a mission concept, new and significantly improved observation techniques are needed. This concerns in particular the measurement of inter-satellite distances, the observation of non-gravitational accelerations, the configuration of the satellite orbit and, most importantly, the implementation of a constellation of satellite pairs. All in all, three proposals have been prepared by the e.motion team, specifying in detail the mission design and the performance in terms of science applications. Starting with a single-pair pendulum mission, which was proposed for ESA's Earth Explorer 8 call (EE8), more recently a double-pair Bender-type mission was proposed for ESA's EE9 call. In between, several studies at European (DLR and ESA) and inter-agency (ESA-NASA) level have been performed. The presentation provides a summary of all these initiatives, draws some conclusions from the mission proposals and study results, and gives an outlook on future initiatives for gravity field missions in Europe.
Density interface topography recovered by inversion of satellite gravity gradiometry observations
NASA Astrophysics Data System (ADS)
Ramillien, G. L.
2017-08-01
A radial integration of spherical mass elements (i.e. tesseroids) is presented for evaluating the six components of the second-order gravity gradient (i.e. second derivatives of the Newtonian mass integral for the gravitational potential) created by an uneven spherical topography consisting of juxtaposed vertical prisms. The method uses Legendre polynomial series and takes elastic compensation of the topography by the Earth's surface into account. The speed of computation of the polynomial series increases, as expected, with the observing altitude above the source of the anomaly. Such forward modelling can easily be applied for the reduction of observed gravity gradient anomalies by the effects of any spherical density interface. An iterative least-squares inversion of measured gravity gradient coefficients is also proposed to estimate a regional set of juxtaposed topographic heights. Several recovery tests have been made by considering simulated gradients created by idealistic conical and irregular Great Meteor seamount topographies, for varying satellite altitudes and different levels of uncertainty. In the case of gravity gradients measured at a GOCE-type altitude of ˜ 300 km, the search converges to a stable but smooth topography after 10-15 iterations, while the final root-mean-square error is ˜ 100 m, which represents only 2 % of the seamount amplitude. This recovery error decreases with the altitude of the gravity gradient observations, revealing more topographic details in the survey region.
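The iterative least-squares recovery described in this and the preceding Ramillien abstract can be sketched as a damped Gauss-Newton loop: linearize the forward operator that maps topographic heights to gradient observations, solve a regularized normal system for a height update, and repeat until convergence. In the sketch below the forward model is a hypothetical smoothing operator standing in for the tesseroid/Legendre machinery, so only the iteration logic carries over.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 80
x_grid = np.linspace(-100e3, 100e3, n)
h_true = 4000.0 * np.exp(-(x_grid / 30e3) ** 2)     # idealised Gaussian seamount [m]

K = np.exp(-np.abs(x_grid[:, None] - x_grid[None, :]) / 40e3)   # smoothing kernel
K /= K.sum(axis=1, keepdims=True)

def forward(h):
    """Hypothetical (mildly non-linear) mapping from heights to gradient anomalies."""
    return K @ (h + 1e-5 * h ** 2)

obs = forward(h_true) + 5.0 * rng.normal(size=n)    # noisy "satellite" observations

h = np.zeros(n)                                     # starting model
damping = 1e-2                                      # regularization of each update
for it in range(30):
    r = obs - forward(h)                            # current residuals
    J = K @ np.diag(1.0 + 2e-5 * h)                 # Jacobian of the forward model
    dh = np.linalg.solve(J.T @ J + damping * np.eye(n), J.T @ r)
    h += dh
    if np.linalg.norm(dh) < 1e-3 * np.linalg.norm(h):
        break

print(f"iterations: {it + 1}, rms height error: {np.sqrt(np.mean((h - h_true)**2)):.1f} m")
```

The damping plays the same stabilizing role as in the downward-continuation example earlier: it suppresses the poorly constrained short-wavelength components, which is why the recovered topography is stable but smooth.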
Alenia Spazio: Space Programs for Solar System Exploration.
NASA Astrophysics Data System (ADS)
Ferri, A.
Alenia Spazio is the major Italian space industry and one of the largest in Europe, with 2,400 highly skilled employees and 16,000 square meters of clean rooms and laboratories for advanced technological research that are among the most modern and well-equipped in Europe. The company has wide experience in the design, development, assembly, integration, verification and testing of complete space systems: satellites for telecommunications and navigation, remote sensing, meteorology and scientific applications; manned systems and space infrastructures; launch, transport and re-entry systems; and control centres. Alenia Spazio has contributed to the construction of over 200 satellites and taken part in the most important national and international space programmes, from the International Space Station to the new European global navigation system Galileo. Focusing on Solar System exploration, in the last 10 years the company has taken part, in different roles, in the major European and NASA missions in the field: Rosetta, Mars Express, Cassini; it will soon take part in Venus Express, and is planning the future with Bepi Colombo, Solar Orbiter, GAIA and ExoMars. In this paper, as in the presentation, a very important Earth observation mission is also presented: GOCE. After all, the Earth is by all means part of the Solar System as well, and we like to see it as a planet to be explored.
Satellite gravity field derivatives for identifying geological boundaries.
NASA Astrophysics Data System (ADS)
Alvarez, O.; Gimenez, M.; Braitenberg, C.; Folguera, A.
2012-04-01
The Pampean flat slab zone developed in the last 17 Ma between 27° and 33°S and has denuded an intricate collage of crustal blocks amalgamated during the Pampean, Famatinian and San Rafael deformational stages, which is far from being completely understood. For potential field studies these amalgamations have the effect of defining important compositional and density heterogeneities. Geophysical data from different studies show a sharp boundary between the two adjacent and contrasting crusts of Pampia and the Cuyania terrane. Recent aeromagnetic surveys have inferred a mafic and ultramafic belt interpreted as a buried ophiolitic suite hosted in the corresponding suture. This boundary coincides locally with basement exposures of high- to medium-grade metamorphic rocks developed in close association with the Famatinian orogen of Early to Middle Ordovician age. Lower crustal rocks are exposed along this first-order crustal discontinuity. The Río de la Plata basement crops out from southern Uruguay to east-central Argentina over an approximate surface of 20,000 km2. The oldest rocks have been dated at 2,200 and 1,700 Ma, indicating that they constituted a block distinct from Pampia. The boundary between Pampia and the Río de la Plata craton is not exposed. However, a strong gravimetric anomaly identified in the central part of the foothills of the Sierras de Córdoba indicates a first-order crustal discontinuity that has been related to their collision in Neoproterozoic times. This work focuses on the determination of mass heterogeneities over the Pampean flat slab zone using the gravity anomaly and the vertical gravity gradient, with the aim of determining discontinuities in the pattern of terrane amalgamation that formed the basement. Satellite gravimetry is highly sensitive to these variations. Recent satellite missions (CHAMP, GRACE, and GOCE) have introduced an extraordinary improvement in the global mapping of the gravity field. We control the quality of the terrestrial data entering EGM2008 by a comparison with the satellite-only gravitational model from GOCE up to degree N=250. Using the global model EGM2008, the vertical gravity gradient and the gravity anomaly for the South Central Andes are calculated. We correct the observations for the topographic effect using tesseroids and a 1-arc-minute global relief model of the Earth's surface. Results are compared to a schematic geological map of the South Central Andes region, which includes the main geological features of regional dimensions presumably accompanied by crustal density variations. We clearly depict the geological structures and the delineation of significant terranes such as Pampia, Cuyania and Chilenia. Of great interest is the contact between the Río de la Plata craton and the Pampia terrane, a boundary that has not been clearly defined until now. Our work aims to highlight the potential of this new tool of satellite gravimetry, with the addition of topographic correction, to achieve tectonic interpretation at medium to long wavelengths over a given study region. We demonstrate that the new gravity fields can be used for identifying geological boundaries related to density differences at a regional scale and are thus a useful new tool in geophysical exploration.
NASA Astrophysics Data System (ADS)
Hirt, Christian; Rexer, Moritz; Scheinert, Mirko; Pail, Roland; Claessens, Sten; Holmes, Simon
2016-02-01
The current high-degree global geopotential models EGM2008 and EIGEN-6C4 resolve gravity field structures to ˜ 10 km spatial scales over most parts of the Earth's surface. However, a notable exception is continental Antarctica, where the gravity information in these and other recent models is based on satellite gravimetry observations only, and thus limited to about ˜ 80-120 km spatial scales. Here, we present a new degree-2190 global gravity model (GGM) that for the first time improves the spatial resolution of the gravity field over the whole of continental Antarctica to ˜ 10 km spatial scales. The new model, called SatGravRET2014, is a combination of recent Gravity Recovery and Climate Experiment (GRACE) and Gravity field and steady-state Ocean Circulation Explorer (GOCE) satellite gravimetry with gravitational signals derived from the 2013 Bedmap2 topography/ice thickness/bedrock model through gravity forward modelling in ellipsoidal approximation. Bedmap2 is a significantly improved description of the topographic mass distribution over the Antarctic region based on a multitude of topographic surveys, and a well-suited source for modelling short-scale gravity signals, as we show in our study. We describe the development of SatGravRET2014, which entirely relies on spherical harmonic modelling techniques. Details are provided on the least-squares combination procedures and on the conversion of topography to implied gravitational potential. The main outcome of our work is the SatGravRET2014 spherical harmonic series expansion to degree 2190, and derived high-resolution grids of 3D-synthesized gravity and quasigeoid effects over the whole of Antarctica. For validation, six data sets from the IAG Subcommission 2.4f "Gravity and Geoid in Antarctica" (AntGG) database were used, comprising a total of 1,092,981 airborne gravimetric observations. All subsets consistently show that the Bedmap2-based short-scale gravity modelling improves the agreement over satellite-only data considerably (improvement rates ranging between 9 and 75 %, with standard deviations of residuals between SatGravRET2014 and AntGG gravity ranging between 8 and 25 mGal). For comparison purposes, a degree-2190 GGM was generated based on the year-2001 Bedmap1 (using the ETOPO1 topography) instead of the 2013 Bedmap2 topography product. Comparison of both GGMs against AntGG consistently reveals a closer fit over all test areas when Bedmap2 is used. This experiment provides evidence for clear improvements of Bedmap2 topographic information over Bedmap1 at spatial scales of ˜ 80-10 km, obtained from independent gravity data used as a validation tool. As a general conclusion, our modelling effort fills, in approximation, some gaps in short-scale gravity knowledge over Antarctica and demonstrates the value of the Bedmap2 topography data for short-scale gravity refinement in GGMs. SatGravRET2014 can be used, e.g., as a reference model for future gravity modelling efforts over Antarctica, for instance as a foundation for a combination with the AntGG data set to obtain further improved gravity information.
Simplified Ion Thruster Xenon Feed System for NASA Science Missions
NASA Technical Reports Server (NTRS)
Snyder, John Steven; Randolph, Thomas M.; Hofer, Richard R.; Goebel, Dan M.
2009-01-01
The successful implementation of ion thruster technology on the Deep Space 1 technology demonstration mission paved the way for its first use on the Dawn science mission, which launched in September 2007. Both Deep Space 1 and Dawn used a "bang-bang" xenon feed system which has proven to be highly successful. This type of feed system, however, is complex with many parts and requires a significant amount of engineering work for architecture changes. A simplified feed system, with fewer parts and less engineering work for architecture changes, is desirable to reduce the feed system cost to future missions. An attractive new path for ion thruster feed systems is based on new components developed by industry in support of commercial applications of electric propulsion systems. For example, since the launch of Deep Space 1 tens of mechanical xenon pressure regulators have successfully flown on commercial spacecraft using electric propulsion. In addition, active proportional flow controllers have flown on the Hall-thruster-equipped Tacsat-2, are flying on the ion thruster GOCE mission, and will fly next year on the Advanced EHF spacecraft. This present paper briefly reviews the Dawn xenon feed system and those implemented on other xenon electric propulsion flight missions. A simplified feed system architecture is presented that is based on assembling flight-qualified components in a manner that will reduce non-recurring engineering associated with propulsion system architecture changes, and is compared to the NASA Dawn standard. The simplified feed system includes, compared to Dawn, passive high-pressure regulation, a reduced part count, reduced complexity due to cross-strapping, and reduced non-recurring engineering work required for feed system changes. A demonstration feed system was assembled using flight-like components and used to operate a laboratory NSTAR-class ion engine. Feed system components integrated into a single-string architecture successfully operated the engine over the entire NSTAR throttle range over a series of tests. Flow rates were very stable with variations of at most 0.2%, and transition times between throttle levels were typically 90 seconds or less with a maximum of 200 seconds, both significant improvements over the Dawn bang-bang feed system.
Evaluation of gravitational gradients generated by Earth's crustal structures
NASA Astrophysics Data System (ADS)
Novák, Pavel; Tenzer, Robert; Eshagh, Mehdi; Bagherbandi, Mohammad
2013-02-01
Spectral formulas for the evaluation of gravitational gradients generated by the Earth's upper mass components are presented in the manuscript. The spectral approach allows for numerical evaluation of global gravitational gradient fields that can be used to constrain gravitational gradients either synthesised from global gravitational models or directly measured by the spaceborne gradiometer on board the GOCE satellite. Gravitational gradients generated by static atmospheric, topographic and continental ice masses are evaluated numerically based on available global models of Earth's topography, bathymetry and continental ice sheets. CRUST2.0 data are then applied for the numerical evaluation of gravitational gradients generated by mass density contrasts within soft and hard sediments, and upper, middle and lower crust layers. Combined gravitational gradients are compared to disturbing gravitational gradients derived from a global gravitational model and an idealised Earth model represented by the geocentric homogeneous biaxial ellipsoid GRS80. The methodology could be used for improved modelling of the Earth's inner structure.
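As a sketch of how such gradients are synthesised from a spherical harmonic model, the toy Python code below evaluates the second radial derivative Vzz in spherical approximation; the constants and the single J2-like coefficient are placeholders, and this is not the authors' spectral implementation.

```python
"""Toy synthesis of the second radial derivative (Vzz) of the geopotential
from spherical-harmonic coefficients, in spherical approximation."""
import numpy as np
from scipy.special import lpmn, gammaln

GM = 3.986004418e14      # m^3/s^2
R = 6378136.3            # m, reference radius

def vzz(lat_deg, lon_deg, r, C, S):
    """Second radial derivative of V at one point.
    C, S: (nmax+1, nmax+1) arrays of fully normalised coefficients."""
    nmax = C.shape[0] - 1
    t = np.sin(np.radians(lat_deg))             # sin(latitude)
    lam = np.radians(lon_deg)
    # associated Legendre functions from SciPy (unnormalised, Condon-Shortley phase)
    P, _ = lpmn(nmax, nmax, t)                   # indexed P[m, n]
    total = 0.0
    for n in range(0, nmax + 1):
        inner = 0.0
        for m in range(0, n + 1):
            # convert to the geodetic fully normalised convention
            norm = np.sqrt((2 - (m == 0)) * (2 * n + 1)
                           * np.exp(gammaln(n - m + 1) - gammaln(n + m + 1)))
            Pnm = (-1) ** m * norm * P[m, n]     # remove Condon-Shortley phase
            inner += Pnm * (C[n, m] * np.cos(m * lam) + S[n, m] * np.sin(m * lam))
        total += (n + 1) * (n + 2) * (R / r) ** n * inner
    return GM / r ** 3 * total                   # in 1/s^2 (1 Eotvos = 1e-9 1/s^2)

# toy coefficients: central term plus a roughly J2-sized zonal term
nmax = 10
C = np.zeros((nmax + 1, nmax + 1)); S = np.zeros_like(C)
C[0, 0] = 1.0
C[2, 0] = -4.84e-4
print(vzz(45.0, 10.0, R + 250e3, C, S), "1/s^2")
```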
TLE uncertainty estimation using robust weighted differencing
NASA Astrophysics Data System (ADS)
Geul, Jacco; Mooij, Erwin; Noomen, Ron
2017-05-01
Accurate knowledge of satellite orbit errors is essential for many types of analyses. Unfortunately, for two-line elements (TLEs) this is not available. This paper presents a weighted differencing method using robust least-squares regression for estimating many important error characteristics. The method is applied to both classic and enhanced TLEs, compared to previous implementations, and validated using Global Positioning System (GPS) solutions for the GOCE satellite in Low-Earth Orbit (LEO), prior to its re-entry. The method is found to be more accurate than previous TLE differencing efforts in estimating initial uncertainty, as well as error growth. The method also proves more reliable and requires no data filtering (such as outlier removal). Sensitivity analysis shows a strong relationship between argument of latitude and covariance (standard deviations and correlations), which the method is able to approximate. Overall, the method proves accurate, computationally fast, and robust, and is applicable to any object in the satellite catalogue (SATCAT).
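A minimal illustration of the underlying idea, robust least-squares regression of TLE-minus-reference position differences against time, is sketched below; the linear error-growth model, the Huber loss and the synthetic data are illustrative assumptions rather than the paper's estimator.

```python
"""Sketch of robust regression for TLE error growth from position differences."""
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 200)                 # days since epoch
truth = 0.4 + 1.1 * t                          # km, assumed along-track error growth
d = truth + rng.normal(0.0, 0.2, t.size)
d[::25] += 5.0                                 # a few gross outliers

def residuals(p, t, d):
    a, b = p
    return (a + b * t) - d

fit = least_squares(residuals, x0=[0.0, 0.0], args=(t, d),
                    loss="huber", f_scale=0.3)  # robust loss downweights outliers
a_hat, b_hat = fit.x
print(f"initial uncertainty ~{a_hat:.2f} km, growth ~{b_hat:.2f} km/day")
```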
Gelbard-Sagiv, Hagar; Faivre, Nathan; Mudrik, Liad; Koch, Christof
2016-01-01
The scope and limits of unconscious processing are a matter of ongoing debate. Lately, continuous flash suppression (CFS), a technique for suppressing visual stimuli, has been widely used to demonstrate surprisingly high-level processing of invisible stimuli. Yet, recent studies showed that CFS might actually allow low-level features of the stimulus to escape suppression and be consciously perceived. The influence of such low-level awareness on high-level processing might easily go unnoticed, as studies usually only probe the visibility of the feature of interest, and not that of lower-level features. For instance, face identity is held to be processed unconsciously since subjects who fail to judge the identity of suppressed faces still show identity priming effects. Here we challenge these results, showing that such high-level priming effects are indeed induced by faces whose identity is invisible, but critically, only when a lower-level feature, such as color or location, is visible. No evidence for identity processing was found when subjects had no conscious access to any feature of the suppressed face. These results suggest that high-level processing of an image might be enabled by-or co-occur with-conscious access to some of its low-level features, even when these features are not relevant to the processed dimension. Accordingly, they call for further investigation of lower-level awareness during CFS, and reevaluation of other unconscious high-level processing findings.
Water mass changes inferred by gravity field variations with GRACE
NASA Astrophysics Data System (ADS)
Fagiolini, Elisa; Gruber, Christian; Apel, Heiko; Viet Dung, Nguyen; Güntner, Andreas
2013-04-01
Since 2002 the Gravity Recovery And Climate Experiment (GRACE) mission has been measuring temporal variations of Earth's gravity field, depicting with great accuracy how mass is distributed and varies around the globe. Advanced signal separation techniques enable the isolation of different mass sources such as atmospheric and oceanic circulation or land hydrology. Thanks to GRACE, monitoring of floods, droughts, and water resources is now possible on a global scale. Scientists at GFZ Potsdam have been involved since 2000 in the initiation and launch of the GRACE precursor CHAMP satellite mission, since 2002 in the GRACE Science Data System, and since 2009 in ESA's GOCE High-level Processing Facility, as well as in the projected GRACE Follow-On mission for the continuation of time-variable gravity field determination. Recently GFZ has reprocessed the complete GRACE time-series of monthly gravity field spherical harmonic solutions with improved standards and background models. This new release (RL05) already shows significantly less noise and fewer spurious artifacts. In order to monitor water mass re-distribution and fast-moving water, we still need to reach a higher resolution in both time and space. Moreover, in view of disaster management applications we need to act with a shorter latency (the current standard latency is 2 months). For this purpose, we developed a regional method based on radial basis functions that can compute models in regional and global representations. This new method localizes the gravity observations to the closest regions and omits spatial correlations with more distant regions. Additionally, we succeeded in increasing the temporal resolution to sub-monthly time scales. Innovative concepts such as Kalman filtering and regularization, along with sophisticated regional modeling, have shifted temporal and spatial resolution towards new frontiers. We expect global hydrological models such as WHGM to profit from such accurate results. First results comparing mass changes over the Mekong Delta observed with GRACE against spatially explicit hydraulic simulations of the large-scale annual inundation volume during the flood season are presented and discussed.
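The regional representation can be illustrated with a toy radial-basis-function fit; the kernel shape, node spacing and regularisation weight below are arbitrary choices and do not reflect GFZ's operational settings.

```python
"""Sketch of a regional representation with radial basis functions:
point observations are fitted by Gaussian kernels on a local grid,
with Tikhonov regularisation."""
import numpy as np

rng = np.random.default_rng(0)
obs_xy = rng.uniform(0, 10, size=(300, 2))                  # observation locations
obs = np.sin(obs_xy[:, 0]) * np.cos(0.5 * obs_xy[:, 1])     # synthetic regional signal
obs += rng.normal(0, 0.05, obs.shape)

gx, gy = np.meshgrid(np.linspace(0, 10, 12), np.linspace(0, 10, 12))
centres = np.column_stack([gx.ravel(), gy.ravel()])          # RBF centres
sigma = 1.2                                                  # kernel width

def design(points, centres, sigma):
    d2 = ((points[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / sigma ** 2)

A = design(obs_xy, centres, sigma)
lam = 1e-2                                                   # Tikhonov regularisation
coef = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ obs)

# evaluate the localized regional model on a finer grid
fx, fy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = design(np.column_stack([fx.ravel(), fy.ravel()]), centres, sigma) @ coef
print(grid.reshape(50, 50).shape)
```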
NASA Astrophysics Data System (ADS)
Lellouche, J. M.; Le Galloudec, O.; Greiner, E.; Garric, G.; Regnier, C.; Drillet, Y.
2016-02-01
Mercator Ocean currently delivers real-time daily services (weekly analyses and daily forecasts) with a global 1/12° high-resolution system. The model component is the NEMO platform driven at the surface by the IFS ECMWF atmospheric analyses and forecasts. Observations are assimilated by means of a reduced-order Kalman filter with a 3D multivariate modal decomposition of the forecast error. It includes an adaptive-error estimate and a localization algorithm. Along-track altimeter data, satellite Sea Surface Temperature and in situ temperature and salinity vertical profiles are jointly assimilated to estimate the initial conditions for numerical ocean forecasting. A 3D-Var scheme provides a correction for the slowly-evolving large-scale biases in temperature and salinity. In May 2015, Mercator Ocean opened the Copernicus Marine Service (CMS) and is now in charge of the global ocean analyses and forecasts at eddy-resolving resolution. In this context, R&D activities have been conducted at Mercator Ocean over the last few years in order to improve the real-time 1/12° global system for the next CMS version in 2016. The ocean/sea-ice model and the assimilation scheme benefit, among others, from the following improvements: large-scale and objective correction of atmospheric quantities with satellite data, a new Mean Dynamic Topography taking into account the latest version of the GOCE geoid, new adaptive tuning of some observational errors, new Quality Control on the assimilated temperature and salinity vertical profiles based on dynamic height criteria, assimilation of satellite sea-ice concentration, and new freshwater runoff from ice-sheet melting. This presentation does not focus on the impact of each update, but rather on the overall behavior of the system integrating all updates. This assessment reports on the product quality improvements, highlighting the level of performance and the reliability of the new system.
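For orientation, the analysis step of a (full-rank) Kalman filter is sketched below; the operational system uses a reduced-order filter with localisation and adaptive error estimation, which this small dense version does not reproduce.

```python
"""Generic Kalman analysis step: observations correct a model forecast."""
import numpy as np

def kalman_update(xb, Pb, y, H, R):
    """xb: background state, Pb: background covariance,
    y: observations, H: observation operator, R: observation-error covariance."""
    S = H @ Pb @ H.T + R                      # innovation covariance
    K = Pb @ H.T @ np.linalg.inv(S)           # Kalman gain
    xa = xb + K @ (y - H @ xb)                # analysis state
    Pa = (np.eye(len(xb)) - K @ H) @ Pb       # analysis covariance
    return xa, Pa

# tiny example: 4 state variables, 2 observations (all values illustrative)
xb = np.array([1.0, 0.5, -0.2, 0.0])
Pb = 0.1 * np.eye(4)
H = np.array([[1.0, 0, 0, 0], [0, 0, 1.0, 0]])
y = np.array([1.3, -0.1])
R = 0.05 * np.eye(2)
xa, Pa = kalman_update(xb, Pb, y, H, R)
print(xa)
```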
NASA Astrophysics Data System (ADS)
Müller, Silvia; Brockmann, Jan Martin; Schuh, Wolf-Dieter
2015-04-01
The ocean's dynamic topography as the difference between the sea surface and the geoid reflects many characteristics of the general ocean circulation. Consequently, it provides valuable information for evaluating or tuning ocean circulation models. The sea surface is directly observed by satellite radar altimetry while the geoid cannot be observed directly. The satellite-based gravity field determination requires different measurement principles (satellite-to-satellite tracking (e.g. GRACE), satellite-gravity-gradiometry (GOCE)). In addition, hydrographic measurements (salinity, temperature and pressure; near-surface velocities) provide information on the dynamic topography. The observation types have different representations and spatial as well as temporal resolutions. Therefore, the determination of the dynamic topography is not straightforward. Furthermore, the integration of the dynamic topography into ocean circulation models requires not only the dynamic topography itself but also its inverse covariance matrix on the ocean model grid. We developed a rigorous combination method in which the dynamic topography is parameterized in space as well as in time. The altimetric sea surface heights are expressed as a sum of geoid heights represented in terms of spherical harmonics and the dynamic topography parameterized by a finite element method which can be directly related to the particular ocean model grid. Besides the difficult task of combining altimetry data with a gravity field model, a major aspect is the consistent combination of satellite data and in-situ observations. The particular characteristics and the signal content of the different observations must be adequately considered requiring the introduction of auxiliary parameters. Within our model the individual observation groups are combined in terms of normal equations considering their full covariance information; i.e. a rigorous variance/covariance propagation from the original measurements to the final product is accomplished. In conclusion, the developed integrated approach allows for estimating the dynamic topography and its inverse covariance matrix on arbitrary grids in space and time. The inverse covariance matrix contains the appropriate weights for model-data misfits in least-squares ocean model inversions. The focus of this study is on the North Atlantic Ocean. We will present the conceptual design and dynamic topography estimates based on time variable data from seven satellite altimeter missions (Jason-1, Jason-2, Topex/Poseidon, Envisat, ERS-2, GFO, Cryosat2) in combination with the latest GOCE gravity field model and in-situ data from the Argo floats and near-surface drifting buoys.
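The combination of observation groups at the normal-equation level can be illustrated as follows; the two synthetic groups stand in for, say, an altimetry-type and a gravity-field-type contribution, and are not the study's actual data or parameterization.

```python
"""Sketch of combining observation groups via normal equations: each group
contributes N_i = A_i^T W_i A_i and n_i = A_i^T W_i y_i, the stacked system is
solved once, and the inverse combined normal matrix is the covariance."""
import numpy as np

rng = np.random.default_rng(2)
n_par = 6
x_true = rng.normal(size=n_par)

def group(n_obs, noise):
    A = rng.normal(size=(n_obs, n_par))
    y = A @ x_true + rng.normal(0, noise, n_obs)
    W = np.eye(n_obs) / noise ** 2            # inverse observation covariance
    return A.T @ W @ A, A.T @ W @ y

N1, b1 = group(200, 0.5)    # e.g. an altimetry-type group
N2, b2 = group(80, 0.1)     # e.g. a gravity-field-type group
N = N1 + N2
x_hat = np.linalg.solve(N, b1 + b2)
cov = np.linalg.inv(N)      # full covariance of the combined estimate
print(np.round(x_hat - x_true, 3))
```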
Gravity field models from kinematic orbits of CHAMP, GRACE and GOCE satellites
NASA Astrophysics Data System (ADS)
Bezděk, Aleš; Sebera, Josef; Klokočník, Jaroslav; Kostelecký, Jan
2014-02-01
The aim of our work is to generate Earth's gravity field models from GPS positions of low Earth orbiters. Our inversion method is based on Newton's second law, which relates the observed acceleration of the satellite with forces acting on it. The observed acceleration is obtained as numerical second derivative of kinematic positions. Observation equations are formulated using the gradient of the spherical harmonic expansion of the geopotential. Other forces are either modelled (lunisolar perturbations, tides) or provided by onboard measurements (nongravitational perturbations). From this linear regression model the geopotential harmonic coefficients are obtained. To this basic scheme of the acceleration approach we added some original elements, which may be useful in other inversion techniques as well. We tried to develop simple, straightforward and still statistically correct model of observations. (i) The model is linear in the harmonic coefficients, no a priori gravity field model is needed, no regularization is applied. (ii) We use the generalized least squares to successfully mitigate the strong amplification of noise due to numerical second derivative. (iii) The number of other fitted parameters is very small, in fact we use only daily biases, thus we can monitor their behaviour. (iv) GPS positions have correlated errors. The sample autocorrelation function and especially the partial autocorrelation function indicate suitability of an autoregressive model to represent the correlation structure. The decorrelation of residuals improved the accuracy of harmonic coefficients by a factor of 2-3. (v) We found it better to compute separate solutions in the three local reference frame directions than to compute them together at the same time; having obtained separate solutions for along-track, cross-track and radial components, we combine them using the normal matrices. Relative contribution of the along-track component to the combined solution is 50 percent on average. (vi) The computations were performed on an ordinary PC up to maximum degree and order 120. We applied the presented method to orbits of CHAMP and GRACE spanning seven years (2003-2009) and to two months of GOCE (Nov/Dec 2009). The obtained long-term static gravity field models are of similar or better quality compared to other published solutions. We also tried to extract the time-variable gravity signal from CHAMP and GRACE orbits. The acquired average annual signal shows clearly the continental areas with important and known hydrological variations.
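Two of the listed ingredients, the numerical second derivative of kinematic positions and the decorrelation of autoregressive noise before the least-squares fit, are sketched below in toy form; the design matrix, sampling and noise model are illustrative and do not reproduce the authors' processing setup.

```python
"""Toy acceleration approach: second-difference the positions, then whiten
AR(1)-correlated residuals (Cochrane-Orcutt style) before refitting."""
import numpy as np

rng = np.random.default_rng(3)
dt = 10.0                                             # s, sampling interval
t = np.arange(0.0, 3000.0, dt)
pos_true = 7.0e6 + 5.0 * np.sin(2 * np.pi * t / 600.0)    # toy 1-D "kinematic orbit"
noise = np.zeros_like(t)
for i in range(1, t.size):                            # AR(1)-correlated position noise
    noise[i] = 0.8 * noise[i - 1] + rng.normal(0.0, 0.005)
pos_obs = pos_true + noise

# (1) observed acceleration as a numerical second derivative (central differences)
acc = (pos_obs[2:] - 2.0 * pos_obs[1:-1] + pos_obs[:-2]) / dt ** 2

# toy linear model for the acceleration: a bias plus one sinusoidal term
A = np.column_stack([np.ones(acc.size), np.sin(2 * np.pi * t[1:-1] / 600.0)])

# (2) preliminary OLS fit, estimate lag-1 autocorrelation of residuals, whiten, refit
x0, *_ = np.linalg.lstsq(A, acc, rcond=None)
res = acc - A @ x0
rho = np.corrcoef(res[1:], res[:-1])[0, 1]

def whiten(v):
    return v[1:] - rho * v[:-1]

Aw = np.column_stack([whiten(A[:, 0]), whiten(A[:, 1])])
x_hat, *_ = np.linalg.lstsq(Aw, whiten(acc), rcond=None)
print(x_hat)   # second entry should approach -5*(2*pi/600)**2 ~ -5.5e-4 m/s^2
```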
Satellite observations of atmosphere-ionosphere vertical coupling by gravity waves
NASA Astrophysics Data System (ADS)
Trinh, Thai; Ern, Manfred; Preusse, Peter; Riese, Martin
2017-04-01
The Earth's thermosphere/ionosphere (T/I) is strongly influenced by various processes from above as well as from below. One of the most important processes from below is vertical coupling by atmospheric waves. Among these waves, gravity waves (GWs) excited in the lower atmosphere, mainly in the troposphere and tropopause region, are likely essential for the mean state of the T/I system. The penetration of GWs into the T/I system is, however, not well understood in either modeling or observations. In this work, we analyze the correlation between different GW parameters at lower altitudes (below 90 km) and GW-induced perturbations in the T/I. At lower altitudes, GW parameters are derived from temperature observations of the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER). In the T/I, GW-induced perturbations of neutral density measured by the Gravity field and steady-state Ocean Circulation Explorer (GOCE) and the CHAllenging Minisatellite Payload (CHAMP) are analyzed. Interestingly, we find positive correlations between the spatial distributions of GW parameters at low altitudes (i.e., below 90 km) and the spatial distributions of GW-induced density fluctuations in the T/I (at 200 km and above), which suggests that many waves seen in the T/I have their origins in the troposphere or lower stratosphere. It is also indicated that mountain waves generated near the Andes and Antarctic Peninsula propagate up to the T/I. Strong positive correlations between GW perturbations in the T/I and GW parameters at 30 km are mainly found at mid latitudes, which may be an indicator of propagation of convectively generated GWs. An increase of the correlation starting from 70 km in many cases shows that filtering of the GW distribution by the background atmosphere is very important. Processes that are likely involved are GW dissipation, generation of secondary GWs, as well as horizontal propagation of GWs. Limitations of our method and of the observations are also discussed.
Dynamic Heights in the Great Lakes at Different Epochs
NASA Astrophysics Data System (ADS)
Roman, D. R.
2016-12-01
Vertical control in the Great Lakes region is currently defined by the International Great Lakes Datum of 1985 (IGLD 85) in the form of dynamic heights. Starting in 2025, dynamic heights will be defined through GNSS-derived geometric coordinates and a geopotential model. This paper explores the behavior of an existing geopotential model at different epochs when the Great Lakes were at significantly different (meter-level) geopotential surfaces. Water surfaces were examined in 2015 and 2010 at six sites on Lake Superior and Lake Erie (three on each lake). Each site has a collocated Continuously Operating Reference Station (CORS) and Water Level Sensor (WLS). The offset between the antenna phase center of the CORS and the WLS datum is known at each site. The WLS then measures the distance from its datum to the lake surface via an open well. Thus it is possible to determine the height above an ellipsoid datum at these sites as long as both the CORS and WLS are operational. The geometric coordinates are then used to estimate the geopotential value from the xGEOID16B model. This is accomplished in two steps. To provide an improved reference model, EGM2008 was spectrally enhanced using observations from the GOCE satellite gravity mission and aerogravity from the Gravity for the Redefinition of the American Vertical Datum (GRAV-D) Project. This enhanced model, xGEOID16B_Ref, is still only a five-arcminute-resolution model (d/o 2160), but resolves dynamic heights at about 2 cm on Lake Superior for December 2015. The reference model was primarily developed to determine a one-arcminute geoid height grid, xGEOID16B, available on the NGS website. This geoid height model was used to iteratively develop improved geopotential values for each of the site locations, which then improved comparisons to the cm level. Comparisons were then made at the 2010 epoch for these same locations to determine if the performance of the geopotential model was consistent.
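The chain from a GNSS ellipsoidal height to a dynamic height can be sketched as follows; the input numbers and the simple mean-gravity approximation of the geopotential number are placeholders, not values from the xGEOID16B evaluation.

```python
"""Back-of-the-envelope conversion: ellipsoidal height minus geoid undulation
gives a height above the geoid, an approximate geopotential number follows from
a mean gravity value, and division by normal gravity at 45 deg latitude yields
a dynamic height (the IGLD 85 convention)."""
G45 = 9.806199          # m/s^2, normal gravity at 45 deg latitude (IGLD 85 constant)

def dynamic_height(h_ell, geoid_undulation, mean_gravity=9.801):
    H = h_ell - geoid_undulation          # height above the geoid (m)
    C = mean_gravity * H                  # approximate geopotential number (m^2/s^2)
    return C / G45                        # dynamic height (m)

# illustrative numbers only
print(round(dynamic_height(h_ell=148.32, geoid_undulation=-35.60), 3))
```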
NASA Astrophysics Data System (ADS)
Hong-bo, Wang; Chang-yin, Zhao; Wei, Zhang; Jin-wei, Zhan; Sheng-xian, Yu
2016-07-01
The Earth's gravitational field model is one of the most important dynamical models in satellite orbit computation. Several space gravity missions have been highly successful in recent years, prompting the publication of several gravitational field models. In this paper, two classical (JGM3, EGM96) and four recent (EIGEN-CHAMP05S, GGM03S, GOCE02S, EGM2008) models are evaluated by employing them in precision orbit determination (POD) and prediction. These calculations are performed based on laser ranging observations of four Low Earth Orbit (LEO) satellites: CHAMP, GFZ-1, GRACE-A, and SWARM-A. The observation residuals in POD are used to describe the accuracy of the six gravitational field models. The main results we obtained are as follows. (1) For the POD of LEOs, the accuracies of the four recent models are at the same level and better than those of the two classical models; (2) taking JGM3 as the reference, the EGM96 model is more accurate in most situations, and the accuracies of the four recent models are improved by 12%-47% in POD and by 63% in prediction. We also confirm that a model's accuracy in POD improves with increasing degree and order up to 70, and remains constant beyond 70, implying that truncation to degree and order 70 is sufficient to meet the requirements of centimeter-precision LEO computation.
Singular boundary method for global gravity field modelling
NASA Astrophysics Data System (ADS)
Cunderlik, Robert
2014-05-01
The singular boundary method (SBM) and method of fundamental solutions (MFS) are meshless boundary collocation techniques that use the fundamental solution of a governing partial differential equation (e.g. the Laplace equation) as their basis functions. They have been developed to avoid singular numerical integration as well as mesh generation in the traditional boundary element method (BEM). SBM has been proposed to overcome the main drawback of MFS, namely its controversial fictitious boundary outside the domain. The key idea of SBM is to introduce origin intensity factors that isolate the singularities of the fundamental solution and its derivatives using appropriate regularization techniques. Consequently, the source points can be placed directly on the real boundary and coincide with the collocation nodes. In this study we deal with SBM applied to high-resolution global gravity field modelling. The first numerical experiment presents a numerical solution to the fixed gravimetric boundary value problem. The results are compared with numerical solutions obtained by MFS or the direct BEM, indicating the efficiency of all methods. In the second numerical experiment, SBM is used to derive the geopotential and its first derivatives from the Tzz component of the gravity disturbing tensor observed by the GOCE satellite mission. Determination of the origin intensity factors allows the disturbing potential and gravity disturbances to be evaluated directly on the Earth's surface, where the source points are located. To achieve high-resolution numerical solutions, large-scale parallel computations are performed on a cluster with 1 TB of distributed memory, and an iterative elimination of far-zone contributions is applied.
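The flavour of these collocation methods can be conveyed with a toy method-of-fundamental-solutions fit; the SBM variant with origin intensity factors is not reproduced here, and the geometry, node counts and test function are arbitrary.

```python
"""Toy MFS fit for the exterior Laplace equation: boundary values of a harmonic
function on the unit sphere are matched by point sources 1/(4*pi*r) placed on a
smaller, fictitious sphere inside the domain."""
import numpy as np

def fibonacci_sphere(n, radius=1.0):
    i = np.arange(n)
    z = 1 - 2 * (i + 0.5) / n
    phi = np.pi * (1 + 5 ** 0.5) * i
    r = np.sqrt(1 - z ** 2)
    return radius * np.column_stack([r * np.cos(phi), r * np.sin(phi), z])

col = fibonacci_sphere(400, 1.0)          # collocation nodes on the boundary
src = fibonacci_sphere(200, 0.7)          # source points on a fictitious inner sphere

# boundary data: an exterior harmonic test function (potential of a buried point mass)
mass_point = np.array([0.0, 0.0, 0.3])
def u_exact(x):
    return 1.0 / (4 * np.pi * np.linalg.norm(x - mass_point, axis=-1))

A = 1.0 / (4 * np.pi * np.linalg.norm(col[:, None, :] - src[None, :, :], axis=-1))
alpha, *_ = np.linalg.lstsq(A, u_exact(col), rcond=None)

# evaluate the MFS expansion at an exterior check point and compare
x = np.array([1.5, 0.2, 0.1])
u_mfs = (alpha / (4 * np.pi * np.linalg.norm(x - src, axis=-1))).sum()
print(u_mfs, u_exact(x[None, :])[0])
```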
Investigating TIME-GCM Atmospheric Tides for Different Lower Boundary Conditions
NASA Astrophysics Data System (ADS)
Haeusler, K.; Hagan, M. E.; Lu, G.; Forbes, J. M.; Zhang, X.; Doornbos, E.
2013-12-01
It has been recently established that atmospheric tides generated in the lower atmosphere significantly influence the geospace environment. In order to extend our knowledge of the various coupling mechanisms between the different atmospheric layers, we rely on model simulations. Currently there exist two versions of the Global Scale Wave Model (GSWM), GSWM02 and GSWM09, which are used as a lower boundary condition (ca. 30 km) for the Thermosphere-Ionosphere-Mesosphere-Electrodynamics General Circulation Model (TIME-GCM) and account for upward-propagating atmospheric tides generated in the troposphere and lower stratosphere. In this paper we explore the various TIME-GCM upper atmospheric tidal responses for different lower boundary conditions and compare the model diagnostics with tidal results from satellite missions such as TIMED, CHAMP, and GOCE. We also quantify the differences between results associated with GSWM02 and GSWM09 forcing and results of TIME-GCM simulations using Modern-Era Retrospective Analysis for Research and Application (MERRA) data as a lower boundary condition.
Comparisons Between TIME-GCM/MERRA Simulations and LEO Satellite Observations
NASA Astrophysics Data System (ADS)
Hagan, M. E.; Haeusler, K.; Forbes, J. M.; Zhang, X.; Doornbos, E.; Bruinsma, S.; Lu, G.
2014-12-01
We report on yearlong National Center for Atmospheric Research (NCAR) thermosphere-ionosphere-mesosphere-electrodynamics general circulation model (TIME-GCM) simulations where we utilize the recently developed lower boundary condition based on 3-hourly MERRA (Modern-Era Retrospective Analysis for Research and Application) reanalysis data to account for tropospheric waves and tides propagating upward into the model domain. The solar and geomagnetic forcing is based on prevailing geophysical conditions. The simulations show a strong day-to-day variability in the upper thermospheric neutral temperature tidal fields, which is smoothed out quickly when averaging is applied over several days, e.g. up to 50% DE3 amplitude reduction for a 10-day average. This is an important result with respect to tidal diagnostics from satellite observations where averaging over multiple days is inevitable. In order to assess TIME-GCM performance we compare the simulations with measurements from the Gravity field and steady-state Ocean Circulation Explorer (GOCE), Challenging Minisatellite Payload (CHAMP) and Gravity Recovery and Climate Experiment (GRACE) satellites.
Parallel Processing at the High School Level.
ERIC Educational Resources Information Center
Sheary, Kathryn Anne
This study investigated the ability of high school students to cognitively understand and implement parallel processing. Data indicates that most parallel processing is being taught at the university level. Instructional modules on C, Linux, and the parallel processing language, P4, were designed to show that high school students are highly…
Future geodesy missions: Tethered systems and formation flying
NASA Astrophysics Data System (ADS)
Fontdecaba, Jordi; Sanjurjo, Manuel; Pelaez, Jesus; Metris, Gilles; Exertier, Pierre
Recent gravity field determination missions have shown the possibility of improving our knowledge of the Earth from space. GRACE has contributed to the determination of temporal variations of the low and medium degrees of the field, while GOCE will improve the precision of the higher degrees. But there are still geophysical needs that are not satisfied by these missions. Two areas where improvements must be made are (i) continuity (perenniality) of the observations, and (ii) determination of temporal variations of the higher degrees of the gravity field. These improvements can be achieved thanks to new measurement technologies with higher precision, but also using new observables. Historically, space-based determination of the gravity field has been done by observing the perturbations of satellite orbits. More recently, GRACE has introduced the use of satellite-to-satellite ranging. GOCE will use onboard gradiometry. The authors have explored the possibilities of two new technologies for the determination of the gravity field: (i) tethered systems, and (ii) formation flying for all kinds of configurations (not just leader-follower). To analyze the possibilities of these technologies, we obtain the covariance matrix of the coefficients of the gravity field for the different observables. This can be done provided some very reasonable hypotheses are accepted. This matrix contains a lot of information concerning the behavior of the observable. In order to obtain the matrix, we use the so-called lumped coefficients approach. We have used this method for three observables: (i) tethered systems, (ii) formation flying and (iii) gradiometry (for comparison purposes). Tethers behave as very long-baseline gradiometers, with very interesting properties, but are also very challenging from a technological point of view. One of the major advantages of the tethered systems is their multitask design. Indeed, the same cable can be used for propulsion purposes in some phases of the mission, and for geodesy purposes in other phases. Several studies have been presented using formation flying, but none of them is exhaustive in terms of number of satellites, configuration, and plane of the motion. We study formation flying using differential orbital elements in order to be as general as possible. The advantage of this representation is the possibility of studying all sorts of initial conditions and reference orbits with a posterior analysis of covariance matrices. Our results show the intrinsic possibilities of these two new systems and their comparison with existing ones. We also define some baseline scenarios for future missions.
Design of Superconducting Gravity Gradiometer Cryogenic System for Mars Mission
NASA Technical Reports Server (NTRS)
Li, X.; Lemoine, F. G.; Paik, H. J.; Zagarola, M.; Shirron, P. J.; Griggs, C. E.; Moody, M. V.; Han, S.-C.
2016-01-01
Measurement of a planet's gravity field provides fundamental information about the planet's mass properties. The static gravity field reveals information about the internal structure of the planet, including crustal density variations that provide information on the planet's geological history and evolution. The time variations of gravity result from the movement of mass inside the planet, on the surface, and in the atmosphere. NASA is interested in a Superconducting Gravity Gradiometer (SGG) with which to measure the gravity field of a planet from orbit. An SGG instrument is under development with the NASA PICASSO program, which will be able to resolve the Mars static gravity field to degree 200 in spherical harmonics, and the time-varying field on a monthly basis to degree 20 from a 255 x 320 km orbit. The SGG has a precision two orders of magnitude better than the electrostatic gravity gradiometer that was used on the ESA's GOCE mission. The SGG operates at superconducting temperatures below 6 K. This study developed a cryogenic thermal system to maintain the SGG at the design temperature in Mars orbit. The system includes fixed radiation shields, a low thermal conductivity support structure and a two-stage cryocooler. The fixed radiation shields use double aluminized polyimide to emit heat from the warm spacecraft into deep space. The support structure uses carbon fiber reinforced plastic, which has low thermal conductivity at cryogenic temperatures and very high strength. The low-vibration cryocooler has two stages: the high-temperature stage operates at 65 K, the low-temperature stage at 6 K, and the heat-rejection radiator at 300 K. The study also designed a second option with a 4-K adiabatic demagnetization refrigerator (ADR) and a two-stage 10-K turbo-Brayton cooler.
Implementing An Image Understanding System Architecture Using Pipe
NASA Astrophysics Data System (ADS)
Luck, Randall L.
1988-03-01
This paper will describe PIPE and how it can be used to implement an image understanding system. Image understanding is the process of developing a description of an image in order to make decisions about its contents. The tasks of image understanding are generally split into low-level vision and high-level vision. Low-level vision is performed by PIPE, a high-performance parallel processor with an architecture specifically designed for processing video images at up to 60 fields per second. High-level vision is performed by one of several types of serial or parallel computers, depending on the application. An additional processor called ISMAP performs the conversion from iconic image space to symbolic feature space. ISMAP plugs into one of PIPE's slots and is memory mapped into the high-level processor. Thus it forms the high-speed link between the low- and high-level vision processors. The mechanisms for bottom-up, data-driven processing and top-down, model-driven processing are discussed.
ESA airborne campaigns in support of Earth Explorers
NASA Astrophysics Data System (ADS)
Casal, Tania; Davidson, Malcolm; Schuettemeyer, Dirk; Perrera, Andrea; Bianchi, Remo
2013-04-01
In the framework of its Earth Observation Programmes the European Space Agency (ESA) carries out ground-based and airborne campaigns to support geophysical algorithm development, calibration/validation, simulation of future spaceborne Earth observation missions, and applications development related to land, oceans and atmosphere. ESA has been conducting airborne and ground measurement campaigns since 1981, deploying a broad range of active and passive instrumentation in both the optical and microwave regions of the electromagnetic spectrum, such as lidars, limb/nadir-sounding interferometers/spectrometers, high-resolution spectral imagers, advanced synthetic aperture radars, altimeters and radiometers. These campaigns take place inside and outside Europe in collaboration with national research organisations in the ESA member states as well as with international organisations harmonising European campaign activities. ESA campaigns address all phases of a spaceborne mission, from the very beginning of the design phase, during which exploratory or proof-of-concept campaigns are carried out, to the post-launch exploitation phase for calibration and validation. We present four recent campaigns illustrating the objectives and implementation of such campaigns. Wavemill Proof Of Concept, an exploratory campaign to demonstrate the feasibility of a future Earth Explorer (EE) mission, took place in October 2011 in the Liverpool Bay area in the UK. The main objectives, successfully achieved, were to test the capability of Astrium UK's new airborne X-band SAR instrument to obtain high-resolution ocean current and topography retrievals. Results showed that the new airborne instrument is able to retrieve ocean currents to an accuracy of ± 10 cm s-1. The IceSAR2012 campaign was set up in support of ESA's EE Candidate 7, BIOMASS. Its main objective was to document P-band radiometric signatures over ice sheets, by upgrading ESA's airborne POLARIS P-band radar ice sounder with SAR capability. The campaign comprised three airborne deployments in Greenland from April to June 2012, separated by roughly one month, and preliminary results showed the instrument's capability to detect ice motion. CryoVEx 2012 was a large collaborative effort to help ensure the accuracy of ESA's ice mission CryoSat. The aim of this large-scale Arctic campaign was to record sea-ice thickness and conditions of the ice exactly below the CryoSat-2 path. A range of sensors installed on different aircraft included simple cameras to get a visual record of the sea ice, laser scanners to clearly map the height of the ice, an ice-thickness sensor (EM-Bird), ESA's radar altimeter (ASIRAS) and NASA's snow and Ku-band radars, which mimic CryoSat's measurements but at a higher resolution. Preliminary results reveal the ability to detect centimetre-level differences between sea ice and thin ice/water, which in turn allow for the estimation of actual sea-ice thickness. In support of two currently operating EE missions, SMOS (Soil Moisture and Ocean Salinity) and GOCE (Gravity field and steady-state Ocean Circulation Explorer), the DOMECair airborne campaign will take place in Antarctica, in the Dome C region, in mid-January 2013. The two main objectives are to quantify and document the spatial variability in the Dome C area, important for establishing long-term cross-calibrated multi-mission L-band measurement time series (SMOS), and to fill the gap in high-quality gravity anomaly maps of Antarctica, since airborne gravity measurements there are sparse (GOCE).
Key airborne instruments in the campaign are the EMIRAD-2 L-band radiometer, designed and operated by DTU, and a gravimeter from AWI. ESA campaigns have been a fundamental and essential part of the preparation of new Earth Observation missions, as well as of the independent validation of their measurements and the quantification of error sources. For the different activities a rich variety of datasets has been recorded and archived, and users can access campaign data through the EOPI web portal [http://eopi.esa.int].
A re-evaluation of the relativistic redshift on frequency standards at NIST, Boulder, Colorado, USA
NASA Astrophysics Data System (ADS)
Pavlis, Nikolaos K.; Weiss, Marc A.
2017-08-01
We re-evaluated the relativistic redshift correction applicable to the frequency standards at the National Institute of Standards and Technology (NIST) in Boulder, Colorado, USA, based on a precise GPS survey of three benchmarks on the roof of the building where these standards had been previously housed, and on global and regional geoid models supported by data from the GRACE and GOCE missions, including EGM2008, USGG2009, and USGG2012. We also evaluated the redshift offset based on the published NAVD88 geopotential number of the leveling benchmark Q407 located on the side of Building 1 at NIST, Boulder, Colorado, USA, after estimating the bias of the NAVD88 datum at our specific location. Based on these results, our current best estimate of the relativistic redshift correction, if frequency standards were located at the height of the leveling benchmark Q407 outside the second floor of Building 1, with respect to the EGM2008 geoid whose potential has been estimated to be W0 = 62 636 855.69 m^2 s^-2, is equal to (-1798.50 ± 0.06) × 10^-16. The corresponding value, with respect to an equipotential surface defined by the International Astronomical Union's (IAU) adopted value of W0 = 62 636 856.0 m^2 s^-2, is (-1798.53 ± 0.06) × 10^-16. These values are comparable to the value of (-1798.70 ± 0.30) × 10^-16, estimated by Pavlis and Weiss in 2003, with respect to an equipotential surface defined by W0 = 62 636 856.88 m^2 s^-2. The minus sign implies that clocks run faster in the laboratory in Boulder than a corresponding clock located on the geoid. Contribution of US government, not subject to Copyright.
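As an order-of-magnitude check, a correction of about 1798 × 10^-16 corresponds to a geopotential difference of roughly g·h with h near 1650 m; the small script below uses rough values of g and h, not the paper's carefully evaluated potentials.

```python
"""Rough check of the relativistic redshift magnitude: the fractional frequency
offset of a clock above the reference surface scales as (W_0 - W_P)/c^2, here
approximated by g*h/c^2.  The sign convention of the quoted correction follows
the paper."""
c = 299792458.0                 # m/s
g = 9.796                       # m/s^2, approximate local gravity (placeholder)
h = 1650.0                      # m, approximate height above the geoid (placeholder)
delta_W = g * h                 # ~ W_0 - W_P in m^2/s^2
print(f"fractional shift ~ {delta_W / c**2:.3e}")   # ~1.8e-13, i.e. ~1798e-16
```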
NASA Astrophysics Data System (ADS)
Nawani, Jigna; Rixius, Julia; Neuhaus, Birgit J.
2016-08-01
Empirical analysis of secondary biology classrooms revealed that, on average, 68% of teaching time in Germany revolved around processing tasks. Quality of instruction can thus be assessed by analyzing the quality of tasks used in classroom discourse. This quasi-experimental study analyzed how teachers used tasks in 38 videotaped biology lessons pertaining to the topic 'blood and circulatory system'. Two fundamental characteristics used to analyze tasks were: (1) the required cognitive level of processing (e.g. low-level information processing: repetition, summarizing, defining, classifying; high-level information processing: interpreting and analyzing data, formulating hypotheses, etc.) and (2) the complexity of the task content (e.g. whether tasks require use of factual, linking or concept-level content). Additionally, students' cognitive knowledge structure about the topic 'blood and circulatory system' was measured using student-drawn concept maps (N = 970 students). Finally, linear multilevel models were created with high-level cognitive processing tasks and higher content complexity tasks as class-level predictors and students' prior knowledge, students' interest in biology, and students' interest in biology activities as control covariates. Results showed a positive influence of high-level cognitive processing tasks (β = 0.07; p < .01) on students' cognitive knowledge structure. However, there was no observed effect of higher content complexity tasks on students' cognitive knowledge structure. The presented findings encourage the use of high-level cognitive processing tasks in biology instruction.
High-Level Waste System Process Interface Description
DOE Office of Scientific and Technical Information (OSTI.GOV)
d'Entremont, P.D.
1999-01-14
The High-Level Waste System is a set of six different processes interconnected by pipelines. These processes function as one large treatment plant that receives, stores, and treats high-level wastes from various generators at SRS and converts them into forms suitable for final disposal. The three major forms are borosilicate glass, which will be eventually disposed of in a Federal Repository, Saltstone to be buried on site, and treated water effluent that is released to the environment.
NASA Astrophysics Data System (ADS)
Haqiqiansyah, G.; Sugiharto, E.
2018-04-01
This research was conducted to identify and examine the empowerment of a women's fish product processing group in the District of Sanga-Sanga in 2017. The method used was a survey, consisting of direct observation and interviews with respondents. Data were collected in the form of primary and secondary data. The collected data were then processed, tabulated, and displayed in tables and graphs. The degree of women's empowerment was measured on a 3-level Likert scale, with score 1 = low, score 2 = less, and score 3 = high. The results demonstrated that the empowerment level of the women's fish product processing group was high (score 42.75). In detail, the awareness level or willingness to change of the processing enterprise group, an indicator of empowerment, was categorized as high (91.67%). The capability to increase the chance of acquiring access was high (66.67%), the capability to overcome obstacles tended to be categorized as less (50%), and the capability to collaborate was high (66.67%). This indicates that the level of coastal women's empowerment can be relied upon to bring about change.
NASA Astrophysics Data System (ADS)
Forsberg, R.; Olesen, A. V.; Hvidegaard, S.; Skourup, H.
2010-12-01
Airborne laser and radar measurements over the Greenland ice sheet, Svalbard, and adjacent parts of the Arctic Ocean have been carried out by DTU-Space in a number of recent Danish/Greenlandic and European project campaigns, with the purpose of monitoring ice-sheet and sea-ice changes, supporting Greenland societal needs (oil exploration and hydropower), and supporting CryoSat pre-launch calibration and validation campaigns. The Arctic campaigns have been carried out using a Twin Otter aircraft carrying laser scanners and various radars. Since 2009 a new program of long-range gravity and magnetic surveys has been under way using a Basler DC3 aircraft for large-scale surveys in the Arctic Ocean and Antarctica, with the 2010 cooperative Danish-Argentinean-Chilean-US ICEGRAV survey of the Antarctic Peninsula additionally including a UTIG 60 MHz ice-penetrating radar. In the paper we describe the recent and upcoming airborne survey activities, outline the usefulness of the airborne data for satellite validation (CryoSat and GOCE), and give examples of measurements and comparisons with satellite and in-situ data.
Crustal thickness of Antarctica estimated using data from gravimetric satellites
NASA Astrophysics Data System (ADS)
Llubes, Muriel; Seoane, Lucia; Bruinsma, Sean; Rémy, Frédérique
2018-04-01
A better crustal thickness model is still needed for Antarctica. In this remote continent, where almost all the bedrock is covered by the ice sheet, seismic investigations do not reach a sufficient spatial resolution for geological and geophysical purposes. Here, we present a map of Antarctic crustal thickness computed from satellite gravity observations. The DIR5 gravity field model, built from GOCE and GRACE gravimetric data, is inverted with the Parker-Oldenburg iterative algorithm. The BEDMAP products are used to estimate the gravity effect of the ice and the rocky surface. Our result is compared to crustal thicknesses calculated from seismological studies and to the CRUST1.0 and AN1 models. Although the CRUST1.0 model shows a very good agreement with ours, its spatial resolution is coarser than the one we obtain with gravimetric data. Finally, we compute a model in which the crust-mantle density contrast is adjusted to fit the Moho depth from the CRUST1.0 model. In East Antarctica, the resulting density contrast clearly shows higher values than in West Antarctica.
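A schematic of the Parker-Oldenburg iteration is given below; the 1-D grid, density contrast, reference depth and low-pass filter are placeholders and do not correspond to the study's actual inversion parameters.

```python
"""Sketch of the Parker-Oldenburg scheme: Parker's forward formula in the
Fourier domain is rearranged and iterated to recover an interface (e.g. the
Moho) from its gravity effect."""
import numpy as np
from math import factorial

G = 6.674e-11
drho = 450.0                  # kg/m^3, crust-mantle density contrast (placeholder)
z0 = 35e3                     # m, reference Moho depth (placeholder)
nx, dx = 128, 10e3            # 1-D profile for brevity: 128 cells of 10 km

k = 2 * np.pi * np.fft.fftfreq(nx, dx)
absk = np.abs(k)

def forward(h):
    """Gravity of an interface undulation h (positive upward), Parker-style series."""
    F = np.zeros(nx, dtype=complex)
    for n in range(1, 6):
        F += absk ** (n - 1) / factorial(n) * np.fft.fft(h ** n)
    return np.real(np.fft.ifft(2 * np.pi * G * drho * np.exp(-absk * z0) * F))

# synthetic "observed" anomaly from a known undulation
x = np.arange(nx) * dx
h_true = 3e3 * np.exp(-((x - nx * dx / 2) / 80e3) ** 2)
g_obs = forward(h_true)

# Oldenburg-style iteration: solve for h given g_obs
taper = absk < 2 * np.pi / 60e3          # crude low-pass to tame the e^{+|k|z0} factor
h = np.zeros(nx)
for _ in range(10):
    F1 = np.fft.fft(g_obs) * np.exp(absk * z0) / (2 * np.pi * G * drho)
    for n in range(2, 6):
        F1 -= absk ** (n - 1) / factorial(n) * np.fft.fft(h ** n)
    h = np.real(np.fft.ifft(np.where(taper, F1, 0.0)))

print(abs(h - h_true).max(), "m max misfit (limited by the low-pass filter)")
```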
The levels of perceptual processing and the neural correlates of increasing subjective visibility.
Binder, Marek; Gociewicz, Krzysztof; Windey, Bert; Koculak, Marcin; Finc, Karolina; Nikadon, Jan; Derda, Monika; Cleeremans, Axel
2017-10-01
According to the levels-of-processing hypothesis, transitions from unconscious to conscious perception may depend on stimulus processing level, with more gradual changes for low-level stimuli and more dichotomous changes for high-level stimuli. In an event-related fMRI study we explored this hypothesis using a visual backward masking procedure. Task requirements manipulated level of processing. Participants reported the magnitude of the target digit in the high-level task, its color in the low-level task, and rated subjective visibility of stimuli using the Perceptual Awareness Scale. Intermediate stimulus visibility was reported more frequently in the low-level task, confirming prior behavioral results. Visible targets recruited insulo-fronto-parietal regions in both tasks. Task effects were observed in visual areas, with higher activity in the low-level task across all visibility levels. Thus, the influence of level of processing on conscious perception may be mediated by attentional modulation of activity in regions representing features of consciously experienced stimuli. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Ferdousi, B.; Nishimura, Y.; Maruyama, N.; Lyons, L. R.
2017-12-01
Subauroral Polarization Streams (SAPS), which can be identified as an intense northward electric field driving sunward plasma convection, are mostly observed in the dusk-premidnight subauroral region. Their existence is associated with the closure of the region 2 field-aligned current (R2 FAC) through the low-conductivity region equatorward of the equatorward boundary of electron precipitation. Observations suggest that SAPS flow speed increases with geomagnetic activity. So far, most studies have focused on the magnetosphere-ionosphere (M-I) coupling process of SAPS. However, recent observations of subauroral neutral winds suggest that there is a strong interaction between SAPS and the thermosphere (T). In this study, we focus on the effect of thermospheric wind on the ionospheric plasma drift associated with SAPS during the March 17, 2013 "St. Patrick's Day" geomagnetic storm. We use both observations and the self-consistent magnetosphere-ionosphere-thermosphere (M-I-T) numerical "RCM-CTIPe" model to study this relation. Observations from the DMSP-18 and GOCE satellites show that as the storm progresses, sunward ion flows intensify and move equatorward, and are accompanied by strengthening of subauroral neutral winds with a 2-hour delay. Our model successfully reproduces the time evolution of the sunward ion drift and neutral wind. However, the simulated ion drift spreads considerably more widely in latitude than observed. To seek better agreement between the observations and simulations, we adopt a conductance distribution more consistent with input from the magnetosphere, based on RCM auroral precipitation. We also perform a force-term analysis to investigate the rate of momentum transfer from the neutral wind to the ion flow. We then compare simulation runs with and without thermosphere coupling to study the effect of the feedback from neutral winds to SAPS.
Metacognitive Analysis of Pre-Service Teachers of Chemistry in Posting Questions
NASA Astrophysics Data System (ADS)
Santoso, T.; Yuanita, L.
2017-04-01
Posing questions about something can engage metacognitive functions that monitor a person's thinking process. This study aims to describe the structure of student questions in terms of thinking level and level of chemistry understanding, and to describe how students use their metacognitive knowledge when asking questions. The research is a case study in chemistry learning involving 87 students. The analysis revealed that, by thinking level, student questions consist of knowledge questions, understanding and application questions, and higher-order thinking questions; by level of chemistry understanding, student questions cover the symbolic, macro, macro-micro, macro-process, micro-process, and macro-micro-process levels. Students' questioning skills regarding scientific articles were of higher quality than their questioning skills regarding the teaching materials. Analysis of six student interviews showed that student questions reflect metacognitive processes in three categories: (1) low-level metacognitive processes, in which questions focus on a particular phrase or merely change the wording; (2) intermediate-level metacognitive processes, in which posing questions requires knowledge and understanding; and (3) high-level metacognitive processes, in which questions are posed by identifying the central topic or abstracting the essence of the scientific articles.
Performance Evaluation of the T6 Ion Engine
NASA Technical Reports Server (NTRS)
Snyder, John Steven; Goebel, Dan M.; Hofer, Richard R.; Polk, James E.; Wallace, Neil C.; Simpson, Huw
2010-01-01
The T6 ion engine is a 22-cm diameter, 4.5-kW Kaufman-type ion thruster produced by QinetiQ, Ltd., and is baselined for the European Space Agency BepiColombo mission to Mercury and is being qualified under ESA sponsorship for the extended range AlphaBus communications satellite platform. The heritage of the T6 includes the T5 ion thruster now successfully operating on the ESA GOCE spacecraft. As a part of the T6 development program, an engineering model thruster was subjected to a suite of performance tests and plume diagnostics at the Jet Propulsion Laboratory. The engine was mounted on a thrust stand and operated over its nominal throttle range of 2.5 to 4.5 kW. In addition to the typical electrical and flow measurements, an E x B mass analyzer, scanning Faraday probe, thrust vector probe, and several near-field probes were utilized. Thrust, beam divergence, double ion content, and thrust vector movement were all measured at four separate throttle points. The engine performance agreed well with published data on this thruster. At full power the T6 produced 143 mN of thrust at a specific impulse of 4120 seconds and an efficiency of 64%; optimization of the neutralizer for lower flow rates increased the specific impulse to 4300 seconds and the efficiency to nearly 66%. Measured beam divergence was less than, and double ion content was greater than, the ring-cusp-design NSTAR thruster that has flown on NASA missions. The measured thrust vector offset depended slightly on throttle level and was found to increase with time as the thruster approached thermal equilibrium.
Neuropsychological Components of Imagery Processing, Final Technical Report.
ERIC Educational Resources Information Center
Kosslyn, Stephen M.
High-level visual processes make use of stored information, and are invoked during object identification, navigation, tracking, and visual mental imagery. The work presented in this document has resulted in a theory of the component "processing subsystems" used in high-level vision. This theory was developed by considering…
Enhanced Perceptual Processing of Speech in Autism
ERIC Educational Resources Information Center
Jarvinen-Pasley, Anna; Wallace, Gregory L.; Ramus, Franck; Happe, Francesca; Heaton, Pamela
2008-01-01
Theories of autism have proposed that a bias towards low-level perceptual information, or a featural/surface-biased information-processing style, may compromise higher-level language processing in such individuals. Two experiments, utilizing linguistic stimuli with competing low-level/perceptual and high-level/semantic information, tested…
NASA Astrophysics Data System (ADS)
Le Galloudec, Olivier; Lellouche, Jean-Michel; Greiner, Eric; Garric, Gilles; Régnier, Charly; Drévillon, Marie; Drillet, Yann
2017-04-01
Since May 2015, Mercator Ocean has operated the Copernicus Marine Environment Monitoring Service (CMEMS) and is in charge of the global eddy-resolving ocean analyses and forecasts. In this context, Mercator Ocean currently delivers real-time daily services (weekly analyses and daily forecasts) with a global 1/12° high-resolution system. The model component is the NEMO platform driven at the surface by the IFS ECMWF atmospheric analyses and forecasts. Observations are assimilated by means of a reduced-order Kalman filter with a 3D multivariate modal decomposition of the forecast error. It includes an adaptive-error estimate and a localization algorithm. Along-track altimeter data, satellite Sea Surface Temperature and in situ temperature and salinity vertical profiles are jointly assimilated to estimate the initial conditions for numerical ocean forecasting. A 3D-Var scheme provides a correction for the slowly-evolving large-scale biases in temperature and salinity. R&D activities have been conducted at Mercator Ocean over the last few years to improve the real-time 1/12° global system for the updated CMEMS version in 2016. The ocean/sea-ice model and the assimilation scheme benefited from the following improvements: large-scale and objective correction of atmospheric quantities with satellite data, a new Mean Dynamic Topography taking into account the latest version of the GOCE geoid, new adaptive tuning of some observational errors, new Quality Control on the assimilated temperature and salinity vertical profiles based on dynamic height criteria, assimilation of satellite sea-ice concentration, and new freshwater runoff from ice-sheet melting. This presentation shows the impact of some updates separately, with a particular focus on adaptive tuning experiments for satellite Sea Level Anomaly (SLA) and Sea Surface Temperature (SST) observation errors. For the SLA, the a priori prescribed observation error is globally greatly reduced: the median value of the error changed from 5 cm to 2.5 cm in a few assimilation cycles. For the SST, we chose to maintain the median value of the error at 0.4°C. The spatial distribution of the SST error follows the model physics and atmospheric variability. For both SLA and SST, the adaptive tuning improves the performance of the system. The overall behavior of the system integrating all updates will also be discussed, reporting on product quality improvements and highlighting the level of performance and the reliability of the new system.
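The abstract does not spell out the adaptive tuning algorithm; one widely used choice is the Desroziers diagnostic, sketched here with synthetic innovation statistics purely for illustration.

```python
"""Sketch of adaptive observation-error tuning from assimilation statistics:
the observation-error variance is re-estimated from the product of background
innovations (y - Hxb) and analysis residuals (y - Hxa), then relaxed towards
the diagnosed value over successive cycles."""
import numpy as np

def tune_obs_error(innov_b, innov_a, sigma_prev, relax=0.3):
    """innov_b: y - H(xb), innov_a: y - H(xa) over one assimilation cycle."""
    var_diag = np.mean(innov_b * innov_a)            # Desroziers estimate of R
    sigma_new = np.sqrt(max(var_diag, 1e-12))
    return (1 - relax) * sigma_prev + relax * sigma_new   # relaxed update

# toy cycle: prescribed 5 cm SLA error, diagnosed value closer to 2.5 cm
rng = np.random.default_rng(5)
innov_b = rng.normal(0, 0.035, 1000)
innov_a = 0.5 * innov_b + rng.normal(0, 0.01, 1000)
sigma = 0.05
for _ in range(6):
    sigma = tune_obs_error(innov_b, innov_a, sigma)
print(f"tuned SLA observation error ~ {sigma*100:.1f} cm")
```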
Adapting high-level language programs for parallel processing using data flow
NASA Technical Reports Server (NTRS)
Standley, Hilda M.
1988-01-01
EASY-FLOW, a very high-level data flow language, is introduced for the purpose of adapting programs written in a conventional high-level language to a parallel environment. The level of parallelism provided is of the large-grained variety in which parallel activities take place between subprograms or processes. A program written in EASY-FLOW is a set of subprogram calls as units, structured by iteration, branching, and distribution constructs. A data flow graph may be deduced from an EASY-FLOW program.
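The large-grained, data-driven execution that EASY-FLOW targets can be illustrated in Python (this is not EASY-FLOW syntax, just an analogous sketch): each node fires as soon as all of its inputs are available, so independent subprograms may run in parallel.

```python
"""Toy data-flow execution: a graph of subprogram calls, fired data-driven."""
from concurrent.futures import ThreadPoolExecutor

def load():               return list(range(10))        # stand-in "subprograms"
def stats(data):          return sum(data) / len(data)
def extremes(data):       return (min(data), max(data))
def report(mean, span):   return f"mean={mean}, span={span}"

# data-flow graph: node name -> (subprogram, names of the nodes it consumes)
graph = {
    "data":   (load, []),
    "mean":   (stats, ["data"]),
    "span":   (extremes, ["data"]),
    "report": (report, ["mean", "span"]),
}

def run(graph):
    done = {}
    pending = dict(graph)
    with ThreadPoolExecutor() as pool:
        while pending:
            # data-driven firing: run every node whose inputs are all available
            ready = {k: v for k, v in pending.items() if all(d in done for d in v[1])}
            futures = {k: pool.submit(f, *[done[d] for d in deps])
                       for k, (f, deps) in ready.items()}
            for k, fut in futures.items():
                done[k] = fut.result()
                del pending[k]
    return done["report"]

print(run(graph))
```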
Uckoo, Ram M; Jayaprakasha, Guddadarangavvanahally K; Balasubramaniam, V M; Patil, Bhimanagouda S
2012-09-01
Grapefruits (Citrus paradisi Macfad) contain several phytochemicals known to have health-maintaining properties. Due to consumers' interest in obtaining high levels of these phytochemicals, it is important to understand the changes in their levels caused by common household processing techniques. Therefore, mature Texas "Rio Red" grapefruits were processed by common household practices such as blending, juicing, and hand squeezing, and analyzed for their phytochemical content by high performance liquid chromatography (HPLC). Results suggest that grapefruit juice processed by blending had significantly (P < 0.05) higher levels of flavonoids (narirutin, naringin, hesperidin, neohesperidin, didymin, and poncirin) and limonin compared to juicing and hand squeezing. No significant variation in their content was noticed in the juice processed by juicing and hand squeezing. Ascorbic acid and citric acid were significantly (P < 0.05) higher in juice processed by juicing and blending, respectively. Furthermore, hand-squeezed fruit juice had significantly higher contents of dihydroxybergamottin (DHB) than juice processed by juicing and blending. Bergamottin and 5-methoxy-7-geranoxycoumarin (5-M-7-GC) were significantly higher in blended juice compared to juicing and hand squeezing. Therefore, consuming grapefruit juice processed by blending may provide higher levels of health-beneficial phytochemicals such as naringin, narirutin, and poncirin. In contrast, juice processed by hand squeezing and juicing provides lower levels of limonin, bergamottin, and 5-M-7-GC. These results suggest that processing techniques significantly influence the levels of phytochemicals, and that blending is a better technique for obtaining higher levels of health-beneficial phytochemicals from grapefruits. Practical Application: Blending, squeezing, and juicing are common household processing techniques used for obtaining fresh grapefruit juice. Understanding the levels of health-beneficial phytochemicals present in the juice processed by these techniques would enable consumers to make better choices to obtain high levels of these compounds. © 2012 Institute of Food Technologists®
2015-09-01
This report made use of posttest processing techniques to provide packet-level time tagging with an accuracy close to 3 µs relative to Coordinated Universal Time for each set of test records.
New Space at Airbus Defence & Space to facilitate science missions
NASA Astrophysics Data System (ADS)
Boithias, Helene; Benchetrit, Thierry
2016-10-01
In addition to Airbus legacy activities, where Airbus satellites usually enable challenging science missions such as Venus Express, Mars Express, Rosetta with its historic landing on a comet, the BepiColombo mission to Mercury, JUICE to orbit Jupiter's moon Ganymede, Swarm studying the Earth's magnetic field, GOCE measuring the Earth's gravitational field and CryoSat monitoring the Earth's polar ice, Airbus is now developing a new approach to facilitate next-generation missions. After more than 25 years of collaboration with scientists on space missions, Airbus has demonstrated its capacity to implement highly demanding missions, which implies a deep understanding of science mission requirements and their intrinsic constraints, such as the very fierce competition between the scientific communities, the pursuit of high maturity for the science instrument in order to be selected, and the very strict institutional budgets limiting the number of operational missions. The combination of these constraints may lead to the cancellation of valuable missions. Based on this, and inspired by the New Space trend, Airbus is developing a highly accessible concept called HYPE. The objective of HYPE is to make access to space much more simple, affordable and efficient. With a standardized approach, the scientist books only the capacities he needs among the resources available on board, as the HYPE satellites can host a large range of payloads from 1 kg up to 60 kg. At prices significantly more affordable than those of a comparable dedicated satellite, HYPE is by far a very cost-efficient way of bringing science missions to life. After launch, the scientist enjoys plug-and-play access to two-way communications with his instrument through a secure high-speed portal available online 24/7. Everything else is taken care of by Airbus: launch services and the associated risk, reliable power supply, setting up and operating the communication channels, and compliance with space law regulations. We will present the HYPE opportunity, remaining open to the scientists' views, with the concern of tuning the concept as closely as possible to their needs.
Internal curvature signal and noise in low- and high-level vision
Grabowecky, Marcia; Kim, Yee Joon; Suzuki, Satoru
2011-01-01
How does internal processing contribute to visual pattern perception? By modeling visual search performance, we estimated internal signal and noise relevant to perception of curvature, a basic feature important for encoding of three-dimensional surfaces and objects. We used isolated, sparse, crowded, and face contexts to determine how internal curvature signal and noise depended on image crowding, lateral feature interactions, and level of pattern processing. Observers reported the curvature of a briefly flashed segment, which was presented alone (without lateral interaction) or among multiple straight segments (with lateral interaction). Each segment was presented with no context (engaging low-to-intermediate-level curvature processing), embedded within a face context as the mouth (engaging high-level face processing), or embedded within an inverted-scrambled-face context as a control for crowding. Using a simple, biologically plausible model of curvature perception, we estimated internal curvature signal and noise as the mean and standard deviation, respectively, of the Gaussian-distributed population activity of local curvature-tuned channels that best simulated behavioral curvature responses. Internal noise was increased by crowding but not by face context (irrespective of lateral interactions), suggesting prevention of noise accumulation in high-level pattern processing. In contrast, internal curvature signal was unaffected by crowding but modulated by lateral interactions. Lateral interactions (with straight segments) increased curvature signal when no contextual elements were added, but equivalent interactions reduced curvature signal when each segment was presented within a face. These opposing effects of lateral interactions are consistent with the phenomena of local-feature contrast in low-level processing and global-feature averaging in high-level processing. PMID:21209356
EmptyHeaded: A Relational Engine for Graph Processing
Aberger, Christopher R.; Tu, Susan; Olukotun, Kunle; Ré, Christopher
2016-01-01
There are two types of high-performance graph processing engines: low- and high-level engines. Low-level engines (Galois, PowerGraph, Snap) provide optimized data structures and computation models but require users to write low-level imperative code, hence ensuring that efficiency is the burden of the user. In high-level engines, users write in query languages like datalog (SociaLite) or SQL (Grail). High-level engines are easier to use but are orders of magnitude slower than the low-level graph engines. We present EmptyHeaded, a high-level engine that supports a rich datalog-like query language and achieves performance comparable to that of low-level engines. At the core of EmptyHeaded’s design is a new class of join algorithms that satisfy strong theoretical guarantees but have thus far not achieved performance comparable to that of specialized graph processing engines. To achieve high performance, EmptyHeaded introduces a new join engine architecture, including a novel query optimizer and data layouts that leverage single-instruction multiple data (SIMD) parallelism. With this architecture, EmptyHeaded outperforms high-level approaches by up to three orders of magnitude on graph pattern queries, PageRank, and Single-Source Shortest Paths (SSSP) and is an order of magnitude faster than many low-level baselines. We validate that EmptyHeaded competes with the best-of-breed low-level engine (Galois), achieving comparable performance on PageRank and at most 3× worse performance on SSSP. PMID:28077912
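As a hedged illustration of the kind of graph pattern query discussed here, the sketch below counts triangles with set-intersection joins in plain Python; it is not EmptyHeaded's query language or API, only the flavor of multiway join such engines optimize.

```python
# Minimal sketch of the triangle pattern R(a,b), R(b,c), R(a,c) evaluated with
# set intersections. Plain-Python illustration only; not EmptyHeaded's actual
# datalog-like syntax, optimizer, or SIMD data layouts.
from collections import defaultdict

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]

adj = defaultdict(set)
for u, v in edges:          # undirected graph, store both directions
    adj[u].add(v)
    adj[v].add(u)

triangles = set()
for a in adj:
    for b in adj[a]:
        if b <= a:
            continue
        # candidate c must be a neighbour of both a and b (an intersection join)
        for c in adj[a] & adj[b]:
            if c > b:
                triangles.add((a, b, c))

print(sorted(triangles))    # [(0, 1, 2), (1, 2, 3)]
```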
Occupational Noise Reduction in CNC Striping Process
NASA Astrophysics Data System (ADS)
Mahmad Khairai, Kamarulzaman; Shamime Salleh, Nurul; Razlan Yusoff, Ahmad
2018-03-01
Occupational hearing loss from exposure to high noise levels is a common occupational hazard. In the CNC striping process, employees exposed to high noise levels for as long as 8 hours risk hearing loss and suffer physical and psychological stress that reduce productivity. In this paper, the high noise levels of the CNC striping process are measured and reduced to within the permissible noise exposure. In the first condition all machines were shut down; in the second condition all CNC machines were in operation. For both conditions, noise exposures were measured to evaluate the noise problems and sources. After improvements were made, the noise exposures were measured again to evaluate the effectiveness of the reduction. The initial average noise level in the first condition was 95.797 dB(A); after a leak in the pneumatic system was fixed, the noise was reduced to 55.517 dB(A). The average noise level in the second condition was 109.340 dB(A); after six machines were gathered in one area and that area was covered with a plastic curtain, the noise was reduced to 95.209 dB(A). In conclusion, the noise exposure of the CNC striping machine is high and exceeds the permissible noise exposure, but it can be reduced to acceptable levels. The reduction of noise levels in CNC striping processes enhanced productivity in the industry.
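Because sound levels combine logarithmically, a short worked example helps interpret figures like those above; the machine count and the curtain's insertion loss used below are illustrative assumptions, not the paper's raw data.

```python
# Worked example of decibel arithmetic: incoherent sources add on a logarithmic
# scale, so several machines together are only a few dB(A) louder than one.
import math

def combine_db(levels):
    """Total sound pressure level of incoherent sources, in dB."""
    return 10.0 * math.log10(sum(10.0 ** (L / 10.0) for L in levels))

one_machine = 95.0
six_machines = combine_db([one_machine] * 6)   # 95 + 10*log10(6) ~ 102.8 dB(A)
after_barrier = six_machines - 8.0             # assumed 8 dB insertion loss (illustrative)
print(round(six_machines, 1), round(after_barrier, 1))
```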
Temporal Processing Capacity in High-Level Visual Cortex Is Domain Specific.
Stigliani, Anthony; Weiner, Kevin S; Grill-Spector, Kalanit
2015-09-09
Prevailing hierarchical models propose that temporal processing capacity--the amount of information that a brain region processes in a unit time--decreases at higher stages in the ventral stream regardless of domain. However, it is unknown if temporal processing capacities are domain general or domain specific in human high-level visual cortex. Using a novel fMRI paradigm, we measured temporal capacities of functional regions in high-level visual cortex. Contrary to hierarchical models, our data reveal domain-specific processing capacities as follows: (1) regions processing information from different domains have differential temporal capacities within each stage of the visual hierarchy and (2) domain-specific regions display the same temporal capacity regardless of their position in the processing hierarchy. In general, character-selective regions have the lowest capacity, face- and place-selective regions have an intermediate capacity, and body-selective regions have the highest capacity. Notably, domain-specific temporal processing capacities are not apparent in V1 and have perceptual implications. Behavioral testing revealed that the encoding capacity of body images is higher than that of characters, faces, and places, and there is a correspondence between peak encoding rates and cortical capacities for characters and bodies. The present evidence supports a model in which the natural statistics of temporal information in the visual world may affect domain-specific temporal processing and encoding capacities. These findings suggest that the functional organization of high-level visual cortex may be constrained by temporal characteristics of stimuli in the natural world, and this temporal capacity is a characteristic of domain-specific networks in high-level visual cortex. Significance statement: Visual stimuli bombard us at different rates every day. For example, words and scenes are typically stationary and vary at slow rates. In contrast, bodies are dynamic and typically change at faster rates. Using a novel fMRI paradigm, we measured temporal processing capacities of functional regions in human high-level visual cortex. Contrary to prevailing theories, we find that different regions have different processing capacities, which have behavioral implications. In general, character-selective regions have the lowest capacity, face- and place-selective regions have an intermediate capacity, and body-selective regions have the highest capacity. These results suggest that temporal processing capacity is a characteristic of domain-specific networks in high-level visual cortex and contributes to the segregation of cortical regions. Copyright © 2015 the authors 0270-6474/15/3512412-13$15.00/0.
van Boxtel, Jeroen J. A.; Lu, Hongjing
2013-01-01
People with Autism Spectrum Disorder (ASD) are hypothesized to have poor high-level processing but superior low-level processing, causing impaired social recognition, and a focus on non-social stimulus contingencies. Biological motion perception provides an ideal domain to investigate exactly how ASD modulates the interaction between low and high-level processing, because it involves multiple processing stages, and carries many important social cues. We investigated individual differences among typically developing observers in biological motion processing, and whether such individual differences associate with the number of autistic traits. In Experiment 1, we found that individuals with fewer autistic traits were automatically and involuntarily attracted to global biological motion information, whereas individuals with more autistic traits did not show this pre-attentional distraction. We employed an action adaptation paradigm in the second study to show that individuals with more autistic traits were able to compensate for deficits in global processing with an increased involvement in local processing. Our findings can be interpreted within a predictive coding framework, which characterizes the functional relationship between local and global processing stages, and explains how these stages contribute to the perceptual difficulties associated with ASD. PMID:23630514
Anxiety, anticipation and contextual information: A test of attentional control theory.
Cocks, Adam J; Jackson, Robin C; Bishop, Daniel T; Williams, A Mark
2016-09-01
We tested the assumptions of Attentional Control Theory (ACT) by examining the impact of anxiety on anticipation using a dynamic, time-constrained task. Moreover, we examined the involvement of high- and low-level cognitive processes in anticipation and how their importance may interact with anxiety. Skilled and less-skilled tennis players anticipated the shots of opponents under low- and high-anxiety conditions. Participants viewed three types of video stimuli, each depicting different levels of contextual information. Performance effectiveness (response accuracy) and processing efficiency (response accuracy divided by corresponding mental effort) were measured. Skilled players recorded higher levels of response accuracy and processing efficiency compared to less-skilled counterparts. Processing efficiency significantly decreased under high- compared to low-anxiety conditions. No difference in response accuracy was observed. When reviewing directional errors, anxiety was most detrimental to performance in the condition conveying only contextual information, suggesting that anxiety may have a greater impact on high-level (top-down) cognitive processes, potentially due to a shift in attentional control. Our findings provide partial support for ACT; anxiety elicited greater decrements in processing efficiency than performance effectiveness, possibly due to predominance of the stimulus-driven attentional system.
High pressure liquid level monitor
Bean, Vern E.; Long, Frederick G.
1984-01-01
A liquid level monitor for tracking the level of a coal slurry in a high-pressure vessel including a toroidal-shaped float with magnetically permeable bands thereon disposed within the vessel, two pairs of magnetic field generators and detectors disposed outside the vessel adjacent the top and bottom thereof and magnetically coupled to the magnetically permeable bands on the float, and signal processing circuitry for combining signals from the top and bottom detectors for generating a monotonically increasing analog control signal which is a function of liquid level. The control signal may be utilized to operate high-pressure control valves associated with processes in which the high-pressure vessel is used.
Online sensing and control of oil in process wastewater
NASA Astrophysics Data System (ADS)
Khomchenko, Irina B.; Soukhomlinoff, Alexander D.; Mitchell, T. F.; Selenow, Alexander E.
2002-02-01
Industrial processes that must eliminate high concentrations of oil from their waste streams find it extremely difficult to measure and control the water purification process. Most oil separation processes involve chemical separation using highly corrosive caustics, acids, surfactants, and emulsifiers. The output of this chemical treatment process includes highly adhesive tar-like globules, emulsified and surface oils, and other emulsified chemicals, in addition to suspended solids. The oil/hydrocarbon concentration in the process wastewater may fluctuate from 1 ppm to 10,000 ppm, depending upon the specifications of the industry and the level of water quality control. The authors have developed a sensing technology that provides the accuracy of scatter/absorption sensing in a contactless environment by combining these methodologies with reflective measurement. The sensitivity of the sensor may be modified by changing the fluid level control in the flow cell, allowing for a broad range of accurate measurement from 1 ppm to 10,000 ppm. Because this sensing system has been designed to work in a harsh environment, it can be placed close to the process source to allow for accurate real-time measurement and control.
Higher levels of depression are associated with reduced global bias in visual processing.
de Fockert, Jan W; Cooper, Andrew
2014-04-01
Negative moods have been associated with a tendency to prioritise local details in visual processing. The current study investigated the relation between depression and visual processing using the Navon task, a standard task of local and global processing. In the Navon task, global stimuli are presented that are made up of many local parts, and the participants are instructed to report the identity of either a global or a local target shape. Participants with a low self-reported level of depression showed evidence of the expected global processing bias, and were significantly faster at responding to the global, compared with the local level. By contrast, no such difference was observed in participants with high levels of depression. The reduction of the global bias associated with high levels of depression was only observed in the overall speed of responses to global (versus local) targets, and not in the level of interference produced by the global (versus local) distractors. These results are in line with recent findings of a dissociation between local/global processing bias and interference from local/global distractors, and support the claim that depression is associated with a reduction in the tendency to prioritise global-level processing.
The Action Execution Process Implemented in Different Cognitive Architectures: A Review
NASA Astrophysics Data System (ADS)
Dong, Daqi; Franklin, Stan
2014-12-01
An agent achieves its goals by interacting with its environment, cyclically choosing and executing suitable actions. An action execution process is a reasonable and critical part of an entire cognitive architecture, because the process of generating executable motor commands is not only driven by low-level environmental information, but is also initiated and affected by the agent's high-level mental processes. This review focuses on cognitive models of action, or more specifically, of the action execution process, as implemented in a set of popular cognitive architectures. We examine the representations and procedures inside the action execution process, as well as the cooperation between action execution and other high-level cognitive modules. We finally conclude with some general observations regarding the nature of action execution.
Automated defect spatial signature analysis for semiconductor manufacturing process
Tobin, Jr., Kenneth W.; Gleason, Shaun S.; Karnowski, Thomas P.; Sari-Sarraf, Hamed
1999-01-01
An apparatus and method for performing automated defect spatial signature analysis on a data set representing defect coordinates and wafer processing information includes categorizing data from the data set into a plurality of high-level categories, classifying the categorized data contained in each high-level category into user-labeled signature events, and correlating the categorized, classified signature events to a present or incipient anomalous process condition.
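The patent abstract does not disclose its algorithms; the sketch below is only a hedged stand-in showing the categorize/classify/correlate flow on toy defect records, with a simple distance-based grouping assumed in place of the patented signature analysis.

```python
# Hedged sketch of the three-step flow described above: put wafer defect records
# into high-level categories, group each category's defects into "signature
# events", and correlate events with process information. The grouping rule is
# an assumed stand-in, not the patented method.
from math import dist

defects = [
    {"xy": (1.0, 1.0), "size_um": 0.2, "step": "etch"},
    {"xy": (1.2, 1.1), "size_um": 0.3, "step": "etch"},
    {"xy": (9.0, 9.0), "size_um": 5.0, "step": "litho"},
]

# 1) categorize by a high-level attribute (here: defect size class)
categories = {"small": [d for d in defects if d["size_um"] < 1.0],
              "large": [d for d in defects if d["size_um"] >= 1.0]}

# 2) classify each category into spatial signature events by proximity
def cluster(points, radius=0.5):
    events = []
    for p in points:
        for ev in events:
            if any(dist(p["xy"], q["xy"]) <= radius for q in ev):
                ev.append(p)
                break
        else:
            events.append([p])
    return events

# 3) correlate each event with the processing step(s) it occurred at
for name, cat in categories.items():
    for ev in cluster(cat):
        steps = {d["step"] for d in ev}
        print(name, len(ev), "defects, step(s):", steps)
```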
Zhang, Xiaomeng; Bartol, Kathryn M
2010-09-01
Integrating theories addressing attention and activation with creativity literature, we found an inverted U-shaped relationship between creative process engagement and overall job performance among professionals in complex jobs in an information technology firm. Work experience moderated the curvilinear relationship, with low-experience employees generally exhibiting higher levels of overall job performance at low to moderate levels of creative process engagement and high-experience employees demonstrating higher overall performance at moderate to high levels of creative process engagement. Creative performance partially mediated the relationship between creative process engagement and job performance. These relationships were tested within a moderated mediation framework. Copyright 2010 APA, all rights reserved
Meijer, Willemien A; Van Gerven, Pascal W; de Groot, Renate H; Van Boxtel, Martin P; Jolles, Jelle
2007-10-01
The aim of the present study was to examine whether deeper processing of words during encoding in middle-aged adults leads to a smaller increase in word-learning performance and a smaller decrease in retrieval effort than in young adults. It was also assessed whether high education attenuates age-related differences in performance. Accuracy of recall and recognition, and reaction times of recognition, after performing incidental and intentional learning tasks were compared between 40 young (25-35) and 40 middle-aged (50-60) adults with low and high educational levels. Age differences in recall increased with depth of processing, whereas age differences in accuracy and reaction times of recognition did not differ across levels. High education does not moderate age-related differences in performance. These findings suggest a smaller benefit of deep processing in middle age, when no retrieval cues are available.
The Effects of Test Anxiety on Learning at Superficial and Deep Levels of Processing.
ERIC Educational Resources Information Center
Weinstein, Claire E.; And Others
1982-01-01
Using a deep-level processing strategy, low test-anxious college students performed significantly better than high test-anxious students in learning a paired-associate word list. Using a superficial-level processing strategy resulted in no significant difference in performance. A cognitive-attentional theory and test anxiety mechanisms are…
Status of the planar electrostatic gradiometer GREMLIT for airborne geodesy
NASA Astrophysics Data System (ADS)
Boulanger, D.; Foulon, B.; Lebat, V.; Bresson, A.; Christophe, B.
2016-12-01
Taking advantage of technologies developed by ONERA for the GRACE and GOCE space missions, the GREMLIT airborne gravity gradiometer is based on a planar electrostatic gradiometer configuration. The feasibility of the instrument and its performance were demonstrated by realistic simulations, based on actual data and recorded aircraft environmental perturbations, with a performance of about one Eötvös along the two horizontal components of the gravity gradient. In order to assess the operation of the electrostatic gradiometer on its associated stabilized platform, a one-axis prototype has also been built. The next step is the realization of the stabilization platform, controlled by the common-mode outputs of the instrument itself, in order to reject the perturbations induced by the airborne environment in the horizontal directions. One of the interests of the GREMLIT instrument is the possibility of an easy hybrid configuration with a vertical one-axis cold-atom interferometer (CAI) gravity gradiometer called GIBON, also under development at ONERA. In such a hybrid instrument, the CAI gradiometer also takes advantage of the platform stabilized by the electrostatic one. The poster will present the realization status of the instrument and of its stabilized platform.
Slab Geometry and Segmentation on Seismogenic Subduction Zone; Insight from gravity gradients
NASA Astrophysics Data System (ADS)
Saraswati, A. T.; Mazzotti, S.; Cattin, R.; Cadio, C.
2017-12-01
Slab geometry is a key parameter to improve seismic hazard assessment in subduction zones. In many cases, information about structures beneath subduction zones is obtained from dedicated geophysical studies, including geodetic and seismic measurements. However, due to the lack of global information, both the geometry and the segmentation of the seismogenic zone of many subduction zones remain poorly constrained. Here we propose an alternative approach based on satellite gravity observations. The GOCE (Gravity field and steady-state Ocean Circulation Explorer) mission makes it possible to probe the Earth's deep mass structures from gravity gradients, which are more sensitive to the geometry and directional properties of spatial structures than classical gravitational data. Forward modeling of the gravity gradients of the modeled slab is performed using both horizontal and vertical gravity gradient components, which constrain the slab geophysical model better than the vertical gradient alone. Using a polyhedron method, a topographic correction of the gravity gradient signal is applied to enhance the anomaly signal of lithospheric structures. Afterwards, we compare the residual gravity gradients with the calculated signals associated with the slab geometry. In this preliminary study, straightforward models are used to better understand the characteristics of gravity gradient signals due to deep mass sources. We pay special attention to the delineation of slab borders and dip angle variations.
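As a hedged illustration of such forward modeling, the sketch below evaluates the gravity gradient tensor of point masses, the simplest building block behind the straightforward models mentioned above; the study itself uses polyhedra and topographic corrections, and the mass and geometry here are purely illustrative.

```python
# Minimal sketch of forward-modelling a gravity gradient tensor from point
# masses (second derivatives of the potential). Not the polyhedron method used
# in the study; the source mass and geometry below are illustrative only.
import numpy as np

G = 6.674e-11  # m^3 kg^-1 s^-2

def gradient_tensor(obs, sources, masses):
    """Gravity gradient tensor at 'obs', in Eotvos (1 E = 1e-9 s^-2)."""
    T = np.zeros((3, 3))
    for src, m in zip(sources, masses):
        d = np.asarray(obs, float) - np.asarray(src, float)
        r = np.linalg.norm(d)
        T += G * m * (3.0 * np.outer(d, d) - r**2 * np.eye(3)) / r**5
    return T * 1e9  # convert s^-2 to Eotvos

# a slab-like anomaly crudely represented by one point mass 100 km below the observer
T = gradient_tensor(obs=[0.0, 0.0, 0.0],
                    sources=[[0.0, 0.0, -100e3]],
                    masses=[1e15])           # kg, illustrative only
print(np.round(T, 4))
print("trace ~", round(float(np.trace(T)), 6))   # Laplace's equation: trace ~ 0
```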
Near-optimal integration of facial form and motion.
Dobs, Katharina; Ma, Wei Ji; Reddy, Leila
2017-09-08
Human perception consists of the continuous integration of sensory cues pertaining to the same object. While it has been fairly well shown that humans use an optimal strategy when integrating low-level cues proportional to their relative reliability, the integration processes underlying high-level perception are much less understood. Here we investigate cue integration in a complex high-level perceptual system, the human face processing system. We tested cue integration of facial form and motion in an identity categorization task and found that an optimal model could successfully predict subjects' identity choices. Our results suggest that optimal cue integration may be implemented across different levels of the visual processing hierarchy.
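For reference, the standard reliability-weighted cue-combination rule that such optimal models are built on is written out below; the abstract does not give the paper's exact formulation, so the notation (form and motion estimates s_f and s_m with noise sigma_f and sigma_m) is an assumption.

$$\hat{s} \;=\; \frac{r_f\,s_f + r_m\,s_m}{r_f + r_m}, \qquad r_i = \frac{1}{\sigma_i^{2}}, \qquad \sigma_{\hat{s}}^{2} \;=\; \frac{1}{r_f + r_m},$$

so the combined identity estimate weights each cue by its reliability and is never noisier than the better single cue.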
The effect of spatial attention on invisible stimuli.
Shin, Kilho; Stolte, Moritz; Chong, Sang Chul
2009-10-01
The influence of selective attention on visual processing is widespread. Recent studies have demonstrated that spatial attention can affect processing of invisible stimuli. However, it has been suggested that this effect is limited to low-level features, such as line orientations. The present experiments investigated whether spatial attention can influence both low-level (contrast threshold) and high-level (gender discrimination) adaptation, using the same method of attentional modulation for both types of stimuli. We found that spatial attention was able to increase the amount of adaptation to low- as well as to high-level invisible stimuli. These results suggest that attention can influence perceptual processes independent of visual awareness.
Cognitive Processes and Learner Strategies in the Acquisition of Motor Skills
1978-12-01
children: Capacity or processing deficits? Memory and Cognition, 1976, 4, 559-572. Craik, F. I. M., & Lockhart, R. S. Levels of processing: A framework… learning and memory research. In F. I. M. Craik & L. S. Cermak (Eds.), Levels of processing and theories of memory. Hillsdale, N.J.: Erlbaum, 1978… functions. Cognitive activities are described at a highly theoretical (technical) level as well as in a pragmatic manner. Differences in processing…
Evaluation of gravitational curvatures of a tesseroid in spherical integral kernels
NASA Astrophysics Data System (ADS)
Deng, Xiao-Le; Shen, Wen-Bin
2018-04-01
Proper understanding of how the Earth's mass distributions and redistributions influence the Earth's gravity field-related functionals is crucial for numerous applications in geodesy, geophysics and related geosciences. Calculations of the gravitational curvatures (GC) have been proposed in geodesy in recent years. In view of future satellite missions, the sixth-order developments of the gradients are becoming requisite. In this paper, a set of 3D integral GC formulas of a tesseroid mass body has been provided using spherical integral kernels in the spatial domain. Based on the Taylor series expansion approach, the numerical expressions of the 3D GC formulas are provided up to sixth order. Moreover, numerical experiments demonstrate the correctness of the 3D Taylor series approach for the GC formulas up to sixth order. Analogous to other gravitational effects (e.g., gravitational potential, gravity vector, gravity gradient tensor), it is found numerically that the very-near-area problem and the polar singularity problem exist in the GC east-east-radial, north-north-radial and radial-radial-radial components in the spatial domain, and that, compared to the other gravitational effects, the relative approximation errors of the GC components are larger due to the influence not only of the geocentric distance but also of the latitude. This study shows that the magnitude of each term of the nonzero GC functionals, for a grid resolution of 15′ × 15′ at GOCE satellite height, can reach about 10^{-16} m^{-1} s^{-2} for zero order, 10^{-24} or 10^{-23} m^{-1} s^{-2} for second order, 10^{-29} m^{-1} s^{-2} for fourth order, and 10^{-35} or 10^{-34} m^{-1} s^{-2} for sixth order, respectively.
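For context, the gravitational curvatures discussed above are the third-order spatial derivatives of the gravitational potential V, which is what fixes the m^{-1} s^{-2} unit of the quoted terms:

$$V_{ijk} \;=\; \frac{\partial^{3} V}{\partial x_i\,\partial x_j\,\partial x_k}, \qquad i,j,k \in \{x,y,z\}, \qquad [V_{ijk}] = \mathrm{m^{-1}\,s^{-2}}.$$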
NASA Astrophysics Data System (ADS)
Minakov, A.; Medvedev, S.
2017-12-01
Analysis of lithospheric stresses is necessary to gain understanding of the forces that drive plate tectonics and intraplate deformation, and of the structure and strength of the lithosphere. A major source of lithospheric stresses is believed to be variations in surface topography and lithospheric density. The traditional approach to stress estimation is based on direct calculations of the Gravitational Potential Energy (GPE), the depth-integrated density moment of the lithospheric column. GPE is highly sensitive to the density structure, which, however, is often poorly constrained. The density structure of the lithosphere may be refined using methods of gravity modeling; however, the resulting density models suffer from the non-uniqueness of the inverse problem. An alternative approach is to estimate (depth-integrated) lithospheric stresses directly from satellite gravimetry data. Satellite gravity gradient measurements by the ESA GOCE mission provide a wealth of data for mapping lithospheric stresses if a link between the data and stresses or GPE can be established theoretically. The non-uniqueness of the interpretation of the sources of the gravity signal holds in this case as well. Therefore, the data analysis was tested for the North Atlantic region, where reliable additional constraints are supplied by both controlled-source and earthquake seismology. The study involves a comparison of three methods of stress modeling: (1) the traditional modeling approach using a thin-sheet approximation; (2) the filtered geoid approach; and (3) the direct utilization of the gravity gradient tensor. Whereas the first two approaches, (1)-(2), calculate GPE and utilize computationally expensive finite-element mechanical modeling to calculate stresses, approach (3) uses a much simpler numerical treatment but requires simplifying assumptions that are yet to be tested. The modeled orientations of principal stresses and the stress magnitudes from each of the three methods are compared with the World Stress Map.
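As a reminder of what the GPE used in approaches (1)-(2) integrates, a common thin-sheet definition is sketched below; the exact convention (reference level and compensation depth L) is assumed here, since the abstract does not state it. With z positive downward, h the surface elevation, rho the density and g gravity:

$$\sigma_{zz}(z) \;=\; g \int_{-h}^{z} \rho(z')\,\mathrm{d}z', \qquad \mathrm{GPE} \;=\; \int_{-h}^{L} \sigma_{zz}(z)\,\mathrm{d}z,$$

i.e. GPE is the depth integral of the lithostatic vertical stress, and lateral GPE differences drive the depth-integrated deviatoric stresses solved for in the thin-sheet model.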
Deschrijver, Eliane; Wiersema, Jan R; Brass, Marcel
2017-04-01
For more than 15 years, motor interference paradigms have been used to investigate the influence of action observation on action execution. Most research on so-called automatic imitation has focused on variables that play a modulating role or investigated potential confounding factors. Furthermore, a number of functional magnetic resonance imaging (fMRI) studies have tried to shed light on the functional mechanisms and neural correlates involved in imitation inhibition. However, these fMRI studies, presumably due to poor temporal resolution, have primarily focused on high-level processes and have neglected the potential role of low-level motor and perceptual processes. In the current EEG study, we therefore aimed to disentangle the influence of low-level perceptual and motoric mechanisms from high-level cognitive mechanisms. We focused on potential congruency differences in the visual N190 (a component related to the processing of biological motion), the Readiness Potential (a component related to motor preparation), and the high-level P3 component. Interestingly, we detected congruency effects in each of these components, suggesting that the interference effect in an automatic imitation paradigm is related not only to high-level processes such as self-other distinction but also to more low-level influences of perception on action and of action on perception. Moreover, we documented relationships of the neural effects with (autistic) behavior.
Personal Striving Level and Self-Evaluation Process.
ERIC Educational Resources Information Center
Orias, John; Leung, Lisa; Dosanj, Shikha; McAnlis, JoAnna; Levy, Gal; Sheposh, John P.
Three studies were conducted to determine if goal striving level was related to accurate self-knowledge. The purpose of the research was to determine if the tendency of high strivers to confront stressful stimuli extends to self-evaluation processes. Three experiments were designed to investigate whether high strivers differ from low strivers in…
Multiple Theory Formation in High-Level Perception. Technical Report No. 38.
ERIC Educational Resources Information Center
Woods, William A.
This paper is concerned with the process of human reading as a high-level perceptual task. Drawing on insights from artificial-intelligence research--specifically, research in natural language processing and continuous speech understanding--the paper attempts to present a fairly concrete picture of the kinds of hypothesis formation and inference…
Advanced glycation endproducts in 35 types of seafood products consumed in eastern China
NASA Astrophysics Data System (ADS)
Wang, Jing; Li, Zhenxing; Pavase, Ramesh Tushar; Lin, Hong; Zou, Long; Wen, Jie; Lv, Liangtao
2016-08-01
Advanced glycation endproducts (AGEs) have been recognized as hazards in processed foods that can induce chronic diseases such as cardiovascular disease, diabetes, and diabetic nephropathy. In this study, we investigated the AGEs contents of 35 types of industrial seafood products that are consumed frequently in eastern China. Total fluorescent AGEs level and Nɛ-carboxymethyl-lysine (CML) content were evaluated by fluorescence spectrophotometry and gas chromatography-mass spectrometry (GC-MS), respectively. The level of total fluorescent AGEs in seafood samples ranged from 39.37 to 1178.3 AU, and was higher in canned and packaged instant aquatic products that were processed at high temperatures. The CML content in seafood samples ranged from 44.8 to 439.1 mg per kg dried sample, and was higher in roasted seafood samples. The total fluorescent AGEs and CML content increased when seafood underwent high-temperature processing, but did not show an obvious correlation. The present study suggested that commonly consumed seafood contains different levels of AGEs, and the seafood processed at high temperatures always displays a high level of either AGEs or CML.
Level indicator for pressure vessels
Not Available
1982-04-28
A liquid-level monitor for tracking the level of a coal slurry in a high-pressure vessel including a toroidal-shaped float with magnetically permeable bands thereon disposed within the vessel, two pairs of magnetic-field generators and detectors disposed outside the vessel adjacent the top and bottom thereof and magnetically coupled to the magnetically permeable bands on the float, and signal-processing circuitry for combining signals from the top and bottom detectors for generating a monotonically increasing analog control signal which is a function of liquid level. The control signal may be utilized to operate high-pressure control valves associated with processes in which the high-pressure vessel is used.
NASA Astrophysics Data System (ADS)
Shuai, W.; Jaffe, P. R.
2017-12-01
Effective ammonium (NH4+) removal has been a challenge in wastewater treatment processes. Aeration, which is required for the conventional NH4+ removal approach by ammonium-oxidizing bacteria, is an energy-intensive process in the operation of a wastewater treatment plant. The efficiency of NH4+ oxidation in natural systems is also limited by oxygen transfer in water and sediments. The objective of this study is to enhance NH4+ removal by applying a novel microbial process, anaerobic NH4+ oxidation coupled to iron (Fe) reduction (also known as Feammox), in constructed wetlands (CW). Our studies have shown that an Acidimicrobiaceae bacterium named A6 can carry out the Feammox process using ferric Fe (Fe(III)) minerals like ferrihydrite as its electron acceptor. To investigate the properties of the Feammox process in CW as well as the influence of electrodes, the Feammox bacterium A6 was inoculated into planted CW mesocosms with electrodes installed at multiple depths. CW mesocosms were operated using a high-NH4+ nutrient solution as inflow under high or low sediment Fe(III) levels. During the operation, NH4+ and ferrous Fe concentrations, pore-water pH, voltages between electrodes, oxidation-reduction potential and dissolved oxygen were measured. At the end of the experiment, CW sediment samples at different depths were taken, DNA was extracted, and quantitative polymerase chain reaction and pyrosequencing were performed to analyze the microbial communities. The results show that the high-Fe-level CW mesocosm has a much higher NH4+ removal capacity than the low-Fe-level CW mesocosm once Fe-reducing conditions have developed. This indicates that the enhanced NH4+ removal can be attributed to elevated Feammox activity in the high-Fe-level CW mesocosm. The microbial community structures differ between the high- and low-Fe-level CW mesocosms and between locations on and away from the installed electrodes. The voltages between cathode and anode increased after the injection of the A6 enrichment culture in the low-Fe-level CW mesocosm but remained stable in the high-Fe-level CW mesocosm, indicating that A6 may use the electrodes as its electron acceptor when Fe(III) is scarce. The application of the Feammox process in Fe-rich CW is promising as a cost- and energy-effective NH4+ removal approach, and the electrogenesis of A6 may also be useful in enhancing the Feammox process.
ERIC Educational Resources Information Center
Boets, Bart; Wouters, Jan; van Wieringen, Astrid; Ghesquiere, Pol
2007-01-01
This study investigates whether the core bottleneck of literacy-impairment should be situated at the phonological level or at a more basic sensory level, as postulated by supporters of the auditory temporal processing theory. Phonological ability, speech perception and low-level auditory processing were assessed in a group of 5-year-old pre-school…
Montalvo, Itziar; Gutiérrez-Zotes, Alfonso; Creus, Marta; Monseny, Rosa; Ortega, Laura; Franch, Joan; Lawrie, Stephen M.; Reynolds, Rebecca M.; Vilella, Elisabet; Labad, Javier
2014-01-01
Hyperprolactinaemia, a common side effect of some antipsychotic drugs, is also present in drug-naïve psychotic patients and subjects at risk for psychosis. Recent studies in non-psychiatric populations suggest that increased prolactin may have negative effects on cognition. The aim of our study was to explore whether high plasma prolactin levels are associated with poorer cognitive functioning in subjects with early psychoses. We studied 107 participants: 29 healthy subjects and 78 subjects with an early psychosis (55 psychotic disorders with <3 years of illness, 23 high-risk subjects). Cognitive assessment was performed with the MATRICS Cognitive Consensus Cognitive Battery, and prolactin levels were determined as well as total cortisol levels in plasma. Psychopathological status was assessed and the use of psychopharmacological treatments (antipsychotics, antidepressants, benzodiazepines) recorded. Prolactin levels were negatively associated with cognitive performance in processing speed, in patients with a psychotic disorder and high-risk subjects. In the latter group, increased prolactin levels were also associated with impaired reasoning and problem solving and poorer general cognition. In a multiple linear regression analysis conducted in both high-risk and psychotic patients, controlling for potential confounders, prolactin and benzodiazepines were independently related to poorer cognitive performance in the speed of processing domain. A mediation analysis showed that both prolactin and benzodiazepine treatment act as mediators of the relationship between risperidone/paliperidone treatment and speed of processing. These results suggest that increased prolactin levels are associated with impaired processing speed in early psychosis. If these results are confirmed in future studies, strategies targeting reduction of prolactin levels may improve cognition in this population. PMID:24586772
High level language for measurement complex control based on the computer E-100I
NASA Technical Reports Server (NTRS)
Zubkov, B. V.
1980-01-01
A high-level language was designed to control the process of conducting an experiment using the computer "Elektronika-100I". Program examples are given to control the measuring and actuating devices. The procedure for including these programs in the suggested high-level language is described.
Process for solidifying high-level nuclear waste
Ross, Wayne A.
1978-01-01
The addition of a small amount of reducing agent to a mixture of a high-level radioactive waste calcine and glass frit before the mixture is melted will produce a more homogeneous glass which is leach-resistant and suitable for long-term storage of high-level radioactive waste products.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elliott, Douglas C.; Hart, Todd R.; Neuenschwander, Gary G.
Through the use of a metal catalyst, gasification of wet algae slurries can be accomplished with high levels of carbon conversion to gas at relatively low temperature (350 °C). In a pressurized-water environment (20 MPa), near-total conversion of the organic structure of the algae to gases has been achieved in the presence of a supported ruthenium metal catalyst. The process is essentially steam reforming, as there is no added oxidizer or reagent other than water. In addition, the gas produced is a medium-heating value gas due to the synthesis of high levels of methane, as dictated by thermodynamic equilibrium. As opposed to earlier work, biomass trace components were removed by processing steps so that they did not cause processing difficulties in the fixed catalyst bed tubular reactor system. As a result, the algae feedstocks, even those with high ash contents, were much more reliably processed. High conversions were obtained even with high slurry concentrations. Consistent catalyst operation in these short-term tests suggested good stability and minimal poisoning effects. High methane content in the product gas was noted with significant carbon dioxide captured in the aqueous byproduct in combination with alkali constituents and the ammonia byproduct derived from proteins in the algae. High conversion of algae to gas products was found with low levels of byproduct water contamination and low to moderate loss of carbon in the mineral separation step.
Low-level information and high-level perception: the case of speech in noise.
Nahum, Mor; Nelken, Israel; Ahissar, Merav
2008-05-20
Auditory information is processed in a fine-to-crude hierarchical scheme, from low-level acoustic information to high-level abstract representations, such as phonological labels. We now ask whether fine acoustic information, which is not retained at high levels, can still be used to extract speech from noise. Previous theories suggested either full availability of low-level information or availability that is limited by task difficulty. We propose a third alternative, based on the Reverse Hierarchy Theory (RHT), originally derived to describe the relations between the processing hierarchy and visual perception. RHT asserts that only the higher levels of the hierarchy are immediately available for perception. Direct access to low-level information requires specific conditions, and can be achieved only at the cost of concurrent comprehension. We tested the predictions of these three views in a series of experiments in which we measured the benefits from utilizing low-level binaural information for speech perception, and compared it to that predicted from a model of the early auditory system. Only auditory RHT could account for the full pattern of the results, suggesting that similar defaults and tradeoffs underlie the relations between hierarchical processing and perception in the visual and auditory modalities.
Positive Disintegration as a Process of Symmetry Breaking.
Laycraft, Krystyna
2017-04-01
This article presents an analysis of positive disintegration as a process of symmetry breaking. Symmetry breaking plays a major role in self-organized pattern formation and correlates directly with increasing complexity and functional specialization. According to Dabrowski, the creator of the Theory of Positive Disintegration, the change from lower to higher levels of human development requires a major restructuring of an individual's psychological makeup. Each level of human development is a relatively stable and coherent configuration of emotional-cognitive patterns called developmental dynamisms. Their main function is to restructure the mental structure by breaking the symmetry of a low level and bringing differentiation and then integration to higher levels. Positive disintegration is then the process of transition from a lower level of high symmetry and low complexity to higher levels of low symmetry and high complexity of the mental structure.
Thermal quenching effect of an infrared deep level in Mg-doped p-type GaN films
NASA Astrophysics Data System (ADS)
Kim, Keunjoo; Chung, Sang Jo
2002-03-01
The thermal quenching of an infrared deep level at 1.2-1.5 eV has been investigated in Mg-doped p-type GaN films, using one- and two-step annealing processes and photocurrent measurements. The deep level appeared in the one-step annealing process at a relatively high temperature of 900 °C, but disappeared in the two-step annealing process with a low-temperature step and a subsequent high-temperature step. A persistent photocurrent remained in the sample containing the deep level, while it was terminated in the sample without the deep level. This indicates that the deep level is a neutral hole center located above a quasi-Fermi level, estimated at an energy of E_{pF} = 0.1-0.15 eV above the valence band at a hole carrier concentration of 2.0-2.5×10^{17}/cm^3.
Temporal distance and person memory: thinking about the future changes memory for the past.
Wyer, Natalie A; Perfect, Timothy J; Pahl, Sabine
2010-06-01
Psychological distance has been shown to influence how people construe an event such that greater distance produces high-level construal (characterized by global or holistic processing) and lesser distance produces low-level construal (characterized by detailed or feature-based processing). The present research tested the hypothesis that construal level has carryover effects on how information about an event is retrieved from memory. Two experiments manipulated temporal distance and found that greater distance (high-level construal) improves face recognition and increases retrieval of the abstract features of an event, whereas lesser distance (low-level construal) impairs face recognition and increases retrieval of the concrete details of an event. The findings have implications for transfer-inappropriate processing accounts of face recognition and event memory, and suggest potential applications in forensic settings.
Willinger, Ulrike; Hergovich, Andreas; Schmoeger, Michaela; Deckert, Matthias; Stoettner, Susanne; Bunda, Iris; Witting, Andrea; Seidler, Melanie; Moser, Reinhilde; Kacena, Stefanie; Jaeckle, David; Loader, Benjamin; Mueller, Christian; Auff, Eduard
2017-05-01
Humour processing is a complex information-processing task that is dependent on cognitive and emotional aspects which presumably influence frame-shifting and conceptual blending, mental operations that underlie humour processing. The aim of the current study was to find distinctive groups of subjects with respect to black humour processing, intellectual capacities, mood disturbance and aggressiveness. A total of 156 adults rated black humour cartoons and conducted measurements of verbal and nonverbal intelligence, mood disturbance and aggressiveness. Cluster analysis yields three groups comprising following properties: (1) moderate black humour preference and moderate comprehension; average nonverbal and verbal intelligence; low mood disturbance and moderate aggressiveness; (2) low black humour preference and moderate comprehension; average nonverbal and verbal intelligence, high mood disturbance and high aggressiveness; and (3) high black humour preference and high comprehension; high nonverbal and verbal intelligence; no mood disturbance and low aggressiveness. Age and gender do not differ significantly, differences in education level can be found. Black humour preference and comprehension are positively associated with higher verbal and nonverbal intelligence as well as higher levels of education. Emotional instability and higher aggressiveness apparently lead to decreased levels of pleasure when dealing with black humour. These results support the hypothesis that humour processing involves cognitive as well as affective components and suggest that these variables influence the execution of frame-shifting and conceptual blending in the course of humour processing.
A second generation 50 Mbps VLSI level zero processing system prototype
NASA Technical Reports Server (NTRS)
Harris, Jonathan C.; Shi, Jeff; Speciale, Nick; Bennett, Toby
1994-01-01
Level Zero Processing (LZP) generally refers to telemetry data processing functions performed at ground facilities to remove all communication artifacts from instrument data. These functions typically include frame synchronization, error detection and correction, packet reassembly and sorting, playback reversal, merging, time-ordering, overlap deletion, and production of annotated data sets. The Data Systems Technologies Division (DSTD) at Goddard Space Flight Center (GSFC) has been developing high-performance Very Large Scale Integration Level Zero Processing Systems (VLSI LZPS) since 1989. The first VLSI LZPS prototype demonstrated a 20 Megabits per second (Mbps) capability in 1992. With a new generation of high-density Application-Specific Integrated Circuits (ASICs) and a Mass Storage System (MSS) based on the High-Performance Parallel Interface (HiPPI), a second prototype has been built that achieves full 50 Mbps performance. This paper describes the second-generation LZPS prototype based upon VLSI technologies.
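As a hedged illustration of two of the LZP functions listed above, time-ordering and overlap deletion, the Python sketch below sorts toy packet records and drops duplicates; the record layout (apid, timestamp, payload) is an assumption and not the actual VLSI LZPS format.

```python
# Minimal sketch of time-ordering and overlap (duplicate) deletion of packets.
# The (apid, timestamp, payload) layout is hypothetical, used for illustration only.
packets = [
    (11, 1002, b"C"), (11, 1000, b"A"), (11, 1001, b"B"),
    (11, 1001, b"B"),            # duplicate from an overlapping playback
    (12, 1000, b"X"),
]

def level_zero_sort(packets):
    """Sort by (apid, time) and drop exact time duplicates, keeping the first copy."""
    seen = set()
    out = []
    for pkt in sorted(packets, key=lambda p: (p[0], p[1])):
        key = (pkt[0], pkt[1])
        if key not in seen:
            seen.add(key)
            out.append(pkt)
    return out

print(level_zero_sort(packets))
# [(11, 1000, b'A'), (11, 1001, b'B'), (11, 1002, b'C'), (12, 1000, b'X')]
```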
Sleep Disrupts High-Level Speech Parsing Despite Significant Basic Auditory Processing.
Makov, Shiri; Sharon, Omer; Ding, Nai; Ben-Shachar, Michal; Nir, Yuval; Zion Golumbic, Elana
2017-08-09
The extent to which the sleeping brain processes sensory information remains unclear. This is particularly true for continuous and complex stimuli such as speech, in which information is organized into hierarchically embedded structures. Recently, novel metrics for assessing the neural representation of continuous speech have been developed using noninvasive brain recordings that have thus far only been tested during wakefulness. Here we investigated, for the first time, the sleeping brain's capacity to process continuous speech at different hierarchical levels using a newly developed Concurrent Hierarchical Tracking (CHT) approach that allows monitoring the neural representation and processing-depth of continuous speech online. Speech sequences were compiled with syllables, words, phrases, and sentences occurring at fixed time intervals such that different linguistic levels correspond to distinct frequencies. This enabled us to distinguish their neural signatures in brain activity. We compared the neural tracking of intelligible versus unintelligible (scrambled and foreign) speech across states of wakefulness and sleep using high-density EEG in humans. We found that neural tracking of stimulus acoustics was comparable across wakefulness and sleep and similar across all conditions regardless of speech intelligibility. In contrast, neural tracking of higher-order linguistic constructs (words, phrases, and sentences) was only observed for intelligible speech during wakefulness and could not be detected at all during nonrapid eye movement or rapid eye movement sleep. These results suggest that, whereas low-level auditory processing is relatively preserved during sleep, higher-level hierarchical linguistic parsing is severely disrupted, thereby revealing the capacity and limits of language processing during sleep. SIGNIFICANCE STATEMENT Despite the persistence of some sensory processing during sleep, it is unclear whether high-level cognitive processes such as speech parsing are also preserved. We used a novel approach for studying the depth of speech processing across wakefulness and sleep while tracking neuronal activity with EEG. We found that responses to the auditory sound stream remained intact; however, the sleeping brain did not show signs of hierarchical parsing of the continuous stream of syllables into words, phrases, and sentences. The results suggest that sleep imposes a functional barrier between basic sensory processing and high-level cognitive processing. This paradigm also holds promise for studying residual cognitive abilities in a wide array of unresponsive states. Copyright © 2017 the authors 0270-6474/17/377772-10$15.00/0.
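The logic of the frequency-tagging analysis behind CHT can be illustrated with a short sketch: if syllables, words and phrases recur at fixed rates, their signatures appear as spectral peaks at those rates. The rates and the synthetic "response" below are illustrative assumptions, not the study's stimuli or data.

```python
# Minimal sketch of frequency tagging: linguistic units presented at fixed rates
# leave spectral peaks at those rates in a (here synthetic) neural response.
import numpy as np

fs = 100.0                                   # Hz, sampling rate
t = np.arange(0, 60, 1 / fs)                 # 60 s of signal
rates = {"syllable": 4.0, "word": 2.0, "phrase": 1.0}   # illustrative rates

signal = sum(a * np.sin(2 * np.pi * f * t)
             for a, f in zip([1.0, 0.5, 0.3], rates.values()))
signal += np.random.default_rng(0).normal(scale=0.5, size=t.size)   # noise

spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for name, f in rates.items():
    idx = np.argmin(np.abs(freqs - f))
    print(name, f, "Hz  amplitude ~", round(float(spectrum[idx]), 3))
```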
Lumetta, Gregg J; Braley, Jenifer C; Peterson, James M; Bryan, Samuel A; Levitskaia, Tatiana G
2012-06-05
Removing phosphate from alkaline high-level waste sludges at the Department of Energy's Hanford Site in Washington State is necessary to increase the waste loading in the borosilicate glass waste form that will be used to immobilize the highly radioactive fraction of these wastes. We are developing a process which first leaches phosphate from the high-level waste solids with aqueous sodium hydroxide, and then isolates the phosphate by precipitation with calcium oxide. Tests with actual tank waste confirmed that this process is an effective method of phosphate removal from the sludge and offers an additional option for managing the phosphorus in the Hanford tank waste solids. The presence of vibrationally active species, such as nitrate and phosphate ions, in the tank waste processing streams makes the phosphate removal process an ideal candidate for monitoring by Raman or infrared spectroscopic means. As a proof-of-principle demonstration, Raman and Fourier transform infrared (FTIR) spectra were acquired for all phases during a test of the process with actual tank waste. Quantitative determination of phosphate, nitrate, and sulfate in the liquid phases was achieved by Raman spectroscopy, demonstrating the applicability of Raman spectroscopy for the monitoring of these species in the tank waste process streams.
Pangalos, George
2001-01-01
Background The Internet provides many advantages when used for interaction and data sharing among health care providers, patients, and researchers. However, the advantages provided by the Internet come with a significantly greater element of risk to the confidentiality, integrity, and availability of information. It is therefore essential that Health Care Establishments processing and exchanging medical data use an appropriate security policy. Objective To develop a High Level Security Policy for the processing of medical data and their transmission through the Internet, which is a set of high-level statements intended to guide Health Care Establishment personnel who process and manage sensitive health care information. Methods We developed the policy based on a detailed study of the existing framework in the EU countries, USA, and Canada, and on consultations with users in the context of the Intranet Health Clinic project. More specifically, this paper has taken into account the major directives, technical reports, law, and recommendations that are related to the protection of individuals with regard to the processing of personal data, and the protection of privacy and medical data on the Internet. Results We present a High Level Security Policy for Health Care Establishments, which includes a set of 7 principles and 45 guidelines detailed in this paper. The proposed principles and guidelines have been made as generic and open to specific implementations as possible, to provide for maximum flexibility and adaptability to local environments. The High Level Security Policy establishes the basic security requirements that must be addressed to use the Internet to safely transmit patient and other sensitive health care information. Conclusions The High Level Security Policy is primarily intended for large Health Care Establishments in Europe, USA, and Canada. It is clear however that the general framework presented here can only serve as reference material for developing an appropriate High Level Security Policy in a specific implementation environment. When implemented in specific environments, these principles and guidelines must also be complemented by measures, which are more specific. Even when a High Level Security Policy already exists in an institution, it is advisable that the management of the Health Care Establishment periodically revisits it to see whether it should be modified or augmented. PMID:11720956
Ilioudis, C; Pangalos, G
2001-01-01
The Internet provides many advantages when used for interaction and data sharing among health care providers, patients, and researchers. However, the advantages provided by the Internet come with a significantly greater element of risk to the confidentiality, integrity, and availability of information. It is therefore essential that Health Care Establishments processing and exchanging medical data use an appropriate security policy. To develop a High Level Security Policy for the processing of medical data and their transmission through the Internet, which is a set of high-level statements intended to guide Health Care Establishment personnel who process and manage sensitive health care information. We developed the policy based on a detailed study of the existing framework in the EU countries, USA, and Canada, and on consultations with users in the context of the Intranet Health Clinic project. More specifically, this paper has taken into account the major directives, technical reports, law, and recommendations that are related to the protection of individuals with regard to the processing of personal data, and the protection of privacy and medical data on the Internet. We present a High Level Security Policy for Health Care Establishments, which includes a set of 7 principles and 45 guidelines detailed in this paper. The proposed principles and guidelines have been made as generic and open to specific implementations as possible, to provide for maximum flexibility and adaptability to local environments. The High Level Security Policy establishes the basic security requirements that must be addressed to use the Internet to safely transmit patient and other sensitive health care information. The High Level Security Policy is primarily intended for large Health Care Establishments in Europe, USA, and Canada. It is clear however that the general framework presented here can only serve as reference material for developing an appropriate High Level Security Policy in a specific implementation environment. When implemented in specific environments, these principles and guidelines must also be complemented by measures, which are more specific. Even when a High Level Security Policy already exists in an institution, it is advisable that the management of the Health Care Establishment periodically revisits it to see whether it should be modified or augmented.
ERIC Educational Resources Information Center
Huang, Yueh-Min; Shadiev, Rustam; Sun, Ai; Hwang, Wu-Yuin; Liu, Tzu-Yu
2017-01-01
For this study the researchers designed learning activities to enhance students' high level cognitive processes. Students learned new information in a classroom setting and then applied and analyzed their new knowledge in familiar authentic contexts by taking pictures of objects found there, describing them, and sharing their homework with peers.…
Breaking continuous flash suppression: competing for consciousness on the pre-semantic battlefield
Gayet, Surya; Van der Stigchel, Stefan; Paffen, Chris L. E.
2014-01-01
Traditionally, interocular suppression is believed to disrupt high-level (i.e., semantic or conceptual) processing of the suppressed visual input. The development of a new experimental paradigm, breaking continuous flash suppression (b-CFS), has caused a resurgence of studies demonstrating high-level processing of visual information in the absence of visual awareness. In this method, the time it takes for interocularly suppressed stimuli to breach the threshold of visibility is regarded as a measure of access to awareness. The aim of the current review is twofold. First, we provide an overview of the literature using this b-CFS method, while making a distinction between two types of studies: those in which suppression durations are compared between different stimulus classes (such as upright faces versus inverted faces), and those in which suppression durations are compared for stimuli that either match or mismatch concurrently available information (such as a colored target that either matches or mismatches a color retained in working memory). Second, we aim at dissociating high-level processing from low-level (i.e., crude visual) processing of the suppressed stimuli. For this purpose, we include a thorough review of the control conditions that are used in these experiments. Additionally, we provide recommendations for proper control conditions that we deem crucial for disentangling high-level from low-level effects. Based on this review, we argue that crude visual processing suffices for explaining differences in breakthrough times reported using b-CFS. As such, we conclude that there is as yet no reason to assume that interocularly suppressed stimuli receive full semantic analysis. PMID:24904476
Mercury Phase II Study - Mercury Behavior across the High-Level Waste Evaporator System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bannochie, C. J.; Crawford, C. L.; Jackson, D. G.
2016-06-17
The Mercury Program team’s effort continues to develop more fundamental information concerning mercury behavior across the liquid waste facilities and unit operations. Previously, the team examined the mercury chemistry across salt processing, including the Actinide Removal Process/Modular Caustic Side Solvent Extraction Unit (ARP/MCU), and the Defense Waste Processing Facility (DWPF) flowsheets. This report documents the data and understanding of mercury across the high level waste 2H and 3H evaporator systems.
Effects of consumer food preparation on acrylamide formation.
Jackson, Lauren S; Al-Taher, Fadwa
2005-01-01
Acrylamide is formed in high-carbohydrate foods during high temperature processes such as frying, baking, roasting and extrusion. Although acrylamide is known to form during industrial processing of food, high levels of the chemical have been found in home-cooked foods, mainly potato- and grain-based products. This chapter will focus on the effects of cooking conditions (e.g. time/temperature) on acrylamide formation in consumer-prepared foods, the use of surface color (browning) as an indicator of acrylamide levels in some foods, and methods for reducing acrylamide levels in home-prepared foods. As with commercially processed foods, acrylamide levels in home-prepared foods tend to increase with cooking time and temperature. In experiments conducted at the NCFST, we found that acrylamide levels in cooked food depended greatly on the cooking conditions and the degree of "doneness", as measured by the level of surface browning. For example, French fries fried at 150-190 degrees C for up to 10 min had acrylamide levels of 55 to 2130 microg/kg (wet weight), with the highest levels in the most processed (highest frying times/temperatures) and the most highly browned fries. Similarly, more acrylamide was formed in "dark" toasted bread slices (43.7-610.7 microg/kg wet weight), than "light" (8.27-217.5 microg/kg) or "medium" (10.9-213.7 microg/kg) toasted slices. Analysis of the surface color by colorimetry indicated that some components of surface color ("a" and "L" values) correlated highly with acrylamide levels. This indicates that the degree of surface browning could be used as an indicator of acrylamide formation during cooking. Soaking raw potato slices in water before frying was effective at reducing acrylamide levels in French fries. Additional studies are needed to develop practical methods for reducing acrylamide formation in home-prepared foods without changing the acceptability of these foods.
Eventual Participation in GMES of Institute of Geodesy and Geoinformation
NASA Astrophysics Data System (ADS)
Balodis, J.; Janpaule, I.; Rubans, A.; Zarinjsh, A.; Abele, M.; Ubelis, A.; Cekule, M.
2012-04-01
A new project, "Fotonika-LV", has commenced at the University of Latvia (LU). Three institutes, namely the Institute of Atomic Physics and Spectroscopy, the Institute of Astronomy and the Institute of Geodesy and Geoinformation, have secured funding for the proposed development of photonics in applied research, and several highly advanced partner institutions have agreed to take part in the planned research activities. Photonics plays an important role in the R&D of the Institute of Geodesy and Geoinformation of the University of Latvia (LU GGI). The Institute applies space-related technologies to environmental studies in Latvia. Photonics has been applied in satellite laser ranging systems: a small, modern satellite laser ranging (SLR) system and its control software have recently been developed at the Institute. The SLR system will be used for regular observations of low Earth orbiters (LAGEOS, GOCE, GRACE, ERS2, ENVISAT, CRYOSAT, etc.) within the framework of the ILRS, and test observations have yielded results of high quality. Sentinel mission satellites could be observed as well, provided they carry laser retroreflectors. The developed SLR system is small in size and could be adapted for mobile use at a variety of sites if needed. Another SLR instrument is under construction, with a planned application in remote sensing satellite calibration using a white laser beam; it could additionally be used for Galileo and LAGEOS observations. However, to use SLR for Galileo and other higher-orbit satellites, highly sensitive photonics are required, and suitable optical devices are needed for SLR observations in daylight. In addition, the sky in the Baltic region is frequently covered by thin clouds, which makes satellite observations complicated and less productive, so an optimal choice of photonics is needed in each case. A combination of CCD matrices with purpose-built optical devices has been used in the SLR system for visual guidance of satellite tracking. R&D in CCD matrix construction and application technologies is advancing in several countries, and the application of advanced CCD techniques in SLR systems is a very powerful tool; following these developments and applying photonics achievements in various geodetic instruments is therefore very important for the Institute. Development of a mobile zenith camera for the determination of vertical deflection is also being carried out. The zenith camera will serve studies of anomalies of the regional gravity field. The observation procedure is based on the analysis of stellar sky images obtained on CCD matrices, similar to a star tracker; both the resolution and the sensitivity of the matrices are key elements for high-accuracy vertical deflection determination. The application of Global Navigation Satellite Systems (GNSS) in geodesy provides a powerful tool for the verification and validation of the heights of geodetic levelling benchmarks established long ago. Differential GNSS and RTK methods are very useful for identifying vertical displacements of the landscape by inspecting deformations of levelling networks.
Within the European framework of the ground-based GNSS positioning augmentation system EUPOS®, the local EUPOS®-Riga continuously operating geodetic reference system has been developed by LU GGI in co-operation with the Rigas GeoMetrs land surveying company. The system consists of a network of 5 GNSS base stations located within the city of Riga and has been thoroughly investigated and controlled. The GNSS RTCM observation corrections produced by the EUPOS®-Riga system can be used for high-precision position determination in various navigation and land surveying applications. All of the developments carried out at the Institute serve the study and monitoring of environmental changes; satellite imagery maps are used as well. Knowledge of photonics opens up additional capabilities in the development of airborne and spaceborne applications for Earth observation, and the ongoing work is being accelerated in co-operation with Riga Technical University. The scientific staff of the "Fotonika-LV" project and LU GGI look forward to eventual participation in the GMES project.
Horizontal tuning for faces originates in high-level Fusiform Face Area.
Goffaux, Valerie; Duecker, Felix; Hausfeld, Lars; Schiltz, Christine; Goebel, Rainer
2016-01-29
Recent work indicates that the specialization of face visual perception relies on the privileged processing of horizontal angles of facial information. This suggests that stimulus properties assumed to be fully resolved in primary visual cortex (V1; e.g., orientation) in fact determine human vision until high-level stages of processing. To address this hypothesis, the present fMRI study explored the orientation sensitivity of V1 and high-level face-specialized ventral regions such as the Occipital Face Area (OFA) and Fusiform Face Area (FFA) to different angles of face information. Participants viewed face images filtered to retain information at horizontal, vertical or oblique angles. Filtered images were viewed upright, inverted and (phase-)scrambled. FFA responded most strongly to the horizontal range of upright face information; its activation pattern reliably separated horizontal from oblique ranges, but only when faces were upright. Moreover, activation patterns induced in the right FFA and the OFA by upright and inverted faces could only be separated based on horizontal information. This indicates that the specialized processing of upright face information in the OFA and FFA essentially relies on the encoding of horizontal facial cues. This pattern was not passively inherited from V1, which was found to respond less strongly to horizontal than other orientations likely due to adaptive whitening. Moreover, we found that orientation decoding accuracy in V1 was impaired for stimuli containing no meaningful shape. By showing that primary coding in V1 is influenced by high-order stimulus structure and that high-level processing is tuned to selective ranges of primary information, the present work suggests that primary and high-level levels of the visual system interact in order to modulate the processing of certain ranges of primary information depending on their relevance with respect to the stimulus and task at hand. Copyright © 2015 Elsevier Ltd. All rights reserved.
Michaeli, Yael; Sinik, Keren; Haus-Cohen, Maya; Reiter, Yoram
2012-04-01
Short-lived protein translation products are proposed to be a major source of substrates for major histocompatibility complex (MHC) class I antigen processing and presentation; however, a direct link between protein stability and the presentation level of MHC class I-peptide complexes has not been made. We have recently discovered that the peptide Tyr(369-377), derived from the tyrosinase protein, is highly presented by HLA-A2 on the surface of melanoma cells. To examine the molecular mechanisms responsible for this presentation, we compared characteristics of tyrosinase in melanoma cell lines that present high or low levels of HLA-A2-Tyr(369-377) complexes. We found no correlation between mRNA levels and the levels of HLA-A2-Tyr(369-377) presentation. Co-localization experiments revealed that, in cell lines presenting low levels of HLA-A2-Tyr(369-377) complexes, tyrosinase co-localizes with LAMP-1, a melanosome marker, whereas in cell lines presenting high HLA-A2-Tyr(369-377) levels, tyrosinase localizes to the endoplasmic reticulum. We also observed differences in tyrosinase molecular weight and glycosylation composition as well as major differences in protein stability (t1/2). By stabilizing the tyrosinase protein, we observed a dramatic decrease in HLA-A2-tyrosinase presentation. Our findings suggest that aberrant processing and instability of tyrosinase are responsible for the high presentation of HLA-A2-Tyr(369-377) complexes and thus shed new light on the relationship between intracellular processing, stability of proteins, and MHC-restricted peptide presentation. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Santos, Juliana Lane Paixão Dos; Samapundo, Simbarashe; Biyikli, Ayse; Van Impe, Jan; Akkermans, Simen; Höfte, Monica; Abatih, Emmanuel Nji; Sant'Ana, Anderson S; Devlieghere, Frank
2018-05-19
Heat-resistant moulds (HRMs) are well known for their ability to survive pasteurization and spoil high-acid food products, which is of great concern for processors of fruit-based products worldwide. Whilst the majority of the studies on HRMs over the last decades have addressed their inactivation, few data are currently available regarding their contamination levels in fruit and fruit-based products. Thus, this study aimed to quantify and identify heat-resistant fungal ascospores from samples collected throughout the processing of pasteurized high-acid fruit products. In addition, an assessment on the effect of processing on the contamination levels of HRMs in these products was carried out. A total of 332 samples from 111 batches were analyzed from three processing plants (=three processing lines): strawberry puree (n = 88, Belgium), concentrated orange juice (n = 90, Brazil) and apple puree (n = 154, the Netherlands). HRMs were detected in 96.4% (107/111) of the batches and 59.3% (197/332) of the analyzed samples. HRMs were present in 90.9% of the samples from the strawberry puree processing line (1-215 ascospores/100 g), 46.7% of the samples from the orange juice processing line (1-200 ascospores/100 g) and 48.7% of samples from the apple puree processing line (1-84 ascospores/100 g). Despite the high occurrence, the majority (76.8%, 255/332) of the samples were either not contaminated or presented low levels of HRMs (<10 ascospores/100 g). For both strawberry puree and concentrated orange juice, processing had no statistically significant effect on the levels of HRMs (p > 0.05). On the contrary, a significant reduction (p < 0.05) in HRMs levels was observed during the processing of apple puree. Twelve species were identified belonging to four genera - Byssochlamys, Aspergillus with Neosartorya-type ascospores, Talaromyces and Rasamsonia. N. fumigata (23.6%), N. fischeri (19.1%) and B. nivea (5.5%) were the predominant species in pasteurized products. The quantitative data (contamination levels of HRMs) were fitted to exponential distributions and will ultimately be included as input to spoilage risk assessment models which would allow better control of the spoilage of heat treated fruit products caused by heat-resistant moulds. Copyright © 2018 Elsevier B.V. All rights reserved.
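The final modelling step described above, fitting the observed contamination levels to exponential distributions, can be sketched as follows; the count values, the use of scipy's expon.fit, and the exceedance-probability query are illustrative assumptions rather than the study's actual data or procedure.

```python
# Sketch: fit heat-resistant mould counts (ascospores/100 g) to an
# exponential distribution, as input to a spoilage risk model.
# The sample values are invented placeholders, not data from the study.
import numpy as np
from scipy import stats

counts = np.array([1, 2, 2, 3, 5, 8, 12, 20, 35, 84], dtype=float)

loc, scale = stats.expon.fit(counts, floc=0)   # fix location at zero
rate = 1.0 / scale                             # lambda of the exponential
print(f"estimated rate = {rate:.3f} per (ascospores/100 g)")

# Probability that a sample exceeds 10 ascospores/100 g under the fit
print(f"P(X > 10) = {stats.expon.sf(10, loc=loc, scale=scale):.3f}")
```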
ERIC Educational Resources Information Center
Ekici, Didem Inel
2016-01-01
This study aimed to determine Turkish junior high-school students' perceptions of the general problem-solving process. The Turkish junior high-school students' perceptions of the general problem-solving process were examined in relation to their gender, grade level, age and their grade point with regards to the science course identified in the…
Behavioral inhibition and anxiety: The moderating roles of inhibitory control and attention shifting
White, Lauren K.; McDermott, Jennifer Martin; Degnan, Kathryn A.; Henderson, Heather A.; Fox, Nathan A.
2013-01-01
Behavioral inhibition (BI), a temperament identified in early childhood, is associated with social reticence in childhood and an increased risk for anxiety problems in adolescence and adulthood. However, not all behaviorally inhibited children remain reticent or develop an anxiety disorder. One possible mechanism accounting for the variability in the developmental trajectories of BI is a child’s ability to successfully recruit cognitive processes involved in the regulation of negative reactivity. However, separate cognitive processes may differentially moderate the association between BI and later anxiety problems. The goal of the current study was to examine how two cognitive processes - attention shifting and inhibitory control - laboratory assessed at 48 months of age moderated the association between 24-month BI and anxiety symptoms in the preschool years. Results revealed that high levels of attention shifting decreased the risk for anxiety symptoms in children with high levels of BI, whereas high levels of inhibitory control increased this risk for anxiety symptoms. These findings suggest that different cognitive processes may influence relative levels of risk or adaptation depending upon a child’s temperamental reactivity. PMID:21301953
White, Lauren K; McDermott, Jennifer Martin; Degnan, Kathryn A; Henderson, Heather A; Fox, Nathan A
2011-07-01
Behavioral inhibition (BI), a temperament identified in early childhood, is associated with social reticence in childhood and an increased risk for anxiety problems in adolescence and adulthood. However, not all behaviorally inhibited children remain reticent or develop an anxiety disorder. One possible mechanism accounting for the variability in the developmental trajectories of BI is a child's ability to successfully recruit cognitive processes involved in the regulation of negative reactivity. However, separate cognitive processes may differentially moderate the association between BI and later anxiety problems. The goal of the current study was to examine how two cognitive processes-attention shifting and inhibitory control-laboratory assessed at 48 months of age moderated the association between 24-month BI and anxiety symptoms in the preschool years. Results revealed that high levels of attention shifting decreased the risk for anxiety problems in children with high levels of BI, whereas high levels of inhibitory control increased this risk for anxiety symptoms. These findings suggest that different cognitive processes may influence relative levels of risk or adaptation depending upon a child's temperamental reactivity.
NASA Astrophysics Data System (ADS)
Rexer, Moritz; Hirt, Christian
2015-09-01
Classical degree variance models (such as Kaula's rule or the Tscherning-Rapp model) often rely on low-resolution gravity data and so are subject to extrapolation when used to describe the decay of the gravity field at short spatial scales. This paper presents a new degree variance model based on the recently published GGMplus near-global land areas 220 m resolution gravity maps (Geophys Res Lett 40(16):4279-4283, 2013). We investigate and use a 2D-DFT (discrete Fourier transform) approach to transform GGMplus gravity grids into degree variances. The method is described in detail and its approximation errors are studied using closed-loop experiments. Focus is placed on tiling, azimuth averaging, and windowing effects in the 2D-DFT method and on analytical fitting of degree variances. Approximation errors of the 2D-DFT procedure on the (spherical harmonic) degree variance are found to be at the 10-20 % level. The importance of the reference surface (sphere, ellipsoid or topography) of the gravity data for correct interpretation of degree variance spectra is highlighted. The effect of the underlying mass arrangement (spherical or ellipsoidal approximation) on the degree variances is found to be crucial at short spatial scales. A rule-of-thumb for transformation of spectra between spherical and ellipsoidal approximation is derived. Application of the 2D-DFT on GGMplus gravity maps yields a new degree variance model to degree 90,000. The model is supported by GRACE, GOCE, EGM2008 and forward-modelled gravity at 3 billion land points over all land areas within the SRTM data coverage and provides gravity signal variances at the surface of the topography. The model yields omission errors of 9 mGal for gravity (1.5 cm for geoid effects) at scales of 10 km, 4 mGal (1 mm) at 2-km scales, and 2 mGal (0.2 mm) at 1-km scales.
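A greatly simplified sketch of the 2D-DFT step (transforming a planar gravity grid into an azimuth-averaged power spectrum and mapping wavenumber to an approximate spherical harmonic degree) follows; the grid size, spacing, window choice, synthetic data, and the flat-Earth mapping n ≈ 2πRk are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: estimate an isotropic (azimuth-averaged) power spectrum of a
# planar gravity grid with a 2D FFT and map wavenumber to an approximate
# spherical harmonic degree via n ~ 2*pi*R*k.  Grid size, spacing and
# the synthetic data are illustrative assumptions only.
import numpy as np

R = 6371e3                 # mean Earth radius [m]
dx = 220.0                 # grid spacing [m], cf. GGMplus resolution
N = 512                    # grid points per side
rng = np.random.default_rng(0)
grav = rng.standard_normal((N, N))   # placeholder gravity anomalies [mGal]

# 2D DFT with a taper to reduce edge (windowing) effects
window = np.hanning(N)[:, None] * np.hanning(N)[None, :]
spec = np.abs(np.fft.fftshift(np.fft.fft2(grav * window))) ** 2

# radial wavenumber of every FFT bin [cycles/m]
freqs = np.fft.fftshift(np.fft.fftfreq(N, d=dx))
kx, ky = np.meshgrid(freqs, freqs)
kr = np.hypot(kx, ky)

# azimuth-average the power into radial wavenumber bins
k_bins = np.linspace(0.0, kr.max(), 200)
which = np.digitize(kr.ravel(), k_bins)
power = np.array([spec.ravel()[which == i].mean() if np.any(which == i) else 0.0
                  for i in range(1, len(k_bins))])

# map bin-centre wavenumbers to approximate spherical harmonic degree
k_mid = 0.5 * (k_bins[:-1] + k_bins[1:])
degree = 2.0 * np.pi * R * k_mid
print(degree[:5].astype(int))
```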
Neural Correlates of Subliminal Language Processing
Axelrod, Vadim; Bar, Moshe; Rees, Geraint; Yovel, Galit
2015-01-01
Language is a high-level cognitive function, so exploring the neural correlates of unconscious language processing is essential for understanding the limits of unconscious processing in general. The results of several functional magnetic resonance imaging studies have suggested that unconscious lexical and semantic processing is confined to the posterior temporal lobe, without involvement of the frontal lobe—the regions that are indispensable for conscious language processing. However, previous studies employed a similarly designed masked priming paradigm with briefly presented single and contextually unrelated words. It is thus possible, that the stimulation level was insufficiently strong to be detected in the high-level frontal regions. Here, in a high-resolution fMRI and multivariate pattern analysis study we explored the neural correlates of subliminal language processing using a novel paradigm, where written meaningful sentences were suppressed from awareness for extended duration using continuous flash suppression. We found that subjectively and objectively invisible meaningful sentences and unpronounceable nonwords could be discriminated not only in the left posterior superior temporal sulcus (STS), but critically, also in the left middle frontal gyrus. We conclude that frontal lobes play a role in unconscious language processing and that activation of the frontal lobes per se might not be sufficient for achieving conscious awareness. PMID:24557638
ERIC Educational Resources Information Center
Hollander, Cara; de Andrade, Victor Manuel
2014-01-01
Schools located near airports are exposed to high levels of noise, which can cause cognitive, health, and hearing problems. Therefore, this study sought to explore whether this noise may cause auditory language processing (ALP) problems in primary school learners. Sixty-one children attending schools exposed to high levels of noise were matched…
How High School Students Select a College.
ERIC Educational Resources Information Center
Gilmour, Joseph E., Jr.; And Others
The college selection process used by high school students was studied and a paradigm that describes the process was developed, based on marketing theory concerning consumer behavior. Primarily college freshmen and high school seniors were interviewed, and a few high school juniors and upper-level college students were surveyed to determine…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, K.R.; Hansen, F.R.; Napolitano, L.M.
1992-01-01
DART (DSP Array for Reconfigurable Tasks) is a parallel architecture of two high-performance DSP (digital signal processing) chips with the flexibility to handle a wide range of real-time applications. Each of the 32-bit floating-point DSP processors in DART is programmable in a high-level language ("C" or Ada). We have added extensions to the real-time operating system used by DART in order to support parallel processing. The combination of high-level language programmability, a real-time operating system, and parallel processing support significantly reduces the development cost of application software for signal processing and control applications. We have demonstrated this capability by using DART to reconstruct images in the prototype VIP (Video Imaging Projectile) ground station.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, K.R.; Hansen, F.R.; Napolitano, L.M.
1992-01-01
DART (DSP Array for Reconfigurable Tasks) is a parallel architecture of two high-performance DSP (digital signal processing) chips with the flexibility to handle a wide range of real-time applications. Each of the 32-bit floating-point DSP processors in DART is programmable in a high-level language ("C" or Ada). We have added extensions to the real-time operating system used by DART in order to support parallel processing. The combination of high-level language programmability, a real-time operating system, and parallel processing support significantly reduces the development cost of application software for signal processing and control applications. We have demonstrated this capability by using DART to reconstruct images in the prototype VIP (Video Imaging Projectile) ground station.
Towards Implementation of a Generalized Architecture for High-Level Quantum Programming Language
NASA Astrophysics Data System (ADS)
Ameen, El-Mahdy M.; Ali, Hesham A.; Salem, Mofreh M.; Badawy, Mahmoud
2017-08-01
This paper investigates a novel architecture for the problem of quantum computer programming. A generalized architecture for a high-level quantum programming language is proposed, enabling the evolution from complicated quantum-specific programming to high-level, hardware-independent quantum programming. The proposed architecture receives high-level source code and automatically transforms it into the equivalent quantum representation. The architecture involves two layers, the programmer layer and the compilation layer, implemented as three main stages: pre-classification, classification, and post-classification. The basic building block of each stage is divided into subsequent phases, each implemented to perform the required transformations from one representation to another. A verification process using a case study investigated the ability of the compiler to perform all transformation processes. Experimental results showed that the proposed compiler achieves a correlation coefficient of approximately R ≈ 1 between outputs and targets. A clear improvement was also obtained in the time consumed by the optimization process compared to other techniques: in online optimization the consumed time increases exponentially with the required accuracy, whereas in the proposed offline optimization process it increases only gradually.
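A toy illustration of the staged translation described above is sketched below; the stage names follow the abstract, but the rule content and the emitted pseudo-quantum-assembly are entirely hypothetical and do not represent the authors' compiler.

```python
# Toy pipeline sketch: a high-level statement passes through the three
# stages named in the abstract.  The transformations themselves are
# hypothetical placeholders, not the authors' actual compiler rules.
def pre_classification(source: str) -> list[str]:
    # tokenize and normalize the high-level source
    return source.replace("=", " = ").split()

def classification(tokens: list[str]) -> dict:
    # decide which tokens map to classical and which to quantum constructs
    return {"target": tokens[0],
            "op": "hadamard" if any("superpose" in t for t in tokens) else "assign",
            "args": tokens[2:]}

def post_classification(ir: dict) -> str:
    # emit a quantum-assembly-like representation
    if ir["op"] == "hadamard":
        return f"H {ir['target']}"
    return f"MOV {ir['target']}, {' '.join(ir['args'])}"

print(post_classification(classification(pre_classification("q0 = superpose()"))))
# -> "H q0"
```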
Serum irisin and myostatin levels after 2 weeks of high-altitude climbing.
Śliwicka, Ewa; Cisoń, Tomasz; Kasprzak, Zbigniew; Nowak, Alicja; Pilaczyńska-Szcześniak, Łucja
2017-01-01
Exposure to high-altitude hypoxia causes physiological and metabolic adaptive changes by disturbing homeostasis. Hypoxia-related changes in skeletal muscle affect the closely interconnected energy and regeneration processes. The balance between protein synthesis and degradation in the skeletal muscle is regulated by several molecules such as myostatin, cytokines, vitamin D, and irisin. This study investigates changes in irisin and myostatin levels in male climbers after a 2-week high-altitude expedition, and their association with 25(OH)D and indices of inflammatory processes. The study was performed in 8 men aged between 23 and 31 years, who participated in a 2-week climbing expedition in the Alps. The measurements of body composition and serum concentrations of irisin, myostatin, 25(OH)D, interleukin-6, myoglobin, high-sensitivity C-reactive protein, osteoprotegerin, and high-sensitivity soluble receptor activator of NF-κB ligand (sRANKL) were performed before and after the expedition. The 2-week exposure to hypobaric hypoxia caused a significant decrease in body mass, body mass index (BMI) and fat-free mass, and in irisin and 25(OH)D levels. On the other hand, significant increases in the levels of myoglobin, high-sensitivity C-reactive protein, interleukin-6, and osteoprotegerin were noted. The observed correlations of irisin with 25(OH)D levels, as well as of myostatin levels with inflammatory markers and the OPG/RANKL ratio, indicate that these myokines may be involved in energy-related processes and skeletal muscle regeneration in response to the 2-week exposure to hypobaric hypoxia.
Processed foods and the nutrition transition: evidence from Asia.
Baker, P; Friel, S
2014-07-01
This paper elucidates the role of processed foods and beverages in the 'nutrition transition' underway in Asia. Processed foods tend to be high in nutrients associated with obesity and diet-related non-communicable diseases: refined sugar, salt, saturated and trans-fats. This paper identifies the most significant 'product vectors' for these nutrients and describes changes in their consumption in a selection of Asian countries. Sugar, salt and fat consumption from processed foods has plateaued in high-income countries, but has rapidly increased in the lower-middle and upper-middle-income countries. Relative to sugar and salt, fat consumption in the upper-middle- and lower-middle-income countries is converging most rapidly with that of high-income countries. Carbonated soft drinks, baked goods, and oils and fats are the most significant vectors for sugar, salt and fat respectively. At the regional level there appears to be convergence in consumption patterns of processed foods, but country-level divergences including high levels of consumption of oils and fats in Malaysia, and soft drinks in the Philippines and Thailand. This analysis suggests that more action is needed by policy-makers to prevent or mitigate processed food consumption. Comprehensive policy and regulatory approaches are most likely to be effective in achieving these goals. © 2014 The Authors. obesity reviews © 2014 World Obesity.
Nonlinear Associations Between Co-Rumination and Both Social Support and Depression Symptoms.
Ames-Sikora, Alyssa M; Donohue, Meghan Rose; Tully, Erin C
2017-08-18
Co-ruminating about one's problems appears to involve both beneficial self-disclosure and harmful rumination, suggesting that moderate levels may be the most adaptive. This study used nonlinear regression to determine whether moderate levels of self-reported co-rumination in relationships with a sibling, parent, friend, and romantic partner are linked to the highest levels of self-perceived social support and lowest levels of self-reported depression symptoms in 175 emerging adults (77% female; M = 19.66 years). As expected, moderate co-rumination was associated with high social support across all four relationship types, but, somewhat unexpectedly, high levels of co-rumination were also associated with high social support. As predicted, moderate levels of co-rumination with friends and siblings were associated with low levels of depression. Contrary to hypotheses, high levels of co-rumination were associated with high depression within romantic relationships. Co-rumination with a parent did not have a linear or quadratic association with depression. These findings suggest that high co-ruminating in supportive relationships and to a lesser extent low co-ruminating in unsupportive relationships are maladaptive interpersonal processes but that co-rumination's relation to depression depends on the co-ruminating partner. Psychotherapies for depression may target these maladaptive processes by supporting clients' development of balanced self-focused negative talk.
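The quadratic-regression logic behind testing for an inverted-U association can be sketched as follows; the data are synthetic and the variable names are placeholders, so this illustrates only the shape of the analysis, not the study's actual model.

```python
# Sketch: test a quadratic (inverted-U) association between co-rumination
# and perceived social support.  The data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
corumination = rng.uniform(0, 10, 175)                 # predictor
support = (-0.4 * (corumination - 5.0) ** 2 + 20.0     # built-in inverted U
           + rng.normal(0, 1.5, corumination.size))    # noise

# fit support = b0 + b1*x + b2*x^2; a negative b2 indicates an inverted U
b2, b1, b0 = np.polyfit(corumination, support, deg=2)
print(f"quadratic term b2 = {b2:.3f} (negative => inverted-U association)")
```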
Bilingualism and Processing of Elementary Cognitive Tasks by Chicano Adolescents.
ERIC Educational Resources Information Center
Nanez, Jose E., Sr.; Padilla, Raymond V.
1995-01-01
Two experiments conducted with 49 Chicano high school students, aged 15-17, found that balanced-proficient bilingual speakers had slower rates of cognitive information processing at the level of short-term memory than did unbalanced bilingual speakers. The groups did not differ in rates of cognitive information processing at the simpler level of…
Expectations of Faculty, Parents, and Students for Due Process in Campus Disciplinary Hearings.
ERIC Educational Resources Information Center
Janosik, Steven M.
2001-01-01
A sample of 464 faculty members, parents, and students responded to a questionnaire that assessed their expectations for due process in campus disciplinary hearings. Respondents indicated they expected high levels of due process would be provided in suspension-level campus disciplinary hearings. The three groups differed on specific due process…
Applications of massively parallel computers in telemetry processing
NASA Technical Reports Server (NTRS)
El-Ghazawi, Tarek A.; Pritchard, Jim; Knoble, Gordon
1994-01-01
Telemetry processing refers to the reconstruction of full-resolution raw instrumentation data with the artifacts of space and ground recording and transmission removed. Being the first processing phase of satellite data, this process is also referred to as level-zero processing. This study is aimed at investigating the use of massively parallel computing technology in providing level-zero processing to spaceflights that adhere to the recommendations of the Consultative Committee for Space Data Systems (CCSDS). The workload characteristics of level-zero processing are used to identify processing requirements in high-performance computing systems. An example of level-zero functions on a SIMD MPP, such as the MasPar, is discussed. The requirements in this paper are based in part on the Earth Observing System (EOS) Data and Operation System (EDOS).
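As a concrete illustration of the first level-zero function, frame synchronization can be reduced to locating the standard CCSDS attached sync marker (0x1ACFFC1D) in the raw byte stream; the frame length and test data below are hypothetical, and real systems also handle bit slips and polarity inversion, which this byte-aligned sketch ignores.

```python
# Sketch: byte-aligned frame synchronization against the CCSDS attached
# sync marker (0x1ACFFC1D).  The frame length is a hypothetical value.
ASM = bytes.fromhex("1ACFFC1D")
FRAME_LEN = 1115          # hypothetical transfer frame length in bytes

def find_frames(stream: bytes):
    """Yield the frame payload that follows each sync marker."""
    i = stream.find(ASM)
    while i != -1:
        start = i + len(ASM)
        frame = stream[start:start + FRAME_LEN]
        if len(frame) == FRAME_LEN:
            yield frame
        i = stream.find(ASM, start + FRAME_LEN)

if __name__ == "__main__":
    data = b"\x00" * 3 + ASM + b"\xAA" * FRAME_LEN + ASM + b"\xBB" * FRAME_LEN
    print(sum(1 for _ in find_frames(data)))   # -> 2
```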
A global "imaging'' view on systems approaches in immunology.
Ludewig, Burkhard; Stein, Jens V; Sharpe, James; Cervantes-Barragan, Luisa; Thiel, Volker; Bocharov, Gennady
2012-12-01
The immune system exhibits an enormous complexity. High throughput methods such as the "-omic'' technologies generate vast amounts of data that facilitate dissection of immunological processes at ever finer resolution. Using high-resolution data-driven systems analysis, causal relationships between complex molecular processes and particular immunological phenotypes can be constructed. However, processes in tissues, organs, and the organism itself (so-called higher level processes) also control and regulate the molecular (lower level) processes. Reverse systems engineering approaches, which focus on the examination of the structure, dynamics and control of the immune system, can help to understand the construction principles of the immune system. Such integrative mechanistic models can properly describe, explain, and predict the behavior of the immune system in health and disease by combining both higher and lower level processes. Moving from molecular and cellular levels to a multiscale systems understanding requires the development of methodologies that integrate data from different biological levels into multiscale mechanistic models. In particular, 3D imaging techniques and 4D modeling of the spatiotemporal dynamics of immune processes within lymphoid tissues are central for such integrative approaches. Both dynamic and global organ imaging technologies will be instrumental in facilitating comprehensive multiscale systems immunology analyses as discussed in this review. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
High pressure processing and its application to the challenge of virus-contaminated foods
USDA-ARS?s Scientific Manuscript database
High pressure processing (HPP) is an increasingly popular non-thermal food processing technology. Study of HPP’s potential to inactivate foodborne viruses has defined general pressure levels required to inactivate hepatitis A virus, norovirus surrogates, and human norovirus itself within foods such...
P/M Processing of Rare Earth Modified High Strength Steels.
1980-12-01
Technical report by A. A. Sheinker, TRW Inc., Materials Technology, Cleveland, OH, December 1980; prepared for the Office of Naval Research.
Verification and Validation in a Rapid Software Development Process
NASA Technical Reports Server (NTRS)
Callahan, John R.; Easterbrook, Steve M.
1997-01-01
The high cost of software production is driving development organizations to adopt more automated design and analysis methods such as rapid prototyping, computer-aided software engineering (CASE) tools, and high-level code generators. Even developers of safety-critical software systems have adopted many of these new methods while striving to achieve high levels of quality and reliability. While these new methods may enhance productivity and quality in many cases, we examine some of the risks involved in the use of new methods in safety-critical contexts. We examine a case study involving the use of a CASE tool that automatically generates code from high-level system designs. We show that while high-level testing on the system structure is highly desirable, significant risks exist in the automatically generated code and in re-validating releases of the generated code after subsequent design changes. We identify these risks and suggest process improvements that retain the advantages of rapid, automated development methods within the quality and reliability contexts of safety-critical projects.
NASA Astrophysics Data System (ADS)
Yazid, N. M.; Din, A. H. M.; Omar, K. M.; Som, Z. A. M.; Omar, A. H.; Yahaya, N. A. Z.; Tugi, A.
2016-09-01
Global geopotential models (GGMs) are vital in computing global geoid undulation heights. Given the ellipsoidal height from Global Navigation Satellite System (GNSS) observations, an accurate orthometric height can be calculated by adding precise geoid undulation information from a model. GGMs also incorporate data from satellite gravity missions such as GRACE, GOCE and CHAMP, which helps enhance the global geoid undulation data. A statistical assessment has been made between geoid undulations derived from 4 GGMs and the airborne gravity data provided by the Department of Survey and Mapping Malaysia (DSMM). The goal of this study is to select the GGM that best matches, statistically, the geoid undulations of the airborne gravity data acquired under the Marine Geodetic Infrastructures in Malaysian Waters (MAGIC) project over marine areas in Sabah. The correlation coefficients and the RMS values between the geoid undulations of each GGM and the airborne gravity data were computed. The correlation coefficient between EGM2008 and the airborne gravity data is 1, while the RMS value is 0.1499; the RMS value of EGM2008 is the lowest among the models considered. The statistical analysis therefore indicates that EGM2008 is the best fit for marine geoid undulations throughout the South China Sea.
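The assessment statistics named here reduce to a correlation coefficient and an RMS of the differences between the two undulation sets; a minimal sketch follows, assuming synthetic placeholder values rather than the MAGIC project data, and assuming RMS is taken over the differences (one common convention).

```python
# Sketch: correlation and RMS of differences between geoid undulations
# from a GGM and from airborne gravity data.  Values are placeholders.
import numpy as np

ggm_undulation = np.array([42.10, 41.85, 41.60, 41.32, 41.05])      # metres
airborne_undulation = np.array([42.02, 41.90, 41.55, 41.40, 40.98])  # metres

r = np.corrcoef(ggm_undulation, airborne_undulation)[0, 1]
rms = np.sqrt(np.mean((ggm_undulation - airborne_undulation) ** 2))
print(f"correlation = {r:.4f}, RMS of differences = {rms:.4f} m")
```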
Data Processing for High School Students
ERIC Educational Resources Information Center
Spiegelberg, Emma Jo
1974-01-01
Data processing should be taught at the high school level so students may develop a general understanding of and appreciation for the capabilities and the limitations of automated data processing systems. Card machines, wiring, logic, flowcharting, and COBOL programming are to be taught, with behavioral objectives for each section listed. (SC)
Marković, Slobodan
2012-01-01
In this paper aesthetic experience is defined as an experience qualitatively different from everyday experience and similar to other exceptional states of mind. Three crucial characteristics of aesthetic experience are discussed: fascination with an aesthetic object (high arousal and attention), appraisal of the symbolic reality of an object (high cognitive engagement), and a strong feeling of unity with the object of aesthetic fascination and aesthetic appraisal. In a proposed model, two parallel levels of aesthetic information processing are proposed. On the first level two sub-levels of narrative are processed, story (theme) and symbolism (deeper meanings). The second level includes two sub-levels, perceptual associations (implicit meanings of object's physical features) and detection of compositional regularities. Two sub-levels are defined as crucial for aesthetic experience, appraisal of symbolism and compositional regularities. These sub-levels require some specific cognitive and personality dispositions, such as expertise, creative thinking, and openness to experience. Finally, feedback of emotional processing is included in our model: appraisals of everyday emotions are specified as a matter of narrative content (eg, empathy with characters), whereas the aesthetic emotion is defined as an affective evaluation in the process of symbolism appraisal or the detection of compositional regularities. PMID:23145263
Van Ettinger-Veenstra, Helene; McAllister, Anita; Lundberg, Peter; Karlsson, Thomas; Engström, Maria
2016-01-01
This study investigates the relation between individual language ability and neural semantic processing abilities. Our aim was to explore whether high-level language ability would correlate to decreased activation in language-specific regions or rather increased activation in supporting language regions during processing of sentences. Moreover, we were interested if observed neural activation patterns are modulated by semantic incongruency similarly to previously observed changes upon syntactic congruency modulation. We investigated 27 healthy adults with a sentence reading task-which tapped language comprehension and inference, and modulated sentence congruency-employing functional magnetic resonance imaging (fMRI). We assessed the relation between neural activation, congruency modulation, and test performance on a high-level language ability assessment with multiple regression analysis. Our results showed increased activation in the left-hemispheric angular gyrus extending to the temporal lobe related to high language ability. This effect was independent of semantic congruency, and no significant relation between language ability and incongruency modulation was observed. Furthermore, there was a significant increase of activation in the inferior frontal gyrus (IFG) bilaterally when the sentences were incongruent, indicating that processing incongruent sentences was more demanding than processing congruent sentences and required increased activation in language regions. The correlation of high-level language ability with increased rather than decreased activation in the left angular gyrus, a region specific for language processing, is opposed to what the neural efficiency hypothesis would predict. We can conclude that no evidence is found for an interaction between semantic congruency related brain activation and high-level language performance, even though the semantic incongruent condition shows to be more demanding and evoking more neural activation.
Van Ettinger-Veenstra, Helene; McAllister, Anita; Lundberg, Peter; Karlsson, Thomas; Engström, Maria
2016-01-01
This study investigates the relation between individual language ability and neural semantic processing abilities. Our aim was to explore whether high-level language ability would correlate to decreased activation in language-specific regions or rather increased activation in supporting language regions during processing of sentences. Moreover, we were interested if observed neural activation patterns are modulated by semantic incongruency similarly to previously observed changes upon syntactic congruency modulation. We investigated 27 healthy adults with a sentence reading task—which tapped language comprehension and inference, and modulated sentence congruency—employing functional magnetic resonance imaging (fMRI). We assessed the relation between neural activation, congruency modulation, and test performance on a high-level language ability assessment with multiple regression analysis. Our results showed increased activation in the left-hemispheric angular gyrus extending to the temporal lobe related to high language ability. This effect was independent of semantic congruency, and no significant relation between language ability and incongruency modulation was observed. Furthermore, there was a significant increase of activation in the inferior frontal gyrus (IFG) bilaterally when the sentences were incongruent, indicating that processing incongruent sentences was more demanding than processing congruent sentences and required increased activation in language regions. The correlation of high-level language ability with increased rather than decreased activation in the left angular gyrus, a region specific for language processing, is opposed to what the neural efficiency hypothesis would predict. We can conclude that no evidence is found for an interaction between semantic congruency related brain activation and high-level language performance, even though the semantic incongruent condition shows to be more demanding and evoking more neural activation. PMID:27014040
Thermal analysis of heat and power plant with high temperature reactor and intermediate steam cycle
NASA Astrophysics Data System (ADS)
Fic, Adam; Składzień, Jan; Gabriel, Michał
2015-03-01
Thermal analysis of a heat and power plant with a high temperature gas cooled nuclear reactor is presented. The main aim of the considered system is to supply a technological process with heat at a suitably high temperature level; the unit is also used to produce electricity. The high temperature helium cooled nuclear reactor is the primary heat source in the system, which consists of the reactor cooling cycle, the steam cycle and the gas heat pump cycle. Helium, used as the carrier in the first cycle (a classic Brayton cycle) which includes the reactor, delivers heat in a steam generator to produce superheated steam with the parameters required by the intermediate cycle. The intermediate cycle is provided to transport energy from the reactor installation to the process installation requiring high temperature heat. The distance between the reactor and the process installation is assumed in the analysis to be negligibly short, or alternatively equal to 1 km. The system is also equipped with a high temperature argon heat pump to obtain the temperature level of the heat carrier required by the high temperature process. Thus, the steam of the intermediate cycle supplies the lower heat exchanger of the heat pump, a process heat exchanger at the medium temperature level, and a classical steam turbine system (Rankine cycle). The main purpose of the research was to evaluate the effectiveness of the system considered and to assess whether such a three-cycle cogeneration system is reasonable. Multi-variant calculations have been carried out employing the developed mathematical model. The results are presented in the form of the energy efficiency and exergy efficiency of the system as a function of the temperature drop in the high temperature process heat exchanger and the reactor pressure.
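One common way to express the energy and exergy efficiencies evaluated in such an analysis is sketched below; the definitions (Carnot weighting of the heat flows) and all numerical values are illustrative assumptions, not figures from the study.

```python
# Sketch: energy and exergy efficiency of a cogeneration system that
# delivers electricity plus process heat at a high temperature level.
# Definitions and all numbers are illustrative assumptions only.
T0 = 288.0          # ambient (dead-state) temperature [K]
T_reactor = 1173.0  # temperature at which reactor heat is supplied [K]
T_process = 823.0   # temperature at which process heat is delivered [K]

Q_reactor = 600.0   # reactor thermal power [MW]
W_electric = 150.0  # net electric power [MW]
Q_process = 250.0   # process heat delivered [MW]

# first-law (energy) efficiency of the cogeneration system
eta_energy = (W_electric + Q_process) / Q_reactor

# exergy efficiency: heat flows are weighted by their Carnot factors
exergy_in = Q_reactor * (1.0 - T0 / T_reactor)
exergy_out = W_electric + Q_process * (1.0 - T0 / T_process)
eta_exergy = exergy_out / exergy_in

print(f"energy efficiency = {eta_energy:.2f}, exergy efficiency = {eta_exergy:.2f}")
```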
An Analysis of High School Students' Performance on Five Integrated Science Process Skills
NASA Astrophysics Data System (ADS)
Beaumont-Walters, Yvonne; Soyibo, Kola
2001-02-01
This study determined Jamaican high school students' level of performance on five integrated science process skills and whether there were statistically significant differences in their performance linked to their gender, grade level, school location, school type, student type and socio-economic background (SEB). The 305 subjects comprised 133 males, 172 females, 146 ninth graders, 159 10th graders, 150 traditional and 155 comprehensive high school students, 164 students from the Reform of Secondary Education (ROSE) project and 141 non-ROSE students, 166 urban and 139 rural students, and 110 students from a high SEB and 195 from a low SEB. Data were collected with an integrated science process skills test constructed by the authors. The results indicated that the subjects' mean score was low and unsatisfactory; their performance in decreasing order was: interpreting data, recording data, generalising, formulating hypotheses and identifying variables. There were statistically significant differences in their performance based on their grade level, school type, student type, and SEB, in favour of the 10th graders, traditional high school students, ROSE students and students from a high SEB. There was a positive, statistically significant and fairly strong relationship between their performance and school type, but weak relationships between their performance and their student type, grade level and SEB.
A Framework for Translating a High Level Security Policy into Low Level Security Mechanisms
NASA Astrophysics Data System (ADS)
Hassan, Ahmed A.; Bahgat, Waleed M.
2010-01-01
Security policies have different components; firewalls, active directories, and intrusion detection systems (IDS) are some examples of these components. Enforcing network security policies through low-level security mechanisms faces some essential difficulties, the major ones being consistency, verification, and maintenance. One approach to overcoming these difficulties is to automate the process of translating a high-level security policy into low-level security mechanisms. This paper introduces a framework for an automated process that translates a high-level security policy into low-level security mechanisms. The framework is described in terms of three phases. In the first phase, all network assets are categorized according to their roles in network security, and the relations between them are identified to constitute the network security model. The proposed model is based on organization-based access control (OrBAC); however, it extends the OrBAC model to include not only access control policy but also other administrative security policies such as auditing policy. Besides, the proposed model enables matching of each rule of the high-level security policy with the corresponding rules of the low-level security policy. In the second phase of the proposed framework, the high-level security policy is mapped into the network security model; this phase can be considered a translation of the high-level security policy into an intermediate model level. Finally, the intermediate model level is translated automatically into low-level security mechanisms. The paper illustrates the applicability of the proposed approach through an application example.
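A minimal sketch of the final translation step, assuming a hypothetical asset-to-address mapping and an iptables-style packet filter as the low-level mechanism, is given below; the rule fields and mapping are illustrative and do not reproduce the authors' actual OrBAC extension.

```python
# Sketch: translate an OrBAC-style high-level permission into a
# packet-filter command.  Rule fields, the asset-to-address mapping and
# the emitted iptables command are hypothetical illustrations only.
from dataclasses import dataclass

# simplified network security model: map abstract roles/assets to addresses
ASSETS = {
    "clinic_workstations": "10.0.0.0/24",
    "ehr_server": "10.0.1.5",
}

SERVICES = {"https": ("tcp", 443)}

@dataclass
class HighLevelRule:
    decision: str   # "permit" or "deny"
    subject: str    # abstract role (maps to a source network)
    action: str     # abstract activity (maps to a service)
    target: str     # abstract asset (maps to a destination host)

def to_firewall_rule(rule: HighLevelRule) -> str:
    proto, port = SERVICES[rule.action]
    verdict = "ACCEPT" if rule.decision == "permit" else "DROP"
    return (f"iptables -A FORWARD -s {ASSETS[rule.subject]} "
            f"-d {ASSETS[rule.target]} -p {proto} --dport {port} -j {verdict}")

print(to_firewall_rule(HighLevelRule("permit", "clinic_workstations", "https", "ehr_server")))
```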
Concept for Highly Mechanized Data Processing, Project 111.
A concept is developed for a highly mechanized maintenance data processing system capable of deriving factors, influences, and correlations to raise...the level of logistics knowledge and lead to the design of a management-control system. (Author)
A programmable computational image sensor for high-speed vision
NASA Astrophysics Data System (ADS)
Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian
2013-08-01
In this paper we present a programmable computational image sensor for high-speed vision. This computational image sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array and a RISC core. The pixel-parallel PEs are responsible for transferring, storing and processing raw image data in a SIMD fashion with their own programming language. The RPs are a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. The PE array and RP array can complete a great amount of computation in a few instruction cycles and therefore satisfy low- and middle-level high-speed image processing requirements. The RISC core controls the whole system operation and executes some high-level image processing algorithms. We utilize a simplified AHB bus as the system bus to connect the major components. A programming language and corresponding tool chain for this computational image sensor have also been developed.
NASA Astrophysics Data System (ADS)
Lopez, J. P.; de Almeida, A. J. F.; Tabosa, J. W. R.
2018-03-01
We report on the observation of subharmonic resonances in high-order wave mixing associated with the quantized vibrational levels of atoms trapped in a one-dimensional optical lattice created by two intense, nearly counterpropagating coupling beams. These subharmonic resonances, occurring at ±1/2 and ±1/3 of the frequency separation between adjacent vibrational levels, are observed through phase-matched, angularly resolved six- and eight-wave mixing processes. We investigate how these resonances evolve with the intensity of the incident probe beam, which couples with one of the coupling beams to create anharmonic coherence gratings between adjacent vibrational levels. Our experimental results also show evidence of high-order processes associated with coherence involving nonadjacent vibrational levels. Moreover, we also demonstrate that these induced high-order coherences can be stored in the medium and the associated optical information retrieved after a controlled storage time.
Van der Molen, Melle J W; Poppelaars, Eefje S; Van Hartingsveldt, Caroline T A; Harrewijn, Anita; Gunther Moor, Bregtje; Westenberg, P Michiel
2013-01-01
Cognitive models posit that the fear of negative evaluation (FNE) is a hallmark feature of social anxiety. As such, individuals with high FNE may show biased information processing when faced with social evaluation. The aim of the current study was to examine the neural underpinnings of anticipating and processing social-evaluative feedback, and its correlates with FNE. We used a social judgment paradigm in which female participants (N = 31) were asked to indicate whether they believed to be socially accepted or rejected by their peers. Anticipatory attention was indexed by the stimulus preceding negativity (SPN), while the feedback-related negativity and P3 were used to index the processing of social-evaluative feedback. Results provided evidence of an optimism bias in social peer evaluation, as participants more often predicted to be socially accepted than rejected. Participants with high levels of FNE needed more time to provide their judgments about the social-evaluative outcome. While anticipating social-evaluative feedback, SPN amplitudes were larger for anticipated social acceptance than for social rejection feedback. Interestingly, the SPN during anticipated social acceptance was larger in participants with high levels of FNE. None of the feedback-related brain potentials correlated with the FNE. Together, the results provided evidence of biased information processing in individuals with high levels of FNE when anticipating (rather than processing) social-evaluative feedback. The delayed response times in high FNE individuals were interpreted to reflect augmented vigilance imposed by the upcoming social-evaluative threat. Possibly, the SPN constitutes a neural marker of this vigilance in females with higher FNE levels, particularly when anticipating social acceptance feedback.
Shared neural circuits for mentalizing about the self and others.
Lombardo, Michael V; Chakrabarti, Bhismadev; Bullmore, Edward T; Wheelwright, Sally J; Sadek, Susan A; Suckling, John; Baron-Cohen, Simon
2010-07-01
Although many examples exist for shared neural representations of self and other, it is unknown how such shared representations interact with the rest of the brain. Furthermore, do high-level inference-based shared mentalizing representations interact with lower level embodied/simulation-based shared representations? We used functional neuroimaging (fMRI) and a functional connectivity approach to assess these questions during high-level inference-based mentalizing. Shared mentalizing representations in ventromedial prefrontal cortex, posterior cingulate/precuneus, and temporo-parietal junction (TPJ) all exhibited identical functional connectivity patterns during mentalizing of both self and other. Connectivity patterns were distributed across low-level embodied neural systems such as the frontal operculum/ventral premotor cortex, the anterior insula, the primary sensorimotor cortex, and the presupplementary motor area. These results demonstrate that identical neural circuits are implementing processes involved in mentalizing of both self and other and that the nature of such processes may be the integration of low-level embodied processes within higher level inference-based mentalizing.
Werner, Stefan; Breus, Oksana; Symonenko, Yuri; Marillonnet, Sylvestre; Gleba, Yuri
2011-01-01
We describe here a unique ethanol-inducible process for expression of recombinant proteins in transgenic plants. The process is based on inducible release of viral RNA replicons from stably integrated DNA proreplicons. A simple treatment with ethanol releases the replicon leading to RNA amplification and high-level protein production. To achieve tight control of replicon activation and spread in the uninduced state, the viral vector has been deconstructed, and its two components, the replicon and the cell-to-cell movement protein, have each been placed separately under the control of an inducible promoter. Transgenic Nicotiana benthamiana plants incorporating this double-inducible system demonstrate negligible background expression, high (over 0.5 × 10^4-fold) induction multiples, and high absolute levels of protein expression upon induction (up to 4.3 mg/g fresh biomass). The process can be easily scaled up, supports expression of practically important recombinant proteins, and thus can be directly used for industrial manufacturing. PMID:21825158
Lackner, Ryan J.; Fresco, David M.
2016-01-01
Awareness of the body (i.e., interoceptive awareness) and self-referential thought represent two distinct, yet habitually integrated aspects of self. A recent neuroanatomical and processing model for depression and anxiety incorporates the connections between increased but low fidelity afferent interoceptive input with self-referential and belief-based states. A deeper understanding of how self-referential processes are integrated with interoceptive processes may ultimately aid in our understanding of altered, maladaptive views of the self – a shared experience of individuals with mood and anxiety disorders. Thus, the purpose of the current study was to examine how negative self-referential processing (i.e., brooding rumination) relates to interoception in the context of affective psychopathology. Undergraduate students (N = 82) completed an interoception task (heartbeat counting) in addition to self-reported measures of rumination and depression and anxiety symptoms. Results indicated an interaction effect of brooding rumination and interoceptive awareness on depression and anxiety-related distress. Specifically, high levels of brooding rumination coupled with low levels of interoceptive awareness were associated with the highest levels of depression and anxiety-related distress, whereas low levels of brooding rumination coupled with high levels of interoceptive awareness were associated with lower levels of depression and anxiety-related distress. The findings provide further support for the conceptualization of anxiety and depression as conditions involving the integration of interoceptive processes and negative self-referential processes. PMID:27567108
Aguilar-Mahecha, Adriana; Kuzyk, Michael A.; Domanski, Dominik; Borchers, Christoph H.; Basik, Mark
2012-01-01
Blood sample processing and handling can have a significant impact on the stability and levels of proteins measured in biomarker studies. Such pre-analytical variability needs to be well understood in the context of the different proteomics platforms available for biomarker discovery and validation. In the present study we evaluated different types of blood collection tubes including the BD P100 tube containing protease inhibitors as well as CTAD tubes, which prevent platelet activation. We studied the effect of different processing protocols as well as delays in tube processing on the levels of 55 mid and high abundance plasma proteins using novel multiple-reaction monitoring-mass spectrometry (MRM-MS) assays as well as 27 low abundance cytokines using a commercially available multiplexed bead-based immunoassay. The use of P100 tubes containing protease inhibitors only conferred proteolytic protection for 4 cytokines and only one MRM-MS-measured peptide. Mid and high abundance proteins measured by MRM are highly stable in plasma left unprocessed for up to six hours although platelet activation can also impact the levels of these proteins. The levels of cytokines were elevated when tubes were centrifuged at cold temperature, while low levels were detected when samples were collected in CTAD tubes. Delays in centrifugation also had an impact on the levels of cytokines measured depending on the type of collection tube used. Our findings can help in the development of guidelines for blood collection and processing for proteomic biomarker studies. PMID:22701622
Neural Correlates of Subliminal Language Processing.
Axelrod, Vadim; Bar, Moshe; Rees, Geraint; Yovel, Galit
2015-08-01
Language is a high-level cognitive function, so exploring the neural correlates of unconscious language processing is essential for understanding the limits of unconscious processing in general. The results of several functional magnetic resonance imaging studies have suggested that unconscious lexical and semantic processing is confined to the posterior temporal lobe, without involvement of the frontal lobe, the regions that are indispensable for conscious language processing. However, previous studies employed a similarly designed masked priming paradigm with briefly presented single and contextually unrelated words. It is thus possible that the stimulation level was insufficiently strong to be detected in the high-level frontal regions. Here, in a high-resolution fMRI and multivariate pattern analysis study we explored the neural correlates of subliminal language processing using a novel paradigm, where written meaningful sentences were suppressed from awareness for an extended duration using continuous flash suppression. We found that subjectively and objectively invisible meaningful sentences and unpronounceable nonwords could be discriminated not only in the left posterior superior temporal sulcus (STS), but critically, also in the left middle frontal gyrus. We conclude that the frontal lobes play a role in unconscious language processing and that activation of the frontal lobes per se might not be sufficient for achieving conscious awareness. © The Author 2014. Published by Oxford University Press.
High-performance image processing on the desktop
NASA Astrophysics Data System (ADS)
Jordan, Stephen D.
1996-04-01
The suitability of computers to the task of medical image visualization for the purposes of primary diagnosis and treatment planning depends on three factors: speed, image quality, and price. To be widely accepted the technology must increase the efficiency of the diagnostic and planning processes. This requires processing and displaying medical images of various modalities in real-time, with accuracy and clarity, on an affordable system. Our approach to meeting this challenge began with market research to understand customer image processing needs. These needs were translated into system-level requirements, which in turn were used to determine which image processing functions should be implemented in hardware. The result is a computer architecture for 2D image processing that is both high-speed and cost-effective. The architectural solution is based on the high-performance PA-RISC workstation with an HCRX graphics accelerator. The image processing enhancements are incorporated into the image visualization accelerator (IVX) which attaches to the HCRX graphics subsystem. The IVX includes a custom VLSI chip which has a programmable convolver, a window/level mapper, and an interpolator supporting nearest-neighbor, bi-linear, and bi-cubic modes. This combination of features can be used to enable simultaneous convolution, pan, zoom, rotate, and window/level control into 1 k by 1 k by 16-bit medical images at 40 frames/second.
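For readers unfamiliar with the window/level operation that the accelerator implements in hardware, the following hedged sketch (a software emulation with assumed parameter values, not the IVX implementation) shows the basic mapping of a 16-bit medical image to 8-bit display values:

```python
# Minimal sketch of a window/level (window width / window center) mapping of a
# 16-bit medical image to 8-bit display values -- one of the operations the
# IVX hardware is described as accelerating. Parameter values are illustrative.
import numpy as np

def window_level(image16, center=1024, width=512):
    """Linearly map [center - width/2, center + width/2] to [0, 255]."""
    lo = center - width / 2.0
    hi = center + width / 2.0
    out = (image16.astype(np.float32) - lo) / (hi - lo)
    return (np.clip(out, 0.0, 1.0) * 255.0).astype(np.uint8)

# Example: a synthetic 1k x 1k 16-bit image
img = np.random.randint(0, 4096, size=(1024, 1024), dtype=np.uint16)
display = window_level(img, center=1024, width=512)
print(display.min(), display.max())
```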
ERIC Educational Resources Information Center
National Heart, Lung, and Blood Inst. (DHHS/NIH), Bethesda, MD.
Studies have shown that high blood cholesterol levels play a role in the development of coronary heart disease in adults, and that the process leading to atherosclerosis begins in childhood. To address the problem of high cholesterol levels in children, the Panel on Blood Cholesterol Levels recommends complementary approaches for individuals and…
High level cognitive information processing in neural networks
NASA Technical Reports Server (NTRS)
Barnden, John A.; Fields, Christopher A.
1992-01-01
Two related research efforts were addressed: (1) high-level connectionist cognitive modeling; and (2) local neural circuit modeling. The goals of the first effort were to develop connectionist models of high-level cognitive processes such as problem solving or natural language understanding, and to understand the computational requirements of such models. The goals of the second effort were to develop biologically realistic models of local neural circuits, and to understand the computational behavior of such models. In keeping with the nature of NASA's Innovative Research Program, all the work conducted under the grant was highly innovative. For instance, the following ideas, all summarized, are contributions to the study of connectionist/neural networks: (1) the temporal-winner-take-all, relative-position encoding, and pattern-similarity association techniques; (2) the importation of logical combinators into connectionism; (3) the use of analogy-based reasoning as a bridge across the gap between the traditional symbolic paradigm and the connectionist paradigm; and (4) the application of connectionism to the domain of belief representation/reasoning. The work on local neural circuit modeling also departs significantly from the work of related researchers. In particular, its concentration on low-level neural phenomena that could support high-level cognitive processing is unusual within the area of biological local circuit modeling, and also serves to expand the horizons of the artificial neural net field.
NASA Astrophysics Data System (ADS)
Pradels, Grégory
Considering the scientific objectives of the MICROSCOPE space mission, very weak accelerations have to be controlled and measured in orbit. Accelerometers similar in concept to the MICROSCOPE instrument have already characterised the vibration environment on board a satellite at low altitude as well as the fluctuation of drag: analyses of the data provided by the CHAMP mission accelerometer have been performed. By modelling the expected acceleration signals applied to the MICROSCOPE instrument in orbit, the developed analytic model of the mission measurement has shown the interest of, and the requirements for, the instrument calibration. Because of on-ground seismic perturbations, the instrument cannot be calibrated in the laboratory and an in-orbit procedure has to be defined. The proposed approach exploits the drag-free system of the satellite and the sensitivity of the accelerometers. Results obtained from the dedicated simulator of the mission are presented. The goal of the CNES-ESA MICROSCOPE space mission is the test of one of the most famous principles in physics, the Equivalence Principle (EP), a cornerstone of General Relativity, which states the universality of free fall of all bodies in the same gravity field. In the establishment of new theories for Grand Unification, an EP violation may appear at the level of 10^-14 in the relative ratio of inertial and gravitational mass between two different materials. Experimental verification of this theoretical expectation therefore becomes fundamental. The MICROSCOPE mission is also a technological challenge: a dedicated differential accelerometer able to measure, on board a satellite, very weak accelerations acting on two proof masses made of different materials. In the case of a pure inertial orbit, this specific instrument measures the differential acceleration due to the non-uniform Earth gravitational field. With the support of a drag-free system, which reduces the amplitude of the non-gravitational forces applied to the satellite, a spectral density of 10^-12 m/s^2/Hz is expected in the frequency range around 10^-3 Hz. Then, an accuracy of a few 10^-15 m/s^2 can be reached after an integration over 1 day in the presence of the 8 m/s^2 Earth gravity field, leading to an EP test with an accuracy two orders of magnitude better than current laboratory tests. The two ultra-sensitive accelerometers, used in combination to build the instrument, are derived from the one flying in the CHAMP space mission, which offers for the first time a very fine measurement (10^-9 m/s^2/Hz resolution) of the non-gravitational forces applied to a satellite at an altitude lower than 500 km. The temporal and spectral analyses confirm the specified intrinsic parameters of the instrument, such as the bias, the noise level and the thermal sensitivity. A time-frequency analysis provides a first look at disturbances that might occur on this type of satellite: mechanical vibrations after thruster firings, peaks of different amplitudes due to Earth's shadow crossings, or effects of the satellite thermal control. A specific adaptive filter has been developed to reject these perturbations from the geodetic measurements. After this treatment, the data show some very interesting behaviours, such as the evolution of the drag with the rotation of the orbit of the satellite. These results are of great interest for future projects like MICROSCOPE, LISA (the space gravitational wave antenna developed by NASA and ESA) or GOCE (the ESA gradiometric solid Earth mission).
The MICROSCOPE mission requires not only high resolution of the accelerometers but also fine matching of their parameters, because the eventual EP violation signal is detected in the comparison of the instrument outputs. The analytic model of the mission measurement demonstrates the necessity of evaluating the instrument sensitivity, alignment and coupling with a minimum accuracy of 3 × 10^-4, depending on the relative test mass position, the orbital pointing mode of the satellite, and the performance of the drag-free and attitude control system. This calibration phase is necessary to reject the common mode of the forces applied to the satellite from the differential measurement. However, the level of the on-ground perturbations in the laboratory induced by human activity and seismic noise limits the possibility of a pre-launch calibration. Therefore, a specific in-orbit procedure has to be defined. The proposed solution consists in exciting the satellite along or about well-defined axes with the support of the drag-free system. Taking into account the measurement range and the resolution of the differential accelerometers, the drag-free system operation and the electrical thruster performance, the observability of all instrument parameters has been demonstrated with the required accuracy. Different tests of the method have been performed with a dedicated software simulator. With an estimation of the Earth's gravitational field and the non-gravitational forces applied to the satellite along the orbit, computed by the "Observatoire de la Côte d'Azur", the validity of this calibration method has been checked for nominal conditions of satellite operation. The introduction of disturbances like those observed in the measurements of the CHAMP mission confirms the possibility of an in-orbit calibration with the required accuracy and with the support of the specific filter mentioned above. The MICROSCOPE space mission is of high interest for fundamental physics and exploits for the first time the combination of a drag-free system with a very sensitive accelerometer. This system is already selected for other scientific missions, such as the geodetic mission GOCE, whose objective is to map the gravity gradient of the Earth with an accuracy of 4 mEötvös/Hz.
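The jump from the quoted noise level to the one-day accuracy can be checked with simple arithmetic. The sketch below is my own back-of-the-envelope check, not a computation from the thesis, and it interprets the quoted density as the amplitude spectral density of white noise:

```python
# Back-of-the-envelope check of the figures quoted above (not from the thesis):
# averaging white noise with amplitude spectral density S over a time T reduces
# the acceleration uncertainty roughly as S / sqrt(T).
import math

S = 1e-12       # m/s^2 per sqrt(Hz), quoted instrument noise level (assumed white)
T = 86400.0     # s, one day of integration
sigma = S / math.sqrt(T)
print(f"sigma ~ {sigma:.1e} m/s^2")   # ~3e-15 m/s^2, i.e. "a few 10^-15 m/s^2"
```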
Singha, Poonam; Muthukumarappan, Kasiviswanathan; Krishnan, Padmanaban
2018-01-01
A combination of different levels of distillers dried grains processed for food application (FDDG), garbanzo flour and corn grits was chosen as a source of high-protein and high-fiber extruded snacks. A four-factor central composite rotatable design was adopted to study the effect of FDDG level, moisture content of the blends, extrusion temperature, and screw speed on the apparent viscosity, mass flow rate (MFR), torque, and specific mechanical energy (SME) during the extrusion process. With an increase in the extrusion temperature from 100 to 140°C, apparent viscosity, specific mechanical energy, and torque decreased. An increase in FDDG level resulted in an increase in apparent viscosity, SME and torque. FDDG had no significant effect (p > .5) on mass flow rate. SME also increased with screw speed, which could be due to the higher shear rates at higher screw speeds. Screw speed and moisture content had a significant negative effect (p < .05) on the torque. The apparent viscosity of the dough inside the extruder and the system parameters were affected by the processing conditions. This study will be useful for controlling the extrusion process of blends containing these ingredients for the development of high-protein, high-fiber extruded snacks.
Deng, Gui-Fang; Li, Ke; Ma, Jing; Liu, Fen; Dai, Jing-Jing; Li, Hua-Bin
2011-01-01
The level of aluminium in 178 processed food samples from Shenzhen city in China was evaluated using inductively coupled plasma-mass spectrometry. Some processed foods contained a concentration of up to 1226 mg/kg, which is about 12 times the Chinese food standard. To establish the main source in these foods, Al levels in the raw materials were determined. However, aluminium concentrations in the raw materials were low (0.10-451.5 mg/kg). Therefore, aluminium levels in the food additives used in these foods were determined, and it was found that some food additives contained a high concentration of aluminium (0.005-57.4 g/kg). The results suggested that, in the interest of public health, food additives containing high concentrations of aluminium should be replaced by those containing less. This study has provided new information on aluminium levels in Chinese processed foods, raw materials and a selection of food additives.
Brody, Gene H; Dorsey, Shannon; Forehand, Rex; Armistead, Lisa
2002-01-01
The unique contributions that parenting processes (high levels of monitoring with a supportive, involved mother-child relationship) and classroom processes (high levels of organization, rule clarity, and student involvement) make to children's self-regulation and adjustment were examined with a sample of 277 single-parent African American families. A multi-informant design involving mothers, teachers, and 7- to 15-year-old children was used. Structural equation modeling indicated that parenting and classroom processes contributed uniquely to children's adjustment through the children's development of self-regulation. Additional analyses suggested that classroom processes can serve a protective-stabilizing function when parenting processes are compromised, and vice versa. Further research is needed to examine processes in both family and school contexts that promote child competence and resilience.
Identification of Sources of Endotoxin Exposure as Input for Effective Exposure Control Strategies.
van Duuren-Stuurman, Birgit; Gröllers-Mulderij, Mariska; van de Runstraat, Annemieke; Duisterwinkel, Anton; Terwoert, Jeroen; Spaan, Suzanne
2018-02-13
The aim of the present study was to investigate the levels of endotoxins on product samples from potatoes, onions, and seeds, representing a relevant part of the agro-food industry in the Netherlands, to gather valuable insights into possibilities for exposure control measures early in the process of industrial processing of these products. Endotoxin levels on 330 product samples from companies representing the potato, onion, and seed (processing) industry (four potato-packaging companies, five potato-processing companies, five onion-packaging companies, and four seed-processing companies) were assessed using the Limulus Amebocyte Lysate (LAL) assay. As variation in growth conditions (type of soil, growth type) and product characteristics (surface roughness, dustiness, size, species) is assumed to influence the level of endotoxin on products, different types and growth conditions were considered when collecting the samples. Additionally, waste material, rotten products, felt material (used for drying), and process water were collected. A large variation in the endotoxin levels was found on samples of potatoes, onions, and seeds (overall geometric standard deviation 17), in the range from 0.7 EU g^-1 to 16400000 EU g^-1. The highest geometric mean endotoxin levels were found in plant material (319600 EU g^-1), followed by soil material (49100 EU g^-1) and the outer side of products (9300 EU g^-1), indicating that removal of plant and soil material early in the process would be an effective exposure control strategy. The high levels of endotoxins found in the limited number of samples from rotten onions indicate that these rotten onions should also be removed early in the process. The mean endotoxin level found in waste material (only available for seed processing) is similar to the level found in soil material, although the range is much larger. On uncleaned seeds, higher endotoxin levels were found than on cleaned seeds, indicating that cleaning processes are important control measures and also that the waste material should be handled with care. Although endotoxin levels in batches of to-be-processed potatoes, onions, and seeds vary quite dramatically, it could be concluded that rotten products, plant material, and waste material contain particularly high endotoxin levels. This information was used to propose control measures to reduce exposure to endotoxins of workers during the production process. © The Author(s) 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
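Since the abstract summarises these highly skewed levels with a geometric mean and a geometric standard deviation, a small hedged sketch (invented example values, not the study's data) shows how those statistics are computed on the log scale:

```python
# Illustrative sketch (not the study's data): geometric mean and geometric
# standard deviation of endotoxin levels, computed on the log scale.
import math

levels_EU_per_g = [120.0, 950.0, 4300.0, 310000.0, 15.0]   # made-up example values

logs = [math.log(x) for x in levels_EU_per_g]
mean_log = sum(logs) / len(logs)
sd_log = math.sqrt(sum((x - mean_log) ** 2 for x in logs) / (len(logs) - 1))

geometric_mean = math.exp(mean_log)   # back-transform to the original scale
geometric_sd = math.exp(sd_log)       # dimensionless multiplicative spread
print(f"GM  = {geometric_mean:.0f} EU per g")
print(f"GSD = {geometric_sd:.1f}")
```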
High- and low-level hierarchical classification algorithm based on source separation process
NASA Astrophysics Data System (ADS)
Loghmari, Mohamed Anis; Karray, Emna; Naceur, Mohamed Saber
2016-10-01
High-dimensional data applications have earned great attention in recent years. We focus on remote sensing data analysis on high-dimensional space like hyperspectral data. From a methodological viewpoint, remote sensing data analysis is not a trivial task. Its complexity is caused by many factors, such as large spectral or spatial variability as well as the curse of dimensionality. The latter describes the problem of data sparseness. In this particular ill-posed problem, a reliable classification approach requires appropriate modeling of the classification process. The proposed approach is based on a hierarchical clustering algorithm in order to deal with remote sensing data in high-dimensional space. Indeed, one obvious method to perform dimensionality reduction is to use the independent component analysis process as a preprocessing step. The first particularity of our method is the special structure of its cluster tree. Most of the hierarchical algorithms associate leaves to individual clusters, and start from a large number of individual classes equal to the number of pixels; however, in our approach, leaves are associated with the most relevant sources which are represented according to mutually independent axes to specifically represent some land covers associated with a limited number of clusters. These sources contribute to the refinement of the clustering by providing complementary rather than redundant information. The second particularity of our approach is that at each level of the cluster tree, we combine both a high-level divisive clustering and a low-level agglomerative clustering. This approach reduces the computational cost since the high-level divisive clustering is controlled by a simple Boolean operator, and optimizes the clustering results since the low-level agglomerative clustering is guided by the most relevant independent sources. Then at each new step we obtain a new finer partition that will participate in the clustering process to enhance semantic capabilities and give good identification rates.
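As a rough, generic illustration of this kind of pipeline (standard scikit-learn tools, not the authors' algorithm; data and parameters are synthetic assumptions), independent component analysis can serve as the dimensionality-reduction step before a hierarchical clustering of the pixels:

```python
# Rough sketch of the general pipeline described above, using off-the-shelf
# scikit-learn tools rather than the authors' method: ICA as a preprocessing /
# dimensionality-reduction step, followed by an agglomerative clustering of
# the pixels in the reduced space. Data and parameters are synthetic.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
pixels = rng.normal(size=(2000, 64))        # 2000 pixels x 64 spectral bands (synthetic)

sources = FastICA(n_components=8, random_state=0).fit_transform(pixels)
labels = AgglomerativeClustering(n_clusters=5).fit_predict(sources)
print(np.bincount(labels))                  # cluster sizes
```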
Measuring and assessing maintainability at the end of high level design
NASA Technical Reports Server (NTRS)
Briand, Lionel C.; Morasca, Sandro; Basili, Victor R.
1993-01-01
Software architecture appears to be one of the main factors affecting software maintainability. Therefore, in order to be able to predict and assess maintainability early in the development process, we need to be able to measure the high-level design characteristics that affect the change process. To this end, we propose a measurement approach that is based on precise assumptions derived from the change process, builds on Object-Oriented Design principles, and is partially language independent. We define metrics for cohesion, coupling, and visibility in order to capture the difficulty of isolating, understanding, designing and validating changes.
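As a purely hypothetical illustration of what design-level coupling and cohesion counting can look like (not the metrics defined in the paper; module names and reference lists are invented), one can count intra-module versus inter-module references in a high-level design graph:

```python
# Hypothetical illustration (not the paper's metrics): counting inter-module
# and intra-module references in a high-level design as crude proxies for
# coupling and cohesion. Module names and dependencies are invented.
design = {
    # module -> list of referenced modules, one entry per declared dependency
    "Orbit":     ["Orbit", "Orbit", "Gravity"],
    "Gravity":   ["Gravity", "Orbit"],
    "Telemetry": ["Telemetry", "Orbit", "Gravity"],
}

for module, refs in design.items():
    internal = sum(1 for r in refs if r == module)   # references to itself
    external = len(refs) - internal                  # references to other modules
    cohesion = internal / len(refs)
    print(f"{module:10s} coupling={external}  cohesion={cohesion:.2f}")
```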
NASA Astrophysics Data System (ADS)
Park, Keecheol; Oh, Kyungsuk
2017-09-01
In order to investigate the effect of leveling conditions on residual stress evolution during the leveling process of hot rolled high strength steels, the in-plane residual stresses of sheet processed under controlled conditions at a skin-pass mill and levelers were measured by the cutting method. The residual stress was localized near the edge of the sheet. As the thickness of the sheet increased, the region in which residual stress occurred expanded. The magnitude of residual stress within the sheet was reduced as the deformation during the leveling process increased, but the residual stress was not removed completely. The magnitude of camber occurring in the cut plate could be predicted from the residual stress distribution. A numerical algorithm was developed for analysing the effect of leveling conditions on residual stress. It was able to implement the effects of plastic deformation in leveling, tension, work roll bending, and the initial state of the sheet (residual stress and curl distribution). The validity of the simulated results was verified by comparison with the experimentally measured residual stress and curl in a sheet.
High pressure processing's potential to inactivate norovirus and other foodborne viruses
USDA-ARS?s Scientific Manuscript database
High pressure processing (HPP) can inactivate human norovirus. However, all viruses are not equally susceptible to HPP. Pressure treatment parameters such as required pressure levels, initial pressurization temperatures, and pressurization times substantially affect inactivation. How food matrix ...
The moderating effect of motivation on health-related decision-making.
Berezowska, Aleksandra; Fischer, Arnout R H; Trijp, Hans C M van
2017-06-01
This study identifies how autonomous and controlled motivation moderates the cognitive process that drives the adoption of personalised nutrition services. The cognitive process comprises perceptions of privacy risk, personalisation benefit, and their determinants. Depending on their level of autonomous and controlled motivation, participants (N = 3453) were assigned to one of four motivational orientations, which resulted in a 2 (low/high autonomous motivation) × 2 (low/high controlled motivation) quasi-experimental design. High levels of autonomous motivation strengthened the extent to which: (1) the benefits of engaging with a service determined the outcome of a risk-benefit trade-off; (2) the effectiveness of a service determined benefit perceptions. High levels of controlled motivation influenced the extent to which: (1) the risk of privacy loss determined the outcome of a risk-benefit trade-off; (2) controlling personal information after disclosure and perceiving the disclosed personal information as sensitive determined the risk of potential privacy loss. To encourage the adoption of personalised dietary recommendations, for individuals with high levels of autonomous motivation emphasis should be on benefits and its determinants. For those with high levels of controlled motivation, it is important to focus on risk-related issues such as information sensitivity.
Hybrid photonic signal processing
NASA Astrophysics Data System (ADS)
Ghauri, Farzan Naseer
This thesis proposes research on novel hybrid photonic signal processing systems in the areas of optical communications, test and measurement, RF signal processing and extreme environment optical sensors. It will be shown that the use of innovative hybrid techniques allows the design of photonic signal processing systems with superior performance parameters and enhanced capabilities. These applications can be divided into the domains of analog-digital hybrid signal processing applications and free-space/fiber-coupled hybrid optical sensors. The analog-digital hybrid signal processing applications include a high-performance analog-digital hybrid MEMS variable optical attenuator that can simultaneously provide high dynamic range as well as high resolution attenuation control; an analog-digital hybrid MEMS beam profiler that allows high-power, watt-level laser beam profiling and provides both submicron-level high resolution and wide-area profiling coverage; and all-optical transversal RF filters that operate on the principle of broadband optical spectral control using MEMS and/or acousto-optic tunable filter (AOTF) devices, which can provide continuous, digital or hybrid signal time delay and weight selection. The hybrid optical sensors presented in the thesis are extreme environment pressure sensors and dual temperature-pressure sensors. The sensors employ hybrid free-space and fiber-coupled techniques for remotely monitoring a system under simultaneous extremely high temperatures and pressures.
A SIMPLE CELLULAR AUTOMATON MODEL FOR HIGH-LEVEL VEGETATION DYNAMICS
We have produced a simple two-dimensional (ground-plan) cellular automata model of vegetation dynamics specifically to investigate high-level community processes. The model is probabilistic, with individual plant behavior determined by physiologically-based rules derived from a w...
Ahmed, Lubna; de Fockert, Jan W
2012-10-01
Selective attention to relevant targets has been shown to depend on the availability of working memory (WM). Under conditions of high WM load, processing of irrelevant distractors is enhanced. Here we showed that this detrimental effect of WM load on selective attention efficiency is reversed when the task requires global- rather than local-level processing. Participants were asked to attend to either the local or the global level of a hierarchical Navon stimulus while keeping either a low or a high load in WM. In line with previous findings, during attention to the local level, distractors at the global level produced more interference under high than under low WM load. By contrast, loading WM had the opposite effect of improving selective attention during attention to the global level. The findings demonstrate that the impact of WM load on selective attention is not invariant, but rather is dependent on the level of the to-be-attended information.
Preliminary technical data summary No. 3 for the Defense Waste Processing Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landon, L.F.
1980-05-01
This document presents an update on the best information presently available for the purpose of establishing the basis for the design of a Defense Waste Processing Facility. The objective of this project is to provide a facility to fix the radionuclides present in Savannah River Plant (SRP) high-level liquid waste in a high-integrity form (glass). Flowsheets and material balances reflect the alternate CAB case, including the incorporation of low-level supernate in concrete. (DLC)
Anger Assessment in Rural High School Students
ERIC Educational Resources Information Center
Lamb, Jacqueline M.; Puskar, Kathryn R.; Sereika, Susan; Patterson, Kathy; Kaufmann, Judith A.
2003-01-01
Anger and aggression in school children are a major concern in American society today. Students with high anger levels and poor cognitive processing skills are at risk for poor relationships, underachievement in school, and health problems. This article describes characteristics of children who are at risk for high anger levels and aggression as…
NASA Astrophysics Data System (ADS)
Benkrid, K.; Belkacemi, S.; Sukhsawas, S.
2005-06-01
This paper proposes an integrated framework for the high level design of high performance signal processing algorithms' implementations on FPGAs. The framework emerged from a constant need to rapidly implement increasingly complicated algorithms on FPGAs while maintaining the high performance needed in many real time digital signal processing applications. This is particularly important for application developers who often rely on iterative and interactive development methodologies. The central idea behind the proposed framework is to dynamically integrate high performance structural hardware description languages with higher level hardware languages in order to help satisfy the dual requirement of high level design and high performance implementation. The paper illustrates this by integrating two environments: Celoxica's Handel-C language, and HIDE, a structural hardware environment developed at the Queen's University of Belfast. On the one hand, Handel-C has been proven to be very useful in the rapid design and prototyping of FPGA circuits, especially control intensive ones. On the other hand, HIDE has been used extensively, and successfully, in the generation of highly optimised parameterisable FPGA cores. In this paper, this is illustrated in the construction of a scalable and fully parameterisable core for image algebra's five core neighbourhood operations, where fully floorplanned efficient FPGA configurations, in the form of EDIF netlists, are generated automatically for instances of the core. In the proposed combined framework, highly optimised data paths are invoked dynamically from within Handel-C, and are synthesized using HIDE. Although the idea might seem simple prima facie, it could have serious implications for the design of future generations of hardware description languages.
Characteristics of Friction Stir Processed UHMW Polyethylene Based Composite
NASA Astrophysics Data System (ADS)
Hussain, G.; Khan, I.
2018-01-01
Ultra-high molecular weight polyethylene (UHMWPE) based composites are widely used in biomedical and food industries because of their biocompatibility and enhanced properties. The aim of this study was to fabricate a UHMWPE/nHA composite through heat-assisted Friction Stir Processing. The rotational speed (ω), feed rate (f), volume fraction of nHA (v) and shoulder temperature (T) were selected as the process parameters. Macroscopic and microscopic analysis revealed that these parameters have significant effects on the distribution of the reinforcing material, defect formation and material mixing. Defects were observed especially at low levels of (ω, T) and high levels of (f, v). A low level of v with medium levels of the other parameters resulted in better mixing and minimal defects. A 10% increase in strength with only a 1% reduction in percent elongation was observed at this set of conditions. Moreover, the resulting hardness of the composite was higher than that of the parent material.
Neural correlates of depth of strategic reasoning in medial prefrontal cortex
Coricelli, Giorgio; Nagel, Rosemarie
2009-01-01
We used functional MRI (fMRI) to investigate human mental processes in a competitive interactive setting—the “beauty contest” game. This game is well-suited for investigating whether and how a player's mental processing incorporates the thinking processes of others in strategic reasoning. We apply a cognitive hierarchy model to classify subjects' choices in the experimental game according to the degree of strategic reasoning so that we can identify the neural substrates of different levels of strategizing. According to this model, high-level reasoners expect the others to behave strategically, whereas low-level reasoners choose based on the expectation that others will choose randomly. The data show that high-level reasoning and a measure of strategic IQ (related to winning in the game) correlate with the neural activity in the medial prefrontal cortex, demonstrating its crucial role in successful mentalizing. This supports a cognitive hierarchy model of human brain and behavior. PMID:19470476
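A toy numerical illustration of the cognitive hierarchy ("level-k") idea in a p-beauty contest may help; this is my own sketch with an assumed target fraction p = 2/3 and a level-0 mean of 50, not the authors' experimental parameters:

```python
# Toy simulation of the cognitive-hierarchy ("level-k") idea in a p-beauty
# contest with target p = 2/3: a level-0 player guesses at random around the
# midpoint, and each higher level best-responds to the level below it.
P = 2.0 / 3.0

def level_k_guess(k, level0_mean=50.0):
    """Guess of a level-k reasoner in the [0, 100] beauty contest."""
    guess = level0_mean
    for _ in range(k):
        guess *= P            # best response to the previous level's mean guess
    return guess

for k in range(5):
    print(f"level {k}: guess = {level_k_guess(k):.1f}")
```

Higher-level reasoners converge toward the game-theoretic equilibrium of zero, which is the sense in which choices can be ordered by depth of strategic reasoning.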
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1982-09-01
The U.S. Department of Energy (DOE) is considering the selection of a strategy for the long-term management of the defense high-level wastes at the Idaho Chemical Processing Plant (ICPP). This report describes the environmental impacts of alternative strategies. These alternative strategies include leaving the calcine in its present form at the Idaho National Engineering Laboratory (INEL), or retrieving and modifying the calcine to a more durable waste form and disposing of it either at the INEL or in an offsite repository. This report addresses only the alternatives for a program to manage the high-level waste generated at the ICPP. 24 figures, 60 tables.
Holmberg, Leif
2007-11-01
A health-care organization simultaneously belongs to two different institutional value patterns: a professional and an administrative value pattern. At the administrative level, medical problem-solving processes are generally perceived as the efficient application of familiar chains of activities to well-defined problems, and a low task uncertainty is therefore assumed at the work-floor level. This assumption is further reinforced through clinical pathways and other administrative guidelines. However, studies have shown that in clinical practice such administrative guidelines are often considered inadequate and difficult to implement, mainly because physicians generally perceive task uncertainty to be high and consider that the guidelines do not cover the scope of encountered deviations. The current administrative-level guidelines impose uniform structural features that meet the requirement for low task uncertainty. Within these structural constraints, physicians must organize medical problem-solving processes to meet any task uncertainty that may be encountered. Medical problem-solving processes with low task uncertainty need to be organized independently of processes with high task uncertainty. Each process must be evaluated according to different performance standards and needs to have autonomous administrative guideline models. Although clinical pathways seem appropriate when there is low task uncertainty, other kinds of guidelines are required when the task uncertainty is high.
Attention to Hierarchical Level Influences Attentional Selection of Spatial Scale
ERIC Educational Resources Information Center
Flevaris, Anastasia V.; Bentin, Shlomo; Robertson, Lynn C.
2011-01-01
Ample evidence suggests that global perception may involve low spatial frequency (LSF) processing and that local perception may involve high spatial frequency (HSF) processing (Shulman, Sullivan, Gish, & Sakoda, 1986; Shulman & Wilson, 1987; Robertson, 1996). It is debated whether SF selection is a low-level mechanism associating global…
ERIC Educational Resources Information Center
Hayward, Dana A.; Shore, David I.; Ristic, Jelena; Kovshoff, Hanna; Iarocci, Grace; Mottron, Laurent; Burack, Jacob A.
2012-01-01
We utilized a hierarchical figures task to determine the default level of perceptual processing and the flexibility of visual processing in a group of high-functioning young adults with autism (n = 12) and a group of typically developing young adults matched by chronological age and IQ (n = 12). In one task, participants attended to one level of the…
ERIC Educational Resources Information Center
Krebs, Saskia Susanne; Roebers, Claudia Maria
2012-01-01
This multi-phase study examined the influence of retrieval processes on children's metacognitive processes in relation to and in interaction with achievement level and age. First, N = 150 9/10- and 11/12-year old high and low achievers watched an educational film and predicted their test performance. Children then solved a cloze test regarding the…
High-powered CO2-lasers and noise control
NASA Astrophysics Data System (ADS)
Honkasalo, Antero; Kuronen, Juhani
High-power CO2-lasers are being more and more widely used for welding, drilling and cutting in machine shops. In the near future, different kinds of surface treatments will also become routine practice with laser units. The industries benefitting most from high power lasers will be: the automotive industry, shipbuilding, the offshore industry, the aerospace industry, the nuclear and the chemical processing industries. Metal processing lasers are interesting from the point of view of noise control because the working tool is a laser beam. It is reasonable to suppose that the use of such laser beams will lead to lower noise levels than those connected with traditional metal processing methods and equipment. In the following presentation, the noise levels and possible noise-control problems attached to the use of high-powered CO2-lasers are studied.
Theoretical approaches to lightness and perception.
Gilchrist, Alan
2015-01-01
Theories of lightness, like theories of perception in general, can be categorized as high-level, low-level, and mid-level. However, I will argue that in practice there are only two categories: one-stage mid-level theories, and two-stage low-high theories. Low-level theories usually include a high-level component and high-level theories include a low-level component, the distinction being mainly one of emphasis. Two-stage theories are the modern incarnation of the persistent sensation/perception dichotomy according to which an early experience of raw sensations, faithful to the proximal stimulus, is followed by a process of cognitive interpretation, typically based on past experience. Like phlogiston or the ether, raw sensations seem like they must exist, but there is no clear evidence for them. Proximal stimulus matches are postperceptual, not read off an early sensory stage. Visual angle matches are achieved by a cognitive process of flattening the visual world. Likewise, brightness (luminance) matches depend on a cognitive process of flattening the illumination. Brightness is not the input to lightness; brightness is slower than lightness. Evidence for an early (< 200 ms) mosaic stage is shaky. As for cognitive influences on perception, the many claims tend to fall apart upon close inspection of the evidence. Much of the evidence for the current revival of the 'new look' is probably better explained by (1) a natural desire of (some) subjects to please the experimenter, and (2) the ease of intuiting an experimental hypothesis. High-level theories of lightness are overkill. The visual system does not need to know the amount of illumination, merely which surfaces share the same illumination. This leaves mid-level theories derived from the gestalt school. Here the debate seems to revolve around layer models and framework models. Layer models fit our visual experience of a pattern of illumination projected onto a pattern of reflectance, while framework models provide a better account of illusions and failures of constancy. Evidence for and against these approaches is reviewed.
Friction stir processing on high carbon steel U12
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarasov, S. Yu., E-mail: tsy@ispms.ru; Rubtsov, V. E., E-mail: rvy@ispms.ru; National Research Tomsk Polytechnic University, Tomsk, 634050
2015-10-27
Friction stir processing (FSP) of high carbon steel (U12) samples has been carried out using a milling machine and tools made of cemented tungsten carbide. The FSP tool has been made in the shape of 5×5×1.5 mm. The microstructural characterization of the obtained stir zone and heat affected zone has been carried out. Microhardness at the level of 700 MPa has been obtained in the stir zone, with a microstructure consisting of large grains and a cementite network. This high level of microhardness is explained by a bainitic reaction developing from decarburization of austenitic grains during cementite network formation.
West Valley demonstration project: Alternative processes for solidifying the high-level wastes
NASA Astrophysics Data System (ADS)
Holton, L. K.; Larson, D. E.; Partain, W. L.; Treat, R. L.
1981-10-01
Two pretreatment approaches and several waste form processes for radioactive wastes were selected for evaluation. The two waste treatment approaches were the salt/sludge separation process and the combined waste process. Both terminal and interim waste form processes were studied.
EEG oscillations entrain their phase to high-level features of speech sound.
Zoefel, Benedikt; VanRullen, Rufin
2016-01-01
Phase entrainment of neural oscillations, the brain's adjustment to rhythmic stimulation, is a central component in recent theories of speech comprehension: the alignment between brain oscillations and speech sound improves speech intelligibility. However, phase entrainment to everyday speech sound could also be explained by oscillations passively following the low-level periodicities (e.g., in sound amplitude and spectral content) of auditory stimulation, and not by an adjustment to the speech rhythm per se. Recently, using novel speech/noise mixture stimuli, we have shown that behavioral performance can entrain to speech sound even when high-level features (including phonetic information) are not accompanied by fluctuations in sound amplitude and spectral content. In the present study, we report that neural phase entrainment might underlie our behavioral findings. We observed phase-locking between the electroencephalogram (EEG) and speech sound in response not only to original (unprocessed) speech but also to our constructed "high-level" speech/noise mixture stimuli. Phase entrainment to original speech and speech/noise sound did not differ in the degree of entrainment, but rather in the actual phase difference between the EEG signal and the sound. Phase entrainment was not abolished when speech/noise stimuli were presented in reverse (which disrupts semantic processing), indicating that acoustic (rather than linguistic) high-level features play a major role in the observed neural entrainment. Our results provide further evidence for phase entrainment as a potential mechanism underlying speech processing and segmentation, and for the involvement of high-level processes in the adjustment to the rhythm of speech. Copyright © 2015 Elsevier Inc. All rights reserved.
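One common way to quantify the phase-locking described above is the phase-locking value between an EEG channel and the speech envelope. The following is a minimal sketch on synthetic signals (assumed sampling rate, frequency band and signal shapes), not the authors' analysis pipeline:

```python
# Minimal sketch (not the authors' analysis): phase-locking value between an
# EEG channel and a speech envelope, using a band-pass filter and the Hilbert
# transform. Signals, sampling rate and band are synthetic assumptions.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

np.random.seed(0)
fs = 250.0                                   # Hz, assumed sampling rate
t = np.arange(0, 60, 1 / fs)
speech_env = 0.5 * (1 + np.sin(2 * np.pi * 4 * t)) + 0.1 * np.random.randn(t.size)
eeg = np.sin(2 * np.pi * 4 * t + 0.8) + 0.5 * np.random.randn(t.size)   # synthetic

def bandpass(x, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

phi_eeg = np.angle(hilbert(bandpass(eeg, 2, 8)))
phi_env = np.angle(hilbert(bandpass(speech_env, 2, 8)))
plv = np.abs(np.mean(np.exp(1j * (phi_eeg - phi_env))))   # 0 = no locking, 1 = perfect
print(f"phase-locking value = {plv:.2f}")
```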
SIMULANT DEVELOPMENT FOR SAVANNAH RIVER SITE HIGH LEVEL WASTE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stone, M; Russell Eibling, R; David Koopman, D
2007-09-04
The Defense Waste Processing Facility (DWPF) at the Savannah River Site vitrifies High Level Waste (HLW) for repository interment. The process consists of three major steps: waste pretreatment, vitrification, and canister decontamination/sealing. The HLW consists of insoluble metal hydroxides (primarily iron, aluminum, magnesium, manganese, and uranium) and soluble sodium salts (carbonate, hydroxide, nitrite, nitrate, and sulfate). The HLW is processed in large batches through DWPF; DWPF has recently completed processing Sludge Batch 3 (SB3) and is currently processing Sludge Batch 4 (SB4). The composition of metal species in SB4 is shown in Table 1 as a function of the ratio of each metal to iron. Simulants are prepared by removing radioactive species and renormalizing the remaining species. Supernate composition is shown in Table 2.
Toward large-area roll-to-roll printed nanophotonic sensors
NASA Astrophysics Data System (ADS)
Karioja, Pentti; Hiltunen, Jussi; Aikio, Sanna M.; Alajoki, Teemu; Tuominen, Jarkko; Hiltunen, Marianne; Siitonen, Samuli; Kontturi, Ville; Böhlen, Karl; Hauser, Rene; Charlton, Martin; Boersma, Arjen; Lieberzeit, Peter; Felder, Thorsten; Eustace, David; Haskal, Eliav
2014-05-01
Polymers have become an important material group for fabricating discrete photonic components and integrated optical devices. This is due to their favorable properties: high optical transmittance, versatile processability at relatively low temperatures, and potential for low-cost production. Recently, nanoimprinting, or nanoimprint lithography (NIL), has attracted a great deal of research interest. In NIL, a mold is pressed against a substrate coated with a moldable material. After deformation of the material, the mold is separated and a replica of the mold is formed. Compared with conventional lithographic methods, imprinting is simple to carry out, requires less complicated equipment and can provide high resolution with high throughput. Nanoimprint lithography has shown potential to become a method for low-cost and high-throughput fabrication of nanostructures. We show the development process of nano-structured, large-area multi-parameter sensors using Photonic Crystal (PC) and Surface Enhanced Raman Scattering (SERS) methodologies for environmental and pharmaceutical applications. We address these challenges by developing roll-to-roll (R2R) UV-nanoimprint fabrication methods. Our development steps are the following: Firstly, proof-of-concept structures are fabricated using wafer-level processes in Si-based materials. Secondly, master molds of successful designs are fabricated and used to transfer the nanophotonic structures into polymer materials using sheet-level UV-nanoimprinting. Thirdly, the sheet-level nanoimprinting processes are transferred to roll-to-roll fabrication. In order to enhance roll-to-roll manufacturing capabilities, silicone-based polymer material development was carried out. In the different development phases, Photonic Crystal and SERS sensor structures of increasing complexity were fabricated in polymer materials in order to enhance sheet-level and roll-to-roll manufacturing processes. In addition, chemical and molecular imprint (MIP) functionalization methods were applied in the sensor demonstrators. In this paper, the process flow for fabricating large-area nanophotonic structures by sheet-level and roll-to-roll UV-nanoimprinting is reported.
The Solid Phase Curing Time Effect of Asbuton with Texapon Emulsifier at the Optimum Bitumen Content
NASA Astrophysics Data System (ADS)
Sarwono, D.; Surya D, R.; Setyawan, A.; Djumari
2017-07-01
Buton asphalt (asbuton) has not been utilized optimally in Indonesia. The asbuton utilization rate remains low because the processed product is still impractical to use and requires high processing costs. This research aimed to obtain an asphalt product from asbuton that is practical to use, produced through an extraction process that does not require expensive processing. The research was carried out using an experimental method in the laboratory. The components of emulsified asbuton were asbuton 5/20 grain, premium (gasoline), texapon, HCl, and distilled water. The solid phase was a mixture of asbuton 5/20 grain and premium with a 3-minute mixing time. The liquid phase consisted of texapon, HCl, and distilled water. An aging step was carried out after mixing the solid phase so that the reaction and binding within the solid-phase mixture became more complete, yielding asphalt with a higher solubility level. The aging times were 30, 60, 90, 120, and 150 minutes. The solid and liquid phases were then mixed to produce emulsified asbuton, which was extracted for 25 minutes. Asphalt solubility level, water content, and asphalt characteristics were tested on the extraction product of the emulsified asbuton with the most optimal asphalt content. The analysis showed an asphalt solubility level of 94.77% in the extracted asbuton for the 120-minute aging time. The water content test showed that the water content of the emulsified asbuton decreased with longer solid-phase aging. Examination of the asphalt characteristics of the extracted emulsified asbuton with the optimum asphalt solubility level yielded specimens with a rigid and strong texture, so that the ductility and penetration values did not meet requirements.
Etchepare, Aurore; Prouteau, Antoinette
2018-04-01
Social cognition has received growing interest in many conditions in recent years. However, this construct still suffers from a considerable lack of consensus, especially regarding the dimensions to be studied and the resulting methodology of clinical assessment. Our review aims to clarify the distinctiveness of the dimensions of social cognition. Based on Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statements, a systematic review was conducted to explore the factor structure of social cognition in the adult general and clinical populations. The initial search provided 441 articles published between January 1982 and March 2017. Eleven studies were included, all conducted in psychiatric populations and/or healthy participants. Most studies were in favor of a two-factor solution. Four studies drew a distinction between low-level (e.g., facial emotion/prosody recognition) and high-level (e.g., theory of mind) information processing. Four others reported a distinction between affective (e.g., facial emotion/prosody recognition) and cognitive (e.g., false beliefs) information processing. Interestingly, attributional style was frequently reported as an additional separate factor of social cognition. Results of factor analyses add further support for the relevance of models differentiating level of information processing (low- vs. high-level) from nature of processed information (affective vs. cognitive). These results add to a significant body of empirical evidence from developmental, clinical research and neuroimaging studies. We argue the relevance of integrating low- versus high-level processing with affective and cognitive processing in a two-dimensional model of social cognition that would be useful for future research and clinical practice. (JINS, 2018, 24, 391-404).
García-Capdevila, Sílvia; Portell-Cortés, Isabel; Torras-Garcia, Meritxell; Coll-Andreu, Margalida; Costa-Miserachs, David
2009-09-14
The effect of long-term voluntary exercise (running wheel) on anxiety-like behaviour (plus maze and open field) and on learning and memory processes (object recognition and two-way active avoidance) was examined in Wistar rats. Because major individual differences in running wheel behaviour were observed, the data were analysed considering the exercising animals both as a whole and grouped according to the time spent in the running wheel (low, high, and very-high running). Although some variables related to anxiety-like behaviour seem to reflect an anxiogenic-compatible effect, the complete set of variables can be interpreted as an enhancement of defensive and risk-assessment behaviours in exercised animals, without major differences depending on the exercise level. Effects on learning and memory processes were dependent on task and level of exercise. Two-way avoidance was not affected in either the acquisition or the retention session, whereas retention of the object recognition task was affected: an enhancement in low-running subjects and an impairment in high- and very-high-running animals were observed.
Rachiplusia nu larva as a biofactory to achieve high level expression of horseradish peroxidase.
Romero, Lucía Virginia; Targovnik, Alexandra Marisa; Wolman, Federico Javier; Cascone, Osvaldo; Miranda, María Victoria
2011-05-01
A process based on orally infected Rachiplusia nu larvae as biological factories for the expression and one-step purification of horseradish peroxidase isozyme C (HRP-C) is described. The process allows high levels of pure HRP-C to be obtained by membrane chromatography purification. The introduction of the partial polyhedrin homology sequence element in the target gene increased the HRP-C expression level by 2.8-fold, whereas it increased 1.8-fold when the larvae were reared at 27 °C instead of at 24 °C, summing up to a 4.6-fold overall increase in the expression level. Additionally, HRP-C purification by membrane chromatography at a high flow rate greatly increased the productivity without affecting the resolution. The V(max) and K(m) values of the recombinant HRP-C were similar to those of the HRP from Armoracia rusticana roots. © Springer Science+Business Media B.V. 2011
Intelligent Processing Equipment Research Supported by the National Science Foundation
NASA Technical Reports Server (NTRS)
Rao, Suren B.
1992-01-01
The research in progress on processes, workstations, and systems has the goal of developing a high level of understanding of the issues involved. This will enable the incorporation of a level of intelligence that will allow the creation of autonomous manufacturing systems that operate in an optimum manner, under a wide range of conditions. The emphasis of the research has been on the development of highly productive and flexible techniques to address current and future problems in manufacturing and processing. Several of these projects have resulted in well-defined and established models that can now be implemented in the application arena in the next few years.
Geers, Ann E; Hayes, Heather
2011-02-01
This study had three goals: (1) to document the literacy skills of deaf adolescents who received cochlear implants (CIs) as preschoolers; (2) to examine reading growth from elementary grades to high school; (3) to assess the contribution of early literacy levels and phonological processing skills, among other factors, to literacy levels in high school. A battery of reading, spelling, expository writing, and phonological processing assessments was administered to 112 high school (CI-HS) students, ages 15.5 to 18.5 yrs, who had participated in a reading assessment battery in early elementary grades (CI-E), ages 8.0 to 9.9 yrs. The CI-HS students' performance was compared with either a control group of hearing peers (N = 46) or hearing norms provided by the assessment developer. Many of the CI-HS students (47 to 66%) performed within or above the average range for hearing peers on reading tests. When compared with their CI-E performance, good early readers were also good readers in high school. Importantly, the majority of CI-HS students maintained their reading levels over time compared with hearing peers, indicating that the gap in performance was, at the very least, not widening for most students. Written expression and phonological processing tasks posed a great deal of difficulty for the CI-HS students. They were poorer spellers, poorer expository writers, and displayed poorer phonological knowledge than hearing age-mates. Phonological processing skills were a critical predictor of high school literacy skills (reading, spelling, and expository writing), accounting for 39% of variance remaining after controlling for child, family, and implant characteristics. Many children who receive CIs as preschoolers achieve age-appropriate literacy levels as adolescents. However, significant delays in spelling and written expression are evident compared with hearing peers. For children with CIs, the development of phonological processing skills is not just important for early reading skills, such as decoding, but is critical for later literacy success as well.
NASA Astrophysics Data System (ADS)
Huang, J. C.; Wright, W. V.
1982-04-01
The Defense Waste Processing Facility (DWPF) for immobilizing nuclear high level waste (HLW) is scheduled to be built. High level waste is produced when reactor components are subjected to chemical separation operations. Two candidates for immobilizing this HLW are borosilicate glass and crystalline ceramic, either being contained in weld sealed stainless steel canisters. A number of technical analyses are being conducted to support a selection between these two waste forms. The risks associated with the manufacture and interim storage of these two forms in the DWPF are compared. Process information used in the risk analysis was taken primarily from a DWPF processibility analysis. The DWPF environmental analysis provided much of the necessary environmental information.
ERIC Educational Resources Information Center
Koolen, Sophieke; Vissers, Constance Th. W. M.; Hendriks, Angelique W. C. J.; Egger, Jos I. M.; Verhoeven, Ludo
2012-01-01
This study examined the hypothesis of an atypical interaction between attention and language in ASD. A dual-task experiment with three conditions was designed, in which sentences were presented that contained errors requiring attentional focus either at (a) low level, or (b) high level, or (c) both levels of language. Speed and accuracy for error…
Beigneux, Anne P.; Davies, Brandon S. J.; Gin, Peter; Weinstein, Michael M.; Farber, Emily; Qiao, Xin; Peale, Franklin; Bunting, Stuart; Walzem, Rosemary L.; Wong, Jinny S.; Blaner, William S.; Ding, Zhi-Ming; Melford, Kristan; Wongsiriroj, Nuttaporn; Shu, Xiao; de Sauvage, Fred; Ryan, Robert O.; Fong, Loren G.; Bensadoun, André; Young, Stephen G.
2007-01-01
The triglycerides in chylomicrons are hydrolyzed by lipoprotein lipase (LpL) along the luminal surface of the capillaries. However, the endothelial cell molecule that facilitates chylomicron processing by LpL has not yet been defined. Here, we show that glycosylphosphatidylinositol-anchored high density lipoprotein–binding protein 1 (GPIHBP1) plays a critical role in the lipolytic processing of chylomicrons. Gpihbp1-deficient mice exhibit a striking accumulation of chylomicrons in the plasma, even on a low-fat diet, resulting in milky plasma and plasma triglyceride levels as high as 5,000 mg/dl. Normally, Gpihbp1 is expressed highly in heart and adipose tissue, the same tissues that express high levels of LpL. In these tissues, GPIHBP1 is located on the luminal face of the capillary endothelium. Expression of GPIHBP1 in cultured cells confers the ability to bind both LpL and chylomicrons. These studies strongly suggest that GPIHBP1 is an important platform for the LpL-mediated processing of chylomicrons in capillaries. PMID:17403372
Process for measuring low cadmium levels in blood and other biological specimens
Peterson, David P.; Huff, Edmund A.; Bhattacharyya, Maryka H.
1994-01-01
A process is provided for measuring low levels of cadmium in blood and other biological specimens without interference from high levels of alkali metal contaminants and without contamination by environmental cadmium. The process comprises forming an aqueous solution free of the proteins of the specimen, selectively removing cadmium from the aqueous solution on an anion exchange resin (thereby removing the alkali metal contaminants), resolubilizing cadmium from the resin to form a second solution, and analyzing the second solution for cadmium, the entire process being carried out in a cadmium-free environment.
NASA Astrophysics Data System (ADS)
Douch, Karim; Panet, Isabelle; Foulon, Bernard; Christophe, Bruno; Pajot-Métivier, Gwendoline; Diament, Michel
2014-05-01
Satellite missions such as CHAMP, GRACE and GOCE have led to an unprecedented improvement of global gravity field models during the past decade. However, for many applications these global models are not sufficiently accurate when dealing with wavelengths shorter than 100 km. This is all the more true in areas where gravity data are scarce and uneven, as for instance in the poorly covered land-sea transition area. We suggest here, in line with spaceborne gravity gradiometry, airborne gravity gradiometry as a convenient way to amplify the sensitivity to short wavelengths and to cover coastal regions homogeneously. Moreover, the directionality of the gravity gradients gives new information on the geometry of the gravity field and therefore of the causative bodies. In this respect, we analyze here the performance of a new airborne electrostatic acceleration gradiometer, GREMLIT, which, together with ancillary measurements, permits determination of the horizontal gradients of the horizontal components of the gravitational field in the instrumental frame. GREMLIT is composed of a compact assembly of 4 planar electrostatic accelerometers inheriting from technologies developed by ONERA for space accelerometers. After an overview of the functionals of the gravity field that are of interest for coastal oceanography, passive navigation and hydrocarbon exploration, we present the corresponding required precision and resolution. Then, we investigate the influence of the different parameters of the survey, such as altitude or cross-track distance, on the resolution and precision of the final measurements. To do so, we design numerical simulations of an airborne survey performed with GREMLIT and compute the total error budget on the gravity gradients. Based on this error analysis, we infer by error propagation the uncertainty on the different functionals of the gravity potential used for each application. This finally enables us to conclude on the requirements for a high resolution mapping of the gravity field in coastal areas.
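The error-propagation step mentioned above can be illustrated schematically. The sketch below (a minimal Python example with hypothetical matrices, not the GREMLIT error budget) propagates a measurement-error covariance through a linear functional of the observations using the standard rule C_f = J C J^T; all numbers and sensitivities are illustrative assumptions.

import numpy as np

# Hypothetical example: 3 gravity-gradient observations with an assumed
# error covariance, mapped to 2 derived functionals by a linear operator J
# (the Jacobian of the functionals with respect to the observations).
C_obs = np.diag([0.5, 0.5, 0.8])      # assumed observation error covariance
J = np.array([[1.0, -0.5, 0.2],       # assumed sensitivities (illustrative)
              [0.3,  0.7, -0.1]])

# Standard linear error propagation: C_f = J C J^T
C_f = J @ C_obs @ J.T
sigma_f = np.sqrt(np.diag(C_f))       # 1-sigma uncertainty of each functional
print(sigma_f)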
GRAV-D for Puerto Rico and the U.S. Virgin Islands
NASA Astrophysics Data System (ADS)
Roman, D. R.; Li, X.; Smith, D. A.; Geoid; GRAV-D Teams
2013-05-01
NOAA's National Geodetic Survey began the Gravity for the Redefinition of the American Vertical Datum (GRAV-D) program in an effort to modernize and unify vertical datums in all states and territories. As a part of this program, NGS collected aerogravity profiles over the islands of Puerto Rico and the U.S. Virgin Islands in January 2009. A Citation II aircraft was equipped with an airborne gravimeter, a GPS receiver, and a GPS/Inertial unit. Absolute gravity and GPS ties were made to multiple ground sites to ensure consistency in the results. The main survey covered a region of approximately 400 km by 500 km with flight altitudes of 10,668 m (35,000 ft) and 10 km track spacing. Cross-track profiles at 40 km spacing were also collected to establish an accuracy of 1.34 mGal RMSE. In addition to the high altitude flights, two more flights were made, primarily over terrestrial areas, at 1,524 m (5,000 ft) to obtain higher resolution information in these regions. There were no cross-ties established for these lower altitude flights. Additionally, terrestrial surveys were conducted to better tie ground sites and to serve as control for later analysis of available but older terrestrial and marine gravity data in the region already held by NGS. The aerogravity data were analyzed and, at a minimum, internally compared to obtain optimal results before being published on the web. In this study, the aerogravity data were compared to available global gravity models derived from satellite missions (GRACE & GOCE) to evaluate their long wavelength character (e.g., potential biases and trends). The vetted satellite-aerogravity data were then combined and used to evaluate surface data (terrestrial and marine) in the region to remove any potential systematic effects. Finally, all these data were combined into a gravimetric geoid height model and evaluated with an eye to eventual use as a GNSS-accessed vertical datum.
"Assessment Drives Learning": Do Assessments Promote High-Level Cognitive Processing?
ERIC Educational Resources Information Center
Bezuidenhout, M. J.; Alt, H.
2011-01-01
Students tend to learn in the way they know, or think, they will be assessed. Therefore, to ensure deep, meaningful learning, assessments must be geared to promote cognitive processing that requires complex, contextualised thinking to construct meaning and create knowledge. Bloom's taxonomy of cognitive levels is used worldwide to assist in…
NASA Astrophysics Data System (ADS)
Witantyo; Setyawan, David
2018-03-01
In the lead-acid battery industry, grid casting is a process with high levels of defects and thickness variation. The DMAIC (Define-Measure-Analyse-Improve-Control) method and its tools were used to improve the casting process. In the Define stage, a project charter and the SIPOC (Supplier Input Process Output Customer) method were used to map the existing problem. In the Measure stage, data were collected on the types and number of defects and on the grid thickness variation; the data were then processed and analyzed using the 5 Whys and FMEA methods. In the Analyze stage, grids exhibiting fragile and crack-type defects were examined under a microscope, which revealed Pb oxide inclusions in the grid. Analysis of the grid casting process showed an excessively large temperature difference between the molten metal and the mold, as well as a corking process lacking a standard. In the Improve stage, corrective actions reduced the grid thickness variation and lowered the defect-per-unit level from 9.184% to 0.492%. In the Control stage, a new working standard was established and the improved process was placed under control.
Effective low-level processing for interferometric image enhancement
NASA Astrophysics Data System (ADS)
Joo, Wonjong; Cha, Soyoung S.
1995-09-01
The hybrid operation of digital image processing and a knowledge-based AI system has been recognized as a desirable approach to the automated evaluation of noise-ridden interferograms. Early noise/data reduction, before the phase is extracted, is essential for the success of the knowledge-based processing. In this paper, new concepts for effective, interactive low-level processing operators, namely a background-matched filter and a directional-smoothing filter, are developed and tested on transonic aerodynamic interferograms. The results indicate that these new operators have promising advantages in noise/data reduction over conventional ones, leading to successful high-level, intelligent phase extraction.
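As an illustration of the kind of low-level operator described above, the following Python sketch implements a simple directional-smoothing step: each pixel is averaged only along the locally dominant fringe direction, estimated from the image gradient. This is a generic sketch under assumed parameters, not the operator used by the authors.

import numpy as np

def directional_smooth(img, length=5):
    """Average each pixel along the locally dominant fringe direction.

    Generic sketch: the fringe direction is taken perpendicular to the
    local intensity gradient; 'length' samples are averaged along it.
    """
    gy, gx = np.gradient(img.astype(float))
    theta = np.arctan2(gy, gx) + np.pi / 2.0   # direction along the fringes
    out = np.zeros_like(img, dtype=float)
    rows, cols = img.shape
    offsets = np.arange(length) - length // 2
    for i in range(rows):
        for j in range(cols):
            di = np.round(offsets * np.sin(theta[i, j])).astype(int)
            dj = np.round(offsets * np.cos(theta[i, j])).astype(int)
            ii = np.clip(i + di, 0, rows - 1)
            jj = np.clip(j + dj, 0, cols - 1)
            out[i, j] = img[ii, jj].mean()
    return out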
Stevenson, Ryan A; Sun, Sol Z; Hazlett, Naomi; Cant, Jonathan S; Barense, Morgan D; Ferber, Susanne
2018-04-01
Atypical sensory perception is one of the most ubiquitous symptoms of autism, including a tendency towards a local-processing bias. We investigated whether local-processing biases were associated with global-processing impairments on a global/local attentional-scope paradigm in conjunction with a composite-face task. Behavioural results were related to individuals' levels of autistic traits, specifically the Attention to Detail subscale of the Autism Quotient, and the Sensory Profile Questionnaire. Individuals showing high rates of Attention to Detail were more susceptible to global attentional-scope manipulations, suggesting that local-processing biases associated with Attention to Detail do not come at the cost of a global-processing deficit, but reflect a difference in default global versus local bias. This relationship operated at the attentional/perceptual level, but not response criterion.
Denys, S; Van Loey, A M; Hendrickx, M E
2000-01-01
A numerical heat transfer model for predicting product temperature profiles during high-pressure thawing processes was recently proposed by the authors. In the present work, the predictive capacity of the model was considerably improved by taking into account the pressure dependence of the latent heat of the product that was used (Tylose). The effect of pressure on the latent heat of Tylose was experimentally determined by a series of freezing experiments conducted at different pressure levels. By combining a numerical heat transfer model for freezing processes with a least sum of squares optimization procedure, the corresponding latent heat at each pressure level was estimated, and the obtained pressure relation was incorporated in the original high-pressure thawing model. Excellent agreement with the experimental temperature profiles for both high-pressure freezing and thawing was observed.
Shifts in information processing level: the speed theory of intelligence revisited.
Sircar, S S
2000-06-01
A hypothesis is proposed here to reconcile the inconsistencies observed in the IQ-P3 latency relation. The hypothesis stems from the observation that task-induced increase in P3 latency correlates positively with IQ scores. It is hypothesised that: (a) there are several parallel information processing pathways of varying complexity which are associated with the generation of P3 waves of varying latencies; (b) with increasing workload, there is a shift in the 'information processing level' through progressive recruitment of more complex polysynaptic pathways with greater processing power and inhibition of the oligosynaptic pathways; (c) high-IQ subjects have a greater reserve of higher level processing pathways; (d) a given 'task-load' imposes a greater 'mental workload' in subjects with lower IQ than in those with higher IQ. According to this hypothesis, a meaningful comparison of the P3 correlates of IQ is possible only when the information processing level is pushed to its limits.
Contamination pathways of spore-forming bacteria in a vegetable cannery.
Durand, Loïc; Planchon, Stella; Guinebretiere, Marie-Hélène; André, Stéphane; Carlin, Frédéric; Remize, Fabienne
2015-06-02
Spoilage of low-acid canned food during prolonged storage at high temperatures is caused by heat resistant thermophilic spores of strict or facultative bacteria. Here, we performed a bacterial survey over two consecutive years on the processing line of a French company manufacturing canned mixed green peas and carrots. In total, 341 samples were collected, including raw vegetables, green peas and carrots at different steps of processing, cover brine, and process environment samples. Thermophilic and highly-heat-resistant thermophilic spores growing anaerobically were counted. During vegetable preparation, anaerobic spore counts were significantly decreased, and tended to remain unchanged further downstream in the process. Large variation of spore levels in products immediately before the sterilization process could be explained by occasionally high spore levels on surfaces and in debris of vegetable combined with long residence times in conditions suitable for growth and sporulation. Vegetable processing was also associated with an increase in the prevalence of highly-heat-resistant species, probably due to cross-contamination of peas via blanching water. Geobacillus stearothermophilus M13-PCR genotypic profiling on 112 isolates determined 23 profile-types and confirmed process-driven cross-contamination. Taken together, these findings clarify the scheme of contamination pathway by thermophilic spore-forming bacteria in a vegetable cannery. Copyright © 2015 Elsevier B.V. All rights reserved.
Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi
1989-01-01
Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues in parallel architectures and parallel algorithms for integrated vision systems are addressed.
Small numbers are sensed directly, high numbers constructed from size and density.
Zimmermann, Eckart
2018-04-01
Two theories compete to explain how we estimate the numerosity of visual object sets. The first suggests that apparent numerosity is derived from an analysis of more low-level features such as the size and density of the set. The second theory suggests that numbers are sensed directly. Consistent with the latter claim is the existence of neurons in parietal cortex which are specialized for processing the numerosity of elements in the visual scene. However, recent evidence suggests that only low numbers can be sensed directly, whereas the perception of high numbers is supported by the analysis of low-level features. Processing of low and high numbers, being located at different levels of the neural hierarchy, should involve different receptive field sizes. Here, I tested this idea with visual adaptation. I measured the spatial spread of number adaptation for low and high numerosities. A focused adaptation spread for high numerosities suggested the involvement of early neural levels, where receptive fields are comparatively small, whereas the broad spread for low numerosities was consistent with processing by number neurons, which have larger receptive fields. These results provide evidence for the claim that different mechanisms exist for generating the perception of visual numerosity. Whereas low numbers are sensed directly as a primary visual attribute, the estimation of high numbers likely depends on the area size over which the objects are spread. Copyright © 2017 Elsevier B.V. All rights reserved.
Ultra-processed foods and the limits of product reformulation.
Scrinis, Gyorgy; Monteiro, Carlos Augusto
2018-01-01
The nutritional reformulation of processed food and beverage products has been promoted as an important means of addressing the nutritional imbalances in contemporary dietary patterns. The focus of most reformulation policies is the reduction in quantities of nutrients-to-limit - Na, free sugars, SFA, trans-fatty acids and total energy. The present commentary examines the limitations of what we refer to as 'nutrients-to-limit reformulation' policies and practices, particularly when applied to ultra-processed foods and drink products. Beyond these nutrients-to-limit, there are a range of other potentially harmful processed and industrially produced ingredients used in the production of ultra-processed products that are not usually removed during reformulation. The sources of nutrients-to-limit in these products may be replaced with other highly processed ingredients and additives, rather than with whole or minimally processed foods. Reformulation policies may also legitimise current levels of consumption of ultra-processed products in high-income countries and increased levels of consumption in emerging markets in the global South.
Wang, Fan; Du, Bao-Lei; Cui, Zheng-Wei; Xu, Li-Ping; Li, Chun-Yang
2017-03-01
The aim of this study was to investigate the effects of high hydrostatic pressure and thermal processing on the microbiological quality, bioactive compounds, antioxidant activity, and volatile profile of mulberry juice. High hydrostatic pressure processing at 500 MPa for 10 min reduced the total viable count from 4.38 log cfu/ml to a nondetectable level and completely inactivated yeasts and molds in raw mulberry juice, ensuring microbiological safety equivalent to thermal processing at 85 ℃ for 15 min. High hydrostatic pressure processing maintained significantly (p < 0.05) higher contents of total phenolics, total flavonoids and resveratrol, and higher antioxidant activity of mulberry juice than thermal processing. The main volatile compounds of mulberry juice were aldehydes, alcohols, and ketones. High hydrostatic pressure processing enhanced the volatile compound concentrations of mulberry juice while thermal processing reduced them in comparison with the control. These results suggested that high hydrostatic pressure processing could be an alternative to conventional thermal processing for the production of high-quality mulberry juice.
Pacis, Efren; Yu, Marcella; Autsen, Jennifer; Bayer, Robert; Li, Feng
2011-10-01
The glycosylation profile of therapeutic antibodies is routinely analyzed throughout development to monitor the impact of process parameters and to ensure consistency, efficacy, and safety for clinical and commercial batches of therapeutic products. In this study, unusually high levels of the mannose-5 (Man5) glycoform were observed during the early development of a therapeutic antibody produced from a Chinese hamster ovary (CHO) cell line, model cell line A. Follow-up studies indicated that the antibody Man5 level increased throughout the course of cell culture production as a result of increasing cell culture medium osmolality levels and extended culture duration. With model cell line A, Man5 glycosylation increased more than twofold, from 12% to 28%, in the fed-batch process through a combination of high basal and feed media osmolality and increased run duration. The osmolality and culture duration effects were also observed for four other CHO antibody-producing cell lines by adding NaCl to both basal and feed media and extending the culture duration of the cell culture process. Moreover, reduction of the Man5 level from model cell line A was achieved by supplementing MnCl2 at appropriate concentrations. To further understand the role of glycosyltransferases in the Man5 level, N-acetylglucosaminyltransferase I (GnT-I) mRNA levels at different osmolality conditions were measured. It has been hypothesized that specific enzyme activity in the glycosylation pathway could have been altered in this fed-batch process. Copyright © 2011 Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Adu-Gyamfi, Kenneth; Ampiah, Joseph Ghartey
2016-01-01
Science education at the Basic School (Primary and Junior High School) serves as the foundation upon which higher levels of science education are pivoted. This ethnographic study sought to investigate the teaching of Integrated Science at the Junior High School (JHS) level in the classrooms of two science teachers in two schools of differing…
Theron, Chrispian W; Berrios, Julio; Delvigne, Frank; Fickers, Patrick
2018-01-01
The methylotrophic yeast Komagataella (Pichia) pastoris has become one of the most utilized cell factories for the production of recombinant proteins over the last three decades. This success story is linked to its specific physiological traits, i.e., the ability to grow at high cell density in inexpensive culture medium and to secrete proteins at high yield. Exploiting methanol metabolism is at the core of most P. pastoris-based processes but comes with its own challenges. Co-feeding cultures with glycerol/sorbitol and methanol is a promising approach, which can benefit from improved understanding and prediction of the metabolic response. The development of profitable processes relies on the construction and selection of efficient producing strains from less efficient ones, but also depends on the ability to master the bioreactor process itself; more specifically, on how bioreactor processes can be monitored and controlled to obtain high production yields. In this review, new perspectives are detailed regarding a multi-faceted approach to recombinant protein production processes by P. pastoris, including gaining improved understanding of the metabolic pathways involved, accounting for variations in transcriptional and translational efficiency at the single cell level, and efficient monitoring and control of methanol levels at the bioreactor level.
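As a schematic illustration of closed-loop methanol control at the bioreactor level, the Python sketch below implements a minimal proportional controller that adjusts the feed rate toward an assumed setpoint. All parameter names and values are invented for illustration and do not describe any specific published control strategy.

def methanol_feed_rate(measured_gL, setpoint_gL=3.0, base_rate_mLh=10.0,
                       kp=5.0, min_rate=0.0, max_rate=50.0):
    """Minimal proportional controller for methanol feeding (illustrative).

    measured_gL : current methanol concentration (g/L) from an online sensor.
    Returns a feed rate in mL/h, clamped to an assumed pump working range.
    """
    error = setpoint_gL - measured_gL
    rate = base_rate_mLh + kp * error
    return max(min_rate, min(max_rate, rate))

# Example: a sensor reading of 2.4 g/L methanol leads to an increased feed rate.
print(methanol_feed_rate(2.4))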
Cappa, Carola; Lucisano, Mara; Barbosa-Cánovas, Gustavo V; Mariotti, Manuela
2016-07-01
The impact of high pressure (HP) processing on corn starch, rice flour and waxy rice flour was investigated as a function of pressure level (400 MPa; 600 MPa), pressure holding time (5 min; 10 min), and temperature (20 °C; 40 °C). Samples were pre-conditioned (final moisture level: 40 g/100 g) before HP treatments. Both the HP-treated and the untreated raw materials were evaluated for pasting properties and solvent retention capacity, and investigated by differential scanning calorimetry, X-ray diffractometry and environmental scanning electron microscopy. Different pasting behaviors and solvent retention capacities were evidenced according to the applied pressure. Corn starch presented a slower gelatinization trend when treated at 600 MPa. Corn starch and rice flour treated at 600 MPa showed a higher retention capacity of carbonate and lactic acid solvents, respectively. Differential scanning calorimetry and environmental scanning electron microscopy investigations highlighted that HP affected the starch structure of rice flour and corn starch. Few variations were evidenced in waxy rice flour. These results can assist in advancing HP processing knowledge, as the possibility of successfully processing raw samples at a very high sample-to-water concentration was demonstrated. This work investigates the effect of high pressure as a potential technique to modify the processing characteristics of starchy materials without using high temperature. In this case the starches were processed in powder form - and not as a slurry as in previously reported studies - showing the flexibility of the HP treatment. The relevance for industrial application is the possibility of changing the structure of flour starches, and thus modifying the processability of the mentioned products. Copyright © 2016 Elsevier Ltd. All rights reserved.
Atomic Processes for XUV Lasers: Alkali Atoms and Ions
NASA Astrophysics Data System (ADS)
Dimiduk, David Paul
The development of extreme ultraviolet (XUV) lasers is dependent upon knowledge of processes in highly excited atoms. Described here are spectroscopy experiments which have identified and characterized certain autoionizing energy levels in core-excited alkali atoms and ions. Such levels, termed quasi-metastable, have desirable characteristics as upper levels for efficient, powerful XUV lasers. Quasi-metastable levels are among the most intense emission lines in the XUV spectra of core-excited alkalis. Laser experiments utilizing these levels have proved to be useful in characterizing other core-excited levels. Three experiments to study quasi-metastable levels are reported. The first experiment is vacuum ultraviolet (VUV) absorption spectroscopy on the Cs 109 nm transitions using high-resolution laser techniques. This experiment confirms the identification of transitions to a quasi-metastable level, estimates transition oscillator strengths, and estimates the hyperfine splitting of the quasi-metastable level. The second experiment, XUV emission spectroscopy of Ca II and Sr II in a microwave-heated plasma, identifies transitions from quasi-metastable levels in these ions, and provides confirming evidence of their radiative, rather than autoionizing, character. In the third experiment, core-excited Ca II ions are produced by inner-shell photoionization of Ca with soft x-rays from a laser-produced plasma. This preliminary experiment demonstrated a method of creating large numbers of these highly-excited ions for future spectroscopic experiments. Experimental and theoretical evidence suggests the Ca II 3p^5 3d4s ^4F°_{3/2} quasi-metastable level may be directly pumped via a dipole ionization process from the Ca I ground state. The direct process is permitted by J conservation, and occurs due to configuration mixing in the final state and possibly the initial state as well. The experiments identifying and characterizing quasi-metastable levels are compared to calculations using the Hartree-Fock code RCN/RCG. Calculated parameters include energy levels, wavefunctions, and transition rates. Based on an extension of this code, earlier unexplained experiments showing strong two-electron radiative transitions from quasi-metastable levels are now understood.
NASA Astrophysics Data System (ADS)
Pitoňák, Martin; Šprlák, Michal; Hamáčková, Eliška; Novák, Pavel
2016-04-01
Regional recovery of the disturbing gravitational potential in the area of Central Europe from satellite gravitational gradient data is discussed in this contribution. The disturbing gravitational potential is obtained by inverting surface integral formulas which transform the disturbing gravitational potential onto disturbing gravitational gradients in the spherical local north-oriented frame. Two numerical approaches that solve the inverse problem are considered. In the first approach, the integral formulas are rigorously decomposed into two parts, that is, the effects of the gradient data within near and distant zones. While the effect of the near zone data is sought as an inverse problem, the effect of the distant zone data is synthesized from the global gravitational model GGM05S using spectral weights given by truncation error coefficients up to degree 150. In the second approach, a reference gravitational field up to degree 180 is applied to reduce and smooth measured gravitational gradients. In both cases we recovered the disturbing gravitational potential from each of the four well-measured gravitational gradients of the GOCE satellite separately as well as from their combination. The obtained results are compared with the EGM2008, DIR-r2, TIM-r2 and SPW-r2 global gravitational models. The best fit was achieved for EGM2008 with the second approach combining all four well-measured gravitational gradients, with an rms of 1.231 m^2 s^-2.
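The inversion step can be sketched generically: once the distant-zone or reference-field contribution is subtracted from the observed gradients, the remaining signal is related linearly to the unknown potential values and can be solved by regularized least squares. The Python sketch below uses placeholder matrices, not the actual integral kernels or GOCE data, and is only a schematic stand-in for the authors' rigorous solution.

import numpy as np

# Placeholder design matrix A linking unknown potential values x to the
# reduced gradient observations y (after subtracting the distant-zone /
# reference-field contribution synthesized from a global model).
rng = np.random.default_rng(0)
n_obs, n_unknowns = 200, 50
A = rng.normal(size=(n_obs, n_unknowns))          # assumed kernel discretization
x_true = rng.normal(size=n_unknowns)
y = A @ x_true + 0.01 * rng.normal(size=n_obs)    # noisy reduced observations

# Tikhonov-regularized least squares: x = (A^T A + alpha I)^(-1) A^T y
alpha = 1e-3
x_hat = np.linalg.solve(A.T @ A + alpha * np.eye(n_unknowns), A.T @ y)
rms = np.sqrt(np.mean((x_hat - x_true) ** 2))
print(rms)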
Major Fault Patterns in Zanjan State of Iran Based of GECO Global Geoid Model
NASA Astrophysics Data System (ADS)
Beheshty, Sayyed Amir Hossein; Abrari Vajari, Mohammad; Raoufikelachayeh, SeyedehSusan
2016-04-01
A new Earth Gravitational Model (GECO) to degree 2190 has been developed that incorporates EGM2008 and the latest GOCE-based satellite solutions. Satellite gradiometry data are more sensitive to the long- and medium-wavelength information of the gravity field than conventional satellite tracking data. Hence, by utilizing this new technique, more accurate, reliable and higher degree/order spherical harmonic expansions of the gravity field can be achieved. Gravity gradients can also be useful in geophysical interpretation and prospecting. We have presented the concept of gravity gradients with some simple interpretations. MATLAB-based computer programs were developed and utilized for determining the gravity and gradient components of the gravity field using the GGMs, followed by a case study in Zanjan State of Iran. Our numerical studies show strong (more than 72%) correlations between gravity anomalies and the diagonal elements of the gradient tensor. Also, strong correlations were revealed between the components of the deflection of the vertical and the off-diagonal elements, as well as between the horizontal gradient and the magnitude of the deflection of the vertical. We clearly distinguished two major faults north and south of Zanjan city based on the current information. Also, several minor faults were detected in the study area. Therefore, the same geophysical interpretation can be stated for the gravity gradient components too. Our mathematical derivations support some of these correlations.
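The correlation analysis described above reduces, in principle, to a Pearson correlation between two co-located grids. The Python sketch below illustrates that computation with synthetic placeholder grids standing in for gravity anomalies and a diagonal gradient-tensor component; it is not the authors' MATLAB program.

import numpy as np

# Synthetic placeholder grids (illustrative only) standing in for gravity
# anomalies and a diagonal gradient-tensor component on the same grid.
rng = np.random.default_rng(1)
gravity_anomaly = rng.normal(size=(90, 120))
gradient_tzz = 0.8 * gravity_anomaly + 0.2 * rng.normal(size=(90, 120))

# Pearson correlation between the two flattened grids
r = np.corrcoef(gravity_anomaly.ravel(), gradient_tzz.ravel())[0, 1]
print(f"correlation: {r:.2f}")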
Code of Federal Regulations, 2011 CFR
2011-07-01
... production or activity level. (1) If the expected mix of products serves as the basis for the batch mass... from the high-level calibration gas is at least 20 times the standard deviation of the response from... 25A, 40 CFR part 60, appendix A, is acceptable if the response from the high-level calibration gas is...
Code of Federal Regulations, 2011 CFR
2011-07-01
... limitation is not dependent upon any past production or activity level. (1) If the expected mix of products... acceptable if the response from the high-level calibration gas is at least 20 times the standard deviation of..., appendix A is acceptable if the response from the high-level calibration gas is at least 20 times the...
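The acceptance criterion quoted in these excerpts, that the response to the high-level calibration gas must be at least 20 times the standard deviation of the (presumably zero-gas) response, is straightforward to encode. The Python sketch below is an illustrative check only, with an assumed zero-gas baseline, and is not regulatory guidance.

import statistics

def calibration_span_acceptable(high_level_response, zero_gas_responses):
    """Return True if the high-level calibration response is at least 20 times
    the standard deviation of the zero-gas responses (illustrative check
    based on the excerpted criterion; the zero-gas baseline is an assumption)."""
    noise = statistics.stdev(zero_gas_responses)
    return high_level_response >= 20.0 * noise

# Example: a span response of 85 ppm against repeated zero-gas readings
print(calibration_span_acceptable(85.0, [0.1, -0.2, 0.3, 0.0, -0.1, 0.2]))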
Murphy, Steven C; Martin, Nicole H; Barbano, David M; Wiedmann, Martin
2016-12-01
This article provides an overview of the influence of raw milk quality on the quality of processed dairy products and offers a perspective on the merits of investing in quality. Dairy farmers are frequently offered monetary premium incentives to provide high-quality milk to processors. These incentives are most often based on raw milk somatic cell and bacteria count levels well below the regulatory public health-based limits. Justification for these incentive payments can be based on improved processed product quality and manufacturing efficiencies that provide the processor with a return on their investment for high-quality raw milk. In some cases, this return on investment is difficult to measure. Raw milks with high levels of somatic cells and bacteria are associated with increased enzyme activity that can result in product defects. Use of raw milk with somatic cell counts >100,000cells/mL has been shown to reduce cheese yields, and higher levels, generally >400,000 cells/mL, have been associated with textural and flavor defects in cheese and other products. Although most research indicates that fairly high total bacteria counts (>1,000,000 cfu/mL) in raw milk are needed to cause defects in most processed dairy products, receiving high-quality milk from the farm allows some flexibility for handling raw milk, which can increase efficiencies and reduce the risk of raw milk reaching bacterial levels of concern. Monitoring total bacterial numbers in regard to raw milk quality is imperative, but determining levels of specific types of bacteria present has gained increasing importance. For example, spores of certain spore-forming bacteria present in raw milk at very low levels (e.g., <1/mL) can survive pasteurization and grow in milk and cheese products to levels that result in defects. With the exception of meeting product specifications often required for milk powders, testing for specific spore-forming groups is currently not used in quality incentive programs in the United States but is used in other countries (e.g., the Netherlands). Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Determining robot actions for tasks requiring sensor interaction
NASA Technical Reports Server (NTRS)
Budenske, John; Gini, Maria
1989-01-01
The performance of non-trivial tasks by a mobile robot has been a long-term objective of robotics research. One of the major stumbling blocks to this goal is the conversion of high-level planning goals and commands into actuator and sensor processing controls. In order for a mobile robot to accomplish a non-trivial task, the task must be described in terms of primitive actions of the robot's actuators. Most non-trivial tasks require the robot to interact with its environment, thus necessitating coordination of sensor processing and actuator control to accomplish the task. The main contention is that the transformation from the high-level description of the task to the primitive actions should be performed primarily at execution time, when knowledge about the environment can be obtained through sensors. It is proposed to produce the detailed plan of primitive actions by using a collection of low-level planning components that contain domain-specific knowledge and knowledge about the available sensors, actuators, and sensor/actuator processing. This collection will perform signal and control processing as well as serve as a control interface between an actual mobile robot and a high-level planning system. Previous research has shown the usefulness of high-level planning systems in planning the coordination of activities to achieve a goal, but none have been fully applied to actual mobile robots due to the complexity of interacting with sensors and actuators. This control interface is currently being implemented on a LABMATE mobile robot connected to a SUN workstation and will be developed to enable the LABMATE to perform non-trivial, sensor-intensive tasks as specified by a planning system.
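The execution-time transformation described above, from high-level goals to primitive sensor/actuator actions, can be caricatured as a dispatch through low-level planning components that consult current sensor data. The Python sketch below is a toy illustration with invented component and action names; it is not the authors' system.

# Toy sketch: each low-level planning component expands one kind of high-level
# goal into primitive actions, consulting sensor data at execution time.
# All names and actions here are invented for illustration.
def plan_move_to(goal, sensors):
    if sensors["obstacle_ahead"]:
        return ["rotate 30", "forward 0.5"]
    return [f"forward {goal['distance']}"]

def plan_grasp(goal, sensors):
    return ["open_gripper", f"approach {goal['object']}", "close_gripper"]

COMPONENTS = {"move_to": plan_move_to, "grasp": plan_grasp}

def expand(task, sensors):
    """Expand a list of high-level goals into primitive actions at execution time."""
    actions = []
    for goal in task:
        actions.extend(COMPONENTS[goal["type"]](goal, sensors))
    return actions

sensors = {"obstacle_ahead": True}
task = [{"type": "move_to", "distance": 2.0}, {"type": "grasp", "object": "cup"}]
print(expand(task, sensors))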
Tradeoffs in the design of a system for high level language interpretation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osorio, F.C.C.; Patt, Y.N.
The problem of designing a system for high-level language interpretation (HLLI) is considered. First, a model of the design process is presented in which several styles of design, e.g., Turing machine interpretation, CISC architecture interpretation and RISC architecture interpretation, are treated uniformly. Second, the most significant characteristics of HLLI are analysed in the context of different design styles, and some guidelines are presented on how to identify the most suitable design style for a given high-level language problem. 12 references.
Grinding and classification of pine bark for use as plywood adhesive filler
Thomas L. Eberhardt; Karen G. Reed
2005-01-01
Prior efforts to incorporate bark or bark extracts into composites have met with only limited success because of poor performance relative to existing products and/or economic barriers stemming from high levels of processing. We are currently investigating applications for southern yellow pine (SYP) bark that require intermediate levels of processing, one being the use...
ERIC Educational Resources Information Center
Gökçen, Elif; Frederickson, Norah; Petrides, K. V.
2016-01-01
Autism spectrum disorder (ASD) is characterised by profound difficulties in empathic processing and executive control. Whilst the links between these processes have been frequently investigated in populations with autism, few studies have examined them at the subclinical level. In addition, the contribution of alexithymia, a trait characterised by…
Baldwin, Carryl L; Struckman-Johnson, David
2002-01-15
Speech displays and verbal response technologies are increasingly being used in complex, high workload environments that require the simultaneous performance of visual and manual tasks. Examples of such environments include the flight decks of modern aircraft, advanced transport telematics systems providing in-vehicle route guidance and navigational information, and mobile communication equipment in emergency and public safety vehicles. Previous research has established an optimum range for speech intelligibility. However, the potential for variations in presentation levels within this range to affect attentional resources and cognitive processing of speech material has not been examined previously. Results of the current experimental investigation demonstrate that as presentation level increases within this 'optimum' range, participants in high workload situations make fewer sentence-processing errors and generally respond faster. Processing errors were more sensitive to changes in presentation level than were measures of reaction time. Implications of these findings are discussed in terms of their application to the design of speech communication displays in complex multi-task environments.
Techakanon, Chukwan; Gradziel, Thomas M; Zhang, Lu; Barrett, Diane M
2016-09-28
Fruit maturity is an important factor associated with final product quality, and it may have an effect on the level of browning in peaches that are high pressure processed (HPP). Peaches from three different maturities, as determined by firmness (M1 = 50-55 N, M2 = 35-40 N, and M3 = 15-20 N), were subjected to pressure levels at 0.1, 200, and 400 MPa for 10 min. The damage from HPP treatment results in loss of fruit integrity and the development of browning during storage. Increasing pressure levels of HPP treatment resulted in greater damage, particularly in the more mature peaches, as determined by shifts in transverse relaxation time (T2) of the vacuolar component and by light microscopy. The discoloration of peach slices of different maturities processed at the same pressure was comparable, indicating that the effect of pressure level is greater than that of maturity in the development of browning.
Yoon, Chiyul; Noh, Seungwoo; Lee, Jung Chan; Ko, Sung Ho; Ahn, Wonsik; Kim, Hee Chan
2014-03-01
The continuous autotransfusion system has been widely used in surgical operations. It is known that if oil is added to blood, and this mixture is then processed by an autotransfusion device, the added oil is removed and reinfusion of fat is prevented by the device. However, there is no detailed report on the influence of the particular washing program selected on the levels of blood components including blood fat after continuous autotransfusion using such a system. Fresh bovine blood samples were processed by a commercial continuous autotransfusion device using the "emergency," "quality," and "high-quality" programs, applied in random order. Complete blood count (CBC) and serum chemistry were analyzed to determine how the blood processing performance of the device changes with the washing program applied. There was no significant difference in the CBC results obtained with the three washing programs. Although all of the blood lipids in the processed blood were decreased compared to those in the blood before processing, the levels of triglyceride, phospholipid, and total cholesterol after processing via the emergency program were significantly higher than those present after processing via the quality and high-quality programs. Although the continuous autotransfusion device provided consistent hematocrit quality, the levels of some blood lipid components showed significant differences among the washing programs.
Effects of technological processes on enniatin levels in pasta.
Serrano, Ana B; Font, Guillermina; Mañes, Jordi; Ferrer, Emilia
2016-03-30
Potential human health risks posed by enniatins (ENs) require their control, primarily in cereal products, creating a demand for harvesting, food processing and storage techniques capable of preventing, reducing and/or eliminating the contamination. In this study, different methodologies for pasta processing simulating traditional and industrial processes were developed in order to determine the fate of the mycotoxin ENs. The levels of ENs were studied at different steps of pasta processing. The effect of temperature during processing was evaluated in two types of pasta (white and whole-grain pasta). Mycotoxin analysis was performed by LC-MS/MS. High reductions (up to 50% and 80%) were achieved during pasta drying at 45-55°C and 70-90°C, respectively. The treatments at low temperature (25°C) did not change EN levels. Pasta composition did not have a significant effect on the stability of ENs. The effect of temperature allowed a marked mycotoxin reduction during pasta processing. Generally, ENA1 and ENB showed higher thermal stability than did ENA and ENB1. The findings from the present study suggest that pasta processing at medium-high temperatures is a potential tool to remove an important fraction of ENs from the initial durum wheat semolina. © 2015 Society of Chemical Industry.
Maier, Maximilian B; Lenz, Christian A; Vogel, Rudi F
2017-01-01
The effect of high pressure thermal (HPT) processing on the inactivation of spores of proteolytic type B Clostridium botulinum TMW 2.357 in four differently composed low-acid foods (green peas with ham, steamed sole, vegetable soup, braised veal) was studied in an industrially feasible pressure range and at temperatures between 100 and 120°C. Inactivation curves exhibited rapid inactivation during compression and decompression followed by strong tailing effects. The highest inactivation (approx. 6-log cycle reduction) was obtained in braised veal at 600 MPa and 110°C after a 300 s pressure-holding time. In general, inactivation curves exhibited similar negative exponential shapes, but the maximum achievable inactivation levels were lower in foods with higher fat contents. At high treatment temperatures, spore inactivation was more effective at lower pressure levels (300 vs. 600 MPa), which indicates a non-linear pressure/temperature dependence of the HPT spore inactivation efficiency. A comparison of spore inactivation levels achievable using HPT treatments versus a conventional heat sterilization treatment (121.1°C, 3 min) illustrates the potential of combining high pressures and temperatures to replace conventional retorting, with the possibility to reduce the process temperature or shorten the processing time. Finally, experiments using varying spore inoculation levels suggested the presence of a resistant fraction, comprising approximately 0.01% of a spore population, as the reason for the pronounced tailing effects in survivor curves. The loss of the high resistance properties upon cultivation indicates that those differences develop during sporulation and are not linked to permanent modifications at the genetic level.
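The tailing behaviour attributed above to a small resistant subpopulation can be captured by a simple two-fraction (biphasic) log-linear survival model. The Python sketch below uses the roughly 0.01% resistant fraction mentioned in the abstract together with assumed rate constants, purely for illustration; it is not a fit to the authors' data.

import numpy as np

def biphasic_survivors(t, n0, f_resistant=1e-4, k_sensitive=0.05, k_resistant=0.002):
    """Two-fraction log-linear inactivation model (illustrative parameters).

    t           : treatment time in seconds
    n0          : initial spore count
    f_resistant : resistant fraction of the population (~0.01%, as suggested)
    k_sensitive, k_resistant : assumed first-order inactivation rates (1/s)
    """
    sensitive = (1.0 - f_resistant) * n0 * np.exp(-k_sensitive * t)
    resistant = f_resistant * n0 * np.exp(-k_resistant * t)
    return sensitive + resistant

t = np.array([0, 60, 120, 300])
log_reduction = np.log10(1e8) - np.log10(biphasic_survivors(t, 1e8))
print(log_reduction)   # levels off (tailing) once only the resistant fraction remains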
Process-based Cost Estimation for Ramjet/Scramjet Engines
NASA Technical Reports Server (NTRS)
Singh, Brijendra; Torres, Felix; Nesman, Miles; Reynolds, John
2003-01-01
Process-based cost estimation plays a key role in effecting cultural change that integrates distributed science, technology and engineering teams to rapidly create innovative and affordable products. Working together, NASA Glenn Research Center and Boeing Canoga Park have developed a methodology of process-based cost estimation bridging the methodologies of high-level parametric models and detailed bottom-up estimation. The NASA GRC/Boeing CP process-based cost model provides a probabilistic structure of layered cost drivers. High-level inputs characterize mission requirements, system performance, and relevant economic factors. Design alternatives are extracted from a standard, product-specific work breakdown structure to pre-load lower-level cost driver inputs and generate the cost-risk analysis. As the product design progresses and matures, the lower-level, more detailed cost drivers can be re-accessed and the projected variation of input values narrowed, thereby generating a progressively more accurate estimate of cost-risk. Incorporated into the process-based cost model are techniques for decision analysis, specifically the analytic hierarchy process (AHP) and functional utility analysis. Design alternatives may then be evaluated not just on cost-risk, but also on user-defined performance and schedule criteria. This implementation of full trade-study support contributes significantly to the realization of the integrated development environment. The process-based cost estimation model generates development and manufacturing cost estimates. The development team plans to expand the manufacturing process base from approximately 80 manufacturing processes to over 250 processes. Operation and support cost modeling is also envisioned. Process-based estimation considers the materials, resources, and processes in establishing cost-risk and, rather than depending on weight as an input, actually estimates weight along with cost and schedule.
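As an illustration of the analytic hierarchy process mentioned above, the sketch below derives priority weights for design alternatives from a pairwise-comparison matrix. This is a generic AHP computation, not the NASA GRC/Boeing model itself, and the matrix values are invented.

    # Python sketch: AHP priority weights from a pairwise-comparison matrix (illustrative)
    import numpy as np

    # Hypothetical 3x3 pairwise-comparison matrix for three design alternatives
    # (A[i, j] = how strongly alternative i is preferred over alternative j)
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3., 1.0, 2.0],
                  [1/5., 1/2., 1.0]])

    # The principal eigenvector gives the priority weights
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()

    # Consistency ratio check (RI = 0.58 is the standard random index for n = 3)
    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)
    cr = ci / 0.58
    print("priority weights:", np.round(weights, 3), " consistency ratio:", round(cr, 3))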
High-level waste borosilicate glass: A compendium of corrosion characteristics. Volume 3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cunnane, J.C.; Bates, J.K.; Bradley, C.R.
1994-03-01
The objective of this document is to summarize scientific information pertinent to evaluating the extent to which high-level waste borosilicate glass corrosion and the associated radionuclide release processes are understood for the range of environmental conditions to which waste glass may be exposed in service. Alteration processes occurring within the bulk of the glass (e.g., devitrification and radiation-induced changes) are discussed insofar as they affect glass corrosion. Volume III contains a bibliography of glass corrosion studies, including studies that are not cited in Volumes I and II.
Shi, Yanwei; Ling, Wencui; Qiang, Zhimin
2013-01-01
The effect of chlorine dioxide (ClO2) oxidation on the formation of disinfection by-products (DBPs) during sequential (ClO2 pre-oxidation for 30 min) and simultaneous disinfection processes with free chlorine (FC) or monochloramine (MCA) was investigated. The formation of DBPs from synthetic humic acid (HA) water and three natural surface waters containing low bromide levels (11-27 microg/L) was comparatively examined in the FC-based (single FC, sequential ClO2-FC, and simultaneous ClO2/FC) and MCA-based (single MCA, ClO2-MCA, and ClO2/MCA) disinfection processes. The results showed that considerably more DBPs were formed from the synthetic HA water than from the three natural surface waters with comparable levels of dissolved organic carbon. In the FC-based processes, ClO2 oxidation could reduce trihalomethanes (THMs) by 27-35% and haloacetic acids (HAAs) by 14-22% in the three natural surface waters, but increased THMs by 19% and HAAs by 31% in the synthetic HA water after an FC contact time of 48 h. In the MCA-based processes, similar trends were observed although DBPs were produced at much lower levels. There was no significant difference in DBP formation between the sequential and simultaneous processes. The presence of a high level of bromide (320 microg/L) markedly promoted DBP formation in the FC-based processes. Therefore, the simultaneous disinfection process of ClO2/MCA is recommended, particularly for waters with a high bromide level.
NASA Astrophysics Data System (ADS)
Mungov, G.; Dunbar, P. K.; Stroker, K. J.; Sweeney, A.
2016-12-01
The National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information (NCEI) is the data repository for high-resolution, integrated water-level data to support tsunami research, risk assessment and mitigation to protect life and property along the coasts. NCEI responsibilities include, but are not limited to, processing, archiving and distributing coastal water-level data from different sources covering tsunami and storm-surge inundation, sea-level change, climate variability, etc. High-resolution data for global historical tsunami events are collected by the Deep-ocean Assessment and Reporting of Tsunami (DART®) tsunameter network maintained by NOAA's National Data Buoy Center (NDBC), coastal tide gauges maintained by NOAA's Center for Operational Oceanographic Products and Services (CO-OPS) and the Tsunami Warning Centers, historic marigrams and images, bathymetric data, and other national and international sources. The NCEI water-level database is developed in close collaboration with all data providers, along with NOAA's Pacific Marine Environmental Laboratory. We outline here the present state of water-level data processing with regard to the increasing need for high-precision, homogeneous and "clean" tsunami records from data of different sources and different sampling intervals. Two tidal models are compared: Mike Foreman's improved oceanographic model (2009) and the Akaike Bayesian Information Criterion approach applied by Tamura et al. (1991). The effects of filtering and the limits of its application are also discussed, along with the method used for de-spiking the raw time series.
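A common first step in the kind of quality control described above is de-spiking of raw tide-gauge records. The sketch below shows one generic rolling-median approach; the window length and rejection threshold are assumptions for illustration, not the NCEI procedure.

    # Python sketch: rolling-median de-spiking of a 1-min water-level series (illustrative)
    import numpy as np

    def despike(series, window=11, n_sigma=5.0):
        """Replace samples deviating strongly from a local median with NaN.

        series  -- 1-D array of water levels (regularly sampled)
        window  -- odd window length for the running median (assumed value)
        n_sigma -- rejection threshold in robust (MAD-based) standard deviations (assumed)
        """
        x = np.asarray(series, dtype=float)
        half = window // 2
        padded = np.pad(x, half, mode="edge")
        med = np.array([np.median(padded[i:i + window]) for i in range(x.size)])
        resid = x - med
        mad = np.median(np.abs(resid - np.median(resid)))
        robust_sigma = 1.4826 * mad + 1e-12
        cleaned = x.copy()
        cleaned[np.abs(resid) > n_sigma * robust_sigma] = np.nan
        return cleaned

    # Example: a synthetic tidal signal with two artificial spikes
    t = np.arange(0, 1440)                        # one day of 1-min samples
    level = 0.5 * np.sin(2 * np.pi * t / 745.0)   # rough semidiurnal tide
    level[300] += 3.0
    level[900] -= 2.5
    cleaned = despike(level)
    print("flagged samples:", int(np.isnan(cleaned).sum()))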
Raizes, Meytal; Elkana, Odelia; Franko, Motty; Ravona Springer, Ramit; Segev, Shlomo; Beeri, Michal Schnaider
2016-01-01
We explored the association of plasma glucose levels within the normal range with processing speed in high-functioning young elderly, free of type 2 diabetes mellitus (T2DM). A sample of 41 participants (mean age = 64.7, SD = 10; glucose 94.5 mg/dL, SD = 9.3) was examined with a computerized cognitive battery. Hierarchical linear regression analysis showed that higher plasma glucose levels, albeit within the normal range (<110 mg/dL), were associated with longer reaction times (p < 0.01). These findings suggest that plasma glucose levels may affect cognitive function even in the subclinical range and in the absence of T2DM, so that monitoring them may be warranted.
Ndidi, Uche Samuel; Ndidi, Charity Unekwuojo; Olagunju, Abbas; Muhammad, Aliyu; Billy, Francis Graham; Okpe, Oche
2014-01-01
This research was aimed at evaluating the proximate composition, level of anti-nutrients, and the mineral composition of raw and processed Sphenostylis stenocarpa seeds and at examining the effect of processing on the parameters. From the proximate composition analysis, the ash content showed no significant difference (P > 0.05) between the processed and unprocessed (raw) samples. However, there was significant difference (P < 0.05) in the levels of moisture, crude lipid, nitrogen-free extract, gross energy, true protein, and crude fiber between the processed and unprocessed S. stenocarpa. Analyses of the antinutrient composition show that the processed S. stenocarpa registered significant reduction in levels of hydrogen cyanide, trypsin inhibitor, phytate, oxalate, and tannins compared to the unprocessed. Evaluation of the mineral composition showed that the level of sodium, calcium, and potassium was high in both the processed and unprocessed sample (150–400 mg/100 g). However, the level of iron, copper, zinc, and magnesium was low in both processed and unprocessed samples (2–45 mg/100 g). The correlation analysis showed that tannins and oxalate affected the levels of ash and nitrogen-free extract of processed and unprocessed seeds. These results suggest that the consumption of S. stenocarpa will go a long way in reducing the level of malnutrition in northern Nigeria. PMID:24967265
Colossal Tooling Design: 3D Simulation for Ergonomic Analysis
NASA Technical Reports Server (NTRS)
Hunter, Steve L.; Dischinger, Charles; Thomas, Robert E.; Babai, Majid
2003-01-01
The application of high-level 3D simulation software to the design phase of colossal mandrel tooling for composite aerospace fuel tanks was accomplished to discover and resolve safety and human engineering problems. The analyses were conducted to determine safety, ergonomic and human engineering aspects of the disassembly process of the fuel tank composite shell mandrel. Three-dimensional graphics high-level software, incorporating various ergonomic analysis algorithms, was utilized to determine if the process was within safety and health boundaries for the workers carrying out these tasks. In addition, the graphical software was extremely helpful in the identification of material handling equipment and devices for the mandrel tooling assembly/disassembly process.
Rapid Disaster Damage Estimation
NASA Astrophysics Data System (ADS)
Vu, T. T.
2012-07-01
The experiences from recent disaster events showed that detailed information derived from high-resolution satellite images could meet the requirements of damage analysts and disaster management practitioners. The richer information contained in such high-resolution images, however, increases the complexity of image analysis. As a result, few image analysis solutions can be practically used under time pressure in the context of post-disaster and emergency response. To fill the gap in the use of remote sensing in disaster response, this research develops a rapid high-resolution satellite mapping solution built upon a dual-scale contextual framework to support damage estimation after a catastrophe. The target objects are buildings (or building blocks) and their condition. On the coarse processing level, statistical region merging is deployed to group pixels into a number of coarse clusters. Based on a majority rule applied to vegetation, water and shadow indices, it is possible to eliminate the irrelevant clusters; the remaining clusters likely consist of building structures and others. On the fine processing level, within each remaining cluster, smaller objects are formed using morphological analysis, and numerous indicators including spectral, textural and shape indices are computed for use in a rule-based object classification. The computation time of raster-based analysis depends strongly on the image size, in other words the number of processed pixels. Breaking the analysis into two processing levels reduces the number of processed pixels and the redundancy of processing irrelevant information; in addition, it allows a data- and task-based parallel implementation. The performance is demonstrated with QuickBird images of an area of Phanga, Thailand, affected by the 2004 Indian Ocean tsunami. The developed solution will be implemented on different platforms as well as a web processing service for operational use.
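The coarse-level elimination step described above can be illustrated with a generic sketch: per-cluster majority voting on vegetation, water and shadow masks to discard clusters that cannot contain buildings. The thresholds and band arrangement below are assumptions, not the paper's settings.

    # Python sketch: majority-rule elimination of vegetation/water/shadow clusters (illustrative)
    import numpy as np

    def keep_candidate_clusters(labels, ndvi, ndwi, brightness,
                                ndvi_thr=0.3, ndwi_thr=0.2, shadow_thr=0.15):
        """Return cluster labels that are NOT majority vegetation, water or shadow.

        labels     -- 2-D integer array of cluster ids (e.g. from statistical region merging)
        ndvi, ndwi -- 2-D index images in [-1, 1]
        brightness -- 2-D normalised brightness image in [0, 1] (low values ~ shadow)
        Thresholds are hypothetical placeholder values.
        """
        keep = []
        for cid in np.unique(labels):
            mask = labels == cid
            veg = np.mean(ndvi[mask] > ndvi_thr) > 0.5
            water = np.mean(ndwi[mask] > ndwi_thr) > 0.5
            shadow = np.mean(brightness[mask] < shadow_thr) > 0.5
            if not (veg or water or shadow):
                keep.append(cid)   # likely building structures, passed on to the fine level
        return keep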
Conducting Original Research at the High School Level--the Students' Perspective.
ERIC Educational Resources Information Center
Scott, Marcus; VanNoord, Greg
1996-01-01
High school students discuss the process of conducting original scientific research in a high school biology course, including developing an idea, obtaining financial support, collecting data, and presenting findings. (MKR)
Cheng, Rebecca Wing-yi; Lam, Shui-fong; Chan, Joanne Chung-yan
2008-06-01
There has been an ongoing debate about the inconsistent effects of heterogeneous ability grouping on students in small group work such as project-based learning. The present research investigated the roles of group heterogeneity and processes in project-based learning. At the student level, we examined the interaction effect between students' within-group achievement and group processes on their self- and collective efficacy. At the group level, we examined how group heterogeneity was associated with the average self- and collective efficacy reported by the groups. The participants were 1,921 Hong Kong secondary students in 367 project-based learning groups. Student achievement was determined by school examination marks. Group processes, self-efficacy and collective efficacy were measured by a student-report questionnaire. Hierarchical linear modelling was used to analyse the nested data. When individual students in each group were taken as the unit of analysis, results indicated an interaction effect of group processes and students' within-group achievement on the discrepancy between collective- and self-efficacy. When compared with low achievers, high achievers reported lower collective efficacy than self-efficacy when group processes were of low quality. However, both low and high achievers reported higher collective efficacy than self-efficacy when group processes were of high quality. With 367 groups taken as the unit of analysis, the results showed that group heterogeneity, group gender composition and group size were not related to the discrepancy between collective- and self-efficacy reported by the students. Group heterogeneity was not a determinant factor in students' learning efficacy. Instead, the quality of group processes played a pivotal role because both high and low achievers were able to benefit when group processes were of high quality.
Mendez, Michelle A
2015-01-01
Background: “Processed foods” are defined as any foods other than raw agricultural commodities and can be categorized by the extent of changes occurring in foods as a result of processing. Conclusions about the association between the degree of food processing and nutritional quality are discrepant. Objective: We aimed to determine 2000–2012 trends in the contribution of processed and convenience food categories to purchases by US households and to compare saturated fat, sugar, and sodium content of purchases across levels of processing and convenience. Design: We analyzed purchases of consumer packaged goods for 157,142 households from the 2000–2012 Homescan Panel. We explicitly defined categories for classifying products by degree of industrial processing and separately by convenience of preparation. We classified >1.2 million products through use of barcode-specific descriptions and ingredient lists. Median saturated fat, sugar, and sodium content and the likelihood that purchases exceeded maximum daily intake recommendations for these components were compared across levels of processing or convenience by using quantile and logistic regression. Results: More than three-fourths of energy in purchases by US households came from moderately (15.9%) and highly processed (61.0%) foods and beverages in 2012 (939 kcal/d per capita). Trends between 2000 and 2012 were stable. When classifying foods by convenience, ready-to-eat (68.1%) and ready-to-heat (15.2%) products supplied the majority of energy in purchases. The adjusted proportion of household-level food purchases exceeding 10% kcal from saturated fat, 15% kcal from sugar, and 2400 mg sodium/2000 kcal simultaneously was significantly higher for highly processed (60.4%) and ready-to-eat (27.1%) food purchases than for purchases of less-processed foods (5.6%) or foods requiring cooking/preparation (4.9%). Conclusions: Highly processed food purchases are a dominant, unshifting part of US purchasing patterns, but highly processed foods may have higher saturated fat, sugar, and sodium content than less-processed foods. Wide variation in nutrient content suggests food choices within categories may be important. PMID:25948666
Poti, Jennifer M; Mendez, Michelle A; Ng, Shu Wen; Popkin, Barry M
2015-06-01
"Processed foods" are defined as any foods other than raw agricultural commodities and can be categorized by the extent of changes occurring in foods as a result of processing. Conclusions about the association between the degree of food processing and nutritional quality are discrepant. We aimed to determine 2000-2012 trends in the contribution of processed and convenience food categories to purchases by US households and to compare saturated fat, sugar, and sodium content of purchases across levels of processing and convenience. We analyzed purchases of consumer packaged goods for 157,142 households from the 2000-2012 Homescan Panel. We explicitly defined categories for classifying products by degree of industrial processing and separately by convenience of preparation. We classified >1.2 million products through use of barcode-specific descriptions and ingredient lists. Median saturated fat, sugar, and sodium content and the likelihood that purchases exceeded maximum daily intake recommendations for these components were compared across levels of processing or convenience by using quantile and logistic regression. More than three-fourths of energy in purchases by US households came from moderately (15.9%) and highly processed (61.0%) foods and beverages in 2012 (939 kcal/d per capita). Trends between 2000 and 2012 were stable. When classifying foods by convenience, ready-to-eat (68.1%) and ready-to-heat (15.2%) products supplied the majority of energy in purchases. The adjusted proportion of household-level food purchases exceeding 10% kcal from saturated fat, 15% kcal from sugar, and 2400 mg sodium/2000 kcal simultaneously was significantly higher for highly processed (60.4%) and ready-to-eat (27.1%) food purchases than for purchases of less-processed foods (5.6%) or foods requiring cooking/preparation (4.9%). Highly processed food purchases are a dominant, unshifting part of US purchasing patterns, but highly processed foods may have higher saturated fat, sugar, and sodium content than less-processed foods. Wide variation in nutrient content suggests food choices within categories may be important. © 2015 American Society for Nutrition.
Sea level oscillations over minute timescales: a global perspective
NASA Astrophysics Data System (ADS)
Vilibic, Ivica; Sepic, Jadranka
2016-04-01
Sea level oscillations occurring over minutes to a few hours are an important contributor to sea level extremes, and knowledge of their behaviour is essential for proper quantification of coastal marine hazards. Tsunamis, meteotsunamis, infra-gravity waves and harbour oscillations may even dominate sea level extremes in certain areas and thus pose a great danger to humans and coastal infrastructure. Aside from tsunamis, which, due to their enormous impact on coastlines, are a well-researched phenomenon, the importance of other high-frequency oscillations to sea level extremes is still underrated, as no systematic long-term measurements have been carried out at minute timescales. Recently, the Intergovernmental Oceanographic Commission (IOC) established the Sea Level Monitoring Facility portal (http://www.ioc-sealevelmonitoring.org), making 1-min sea level data publicly available for several hundred tide gauge sites in the World Ocean. Thereafter, a global assessment of oscillations over tsunami timescales became possible; however, the portal contains raw sea level data only, unchecked for spikes, shifts, drifts and other instrument malfunctions. We present a quality assessment of these data, estimates of sea level variances and contributions of high-frequency processes to the extremes throughout the World Ocean. This is accompanied by an assessment of the atmospheric conditions and processes which generate intense high-frequency oscillations.
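As a generic illustration of how the contribution of minute-scale processes can be quantified from 1-min records, the sketch below high-pass filters a series at a 2-hour cutoff and reports the variance fraction in that band; the cutoff period and filter order are assumptions, not the authors' choices.

    # Python sketch: variance contribution of oscillations shorter than ~2 hours (illustrative)
    import numpy as np
    from scipy.signal import butter, filtfilt

    def high_frequency_variance_fraction(level, dt_minutes=1.0, cutoff_hours=2.0):
        """Fraction of total variance carried by periods shorter than cutoff_hours (assumed cutoff)."""
        fs = 1.0 / (dt_minutes * 60.0)                  # sampling frequency in Hz
        fc = 1.0 / (cutoff_hours * 3600.0)              # cutoff frequency in Hz
        b, a = butter(4, fc / (fs / 2.0), btype="high")
        hf = filtfilt(b, a, level - np.mean(level))
        return np.var(hf) / np.var(level - np.mean(level))

    # Example: tide plus a small 20-minute seiche-like oscillation
    t = np.arange(0, 3 * 24 * 60)                        # three days of 1-min samples
    series = 0.5 * np.sin(2 * np.pi * t / 745.0) + 0.03 * np.sin(2 * np.pi * t / 20.0)
    print(round(high_frequency_variance_fraction(series), 4))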
Boets, Bart; Wouters, Jan; van Wieringen, Astrid; Ghesquière, Pol
2007-04-09
This study investigates whether the core bottleneck of literacy impairment should be situated at the phonological level or at a more basic sensory level, as postulated by supporters of the auditory temporal processing theory. Phonological ability, speech perception and low-level auditory processing were assessed in a group of 5-year-old pre-school children at high family risk for dyslexia, compared to a group of well-matched low-risk control children. Based on family risk status and first-grade literacy achievement, children were categorized in groups and pre-school data were retrospectively reanalyzed. On average, children showing both increased family risk and literacy impairment at the end of first grade presented significant pre-school deficits in phonological awareness, rapid automatized naming, speech-in-noise perception and frequency modulation detection. The concurrent presence of these deficits before receiving any formal reading instruction might suggest a causal relation with problematic literacy development. However, a closer inspection of the individual data indicates that the core of the literacy problem is situated at the level of higher-order phonological processing. Although auditory and speech perception problems are relatively over-represented in literacy-impaired subjects and might possibly aggravate the phonological and literacy problem, it is unlikely that they are at the basis of these problems. At a neurobiological level, the results are interpreted as evidence for dysfunctional processing along the auditory-to-articulation stream that is implied in phonological processing, in combination with a relatively intact or inconsistently impaired functioning of the auditory-to-meaning stream that subserves auditory processing and speech perception.
Status and Perspectives of Electric Propulsion in Italy
NASA Astrophysics Data System (ADS)
Svelto, F.; Marcuccio, S.; Matticari, G.
2002-01-01
Electric Propulsion (EP) is recognized as one of today's enabling technologies for scientific and commercial missions. In consideration of EP's major strategic impact on the near- and long-term scenarios, an EP development programme has been established within the Italian Space Agency (ASI), aimed at the development of a variety of propulsion capabilities covering different fields of application. This paper presents an overview of Electric Propulsion (EP) activities underway in Italy and outlines the planned development lines, both in research institutions and in industry. Italian EP activities are essentially concentrated in Pisa, at Centrospazio and Alta, and in Florence, at LABEN - Proel Tecnologie Division (LABEN/Proel). Centrospazio/Alta and LABEN/Proel have established a collaboration program for joint advanced developments in the EP field. Established in 1989, Centrospazio is a private research center closely related to the Department of Aerospace Engineering of Pisa University. Over the years, Centrospazio's lines of development have included arcjets, magneto-plasma-dynamic thrusters, FEEP and Hall thrusters, as well as computational plasma dynamics and low-thrust mission studies. Alta, a small enterprise, was founded in 1999 to exploit in an industrial setting the results of research previously carried out at Centrospazio. Alta's activities include the development of micronewton and millinewton FEEP thrusters, and testing of high power Hall and ion thrusters in specialised facilities. A full micronewton FEEP propulsion system is being developed for the Microscope spacecraft, a scientific mission by CNES aimed at verification of the Equivalence Principle. FEEP will also fly on ASI's HypSEO, a technological demonstrator for Earth Observation, and is being considered for ESA's GOCE (geodesy) and SMART-2 (formation flying), as well as for the intended scientific spacecraft GG by ASI. The ASI-funded STEPS facility will be placed on an external site on the International Space Station to work as a long-duration testbed for EP systems. ASI co-funds the development of a very large testing facility (5.7 m internal diameter) for high power EP testing up to 50 kW. Proel Tecnologie is a hi-tech organization established in 1986, operating in the field of electron (EGA for the TSS-1 and TSS-1R missions), ion and plasma sources for space applications. The Company, which became a Division of LABEN S.p.A. (a FINMECCANICA Company coordinated by Alenia Spazio) in 1995, has identified EP as its main strategic development line. LABEN/Proel activities include the development of an Ion Thruster in the millinewton range (RMT, ASI technology contract), cathodes/neutralizers for EP in the 0.2-5 kW power range, in-flight diagnostics of EP sub-systems (ARTEMIS, STENTOR, SMART-1), xenon feedlines and flow control units, plasma contactors for electrostatic charge control on spacecraft (PLEGPAY experiment on the ISS) and support technologies/facilities for the manufacturing of Hall Thrusters and propellant tanks (the latter using an advanced process for composite-material polymerization through electron beam irradiation). ASI considers EP development a National priority and various technology activities are under evaluation. In this context, the Agency is playing a continued role in the process of exploiting Italian experience and capability and in harmonisation with European efforts in the field.
Active vision in satellite scene analysis
NASA Technical Reports Server (NTRS)
Naillon, Martine
1994-01-01
In earth observation or planetary exploration it is necessary to have more and more autonomous systems, able to adapt to unpredictable situations. This imposes the use, in artificial systems, of new concepts in cognition, based on the fact that perception should not be separated from the recognition and decision-making levels. This means that low-level signal processing (perception level) should interact with symbolic and high-level processing (decision level). This paper describes the new concept of active vision, implemented in Distributed Artificial Intelligence by Dassault Aviation following a 'structuralist' principle. An application to spatial image interpretation is given, oriented toward flexible robotics.
NASA Astrophysics Data System (ADS)
Maciel, M. J.; Costa, C. G.; Silva, M. F.; Gonçalves, S. B.; Peixoto, A. C.; Ribeiro, A. Fernando; Wolffenbuttel, R. F.; Correia, J. H.
2016-08-01
This paper reports on the development of a technology for the wafer-level fabrication of an optical Michelson interferometer, which is an essential component in a micro opto-electromechanical system (MOEMS) for a miniaturized optical coherence tomography (OCT) system. The MOEMS consists of a titanium dioxide/silicon dioxide dielectric beam splitter and chromium/gold micro-mirrors. These optical components are deposited on 45° tilted surfaces to allow the horizontal/vertical separation of the incident beam in the final micro-integrated system. The fabrication process consists of 45° saw dicing of a glass substrate and the subsequent deposition of dielectric multilayers and metal layers. The 45° saw dicing is fully characterized in this paper, which also includes an analysis of the roughness. The optimum process results in surfaces with a roughness of 19.76 nm (rms). The saw dicing process for a high-quality final surface results from a compromise between the dicing blade's grit size (#1200) and the cutting speed (0.3 mm s-1). The proposed wafer-level fabrication allows rapid and low-cost processing, high compactness and the possibility of wafer-level alignment/assembly with other optical micro components for OCT integrated imaging.
ERIC Educational Resources Information Center
Hayden, Howard C.
1995-01-01
Presents a method to calculate the amount of high-level radioactive waste by taking into consideration the following factors: the fission process that yields the waste, identification of the waste, the energy required to run a 1-GWe plant for one year, and the uranium mass required to produce that energy. Briefly discusses waste disposal and…
Student Motivations as Predictors of High-Level Cognitions in Project-Based Classrooms
ERIC Educational Resources Information Center
Stolk, Jonathan; Harari, Janie
2014-01-01
It is well established that active learning helps students engage in high-level thinking strategies and develop improved cognitive skills. Motivation and self-regulated learning research, however, illustrates that cognitive engagement is an effortful process that is related to students' valuing of the learning tasks, adoption of internalized goal…
Driving Objectives and High-level Requirements for KP-Lab Technologies
ERIC Educational Resources Information Center
Lakkala, Minna; Paavola, Sami; Toikka, Seppo; Bauters, Merja; Markannen, Hannu; de Groot, Reuma; Ben Ami, Zvi; Baurens, Benoit; Jadin, Tanja; Richter, Christoph; Zoserl, Eva; Batatia, Hadj; Paralic, Jan; Babic, Frantisek; Damsa, Crina; Sins, Patrick; Moen, Anne; Norenes, Svein Olav; Bugnon, Alexandra; Karlgren, Klas; Kotzinons, Dimitris
2008-01-01
One of the central goals of the KP-Lab project is to co-design pedagogical methods and technologies for knowledge creation and practice transformation in an integrative and reciprocal manner. In order to facilitate this process user tasks, driving objectives and high-level requirements have been introduced as conceptual tools to mediate between…
Definitely maybe: can unconscious processes perform the same functions as conscious processes?
Hesselmann, Guido; Moors, Pieter
2015-01-01
Hassin recently proposed the “Yes It Can” (YIC) principle to describe the division of labor between conscious and unconscious processes in human cognition. According to this principle, unconscious processes can carry out every fundamental high-level cognitive function that conscious processes can perform. In our commentary, we argue that the author presents an overly idealized review of the literature in support of the YIC principle. Furthermore, we point out that the dissimilar trends observed in social and cognitive psychology, with respect to published evidence of strong unconscious effects, can better be explained by how awareness is defined and measured in the two research fields. Finally, we show that the experimental paradigm chosen by Hassin to rule out remaining objections against the YIC principle is unsuited to verify the new default notion that all high-level cognitive functions can unfold unconsciously. PMID:25999896
NASA Astrophysics Data System (ADS)
Sengupta, Pranesh; Kaushik, C. P.; Kale, G. B.; Das, D.; Raj, K.; Sharma, B. P.
2009-08-01
Understanding the material behaviour under service conditions is essential to enhance the life span of the alloy 690 process pot used in vitrification of high-level nuclear waste. During the vitrification process, interaction of alloy 690 with borosilicate melt takes place for a substantial time period. The present experimental studies show that such interactions may result in Cr carbide precipitation along grain boundaries, Cr depletion in the austenitic matrix and intergranular attack close to the alloy 690/borosilicate melt pool interfaces. The width of the Cr-depleted zone within alloy 690 is found to follow kinetics of the type x = 10.9 × 10^-6 + 1 × 10^-8 t^(1/2) m. Based on the experimental results it is recommended that compositional modification of the alloy 690 process pot adjacent to the borosilicate melt pool needs to be considered seriously in any effort towards reduction and/or prevention of process pot failures.
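For illustration, the parabolic-type kinetics quoted above can be evaluated directly; the sketch below simply tabulates the depleted-zone width for a few exposure times, assuming t is expressed in seconds since the abstract does not state the time unit.

    # Python sketch: Cr-depleted zone width from the reported kinetics
    # x = 10.9e-6 + 1e-8 * sqrt(t)  (metres; time unit assumed to be seconds)
    import math

    def depleted_zone_width_m(t_seconds):
        return 10.9e-6 + 1.0e-8 * math.sqrt(t_seconds)

    for hours in (1, 10, 100, 1000):
        t = hours * 3600.0
        print(f"{hours:5d} h  ->  {depleted_zone_width_m(t) * 1e6:.1f} micrometres")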
Behavior of radioactive iodine and technetium in the spray calcination of high-level waste
NASA Astrophysics Data System (ADS)
Knox, C. A.; Farnsworth, R. K.
1981-08-01
The Remote Laboratory-Scale Waste Treatment Facility (RLSWTF) was designed and built as a part of the High-Level Waste Immobilization Program (now the High-Level Waste Process Development Program) at the Pacific Northwest Laboratory. The facility, installed in a radiochemical cell, is described; in it, small volumes of radioactive liquid wastes can be solidified, the process off-gas can be analyzed, and methods for decontaminating this off-gas can be tested. During the spray calcination of commercial high-level liquid waste spiked with Tc-99 and I-131, a 31 wt% loss of I-131 past the sintered-metal filters was observed. These filters and the venturi scrubber were very efficient in removing particulates and Tc-99 from the off-gas stream. Liquid scrubbers were not efficient in removing I-131, as 25% of the total lost went to the building off-gas system. Therefore, solid adsorbents are needed to remove iodine; for all future operations where iodine is present, a silver zeolite adsorber is to be used.
The hows and whys of face memory: level of construal influences the recognition of human faces
Wyer, Natalie A.; Hollins, Timothy J.; Pahl, Sabine; Roper, Jean
2015-01-01
Three experiments investigated the influence of level of construal (i.e., the interpretation of actions in terms of their meaning or their details) on different stages of face memory. We employed a standard multiple-face recognition paradigm, with half of the faces inverted at test. Construal level was manipulated prior to recognition (Experiment 1), during study (Experiment 2) or both (Experiment 3). The results support a general advantage for high-level construal over low-level construal at both study and at test, and suggest that matching processing style between study and recognition has no advantage. These experiments provide additional evidence in support of a link between semantic processing (i.e., construal) and visual (i.e., face) processing. We conclude with a discussion of implications for current theories relating to both construal and face processing. PMID:26500586
The development of a general purpose ARM-based processing unit for the ATLAS TileCal sROD
NASA Astrophysics Data System (ADS)
Cox, M. A.; Reed, R.; Mellado, B.
2015-01-01
After Phase-II upgrades in 2022, the data output from the LHC ATLAS Tile Calorimeter will increase significantly. ARM processors are common in mobile devices due to their low cost, low energy consumption and high performance. It is proposed that a cost-effective, high data throughput Processing Unit (PU) can be developed by using several consumer ARM processors in a cluster configuration to allow aggregated processing performance and data throughput while maintaining minimal software design difficulty for the end-user. This PU could be used for a variety of high-level functions on the high-throughput raw data, such as spectral analysis and histograms, to detect possible issues in the detector at a low level. High-throughput I/O interfaces are not typical in consumer ARM Systems on Chip, but high data throughput capabilities are feasible via the novel use of PCI-Express as the I/O interface to the ARM processors. An overview of the PU is given and the results of performance and throughput testing of four different ARM Cortex Systems on Chip are presented.
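The kind of low-level monitoring functions mentioned above (histograms and spectral analysis of raw channel data) can be sketched generically as below; this is not the TileCal sROD code, only an illustration of per-channel processing such a PU could run, with invented example data.

    # Python sketch: per-channel histogram and power spectrum of raw detector samples (illustrative)
    import numpy as np

    def monitor_channel(samples, n_bins=64):
        """Return (histogram, bin_edges, power_spectrum) for one channel's raw samples."""
        hist, edges = np.histogram(samples, bins=n_bins)
        spectrum = np.abs(np.fft.rfft(samples - np.mean(samples))) ** 2
        return hist, edges, spectrum

    # Example: a noisy baseline with a spurious periodic pickup, standing in for a detector issue
    rng = np.random.default_rng(0)
    raw = 100.0 + rng.normal(0.0, 2.0, 4096) + 1.5 * np.sin(2 * np.pi * 0.1 * np.arange(4096))
    hist, edges, spec = monitor_channel(raw)
    print("peak frequency bin:", int(np.argmax(spec[1:]) + 1))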
Plasma technologies application for building materials surface modification
NASA Astrophysics Data System (ADS)
Volokitin, G. G.; Skripnikova, N. K.; Volokitin, O. G.; Shehovtzov, V. V.; Luchkin, A. G.; Kashapov, N. F.
2016-01-01
Low temperature arc plasma was used to process building surface materials, such as silicate brick, sand lime brick, concrete and wood. It was shown that modification of building surface materials with low temperature plasma positively affects frost resistance, water permeability and chemical resistance, with high adhesion strength. Short-time plasma processing is also more economical than traditional thermal processing methods. Plasma processing makes the wood surface uniquely waterproof and gives it high operational properties as well as dimensional and geometrical stability. It also increases compression resistance and decreases the level of internal stresses in the material.
Embedded Implementation of VHR Satellite Image Segmentation
Li, Chao; Balla-Arabé, Souleymane; Ginhac, Dominique; Yang, Fan
2016-01-01
Processing and analysis of Very High Resolution (VHR) satellite images provide a mass of crucial information, which can be used for urban planning, security issues or environmental monitoring. However, they are computationally expensive and, thus, time consuming, while some of the applications, such as natural disaster monitoring and prevention, require high efficiency performance. Fortunately, parallel computing techniques and embedded systems have made great progress in recent years, and a series of massively parallel image processing devices, such as digital signal processors or Field Programmable Gate Arrays (FPGAs), have been made available to engineers at a very convenient price and demonstrate significant advantages in terms of running-cost, embeddability, power consumption, flexibility, etc. In this work, we designed a texture region segmentation method for very high resolution satellite images by using the level set algorithm and the multi-kernel theory in a high-abstraction C environment and realized its register-transfer level implementation with the help of a newly proposed high-level synthesis-based design flow. The evaluation experiments demonstrate that the proposed design can produce high quality image segmentation with a significant running-cost advantage. PMID:27240370
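The level-set segmentation named above can be illustrated, in a much reduced form, by a two-phase piecewise-constant (Chan-Vese-like) evolution; the sketch below omits the multi-kernel terms, texture features and regularization used in the paper and is purely illustrative.

    # Python sketch: greatly simplified two-phase level-set evolution (no curvature term)
    import numpy as np

    def two_phase_level_set(img, n_iter=200, dt=0.5):
        h, w = img.shape
        y, x = np.mgrid[:h, :w]
        # initial contour: a centred circle, with phi < 0 inside
        phi = np.sqrt((x - w / 2.0) ** 2 + (y - h / 2.0) ** 2) - min(h, w) / 4.0
        for _ in range(n_iter):
            inside = phi < 0
            c1 = img[inside].mean() if inside.any() else 0.0     # mean intensity inside
            c2 = img[~inside].mean() if (~inside).any() else 0.0 # mean intensity outside
            force = (img - c1) ** 2 - (img - c2) ** 2            # data-fidelity term only
            phi += dt * force / (np.abs(force).max() + 1e-9)     # normalised gradient step
        return phi < 0                                           # boolean segmentation mask

    # Example: segment a bright square on a dark background
    image = np.zeros((64, 64))
    image[20:44, 20:44] = 1.0
    print(two_phase_level_set(image).sum(), "pixels segmented (square has", 24 * 24, ")")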
Memory Scanning, Introversion-Extraversion, and Levels of Processing.
ERIC Educational Resources Information Center
Eysenck, Michael W.; Eysenck, M. Christine
1979-01-01
Investigated was the hypothesis that high arousal increases processing of physical characteristics and reduces processing of semantic characteristics. While introverts and extroverts had equivalent scanning rates for physical features, introverts were significantly slower in searching for semantic features of category membership, indicating…
Hohlfeld, Annette; Martín-Loeches, Manuel; Sommer, Werner
2015-01-01
The present study contributes to the discussion on the automaticity of semantic processing. Whereas most previous research investigated semantic processing at word level, the present study addressed semantic processing during sentence reading. A dual task paradigm was combined with the recording of event-related brain potentials. Previous research at word level processing reported different patterns of interference with the N400 by additional tasks: attenuation of amplitude or delay of latency. In the present study, we presented Spanish sentences that were semantically correct or contained a semantic violation in a critical word. At different intervals preceding the critical word a tone was presented that required a high-priority choice response. At short intervals/high temporal overlap between the tasks mean amplitude of the N400 was reduced relative to long intervals/low temporal overlap, but there were no shifts of peak latency. We propose that processing at sentence level exerts a protective effect against the additional task. This is in accord with the attentional sensitization model (Kiefer & Martens, 2010), which suggests that semantic processing is an automatic process that can be enhanced by the currently activated task set. The present experimental sentences also induced a P600, which is taken as an index of integrative processing. Additional task effects are comparable to those in the N400 time window and are briefly discussed. PMID:26203312
Simulating neutron star mergers as r-process sources in ultrafaint dwarf galaxies
NASA Astrophysics Data System (ADS)
Safarzadeh, Mohammadtaher; Scannapieco, Evan
2017-10-01
To explain the high observed abundances of r-process elements in local ultrafaint dwarf (UFD) galaxies, we perform cosmological zoom simulations that include r-process production from neutron star mergers (NSMs). We model star formation stochastically and simulate two different haloes with total masses ≈108 M⊙ at z = 6. We find that the final distribution of [Eu/H] versus [Fe/H] is relatively insensitive to the energy by which the r-process material is ejected into the interstellar medium, but strongly sensitive to the environment in which the NSM event occurs. In one halo, the NSM event takes place at the centre of the stellar distribution, leading to high levels of r-process enrichment such as seen in a local UFD, Reticulum II (Ret II). In a second halo, the NSM event takes place outside of the densest part of the galaxy, leading to a more extended r-process distribution. The subsequent star formation occurs in an interstellar medium with shallow levels of r-process enrichment that results in stars with low levels of [Eu/H] compared to Ret II stars even when the maximum possible r-process mass is assumed to be ejected. This suggests that the natal kicks of neutron stars may also play an important role in determining the r-process abundances in UFD galaxies, a topic that warrants further theoretical investigation.
Modeling Neutron stars as r-process sources in Ultra Faint Dwarf galaxies
NASA Astrophysics Data System (ADS)
Safarzadeh, Mohammadtaher; Scannapieco, Evan
2018-06-01
To explain the high observed abundances of r-process elements in local ultrafaint dwarf (UFD) galaxies, we perform cosmological zoom simulations that include r-process production from neutron star mergers (NSMs). We model star formation stochastically and simulate two different haloes with total masses ≈108 M⊙ at z = 6. We find that the final distribution of [Eu/H] versus [Fe/H] is relatively insensitive to the energy by which the r-process material is ejected into the interstellar medium, but strongly sensitive to the environment in which the NSM event occurs. In one halo, the NSM event takes place at the centre of the stellar distribution, leading to high levels of r-process enrichment such as seen in a local UFD, Reticulum II (Ret II). In a second halo, the NSM event takes place outside of the densest part of the galaxy, leading to a more extended r-process distribution. The subsequent star formation occurs in an interstellar medium with shallow levels of r-process enrichment that results in stars with low levels of [Eu/H] compared to Ret II stars even when the maximum possible r-process mass is assumed to be ejected. This suggests that the natal kicks of neutron stars may also play an important role in determining the r-process abundances in UFD galaxies, a topic that warrants further theoretical investigation.
Flank wear analysing of high speed end milling for hardened steel D2 using Taguchi Method
NASA Astrophysics Data System (ADS)
Hazza Faizi Al-Hazza, Muataz; Ibrahim, Nur Asmawiyah bt; Adesta, Erry T. Y.; Khan, Ahsan Ali; Abdullah Sidek, Atiah Bt.
2017-03-01
One of the main challenges for any manufacturer is how to decrease the machining cost without affecting the final quality of the product. One of the new advanced machining processes in industry is the high speed hard end milling process, which merges three advanced machining processes: high speed milling, hard milling and dry milling. However, one of the most important challenges in this process is to control the flank wear rate; the flank wear rate during machining should therefore be analyzed in order to determine the best cutting levels that will not affect the final quality of the product. In this research, the Taguchi method has been used to investigate the effect of cutting speed, feed rate and depth of cut and to determine the best levels to minimize the flank wear rate up to a total wear length of 0.3 mm, based on the ISO standard, so as to maintain the finishing requirements.
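For illustration, the smaller-the-better signal-to-noise ratio typically used in a Taguchi analysis of a wear response can be computed as below; the response values are invented and are not results from the study.

    # Python sketch: Taguchi "smaller-the-better" S/N ratio for flank wear responses (illustrative)
    import numpy as np

    def sn_smaller_the_better(responses):
        """S/N = -10 * log10(mean(y^2)); a higher S/N means lower, more consistent wear."""
        y = np.asarray(responses, dtype=float)
        return -10.0 * np.log10(np.mean(y ** 2))

    # Example: hypothetical flank wear (mm) replicates at two parameter-level combinations
    print(round(sn_smaller_the_better([0.12, 0.15, 0.11]), 2))  # combination A (better)
    print(round(sn_smaller_the_better([0.22, 0.25, 0.20]), 2))  # combination B (worse)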
Difference among Levels of Inquiry: Process Skills Improvement at Senior High School in Indonesia
ERIC Educational Resources Information Center
Hardianti, Tuti; Kuswanto, Heru
2017-01-01
The objective of the research concerned here was to discover the difference in effectiveness among Levels 2, 3, and 4 of inquiry learning in improving students' process skills. The research was a quasi-experimental study using the pretest-posttest non-equivalent control group research design. Three sample groups were selected by means of cluster…
ERIC Educational Resources Information Center
Crowe, Jacquelyn
This study investigated computer and word processing operator skills necessary for employment in today's high technology office. The study was comprised of seven major phases: (1) identification of existing community college computer operator programs in the state of Washington; (2) attendance at an information management seminar; (3) production…
ERIC Educational Resources Information Center
Liu, Yanni; Cherkassky, Vladimir L.; Minshew, Nancy J.; Just, Marcel Adam
2011-01-01
Previous behavioral studies have shown that individuals with autism are less hindered by interference from global processing during the performance of lower-level perceptual tasks, such as finding embedded figures. The primary goal of this study was to examine the brain manifestation of such atypicality in high-functioning autism using fMRI.…
Establishment of a high accuracy geoid correction model and geodata edge match
NASA Astrophysics Data System (ADS)
Xi, Ruifeng
This research has developed a theoretical and practical methodology for efficiently and accurately determining sub-decimeter level regional geoids and centimeter level local geoids to meet regional surveying and local engineering requirements. It also provides a highly accurate static DGPS network data pre-processing, post-processing and adjustment method and procedure for a large GPS network like the state-level HRAN project, as well as an efficient and accurate methodology to join soil coverages in GIS ARC/INFO. A total of 181 GPS stations was pre-processed and post-processed to obtain an absolute accuracy better than 1.5 cm at 95% of the stations, with all stations having a 0.5 ppm average relative accuracy. A total of 167 GPS stations in and around Iowa were included in the adjustment. After evaluating GEOID96 and GEOID99, a more accurate and suitable geoid model was established for Iowa. This new Iowa regional geoid model improved the accuracy from the sub-decimeter level of 10-20 centimeters to 5-10 centimeters. The local kinematic geoid model, developed using Kalman filtering, gives results better than the third-order leveling accuracy requirement, with a 1.5 cm standard deviation.
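As a generic illustration of the Kalman filtering mentioned for the local kinematic geoid model, the sketch below smooths a noisy sequence of geoid-height observations with a one-dimensional random-walk filter; the noise parameters are assumptions, not values from the dissertation.

    # Python sketch: 1-D random-walk Kalman filter over geoid-height observations (illustrative)
    import numpy as np

    def kalman_smooth(observations, process_var=1e-4, obs_var=2.25e-4):
        """observations in metres; obs_var = (1.5 cm)^2 assumed measurement noise."""
        x, p = observations[0], 1.0           # initial state estimate and variance
        estimates = []
        for z in observations:
            p = p + process_var                # predict (random-walk state model)
            k = p / (p + obs_var)              # Kalman gain
            x = x + k * (z - x)                # update with the new observation
            p = (1.0 - k) * p
            estimates.append(x)
        return np.array(estimates)

    # Example: slowly varying geoid height along a profile, with 1.5 cm observation noise
    rng = np.random.default_rng(1)
    truth = 25.0 + 0.002 * np.arange(200)      # metres
    obs = truth + rng.normal(0.0, 0.015, 200)
    print(round(np.std(kalman_smooth(obs) - truth), 4), "m residual std")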
Raffety, B D; Smith, R E; Ptacek, J T
1997-04-01
Participants completed anxiety and coping diaries during 10 periods that began 7 days before an academic stressor and continued through the evening after the stressor. Profile analysis was used to examine the anxiety and coping processes in relation to 2 trait anxiety grouping variables: debilitating and facilitating test anxiety (D-TA and F-TA). Anxiety and coping changed over time, and high and low levels of D-TA and F-TA were associated with different daily patterns of anxiety and coping. Participants with a debilitative, as opposed to facilitative, trait anxiety style had lower examination scores, higher anxiety, and less problem-solving coping. Covarying F-TA, high D-TA was associated with a pattern of higher levels of tension, worry, distraction, and avoidant coping, as well as lower levels of proactive coping. Covarying D-TA, high F-TA was associated with higher levels of tension (but not worry or distraction), support seeking, proactive and problem-solving coping.
Russell, Brian; Yang, Yanhong; Handlogten, Michael; Hudak, Suzanne; Cao, Mingyan; Wang, Jihong; Robbins, David; Ahuja, Sanjeev; Zhu, Min
2017-01-01
ABSTRACT Antibody disulfide bond reduction during monoclonal antibody (mAb) production is a phenomenon that has been attributed to the reducing enzymes from CHO cells acting on the mAb during the harvest process. However, the impact of antibody reduction on the downstream purification process has not been studied. During the production of an IgG2 mAb, antibody reduction was observed in the harvested cell culture fluid (HCCF), resulting in high fragment levels. In addition, aggregate levels increased during the low pH treatment step in the purification process. A correlation between the level of free thiol in the HCCF (as a result of antibody reduction) and aggregation during the low pH step was established, wherein higher levels of free thiol in the starting sample resulted in increased levels of aggregates during low pH treatment. The elevated levels of free thiol were not reduced over the course of purification, resulting in carry‐over of high free thiol content into the formulated drug substance. When the drug substance with high free thiols was monitored for product degradation at room temperature and 2–8°C, faster rates of aggregation were observed compared to the drug substance generated from HCCF that was purified immediately after harvest. Further, when antibody reduction mitigations (e.g., chilling, aeration, and addition of cystine) were applied, HCCF could be held for an extended period of time while providing the same product quality/stability as material that had been purified immediately after harvest. Biotechnol. Bioeng. 2017;114: 1264–1274. © 2017 The Authors. Biotechnology and Bioengineering Published by Wiley Periodicals Inc. PMID:28186329
10 CFR 72.158 - Control of special processes.
Code of Federal Regulations, 2013 CFR
2013-01-01
... NUCLEAR FUEL, HIGH-LEVEL RADIOACTIVE WASTE, AND REACTOR-RELATED GREATER THAN CLASS C WASTE Quality..., and applicant for a CoC shall establish measures to ensure that special processes, including welding...
10 CFR 72.158 - Control of special processes.
Code of Federal Regulations, 2011 CFR
2011-01-01
... NUCLEAR FUEL, HIGH-LEVEL RADIOACTIVE WASTE, AND REACTOR-RELATED GREATER THAN CLASS C WASTE Quality..., and applicant for a CoC shall establish measures to ensure that special processes, including welding...
10 CFR 72.158 - Control of special processes.
Code of Federal Regulations, 2012 CFR
2012-01-01
... NUCLEAR FUEL, HIGH-LEVEL RADIOACTIVE WASTE, AND REACTOR-RELATED GREATER THAN CLASS C WASTE Quality..., and applicant for a CoC shall establish measures to ensure that special processes, including welding...
10 CFR 72.158 - Control of special processes.
Code of Federal Regulations, 2014 CFR
2014-01-01
... NUCLEAR FUEL, HIGH-LEVEL RADIOACTIVE WASTE, AND REACTOR-RELATED GREATER THAN CLASS C WASTE Quality..., and applicant for a CoC shall establish measures to ensure that special processes, including welding...
[Surveillance cultures after high-level disinfection of flexible endoscopes in a general hospital].
Robles, Christian; Turín, Christie; Villar, Alicia; Huerta-Mercado, Jorge; Samalvides, Frine
2014-04-01
Flexible endoscopes are instruments with a complex structure which are used in invasive gastroenterological procedures; therefore high-level disinfection (HLD) is recommended as an appropriate reprocessing method. However, most hospitals do not perform quality control to assess the compliance and results of the disinfection process. The aim was to evaluate the effectiveness of flexible endoscope decontamination after high-level disinfection by surveillance cultures and to assess compliance with the reprocessing guidelines. This was a descriptive study conducted in January 2013 in the Gastroenterological Unit of a tertiary hospital; 30 endoscopic procedures were randomly selected. Compliance with guidelines was evaluated and surveillance cultures for common bacteria were performed after the disinfection process. On the observational assessment, compliance with the guidelines was as follows: pre-cleaning 9 (30%), cleaning 5 (16.7%), rinse 3 (10%), first drying 30 (100%), disinfection 30 (100%), final rinse 0 (0%) and final drying 30 (100%), demonstrating that only 3 of the 7 stages of the disinfection process were optimally performed. In the microbiological evaluation, 2 (6.7%) of the 30 procedures had a positive culture obtained from the surface of the endoscope. Furthermore, 1 (4.2%) of the 24 biopsy forceps gave a positive culture. The organisms isolated were different Pseudomonas species. High-level disinfection procedures were not optimally performed, with positive cultures of Pseudomonas species found in 6.7% of procedures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mee, V.V.D.; Meelker, H.; Schelde, R.V.D.
1999-01-01
In this investigation, an attempt is made to further the understanding of factors influencing the hydrogen content in duplex stainless steel gas tungsten arc (GTA) and gas metal arc (GMA) welds, as well as the extent to which it affects hydrogen-induced cracking susceptibility. The results indicated that susceptibility to hydrogen cracking using the GTA or GMA process appears to be limited. In practice, maintaining a moisture level below 10 ppm in the shielding gas is of less importance than the choice of welding parameters. Even a moisture level of 1000 ppm in the shielding gas, in combination with the correct welding parameters, will result in a sufficiently low hydrogen content in the weld. Similarly, a moisture level in the shielding gas below 10 ppm does not necessarily result in a low hydrogen content in the weld metal. Although very high ferrite levels were combined with high restraint and high hydrogen content, none of the GMA and GTA welds cracked. Susceptibility to hydrogen cracking is therefore concluded to be limited.
Satellite observations of middle atmosphere-thermosphere vertical coupling by gravity waves
NASA Astrophysics Data System (ADS)
Trinh, Quang Thai; Ern, Manfred; Doornbos, Eelco; Preusse, Peter; Riese, Martin
2018-03-01
Atmospheric gravity waves (GWs) are essential for the dynamics of the middle atmosphere. Recent studies have shown that these waves are also important for the thermosphere/ionosphere (T/I) system. Via vertical coupling, GWs can significantly influence the mean state of the T/I system. However, the penetration of GWs into the T/I system is not fully understood in modeling as well as observations. In the current study, we analyze the correlation between GW momentum fluxes observed in the middle atmosphere (30-90 km) and GW-induced perturbations in the T/I. In the middle atmosphere, GW momentum fluxes are derived from temperature observations of the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) satellite instrument. In the T/I, GW-induced perturbations are derived from neutral density measured by instruments on the Gravity field and Ocean Circulation Explorer (GOCE) and CHAllenging Minisatellite Payload (CHAMP) satellites. We find generally positive correlations between horizontal distributions at low altitudes (i.e., below 90 km) and horizontal distributions of GW-induced density fluctuations in the T/I (at 200 km and above). Two coupling mechanisms are likely responsible for these positive correlations: (1) fast GWs generated in the troposphere and lower stratosphere can propagate directly to the T/I and (2) primary GWs with their origins in the lower atmosphere dissipate while propagating upwards and generate secondary GWs, which then penetrate up to the T/I and maintain the spatial patterns of GW distributions in the lower atmosphere. The mountain-wave related hotspot over the Andes and Antarctic Peninsula is found clearly in observations of all instruments used in our analysis. Latitude-longitude variations in the summer midlatitudes are also found in observations of all instruments. These variations and strong positive correlations in the summer midlatitudes suggest that GWs with origins related to convection also propagate up to the T/I. Different processes which likely influence the vertical coupling are GW dissipation, possible generation of secondary GWs, and horizontal propagation of GWs. Limitations of the observations as well as of our research approach are discussed.
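A minimal sketch of the kind of map correlation described above (between gridded GW momentum fluxes in the middle atmosphere and GW-induced density fluctuations in the T/I) is given below; the gridding itself and all instrument-specific corrections are outside its scope, and the example maps are synthetic.

    # Python sketch: Pearson correlation between two gridded latitude-longitude maps (illustrative)
    import numpy as np

    def map_correlation(map_a, map_b):
        """Correlate two 2-D maps, ignoring grid cells that are missing in either one."""
        a = np.asarray(map_a, dtype=float).ravel()
        b = np.asarray(map_b, dtype=float).ravel()
        ok = np.isfinite(a) & np.isfinite(b)
        return np.corrcoef(a[ok], b[ok])[0, 1]

    # Example with two small synthetic maps sharing a common hotspot region
    base = np.zeros((18, 36))
    base[12:15, 25:30] = 1.0                      # a single localized hotspot
    rng = np.random.default_rng(2)
    print(round(map_correlation(base + 0.1 * rng.normal(size=base.shape),
                                2.0 * base + 0.2 * rng.normal(size=base.shape)), 3))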
Torsional ultrasonic wave based level measurement system
Holcomb, David E [Oak Ridge, TN; Kisner, Roger A [Knoxville, TN
2012-07-10
A level measurement system suitable for use in a high temperature and pressure environment to measure the level of coolant fluid within the environment, the system including a volume of coolant fluid located in a coolant region of the high temperature and pressure environment and having a level therein; an ultrasonic waveguide blade that is positioned within the desired coolant region of the high temperature and pressure environment; a magnetostrictive electrical assembly located within the high temperature and pressure environment and configured to operate in the environment and cooperate with the waveguide blade to launch and receive ultrasonic waves; and an external signal processing system located outside of the high temperature and pressure environment and configured for communicating with the electrical assembly located within the high temperature and pressure environment.
On-Orbit Gradiometry with the scientific instrument of the French Space Mission MICROSCOPE
NASA Astrophysics Data System (ADS)
Foulon, B.; Baghi, Q.; Panet, I.; Rodrigues, M.; Metris, G.; Touboul, P.
2017-12-01
The MICROSCOPE mission is fully dedicated to the in-orbit test of the universality of free fall, the so-called Weak Equivalence Principle (WEP). Based on a CNES Myriade microsatellite launched on the 25th of April 2016, MICROSCOPE is a CNES-ESA-ONERA-CNRS-OCA mission, the scientific objective of which is to test the Equivalence Principle with an extraordinary accuracy at the level of 10^-15. The measurement will be obtained from the T-SAGE (Twin Space Accelerometer for Gravitational Experimentation) instrument, which consists of two ultrasensitive differential accelerometers. One differential electrostatic accelerometer, labeled SU-EP, contains, at its center, two proof masses made of Titanium and Platinum and is used for the test. The twin accelerometer, labeled SU-REF, contains two Platinum proof masses and is used as a reference instrument. Separated by a 17 cm arm, they are embarked in a very stable and soft environment on board a satellite equipped with a drag-free control system and orbiting on a sun-synchronous circular orbit at 710 km above the Earth. In addition to the WEP test, this configuration can be interesting for various applications, and one of the proposed ideas is to use MICROSCOPE data for the measurement of Earth's gravitational gradient. Considering the gradiometer formed by the inner Platinum proof masses of the two differential accelerometers and the arm along the Y-axis of the instrument, which is perpendicular to the orbital plane, possibly 3 components of the gradient can be measured: Txy, Tyy and Tzy. Preliminary studies suggest that the errors can be lower than 10 mE. Taking advantage of its higher altitude with respect to GOCE, the low-frequency signature of Earth's potential seen by MICROSCOPE could provide an additional observable in gradiometry to discriminate between different models describing the large scales of the mass distribution in the Earth's deep mantle. The poster will briefly present the MICROSCOPE mission configuration. It will detail the actual in-flight performances of the accelerometers and of the attitude and position control, in order to evaluate the gradiometer error budget according to the satellite pointing mode configuration.
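As a rough illustration of the measurement principle (my own sketch, not material from the abstract), each measurable gradient component follows from the difference of the accelerations sensed by the two inner Platinum proof masses divided by the 17 cm baseline; assuming 1 E = 10^-9 s^-2, the quoted 10 mE error level corresponds to a differential-acceleration error of roughly 1.7 x 10^-12 m/s^2:

T_{iy} \approx \frac{a_i^{(1)} - a_i^{(2)}}{L}, \qquad i \in \{x, y, z\}, \qquad L \approx 0.17\,\mathrm{m}

\Delta a \approx (10\,\mathrm{mE}) \cdot L = 10^{-11}\,\mathrm{s^{-2}} \times 0.17\,\mathrm{m} \approx 1.7 \times 10^{-12}\,\mathrm{m\,s^{-2}}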
Adaptive Management Approach to Oil and Gas Activities in Areas Occupied by Pacific Walrus
NASA Astrophysics Data System (ADS)
Ireland, D.; Broker, K.; San Filippo, V.; Brzuzy, L.; Morse, L.
2016-12-01
During Shell's 2015 exploration drilling program in the Chukchi Sea, activities were conducted in accordance with a Letter of Authorization issued by the United States Fish and Wildlife Service that allowed the incidental harassment of Pacific Walrus and Polar Bears under the Marine Mammal Protection Act. As a part of the request for authorization, Shell proposed a process to monitor and assess the potential for activities to interact with walruses on ice, especially if ice posed a potential threat to the drill site. The process assimilated near real-time information from multiple data sources including vessel-based observations, aerial surveys, satellite-linked GPS tags on walrus, and satellite imagery of ice conditions and movements. These data were reviewed daily and assessed in the context of planned activities to assign a risk level (low, medium, or high). The risk level was communicated to all assets in the field and decision makers during morning briefings. A low risk level meant that planned activities could occur without further review. A medium risk level meant that some operations had a greater potential of interacting with walrus on ice and that additional discussions of those activities were required to determine the relative risk of potential impacts compared to the importance of the planned activity. A high risk level meant that the planned activities were necessary and walrus on ice were likely to be encountered. Assignment of a high risk level triggered contact with agency personnel and directly incorporated them into the assessment and decision making process. This process made effective use of relevant available information to provide meaningful assessments at temporal and spatial scales that allowed approved activities to proceed while minimizing potential impacts. Moreover, this process provides a valuable alternative to large-scale restriction areas with coarse temporal resolution without reducing protection to target species.
NASA Astrophysics Data System (ADS)
Knysh, Yu A.; Xanthopoulou, G. G.
2018-01-01
The object of the study is a catalytic combustion chamber that provides a highly efficient combustion process through the combined use of several effects: heat recovery from combustion, microvortex heat transfer, catalytic reaction, and acoustic resonance. High efficiency is provided by a complex of related technologies: technologies for transferring the heat of combustion products (recuperation) to the initial mixture, catalytic process technology, technology for calculating effective combustion processes based on microvortex matrices, technology for designing metamaterial structures, and technology for obtaining the required product topology by laser fusion of metal powder compositions. The mesoscale-level structure provides a combustion process that exploits a microvortex effect with a high intensity of heat and mass transfer. A high surface area (extremely high area-to-volume ratio), created by the nanoscale periodic structure, ensures the efficiency of catalytic reactions. The produced metamaterial is the first multiscale product of a new concept which, by combining periodic topologies at different scale levels, provides a qualitatively new set of product properties. This research is aimed at simultaneously addressing two global problems of the present: ensuring the environmental safety of transport systems and the power industry, and the economical and rational use of energy resources, providing humanity with energy now and in the foreseeable future.
Process connectivity in a naturally prograding river delta
NASA Astrophysics Data System (ADS)
Sendrowski, Alicia; Passalacqua, Paola
2017-03-01
River deltas are lowland systems that can display high hydrological connectivity. This connectivity can be structural (morphological connections), functional (control of fluxes), or process-based (information flow from system drivers to sinks). In this work, we quantify hydrological process connectivity in Wax Lake Delta, coastal Louisiana, by analyzing couplings among external drivers (discharge, tides, and wind) and water levels recorded at five islands and one channel over summer 2014. We quantify process connections with information theory, a branch of mathematics concerned with the communication of information. We represent process connections as a network; variables serve as network nodes and couplings as network links describing the strength, direction, and time scale of information flow. Comparing process connections at long (105 days) and short (10 days) time scales, we show that tides exhibit daily synchronization with water level, with decreasing strength from downstream to upstream, and that tides transfer information as they transition from spring to neap. Discharge synchronizes with water level, and the time scale of its information transfer compares well to physical travel times through the system, computed with a hydrodynamic model. Information transfer and physical transport show similar spatial patterns, although information transfer time scales are larger than physical travel times. Wind events associated with water level setup lead to increased process connectivity with highly variable information transfer time scales. We discuss the information theory results in the context of the hydrologic behavior of the delta, the role of vegetation as a connector/disconnector on islands, and the applicability of process networks as tools for evaluating delta modeling results.
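A minimal sketch of one information-theoretic coupling measure of the kind used in such analyses, assuming histogram-based mutual information scanned over lags to estimate the time scale of strongest coupling between a driver and a water level record; the bin count, lag range, and synthetic data are assumptions, and this is not the authors' implementation:

```python
# Minimal sketch (not the authors' code): estimating the lag at which a
# driver (e.g. discharge or tides) shares the most information with a
# water level record, using histogram-based mutual information.
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of MI (in bits) between two 1-D series."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])))

def best_lag(driver, water_level, max_lag=48):
    """Scan lags (in samples) and return the (lag, MI) with the strongest coupling."""
    scores = [(lag, mutual_information(driver[:-lag or None], water_level[lag:]))
              for lag in range(0, max_lag + 1)]
    return max(scores, key=lambda s: s[1])

# Example: a water level series that responds to the driver with a 6-sample delay
rng = np.random.default_rng(1)
drv = rng.normal(size=2000)
wl = np.roll(drv, 6) + 0.3 * rng.normal(size=2000)
print(best_lag(drv, wl))   # expected lag near 6
```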
NASA Astrophysics Data System (ADS)
Purwoko, Saad, Noor Shah; Tajudin, Nor'ain Mohd
2017-05-01
This study aims to: i) develop problem solving questions on the Linear Equations System of Two Variables (LESTV) based on the levels of the IPT Model; ii) explain the level of students' information-processing skill in solving LESTV problems; iii) explain students' skill in information processing in solving LESTV problems; and iv) explain students' cognitive process in solving LESTV problems. This study involves three phases: i) development of LESTV problem questions based on the Tessmer Model; ii) a quantitative survey method for analyzing students' level of information-processing skill; and iii) a qualitative case study method for analyzing students' cognitive process. The population of the study was 545 eighth grade students represented by a sample of 170 students of five Junior High Schools in Hilir Barat Zone, Palembang (Indonesia) that were chosen using cluster sampling. Fifteen students among them were drawn as a sample for the interview session, with saturated information obtained. The data were collected using the LESTV problem solving test and the interview protocol. The quantitative data were analyzed using descriptive statistics, while the qualitative data were analyzed using content analysis. The findings of this study indicated that students' cognitive process was only at the step of identifying external sources and fluently performing algorithms in short-term memory. Only 15.29% of students could retrieve type A information and 5.88% of students could retrieve type B information from long-term memory. The implication is that the developed LESTV problems validated the IPT Model in modelling students' assessment at different levels of the hierarchy.
Lead iron phosphate glass as a containment medium for disposal of high-level nuclear waste
Boatner, Lynn A.; Sales, Brian C.
1989-01-01
Lead-iron phosphate glasses containing a high level of Fe2O3 for use as a storage medium for high-level radioactive nuclear waste. By combining lead-iron phosphate glass with various types of simulated high-level nuclear waste, a highly corrosion-resistant, homogeneous, easily processed glass can be formed. For corroding solutions at 90 °C, with solution pH values in the range between 5 and 9, the corrosion rate of the lead-iron phosphate nuclear waste glass is at least 10^2 to 10^3 times lower than the corrosion rate of a comparable borosilicate nuclear waste glass. The presence of Fe2O3 in forming the lead-iron phosphate glass is critical. Lead-iron phosphate nuclear waste glasses can be prepared at temperatures as low as 800 °C, since they exhibit very low melt viscosities in the 800 to 1050 °C temperature range. These waste-loaded glasses do not readily devitrify at temperatures as high as 550 °C and are not adversely affected by large doses of gamma radiation in H2O at 135 °C. The lead-iron phosphate waste glasses can be prepared with minimal modification of the technology developed for processing borosilicate glass nuclear wasteforms.
Effects of Sequences of Cognitions on Group Performance Over Time
Molenaar, Inge; Chiu, Ming Ming
2017-01-01
Extending past research showing that sequences of low cognitions (low-level processing of information) and high cognitions (high-level processing of information through questions and elaborations) influence the likelihoods of subsequent high and low cognitions, this study examines whether sequences of cognitions are related to group performance over time; 54 primary school students (18 triads) discussed and wrote an essay about living in another country (32,375 turns of talk). Content analysis and statistical discourse analysis showed that within each lesson, groups with more low cognitions or more sequences of low cognition followed by high cognition added more essay words. Groups with more high cognitions, sequences of low cognition followed by low cognition, or sequences of high cognition followed by an action followed by low cognition, showed different words and sequences, suggestive of new ideas. The links between cognition sequences and group performance over time can inform facilitation and assessment of student discussions. PMID:28490854
The Role of Independent V&V in Upstream Software Development Processes
NASA Technical Reports Server (NTRS)
Easterbrook, Steve
1996-01-01
This paper describes the role of Verification and Validation (V&V) during the requirements and high level design processes, and in particular the role of Independent V&V (IV&V). The job of IV&V during these phases is to ensure that the requirements are complete, consistent and valid, and to ensure that the high level design meets the requirements. This contrasts with the role of Quality Assurance (QA), which ensures that appropriate standards and process models are defined and applied. This paper describes the current state of practice for IV&V, concentrating on the process model used in NASA projects. We describe a case study, showing the processes by which problem reporting and tracking takes place, and how IV&V feeds into decision making by the development team. We then describe the problems faced in implementing IV&V. We conclude that despite a well defined process model, and tools to support it, IV&V is still beset by communication and coordination problems.
High-autonomy control of space resource processing plants
NASA Technical Reports Server (NTRS)
Schooley, Larry C.; Zeigler, Bernard P.; Cellier, Francois E.; Wang, Fei-Yue
1993-01-01
A highly autonomous intelligent command/control architecture has been developed for planetary surface base industrial process plants and Space Station Freedom experimental facilities. The architecture makes use of a high-level task-oriented mode with supervisory control from one or several remote sites, and integrates advanced network communications concepts and state-of-the-art man/machine interfaces with the most advanced autonomous intelligent control. Attention is given to the full-dynamics model of a Martian oxygen-production plant, event-based/fuzzy-logic process control, and fault management practices.
The reliability of continuous brain responses during naturalistic listening to music.
Burunat, Iballa; Toiviainen, Petri; Alluri, Vinoo; Bogert, Brigitte; Ristaniemi, Tapani; Sams, Mikko; Brattico, Elvira
2016-01-01
Low-level (timbral) and high-level (tonal and rhythmical) musical features during continuous listening to music, studied by functional magnetic resonance imaging (fMRI), have been shown to elicit large-scale responses in cognitive, motor, and limbic brain networks. Using a similar methodological approach and a similar group of participants, we aimed to study the replicability of previous findings. Participants' fMRI responses during continuous listening of a tango Nuevo piece were correlated voxelwise against the time series of a set of perceptually validated musical features computationally extracted from the music. The replicability of previous results and the present study was assessed by two approaches: (a) correlating the respective activation maps, and (b) computing the overlap of active voxels between datasets at variable levels of ranked significance. Activity elicited by timbral features was better replicable than activity elicited by tonal and rhythmical ones. These results indicate more reliable processing mechanisms for low-level musical features as compared to high-level features. The processing of such high-level features is probably more sensitive to the states and traits of the listeners, as well as to their background in music.
Profile of mathematics anxiety of 7th graders
NASA Astrophysics Data System (ADS)
Udil, Patrisius Afrisno; Kusmayadi, Tri Atmojo; Riyadi
2017-08-01
Mathematics anxiety is one of the important factors affecting students' mathematics achievement. The present research investigates the profile of students' mathematics anxiety. This research focuses on the analysis and description of students' mathematics anxiety level in general and its dominant domain and aspect. Qualitative research with a case study strategy was used in this research. Subjects in this research involved 15 students of 7th grade chosen with purposive sampling. Data in this research were the students' mathematics anxiety scale results, interview records, and observation results during both mathematics learning activity and testing. Students were asked to complete the mathematics anxiety scale before being interviewed and observed. The results show that, generally, students' mathematics anxiety was identified at the moderate level. In addition, students' mathematics anxiety during mathematics tests was identified at the high level, but it was at the moderate level during the mathematics learning process. Based on the anxiety domain, students have high mathematics anxiety in the cognitive domain, while it was at the moderate level for the psychological and physiological domains. On the other hand, it was identified at a low level for the psychological domain during the mathematics learning process. Therefore, it can be concluded that students have serious and high anxiety regarding mathematics in the cognitive domain and in the mathematics test aspect.
Prototype architecture for a VLSI level zero processing system [Space Station Freedom]
NASA Technical Reports Server (NTRS)
Shi, Jianfei; Grebowsky, Gerald J.; Horner, Ward P.; Chesney, James R.
1989-01-01
The prototype architecture and implementation of a high-speed level zero processing (LZP) system are discussed. Due to the new processing algorithm and VLSI technology, the prototype LZP system features compact size, low cost, high processing throughput, easy maintainability, and increased reliability. Though extensive control functions are implemented in hardware, the programmability of processing tasks makes it possible to adapt the system to different data formats and processing requirements. It is noted that the LZP system can handle up to 8 virtual channels and 24 sources with a combined data volume of 15 Gbytes per orbit. For greater demands, multiple LZP systems can be configured in parallel, each called a processing channel and assigned a subset of virtual channels. The telemetry data stream will be steered into different processing channels in accordance with their virtual channel IDs. This super system can cope with a virtually unlimited number of virtual channels and sources. In the near future, it is expected that new disk farms with data rates exceeding 150 Mbps will be available from commercial vendors due to advances in disk drive technology.
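A minimal sketch of the steering idea described above (an illustration, not the prototype's design): frames are routed to parallel processing channels according to their virtual channel ID; the frame layout and the channel assignment are assumptions.

```python
# Minimal sketch: steering a telemetry frame stream into parallel
# processing channels by virtual channel ID.
from collections import defaultdict

# Assign subsets of the 8 virtual channels to two processing channels (assumed split).
VC_TO_CHANNEL = {0: "LZP-A", 1: "LZP-A", 2: "LZP-A", 3: "LZP-A",
                 4: "LZP-B", 5: "LZP-B", 6: "LZP-B", 7: "LZP-B"}

def steer(frames):
    """Group frames by their assigned processing channel."""
    channels = defaultdict(list)
    for frame in frames:
        vc_id = frame["vc_id"]   # assumed field extracted from the frame header
        channels[VC_TO_CHANNEL[vc_id]].append(frame)
    return channels

frames = [{"vc_id": i % 8, "payload": bytes(16)} for i in range(24)]
for name, batch in steer(frames).items():
    print(name, len(batch), "frames")
```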
Nickel Base Superalloy Turbine Disk
NASA Technical Reports Server (NTRS)
Gabb, Timothy P. (Inventor); Gauda, John (Inventor); Telesman, Ignacy (Inventor); Kantzos, Pete T. (Inventor)
2005-01-01
A low solvus, high refractory alloy having unusually versatile processing and mechanical property capabilities for advanced disks and rotors in gas turbine engines. The nickel base superalloy has a composition consisting essentially of, in weight percent, 3.0-4.0 Al, 0.02-0.04 B, 0.02-0.05 C, 12.0-14.0 Cr, 19.0-22.0 Co, 2.0-3.5 Mo, greater than 1.0 to 2.1 Nb, 1.3 to 2.1 Ta, 3.0 to 4.0 Ti, 4.1 to 5.0 W, 0.03-0.06 Zr, and balance essentially Ni and incidental impurities. The superalloy combines ease of processing with high temperature capabilities to be suitable for use in various turbine engine disk, impeller, and shaft applications. The Co and Cr levels of the superalloy can provide a low solvus temperature for high processing versatility. The W, Mo, Ta, and Nb refractory element levels of the superalloy can provide sustained strength, creep, and dwell crack growth resistance at high temperatures.
NASA Astrophysics Data System (ADS)
Chen, Xiaolong; Honda, Hiroshi; Kuroda, Seiji; Araki, Hiroshi; Murakami, Hideyuki; Watanabe, Makoto; Sakka, Yoshio
2016-12-01
Effects of the ceramic powder size used for suspension as well as several processing parameters in suspension plasma spraying of YSZ were investigated experimentally, aiming to fabricate highly segmented microstructures for thermal barrier coating (TBC) applications. Particle image velocimetry (PIV) was used to observe the atomization process and the velocity distribution of atomized droplets and ceramic particles travelling toward the substrates. The tested parameters included the secondary plasma gas (He versus H2), suspension injection flow rate, and substrate surface roughness. Results indicated that a plasma jet with a relatively higher content of He or H2 as the secondary plasma gas was critical to produce highly segmented YSZ TBCs with a crack density up to 12 cracks/mm. The optimized suspension flow rate played an important role to realize coatings with a reduced porosity level and improved adhesion. An increased powder size and higher operation power level were beneficial for the formation of highly segmented coatings onto substrates with a wider range of surface roughness.
The FORCE - A highly portable parallel programming language
NASA Technical Reports Server (NTRS)
Jordan, Harry F.; Benten, Muhammad S.; Alaghband, Gita; Jakob, Ruediger
1989-01-01
This paper explains why the FORCE parallel programming language is easily portable among six different shared-memory multiprocessors, and how a two-level macro preprocessor makes it possible to hide low-level machine dependencies and to build machine-independent high-level constructs on top of them. These FORCE constructs make it possible to write portable parallel programs largely independent of the number of processes and the specific shared-memory multiprocessor executing them.
The FORCE: A highly portable parallel programming language
NASA Technical Reports Server (NTRS)
Jordan, Harry F.; Benten, Muhammad S.; Alaghband, Gita; Jakob, Ruediger
1989-01-01
Here, it is explained why the FORCE parallel programming language is easily portable among six different shared-memory multiprocessors, and how a two-level macro preprocessor makes it possible to hide low-level machine dependencies and to build machine-independent high-level constructs on top of them. These FORCE constructs make it possible to write portable parallel programs largely independent of the number of processes and the specific shared-memory multiprocessor executing them.
Energy Transfer to Upper Trophic Levels on a Small Offshore Bank
2009-09-30
Observations from Platts Bank and other feeding hotspots in the Gulf of Maine show that high levels of feeding activity are ephemeral—sometimes very active, often not. Differences can exist...defining patterns of biodiversity in the oceans and the processes that shape them. The Gulf of Maine program has the additional aim of describing how
Olsen, Shira A; Beck, J Gayle
2012-01-01
This study investigated the effects of high and low levels of dissociation on information processing for analogue trauma and neutral stimuli. Fifty-four undergraduate females who reported high and low levels of trait dissociation were presented with two films, one depicting traumatic events, the other containing neutral material. Participants completed a divided attention task (yielding a proxy measure of attention), as well as explicit memory (free-recall) and implicit memory (word-stem completion) tasks for both films. Results indicated that the high DES group showed less attention and had poorer recall for the analogue trauma stimuli, relative to the neutral stimuli and the low DES group. These findings suggest that high levels of trait dissociation are associated with reductions in attention and memory for analogue trauma stimuli, relative to neutral stimuli and relative to low trait dissociation. Implications for the role of cognitive factors in the etiology of negative post-trauma responses are discussed.
USDA-ARS's Scientific Manuscript database
In these studies liquid hot water (LHW) pretreated and enzymatically hydrolyzed Sweet Sorghum Bagasse (SSB) hydrolyzates were fermented in a fed-batch reactor. As reported in the preceding paper, the culture was not able to ferment the hydrolyzate I in a batch process due to presence of high level o...
ERIC Educational Resources Information Center
Rahman, Abdul; Ahmar, Ansari Saleh
2016-01-01
Several studies suggest that most students are not at the same level of development (Slavin, 2008). In the transition from the concrete operational level to the formal operational level, students experience delays. Consequently, students have difficulty in solving mathematics problems. The research method is a qualitative, descriptive-explorative…
How gender-expectancy affects the processing of "them".
Doherty, Alice; Conklin, Kathy
2017-04-01
How sensitive is pronoun processing to expectancies based on real-world knowledge and language usage? The current study links research on the integration of gender stereotypes and number-mismatch to explore this question. It focuses on the use of them to refer to antecedents of different levels of gender-expectancy (low-cyclist, high-mechanic, known-spokeswoman). In a rating task, them is considered increasingly unnatural with greater gender-expectancy. However, participants might not be able to differentiate high-expectancy and gender-known antecedents online because they initially search for plural antecedents (e.g., Sanford & Filik), and they make all-or-nothing gender inferences. An eye-tracking study reveals early differences in the processing of them with antecedents of high gender-expectancy compared with gender-known antecedents. This suggests that participants have rapid access to the expected gender of the antecedent and the level of that expectancy.
Solar silicon via improved and expanded metallurgical silicon technology
NASA Technical Reports Server (NTRS)
Hunt, L. P.; Dosaj, V. D.; Mccormick, J. R.
1977-01-01
A completed preliminary survey of silica sources indicates that sufficient quantities of high-purity quartz are available in the U.S. and Canada to meet goals. Supply can easily meet demand for this little-sought commodity. Charcoal, as a reductant for silica, can be purified to a sufficient level by high-temperature fluorocarbon treatment and vacuum processing. High-temperature treatment causes partial graphitization, which can lead to difficulty in smelting. Smelting of Arkansas quartz and purified charcoal produced kilogram quantities of silicon having impurity levels generally much lower than in MG-Si. Half of the goal of increasing the boron resistivity from 0.03 ohm-cm in metallurgical silicon to 0.3 ohm-cm in solar silicon was met. A cost analysis of the solidification process indicates $3.50-7.25/kg Si for the Czochralski-type process and $1.50-4.25/kg Si for the Bridgman-type technique.
Hasegawa, Naoya; Kitamura, Hideaki; Murakami, Hiroatsu; Kameyama, Shigeki; Sasagawa, Mutsuo; Egawa, Jun; Tamura, Ryu; Endo, Taro; Someya, Toshiyuki
2013-01-01
Individuals with autistic spectrum disorder (ASD) demonstrate an impaired ability to infer the mental states of others from their gaze. Thus, investigating the relationship between ASD and eye gaze processing is crucial for understanding the neural basis of social impairments seen in individuals with ASD. In addition, characteristics of ASD are observed in more comprehensive visual perception tasks. These visual characteristics of ASD have been well-explained in terms of the atypical relationship between high- and low-level gaze processing in ASD. We studied neural activity during gaze processing in individuals with ASD using magnetoencephalography, with a focus on the relationship between high- and low-level gaze processing both temporally and spatially. Minimum Current Estimate analysis was applied to perform source analysis of magnetic responses to gaze stimuli. The source analysis showed that later activity in the primary visual area (V1) was affected by gaze direction only in the ASD group. Conversely, the right posterior superior temporal sulcus, which is a brain region that processes gaze as a social signal, in the typically developed group showed a tendency toward greater activation during direct compared with averted gaze processing. These results suggest that later activity in V1 relating to gaze processing is altered or possibly enhanced in high-functioning individuals with ASD, which may underpin the social cognitive impairments in these individuals.
A Social Approach to High-Level Context Generation for Supporting Context-Aware M-Learning
ERIC Educational Resources Information Center
Pan, Xu-Wei; Ding, Ling; Zhu, Xi-Yong; Yang, Zhao-Xiang
2017-01-01
In m-learning environments, context-awareness is for wide use where learners' situations are varied, dynamic and unpredictable. We are facing the challenge of requirements of both generality and depth in generating and processing high-level context. In this paper, we present a social approach which exploits social dynamics and social computing for…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bickford, D.F.; Congdon, J.W.; Oblath, S.B.
1987-01-01
At the U.S. Department of Energy's Savannah River Plant, corrosion of carbon steel storage tanks containing alkaline, high-level radioactive waste is controlled by specification of limits on waste composition and temperature. Processes for the preparation of waste for final disposal will result in waste with low corrosion inhibitor concentrations and, in some cases, high aromatic organic concentrations, neither of which is characteristic of previous operations. Laboratory tests, conducted to determine minimum corrosion inhibitor levels, indicated pitting of carbon steel near the waterline for proposed storage conditions. In situ electrochemical measurements of full-scale radioactive process demonstrations have been conducted to assess the validity of laboratory tests. Probes included pH, Eh (potential relative to a standard hydrogen electrode), tank potential, and alloy coupons. In situ results are compared to those of the laboratory tests, with particular regard given to simulated solution composition.
Near-field environment/processes working group summary
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, W.M.
1995-09-01
This article is a summary of the proceedings of a group discussion which took place at the Workshop on the Role of Natural Analogs in Geologic Disposal of High-Level Nuclear Waste in San Antonio, Texas on July 22-25, 1991. The working group concentrated on the subject of the near-field environment to geologic repositories for high-level nuclear waste. The near-field environment may be affected by thermal perturbations from the waste, and by disturbances caused by the introduction of exotic materials during construction of the repository. This group also discussed the application of modelling of performance-related processes.
Design of signal reception and processing system of embedded ultrasonic endoscope
NASA Astrophysics Data System (ADS)
Li, Ming; Yu, Feng; Zhang, Ruiqiang; Li, Yan; Chen, Xiaodong; Yu, Daoyin
2009-11-01
The Embedded Ultrasonic Endoscope, based on an embedded microprocessor and an embedded real-time operating system, sends a micro ultrasonic probe into the body cavity through the biopsy channel of the electronic endoscope to obtain the histological features of lesions in the digestive organs by rotary scanning, and acquires pictures of the alimentary canal mucosal surface. At the same time, ultrasonic signals are processed by the signal reception and processing system, forming images of the full histology of the digestive organs. The signal reception and processing system is an important component of the Embedded Ultrasonic Endoscope. However, the traditional design, which uses multi-level amplifiers and special digital processing circuits to implement signal reception and processing, no longer satisfies the high-performance, miniaturization, and low-power requirements of embedded systems, and because of the high noise introduced by multi-level amplifiers, the extraction of small signals becomes difficult. Therefore, this paper presents a method of signal reception and processing based on a double variable-gain amplifier and an FPGA, increasing the flexibility and dynamic range of the signal reception and processing system, improving the system noise level, and reducing power consumption. Finally, we set up the embedded experimental system, using a transducer with a center frequency of 8 MHz to scan membrane samples, and display the image of the ultrasonic echo reflected by each layer of the membrane, with a frame rate of 5 Hz, verifying the correctness of the system.
de la Fuente, Jesús; Sander, Paul; Martínez-Vicente, José M; Vera, Mariano; Garzón, Angélica; Fadda, Salvattore
2017-01-01
The Theory of Self- vs. Externally-Regulated Learning™ (SRL vs. ERL) proposed different types of relationships among levels of variables in Personal Self-Regulation (PSR) and Regulatory Teaching (RT) to predict the meta-cognitive, meta-motivational and -emotional variables of learning, and of Academic Achievement in Higher Education. The aim of this investigation was empirical: to validate the model of the combined effect of low-medium-high levels in PSR and RT on the dependent variables. For the analysis of combinations, a selected sample of 544 undergraduate students from two Spanish universities was used. Data collection was obtained from validated instruments, in Spanish versions. Using an ex-post-facto design, different Univariate and Multivariate Analyses (3 × 1, 3 × 3, and 4 × 1) were conducted. Results provide evidence for a consistent effect of low-medium-high levels of PSR and of RT, thus giving significant partial confirmation of the proposed rational model. As predicted, (1) the levels of PSR and RT positively and significantly affected the levels of learning approaches, resilience, engagement, academic confidence, test anxiety, and procedural and attitudinal academic achievement; (2) the most favorable type of interaction was a high level of PSR with a high level of RT. The limitations and implications of these results in the design of effective teaching are analyzed, to improve university teaching-learning processes.
Material Processing Opportunities Utilizing a Free Electron Laser
NASA Astrophysics Data System (ADS)
Todd, Alan
1996-11-01
Many properties of photocathode-driven Free Electron Lasers (FEL) are extremely attractive for material processing applications. These include: 1) broad-band tunability across the IR and UV spectra which permits wavelength optimization, depth deposition control and utilization of resonance phenomena; 2) picosecond pulse structure with continuous nanosecond spacing for optimum deposition efficiency and minimal collateral damage; 3) high peak and average radiated power for economic processing in quantity; and 4) high brightness for spatially defined energy deposition and intense energy density in small spots. We discuss five areas: polymer, metal and electronic material processing, micromachining and defense applications, where IR or UV material processing will find application if the economics is favorable. Specific examples in the IR and UV, such as surface texturing of polymers for improved look and feel, and anti-microbial food packaging films, which have been demonstrated using UV excimer lamps and lasers, will be given. Unfortunately, although the process utility is readily proven, the power levels and costs of lamps and lasers do not scale to production margins. However, from these examples, application-specific cost targets ranging from 0.1¢/kJ to 10¢/kJ of delivered radiation at power levels from 10 kW to 500 kW have been developed and are used to define strawman FEL processing systems. Since FEL radiation energy extraction from the generating electron beam is typically a few percent, at these high average power levels, economic considerations dictate the use of a superconducting RF accelerator with energy recovery to minimize cavity and beam dump power loss. Such a 1 kW IR FEL, funded by the US Navy, is presently under construction at the Thomas Jefferson National Accelerator Facility. This dual-use device, scheduled to generate first light in late 1997, will test both the viability of high-power FELs for shipboard self-defense against cruise missiles, and for the first time, provide an industrial testbed capable of processing various materials in market evaluation quantities.
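A rough arithmetic illustration of what the quoted cost targets imply for an hour of delivered radiation (my own calculation under an assumed one-hour run, not figures from the paper):

```python
# Rough arithmetic for the quoted cost targets (0.1-10 cents/kJ of delivered
# radiation at 10-500 kW average power); the one-hour run time is an assumption.
for power_kw in (10, 100, 500):
    energy_kj = power_kw * 3600             # kJ delivered in one hour (1 kW = 1 kJ/s)
    for cents_per_kj in (0.1, 1.0, 10.0):
        cost_usd = energy_kj * cents_per_kj / 100.0
        print(f"{power_kw:4d} kW, {cents_per_kj:4.1f} cents/kJ -> ${cost_usd:,.0f} per hour")
```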
Exogenous cortisol facilitates responses to social threat under high provocation.
Bertsch, Katja; Böhnke, Robina; Kruk, Menno R; Richter, Steffen; Naumann, Ewald
2011-04-01
Stress is one of the most important promoters of aggression. Human and animal studies have found associations between basal and acute levels of the stress hormone cortisol and (abnormal) aggression. Irrespective of the direction of these changes--i.e., increased or decreased aggressive behavior--the results of these studies suggest dramatic alterations in the processing of threat-related social information. Therefore, the effects of cortisol and provocation on social information processing were addressed by the present study. After a placebo-controlled pharmacological manipulation of acute cortisol levels, we exposed healthy individuals to high or low levels of provocation in a competitive aggression paradigm. Influences of cortisol and provocation on emotional face processing were then investigated with reaction times and event-related potentials (ERPs) in an emotional Stroop task. In line with previous results, enhanced early and later positive, posterior ERP components indicated a provocation-induced enhanced relevance for all kinds of social information. Cortisol, however, reduced an early frontocentral bias for angry faces and--despite the provocation-enhancing relevance--led to faster reactions for all facial expressions in highly provoked participants. The results thus support the moderating role of social information processing in the 'vicious circle of stress and aggression'.
Food processing by high hydrostatic pressure.
Yamamoto, Kazutaka
2017-04-01
High hydrostatic pressure (HHP) processing, as a nonthermal process, can be used to inactivate microbes while minimizing chemical reactions in food. In this regard, an HHP level of 100 MPa (986.9 atm/1019.7 kgf/cm^2) or more is applied to food. Conventional thermal processing damages food components related to color, flavor, and nutrition via enhanced chemical reactions. However, HHP processing minimizes this damage and inactivates microbes, enabling the processing of high-quality safe foods. The first commercial HHP-processed foods were launched in 1990 as fruit products such as jams, and since then other products have been commercialized: retort rice products (enhanced water impregnation), cooked hams and sausages (shelf life extension), soy sauce with minimized salt (short-time fermentation owing to enhanced enzymatic reactions), and beverages (shelf life extension). The characteristics of HHP food processing are reviewed from the viewpoints of nonthermal processing, history, research and development, physical and biochemical changes, and processing equipment.
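The unit equivalences quoted for 100 MPa can be checked with a simple conversion (my own arithmetic, not material from the review):

```python
# Checking the unit equivalences quoted for 100 MPa.
PA_PER_ATM = 101_325.0          # standard atmosphere in pascals
PA_PER_KGF_CM2 = 98_066.5       # 1 kgf/cm^2 in pascals

p_pa = 100e6                    # 100 MPa
print(f"{p_pa / PA_PER_ATM:.1f} atm")           # ~986.9 atm
print(f"{p_pa / PA_PER_KGF_CM2:.1f} kgf/cm^2")  # ~1019.7 kgf/cm^2
```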
Implementation of a VLSI Level Zero Processing system utilizing the functional component approach
NASA Technical Reports Server (NTRS)
Shi, Jianfei; Horner, Ward P.; Grebowsky, Gerald J.; Chesney, James R.
1991-01-01
A high rate Level Zero Processing system is currently being prototyped at NASA/Goddard Space Flight Center (GSFC). Based on state-of-the-art VLSI technology and the functional component approach, the new system promises capabilities of handling multiple Virtual Channels and Applications with a combined data rate of up to 20 Megabits per second (Mbps) at low cost.
NASA Astrophysics Data System (ADS)
Snapir, Zohar; Eberbach, Catherine; Ben-Zvi-Assaraf, Orit; Hmelo-Silver, Cindy; Tripto, Jaklin
2017-10-01
Science education today has become increasingly focused on research into complex natural, social and technological systems. In this study, we examined the development of high-school biology students' systems understanding of the human body, in a three-year longitudinal study. The development of the students' system understanding was evaluated using the Components Mechanisms Phenomena (CMP) framework for conceptual representation. We coded and analysed the repertory grid personal constructs of 67 high-school biology students at 4 points throughout the study. Our data analysis builds on the assumption that systems understanding entails a perception of all the system categories, including structures within the system (its Components), specific processes and interactions at the macro and micro levels (Mechanisms), and the Phenomena that present the macro scale of processes and patterns within a system. Our findings suggest that as the learning process progressed, the systems understanding of our students became more advanced, moving forward within each of the major CMP categories. Moreover, there was an increase in the mechanism complexity presented by the students, manifested by more students describing mechanisms at the molecular level. Thus, the 'mechanism' category and the micro level are critical components that enable students to understand system-level phenomena such as homeostasis.
Nasri Nasrabadi, Mohammad Reza; Razavi, Seyed Hadi
2010-04-01
In this work, we applied statistical experimental design to a fed-batch process for optimization of tricarboxylic acid cycle (TCA) intermediates in order to achieve high-level production of canthaxanthin from Dietzia natronolimnaea HS-1 cultured in beet molasses. A fractional factorial design (screening test) was first conducted on five TCA cycle intermediates. Out of the five TCA cycle intermediates investigated via screening tests, alpha-ketoglutarate, oxaloacetate and succinate were selected based on their statistically significant (P<0.05) and positive effects on canthaxanthin production. These significant factors were optimized by means of response surface methodology (RSM) in order to achieve high-level production of canthaxanthin. The experimental results of the RSM were fitted with a second-order polynomial equation by means of a multiple regression technique to identify the relationship between canthaxanthin production and the three TCA cycle intermediates. By means of this statistical design under a fed-batch process, the optimum conditions required to achieve the highest level of canthaxanthin (13,172 ± 25 µg l^-1) were determined as follows: alpha-ketoglutarate, 9.69 mM; oxaloacetate, 8.68 mM; succinate, 8.51 mM.
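A minimal sketch of the kind of second-order (quadratic) response-surface fit described above, using synthetic three-factor data in place of the study's measurements; the factor levels, response values, and fitted coefficients are illustrative assumptions:

```python
# Minimal sketch of fitting a second-order polynomial response surface (RSM)
# with three factors; synthetic data, not the study's measurements.
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
# Three factors (e.g. alpha-ketoglutarate, oxaloacetate, succinate, in mM)
X = np.array(list(product([6.0, 9.0, 12.0], repeat=3)))
# Synthetic response with an interior optimum plus noise (stand-in for canthaxanthin)
y = (13000 - 40*(X[:, 0]-9.7)**2 - 55*(X[:, 1]-8.7)**2 - 50*(X[:, 2]-8.5)**2
     + rng.normal(scale=30, size=len(X)))

def quad_design(X):
    """Design matrix: intercept, linear, squared, and two-way interaction terms."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2,
                            x1*x2, x1*x3, x2*x3])

beta, *_ = np.linalg.lstsq(quad_design(X), y, rcond=None)
print("fitted coefficients:", np.round(beta, 2))
```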
Technical and economical evaluation of carbon dioxide capture and conversion to methanol process
NASA Astrophysics Data System (ADS)
Putra, Aditya Anugerah; Juwari, Handogo, Renanto
2017-05-01
The phenomenon of global warming, which is indicated by the increase of the Earth's surface temperature, is caused by high levels of greenhouse gases in the atmosphere. Carbon dioxide, which increases year by year because of the high demand for energy, makes the largest contribution to greenhouse gases. One of the most widely applied solutions to mitigate carbon dioxide levels is post-combustion carbon capture technology. Although the technology can absorb up to 90% of the carbon dioxide produced, there are concerns that captured carbon dioxide stored underground will be released over time. Utilizing captured carbon dioxide could be a promising solution. Captured carbon dioxide can be converted into more valuable materials, such as methanol. This research evaluates the conversion process of captured carbon dioxide to methanol, technically and economically. The research found that, technically, methanol can be made from captured carbon dioxide. The product stream is 25.6905 kg/s with 99.69% methanol purity. The economic evaluation of the whole conversion process shows that the process is economically feasible. The capture and conversion process requires a total annual cost of 176,101,157.69 per year, which can be offset by revenue from methanol product sales.
Keenan, Derek F; Brunton, Nigel; Gormley, Ronan; Butler, Francis
2011-01-26
The aim of the present study was the evaluation of high hydrostatic pressure (HHP) processing on the levels of polyphenolic compounds and selected quality attributes of fruit smoothies compared to fresh and mild conventional pasteurization processing. Fruit smoothie samples were thermally (P(70) > 10 min) or HHP processed (450 MPa/1, 3, or 5 min/20 °C) (HHP1, HHP3, and HHP5, respectively). The polyphenolic content, color difference (ΔE), sensory acceptability, and rheological (G'; G''; G*) properties of the smoothies were assessed over a storage period of 30 days at 4 °C. Processing had a significant effect (p < 0.001) on the levels of polyphenolic compounds in smoothies. However, this effect was not consistent for all compound types. HHP processed samples (HHP1 and HHP3) had higher (p < 0.001) levels of phenolic compounds, for example, procyanidin B1 and hesperidin, than HHP5 samples. Levels of flavanones and hydroxycinnamic acid compounds decreased (p < 0.001) after 30 days of storage at 2-4 °C). Decreases were particularly notable between days 10 and 20 (hesperidin) and days 20 and 30 (chlorogenic acid) (p < 0.001). There was a wide variation in ΔE values recorded over the 30 day storage period (p < 0.001), with fresh and thermally processed smoothies exhibiting lower color change than their HHP counterparts (p < 0.001). No effect was observed for the type of process on complex modulus (G*) data, but all smoothies became less rigid during the storage period (p < 0.001). Despite minor product deterioration during storage (p < 0.001), sensory acceptability scores showed no preference for either fresh or processed (thermal/HHP) smoothies, which were deemed acceptable (>3) by panelists.
Towards Portable Large-Scale Image Processing with High-Performance Computing.
Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A
2018-05-03
High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), which is the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from the system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.
Process Development for Hydrothermal Liquefaction of Algae Feedstocks in a Continuous-Flow Reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elliott, Douglas C.; Hart, Todd R.; Schmidt, Andrew J.
Wet algae slurries can be converted into an upgradeable biocrude by hydrothermal liquefaction (HTL). High levels of carbon conversion to gravity-separable biocrude product were accomplished at relatively low temperature (350 °C) in a continuous-flow, pressurized (sub-critical liquid water) environment (20 MPa). As opposed to earlier work in batch reactors reported by others, direct oil recovery was achieved without the use of a solvent, and biomass trace components were removed by processing steps so that they did not cause process difficulties. High conversions were obtained even with high slurry concentrations of up to 35 wt% of dry solids. Catalytic hydrotreating was effectively applied for hydrodeoxygenation, hydrodenitrogenation, and hydrodesulfurization of the biocrude to form liquid hydrocarbon fuel. Catalytic hydrothermal gasification was effectively applied for HTL byproduct water cleanup and fuel gas production from water soluble organics, allowing the water to be considered for recycle of nutrients to the algae growth ponds. As a result, high conversion of algae to liquid hydrocarbon and gas products was found with low levels of organic contamination in the byproduct water. All three process steps were accomplished in bench-scale, continuous-flow reactor systems such that design data for process scale-up was generated.
NASA Astrophysics Data System (ADS)
McKelvey, David; Menary, Gary; Martin, Peter; Yan, Shiyong
2017-10-01
The thermoforming process involves a previously extruded sheet of material being reheated to a softened state below the melting temperature and then forced into a mould either by a plug, air pressure or a combination of both. Thermoplastics such as polystyrene (PS) and polypropylene (PP) are commonly processed via thermoforming for products in the packaging industry. However, high density polyethylene (HDPE) is generally not processed via thermoforming, and yet HDPE is extensively processed throughout the packaging industry. The aim of this study was to investigate the potential of thermoforming HDPE. The objectives were to firstly investigate the mechanical response under comparable loading conditions and secondly to investigate the final mechanical properties post-forming. Obtaining in-process stress-strain behavior during thermoforming is extremely challenging, if not impossible. To overcome this limitation the processing conditions were replicated offline using the QUB biaxial stretcher. Typical processing conditions that the material will experience during the process are high strain levels, high strain rates between 0.1-10 s^-1, and high temperatures in the solid phase (1). Dynamic Mechanical Analysis (DMA) was used to investigate the processing range of the HDPE grade used in this study; a peak in the tan delta curve was observed just below the peak melting temperature, and hence a forming temperature was selected in this range. HDPE was biaxially stretched at 128°C at a strain rate of 4 s^-1, under equal biaxial deformation (EB). The results showed that a level of biaxial orientation was induced, which was accompanied by an increase in the modulus from 606 MPa in the non-stretched sample to 1212 MPa in the stretched sample.
NASA Astrophysics Data System (ADS)
Galdos, L.; Saenz de Argandoña, E.; Mendiguren, J.; Silvestre, E.
2017-09-01
Roll levelling is a flattening process used to remove the residual stresses and imperfections of metal strips by means of plastic deformation. During the process, the metal sheet is subjected to cyclic tension-compression deformations leading to a flat product. The process is especially important to avoid final geometrical errors when coils are cold formed or when thick plates are cut by laser. In recent years, owing to the appearance of high-strength materials such as Ultra High Strength Steels, machine design engineers have been demanding reliable tools for the dimensioning of levelling facilities. As in other metal forming fields, finite element analysis seems to be the most widely used solution to understand the occurring phenomena and to calculate the processing loads. In this paper, the roll levelling process of the third-generation Fortiform 1050 steel is numerically analysed. The process has been studied using the MSC MARC software and two different material laws. A pure isotropic hardening law has been used and set as the baseline study. In the second part, tension-compression tests have been carried out to analyse the cyclic behaviour of the steel. With the obtained data, a new material model using a combined isotropic-kinematic hardening formulation has been fitted. Finally, the influence of the material model on the numerical results has been analysed by comparing the pure isotropic model and the combined isotropic-kinematic hardening model.
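To illustrate the kind of material law being compared, here is a minimal one-dimensional sketch of combined linear isotropic-kinematic hardening with a return-mapping update under a tension-compression strain cycle; the parameters are made up for illustration and do not represent the fitted Fortiform 1050 model or the MSC MARC implementation.

```python
import numpy as np

# Hypothetical material parameters (illustrative only, not Fortiform 1050 data)
E = 200e3         # Young's modulus [MPa]
sigma_y0 = 700.0  # initial yield stress [MPa]
H_iso = 2.0e3     # linear isotropic hardening modulus [MPa]
H_kin = 10.0e3    # linear kinematic hardening modulus [MPa]

def cyclic_response(strain_path):
    """Return the stress history for a prescribed strain history (1D, small strain)."""
    eps_p, alpha, back = 0.0, 0.0, 0.0   # plastic strain, accumulated plastic strain, back stress
    stresses = []
    for eps in strain_path:
        sigma_trial = E * (eps - eps_p)
        xi = sigma_trial - back                      # relative (shifted) stress
        f = abs(xi) - (sigma_y0 + H_iso * alpha)     # yield function
        if f > 0.0:                                  # plastic step: return mapping
            dgamma = f / (E + H_iso + H_kin)
            sign = np.sign(xi)
            eps_p += dgamma * sign
            alpha += dgamma
            back += H_kin * dgamma * sign
            sigma = E * (eps - eps_p)
        else:                                        # elastic step
            sigma = sigma_trial
        stresses.append(sigma)
    return np.array(stresses)

# One tension-compression cycle, mimicking the strain reversals seen in levelling
path = np.concatenate([np.linspace(0.0, 0.02, 50),
                       np.linspace(0.02, -0.02, 100),
                       np.linspace(-0.02, 0.0, 50)])
print(cyclic_response(path)[[49, 149, -1]])  # stress at the two reversals and at the end
```

The kinematic term shifts the yield surface and so reproduces the Bauschinger effect on load reversal, which is the behaviour a pure isotropic law cannot capture and the reason the combined formulation changes the predicted levelling loads.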
Facility Monitoring: A Qualitative Theory for Sensor Fusion
NASA Technical Reports Server (NTRS)
Figueroa, Fernando
2001-01-01
Data fusion and sensor management approaches have largely been implemented with centralized and hierarchical architectures. Numerical and statistical methods are the most common data fusion methods found in these systems. Given the proliferation and low cost of processing power, there is now an emphasis on designing distributed and decentralized systems. These systems use analytical/quantitative techniques or qualitative reasoning methods for data fusion. Based on other work by the author, a sensor may be treated as a highly autonomous (decentralized) unit. Each highly autonomous sensor (HAS) is capable of extracting qualitative behaviours from its data. For example, it detects spikes, disturbances, noise levels, off-limit excursions, step changes, drift, and other typical measured trends. In this context, this paper describes a distributed sensor fusion paradigm and theory where each sensor in the system is a HAS. Hence, given the rich qualitative information from each HAS, a paradigm and formal definitions are given so that sensors and processes can reason and make decisions at the qualitative level. This approach to sensor fusion makes possible the implementation of intuitive (effective) methods to monitor, diagnose, and compensate processes/systems and their sensors. This paradigm facilitates a balanced distribution of intelligence (code and/or hardware) to the sensor level, the process/system level, and a higher controller level. The primary application of interest is in intelligent health management of rocket engine test stands.
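As a concrete (and purely illustrative) reading of what a HAS might compute, the sketch below reduces a window of raw samples to a few qualitative labels such as spike, step change, and drift; the thresholds and heuristics are assumptions made for this example and are not the formal qualitative theory developed in the paper.

```python
import numpy as np

class HighlyAutonomousSensor:
    """Toy HAS that reduces a raw sample window to qualitative labels.

    The detection rules are simple illustrative heuristics, not the paper's formalism.
    """

    def __init__(self, spike_sigma=5.0, step_sigma=3.0, drift_slope=0.01):
        self.spike_sigma = spike_sigma
        self.step_sigma = step_sigma
        self.drift_slope = drift_slope

    def qualitative_state(self, window):
        labels = []
        first, second = np.split(window, 2)                 # two halves of the window
        noise = max(first.std(), second.std()) + 1e-12      # crude local noise estimate
        # Spike: an isolated sample far outside the noise band
        if np.any(np.abs(window - np.median(window)) > self.spike_sigma * noise):
            labels.append("spike")
        # Step change: the two halves have clearly different means
        if abs(second.mean() - first.mean()) > self.step_sigma * noise:
            labels.append("step-change")
        # Drift: a sustained trend, estimated by a least-squares slope
        slope = np.polyfit(np.arange(len(window)), window, 1)[0]
        if abs(slope) > self.drift_slope:
            labels.append("drift")
        return labels or ["nominal"]

sensor = HighlyAutonomousSensor()
data = np.random.normal(0.0, 0.1, 100)
data[40] = 2.0                                   # inject an isolated spike
print(sensor.qualitative_state(data))            # expected to report a spike
```

Decisions at the fusion level would then operate on such labels from many sensors rather than on raw numeric streams, which is what allows the monitoring and diagnosis logic to stay intuitive.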
Membrane processes in biotechnology: an overview.
Charcosset, Catherine
2006-01-01
Membrane processes are increasingly reported for various applications in both upstream and downstream technology, such as the established ultrafiltration and microfiltration, and emerging processes such as membrane bioreactors, membrane chromatography, and membrane contactors for the preparation of emulsions and particles. Membrane systems exploit the inherent properties of high selectivity, high surface area per unit volume, and the potential for controlling the level of contact and/or mixing between two phases. This review presents these various membrane processes, focusing in particular on membrane materials, module design, operating parameters and the large range of possible applications.
Similarities in neural activations of face and Chinese character discrimination.
Liu, Jiangang; Tian, Jie; Li, Jun; Gong, Qiyong; Lee, Kang
2009-02-18
This study compared Chinese participants' visual discrimination of Chinese faces with that of Chinese characters, which are highly similar to faces on a variety of dimensions. Both Chinese faces and characters activated the bilateral middle fusiform gyrus, with highly correlated levels of activation. These findings suggest that although the expertise systems for faces and written symbols are known to be anatomically differentiated at later stages of processing, serving face-specific or written-symbol-specific purposes, they may share similar neural structures in the ventral occipitotemporal cortex at the visual processing stage.
Multispectral simulation environment for modeling low-light-level sensor systems
NASA Astrophysics Data System (ADS)
Ientilucci, Emmett J.; Brown, Scott D.; Schott, John R.; Raqueno, Rolando V.
1998-11-01
Image intensifying cameras have been found to be extremely useful in low-light-level (LLL) scenarios including military night vision and civilian rescue operations. These sensors utilize the available visible-region photons and an amplification process to produce high contrast imagery. It has been demonstrated that processing techniques can further enhance the quality of this imagery. For example, fusion with matching thermal IR imagery can improve image content when very little visible-region contrast is available. To aid in the improvement of current algorithms and the development of new ones, a high fidelity simulation environment capable of producing radiometrically correct multi-band imagery for low-light-level conditions is desired. This paper describes a modeling environment attempting to meet these criteria by addressing the task as two individual components: (1) prediction of a low-light-level radiance field from an arbitrary scene, and (2) simulation of the output from a low-light-level sensor for a given radiance field. The radiance prediction engine utilized in this environment is the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, a first-principles-based multi-spectral synthetic image generation model capable of producing an arbitrary number of bands in the 0.28 to 20 micrometer region. The DIRSIG model is utilized to produce high spatial and spectral resolution radiance field images. These images are then processed by a user-configurable multi-stage low-light-level sensor model that applies the appropriate noise and modulation transfer function (MTF) at each stage in the image processing chain. This includes the ability to reproduce common intensifying sensor artifacts such as saturation and 'blooming.' Additionally, co-registered imagery in other spectral bands may be simultaneously generated for testing fusion and exploitation algorithms. This paper discusses specific aspects of the DIRSIG radiance prediction for low-light-level conditions, including the incorporation of natural and man-made sources, which emphasizes the importance of accurate BRDF modeling. A description of the implementation of each stage in the image processing and capture chain for the LLL model is also presented. Finally, simulated images are presented and qualitatively compared to lab-acquired imagery from a commercial system.
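The per-stage behaviour described above (apply an MTF, add noise, saturate) can be sketched very simply; the following toy Python stage is not the LLL model itself, and the MTF shape, noise levels, and full-well value are assumed for illustration.

```python
import numpy as np

def sensor_stage(image, mtf, read_noise_sigma, full_well):
    """One stage of a toy low-light-level sensor chain (illustrative only).

    Applies a modulation transfer function in the frequency domain, adds
    signal-dependent (Poisson) and additive read (Gaussian) noise, and clips
    at a saturation level; blooming, as modeled in the paper, is not reproduced.
    """
    blurred = np.fft.ifft2(np.fft.fft2(image) * mtf).real     # MTF applied as a spectral filter
    noisy = np.random.poisson(np.clip(blurred, 0, None)).astype(float)  # shot noise
    noisy += np.random.normal(0.0, read_noise_sigma, noisy.shape)       # read noise
    return np.clip(noisy, 0, full_well)                        # saturation

# Hypothetical 128x128 radiance field (in electrons) and an assumed Gaussian MTF
radiance = np.random.uniform(5, 50, (128, 128))
fx = np.fft.fftfreq(128)
fy = np.fft.fftfreq(128)
f2 = fx[None, :] ** 2 + fy[:, None] ** 2
mtf = np.exp(-f2 / (2 * 0.05 ** 2))
out = sensor_stage(radiance, mtf, read_noise_sigma=2.0, full_well=4000)
print(out.shape, float(out.min()), float(out.max()))
```

Chaining several such stages, each with its own MTF and noise terms, mimics the multi-stage structure of an intensifier plus camera; the real model is radiometrically calibrated, whereas this sketch is not.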
Occupational injuries and sick leaves in household moving works.
Hwan Park, Myoung; Jeong, Byung Yong
2017-09-01
This study is concerned with household moving works and the characteristics of occupational injuries and sick leaves in each step of the moving process. Accident data for 392 occupational accidents were categorized by the moving process in which the accidents occurred, and possible incidents and sick leaves were assessed for each moving process and hazard factor. Accidents occurring during specific moving processes showed different characteristics depending on the type and agency of the accident. The accident type requiring the most critical level of risk management was falls from a height in the 'lifting by ladder truck' process. Incidents ranked at a 'High' level of risk management were slips, being struck by objects, and musculoskeletal disorders in the 'manual materials handling' process. Also ranked 'High' were falls in 'loading/unloading', being struck by objects during 'lifting by ladder truck', and driving accidents in the 'transport' process. The findings of this study can be used to develop more effective accident prevention policies reflecting different circumstances and conditions to reduce occupational accidents in household moving works.
Investigations of Reactive Processes at Temperatures Relevant to the Hypersonic Flight Regime
2014-10-31
... a potential energy surface (PES) for the ground state of the NO2 molecule is constructed based on high-level ab initio calculations and interpolated using the reproducible kernel Hilbert space (RKHS) method ... between O(3P) and NO(2Π) at higher temperatures relevant to the hypersonic flight regime of reentering spacecraft. At a more fundamental level, we ...
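As a generic illustration of kernel-based PES interpolation in the spirit of the RKHS approach mentioned above (the published method uses specific, physically motivated kernel functions; the Gaussian kernel and the one-dimensional toy "PES" below are stand-ins chosen only for illustration):

```python
import numpy as np

def kernel(r1, r2, length=0.3):
    """Gaussian kernel as a stand-in; the actual RKHS method uses different,
    physically motivated kernels with the correct long-range behaviour."""
    return np.exp(-((r1[:, None] - r2[None, :]) ** 2) / (2 * length ** 2))

# Toy 1D "potential" sampled at a few ab initio-like points (made-up Morse curve)
r_train = np.linspace(0.8, 3.0, 12)
V_train = (1.0 - np.exp(-1.5 * (r_train - 1.2))) ** 2

# Solve K c = V for the expansion coefficients, with a tiny ridge for stability
K = kernel(r_train, r_train) + 1e-10 * np.eye(len(r_train))
coeffs = np.linalg.solve(K, V_train)

# The interpolated surface is the kernel expansion V(r) = sum_i c_i k(r, r_i)
r_grid = np.linspace(0.8, 3.0, 200)
V_interp = kernel(r_grid, r_train) @ coeffs

print("max deviation at the training points:",
      float(np.abs(kernel(r_train, r_train) @ coeffs - V_train).max()))
```

A full PES for NO2 would be multidimensional and built from far more ab initio points, but the interpolate-then-evaluate structure is the same.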
Ashkenazi, Sarit
2018-02-05
Current theoretical approaches suggest that mathematical anxiety (MA) manifests itself as a weakness in quantity manipulations. This study is the first to examine automatic versus intentional processing of numerical information using the numerical Stroop paradigm in participants with high MA. To manipulate anxiety levels, we combined the numerical Stroop task with an affective priming paradigm. We took a group of college students with high MA and compared their performance to a group of participants with low MA. Under low anxiety conditions (neutral priming), participants with high MA showed relatively intact number processing abilities. However, under high anxiety conditions (mathematical priming), participants with high MA showed (1) higher processing of the non-numerical irrelevant information, which aligns with the theoretical view regarding deficits in selective attention in anxiety and (2) an abnormal numerical distance effect. These results demonstrate that abnormal, basic numerical processing in MA is context related.
Pragmatic Comprehension of High and Low Level Language Learners
ERIC Educational Resources Information Center
Garcia, Paula
2004-01-01
This study compares the performances of 16 advanced and 19 beginning English language learners on a listening comprehension task that focused on linguistic and pragmatic processing. Processing pragmatic meaning differs from processing linguistic meaning because pragmatic meaning requires the listener to understand not only linguistic information,…
Potential uses for cuphea oil processing byproducts and processed oils
USDA-ARS?s Scientific Manuscript database
Cuphea spp. have seeds that contain high levels of medium-chain fatty acids and have the potential to be commercially cultivated. In the course of processing and refining Cuphea oil, a number of by-products are generated. Developing commercial uses for these by-products would improve the economics of...
Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan
2017-08-04
This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, and similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best block and thread configuration considering the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms increased significantly, by 34.1%, 33.96% and 24.07% on average for Fermi, Kepler and Maxwell, respectively, with the TLPOM, and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.
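A highly simplified view of the resource-aware configuration step is sketched below; it is not the TLPOM itself, and the per-SM limits are typical published values assumed here for illustration rather than parameters taken from the paper.

```python
def best_block_config(regs_per_thread, shmem_per_block,
                      max_regs_per_sm=65536, max_shmem_per_sm=49152,
                      max_threads_per_sm=2048,
                      candidate_block_sizes=(64, 128, 256, 512, 1024)):
    """Pick a thread-block size that maximizes a crude occupancy estimate.

    A toy stand-in for resource-aware block/thread selection: for each candidate
    block size, compute how many blocks fit on one SM under the register,
    shared-memory, and thread limits, and keep the configuration with the
    highest estimated occupancy.
    """
    best = None
    for threads in candidate_block_sizes:
        by_regs = max_regs_per_sm // (regs_per_thread * threads)
        by_shmem = max_shmem_per_sm // shmem_per_block if shmem_per_block else 10**9
        by_threads = max_threads_per_sm // threads
        blocks = min(by_regs, by_shmem, by_threads)
        occupancy = blocks * threads / max_threads_per_sm
        if best is None or occupancy > best[2]:
            best = (threads, blocks, occupancy)
    return best

threads, blocks, occ = best_block_config(regs_per_thread=32, shmem_per_block=4096)
print(f"block size {threads}, {blocks} blocks/SM, estimated occupancy {occ:.0%}")
```

The actual model additionally weighs instruction-level parallelism and energy, but even this toy version shows why the best configuration differs across Fermi, Kepler, and Maxwell nodes with different resource limits.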
Cadesky, Lee; Walkling-Ribeiro, Markus; Kriner, Kyle T; Karwe, Mukund V; Moraru, Carmen I
2017-09-01
Reconstituted micellar casein concentrates and milk protein concentrates of 2.5 and 10% (wt/vol) protein concentration were subjected to high-pressure processing at pressures from 150 to 450 MPa, for 15 min, at ambient temperature. The structural changes induced in milk proteins by high-pressure processing were investigated using a range of physical, physicochemical, and chemical methods, including dynamic light scattering, rheology, mid-infrared spectroscopy, scanning electron microscopy, proteomics, and soluble mineral analyses. The experimental data clearly indicate pressure-induced changes of casein micelles, as well as denaturation of serum proteins. Calcium-binding αS1- and αS2-casein levels increased in the soluble phase after all pressure treatments. Pressurization up to 350 MPa also increased levels of soluble calcium and phosphorus, in all samples and concentrations, whereas treatment at 450 MPa reduced the levels of soluble Ca and P. Experimental data suggest dissociation of calcium phosphate and subsequent casein micelle destabilization as a result of pressure treatment. Treatment of 10% micellar casein concentrate and 10% milk protein concentrate samples at 450 MPa resulted in weak, physical gels, which featured aggregates of uniformly distributed casein substructures of 15 to 20 nm in diameter. Serum proteins were significantly denatured by pressures above 250 MPa. These results provide information on pressure-induced changes in high-concentration protein systems, and may inform the development of new milk protein-based foods with novel textures and potentially high nutritional quality, of particular interest being the soft gel structures formed at high pressure levels.
A Task-Dependent Causal Role for Low-Level Visual Processes in Spoken Word Comprehension
ERIC Educational Resources Information Center
Ostarek, Markus; Huettig, Falk
2017-01-01
It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual…
Acquisition of a High Voltage/High resolution Transmission Electron Microscope.
1988-08-21
microstructural design starts at the nanometer level. One such method is colloidal processing of materials with ultrafine particles in which particle... applications in the colloidal processing of ceramics with ultrafine particles. Afterwards, nanometer-sized particles will be synthesized and... STRUCTURAL CONTROL WITH ULTRAFINE PARTICLES. Jun Liu, Mehmet Sarikaya, and I. A. Aksay, Department of Materials Science and Engineering, Advanced
ERIC Educational Resources Information Center
Koran, Selcuk
2015-01-01
Teacher motivation is one of the primary variables of students' high performance. It has been observed that students whose teachers are highly motivated are more engaged in the learning process. Therefore, it is mostly the teacher who determines the level of success or failure in achieving the institution's goal in the educational process. Thus, teachers…
Dan Neary
2016-01-01
Forested catchments throughout the world are known for producing high quality water for human use. In the 20th Century, experimental forest catchment studies played a key role in studying the processes contributing to high water quality. The hydrologic processes investigated on these paired catchments have provided the science base for examining water quality...
High pressure processing and its application to the challenge of virus-contaminated foods.
Kingsley, David H
2013-03-01
High pressure processing (HPP) is an increasingly popular non-thermal food processing technology. Study of HPP's potential to inactivate foodborne viruses has defined general pressure levels required to inactivate hepatitis A virus, norovirus surrogates, and human norovirus itself within foods such as shellfish and produce. The sensitivity of a number of different picornaviruses to HPP is variable. Experiments suggest that HPP inactivates viruses via denaturation of capsid proteins which render the virus incapable of binding to its receptor on the surface of its host cell. Beyond the primary consideration of treatment pressure level, the effects of extending treatment times, temperature of initial pressure application, and matrix composition have been identified as critical parameters for designing HPP inactivation strategies. Research described here can serve as a preliminary guide to whether a current commercial process could be effective against HuNoV or HAV.
van Berkel, Jantien; Boot, Cécile R L; Proper, Karin I; Bongers, Paulien M; van der Beek, Allard J
2013-01-01
To evaluate the process of the implementation of an intervention aimed at improving work engagement and energy balance, and to explore associations between process measures and compliance. Process measures were assessed using a combination of quantitative and qualitative methods. The mindfulness training was attended at least once by 81.3% of subjects, and 54.5% were highly compliant. With regard to e-coaching and homework exercises, 6.3% and 8.0%, respectively, were highly compliant. The training received an appreciation score of 7.5 and the e-coaching a score of 6.8. Appreciation of the training and e-coaching, satisfaction with the trainer and coach, and practical facilitation were significantly associated with compliance. The intervention was implemented well at the level of the mindfulness training, but poorly at the level of e-coaching and homework time investment. To increase compliance, attention should be paid to satisfaction and the trainer-participant relationship.
Taris, Toon W; Feij, Jan A
2004-11-01
The present 3-wave longitudinal study was an examination of job-related learning and strain as a function of job demand and job control. The participants were 311 newcomers to their jobs. On the basis of R. A. Karasek and T. Theorell's (1990) demand-control model, the authors predicted that high demand and high job control would lead to high levels of learning; low demand and low job control should lead to low levels of learning; high demand and low job control should lead to high levels of strain; and low demand and high job control should lead to low levels of strain. The relation between strain and learning was also examined. The authors tested the hypotheses using ANCOVA and structural equation modeling. The results revealed that high levels of strain have an adverse effect on learning; the reverse effect was not confirmed. It appears that Karasek and Theorell's model is very relevant when examining work socialization processes.
ERIC Educational Resources Information Center
Collins, Loel; Collins, Dave
2017-01-01
This article continues a theme of previous investigations by the authors and examines the focus of in-action reflection as a component of professional judgement and decision-making (PJDM) processes in high-level adventure sports coaching. We utilised a thematic analysis approach to investigate the decision-making practices of a sample of…
Adaptation and Age-Related Expectations of Older Gay and Lesbian Adults.
ERIC Educational Resources Information Center
Quam, Jean K.; Whitford, Gary S.
1992-01-01
Respondents in a study of lesbian women and gay men over age 50 who indicated high levels of involvement in the gay community reported acceptance of the aging process and high levels of life satisfaction, despite predictable problems associated with aging and sexual orientation. Being active in the gay community was an asset to accepting one's…