DOE Office of Scientific and Technical Information (OSTI.GOV)
Lamers, M.D.
One of the key needs in the advancement of geothermal energy is the availability of adequate subsurface measurements to aid the reservoir engineer in the development and operation of geothermal wells. Some current projects sponsored by the U.S. Department of Energy's Division of Geothermal Energy pertaining to the development of improved well logging techniques, tools, and components are described. An attempt is made to show how these projects contribute to the improvement of geothermal logging technology and form key elements of the overall program goals.
A VLSI architecture for performing finite field arithmetic with reduced table look-up
NASA Technical Reports Server (NTRS)
Hsu, I. S.; Truong, T. K.; Reed, I. S.
1986-01-01
A new table look-up method for finding the log and antilog of finite field elements has been developed by N. Glover. In his method, the log and antilog of a field element are found by the use of several smaller tables. The method is based on the Chinese Remainder Theorem. The technique often results in a significant reduction in the memory requirements of the problem. A VLSI architecture is developed for a special case of this new algorithm to perform finite field arithmetic, including multiplication, division, and the finding of an inverse element in the finite field.
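A minimal sketch of the reduced-table idea in software, under the assumption (not stated in the abstract) that the field is GF(2^4), whose multiplicative group has order 15 = 3 × 5: instead of one full log table, keep the log modulo 3 and modulo 5 and recombine exponents with the Chinese Remainder Theorem.

```python
# Sketch only: GF(2^4) with primitive polynomial x^4 + x + 1, generator alpha = 0b0010.
def build_gf16():
    elems, e = [], 1
    for _ in range(15):
        elems.append(e)
        e <<= 1
        if e & 0b10000:
            e ^= 0b10011  # reduce modulo x^4 + x + 1
    return elems  # elems[i] == alpha**i

ANTILOG = build_gf16()
LOG3 = {e: i % 3 for i, e in enumerate(ANTILOG)}  # residue of the log mod 3
LOG5 = {e: i % 5 for i, e in enumerate(ANTILOG)}  # residue of the log mod 5

def crt_log(r3, r5):
    # Recover i (mod 15) from (i mod 3, i mod 5): i = 10*r3 + 6*r5 (mod 15),
    # since 10 ≡ 1 (mod 3), 0 (mod 5) and 6 ≡ 0 (mod 3), 1 (mod 5).
    return (10 * r3 + 6 * r5) % 15

def gf16_mul(a, b):
    # Multiply nonzero field elements by adding logs component-wise.
    if a == 0 or b == 0:
        return 0
    i = crt_log((LOG3[a] + LOG3[b]) % 3, (LOG5[a] + LOG5[b]) % 5)
    return ANTILOG[i]
```

The two residue tables are what a VLSI realization would store in place of one larger log table; the actual architecture in the paper is of course more involved.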
Automated lithology prediction from PGNAA and other geophysical logs.
Borsaru, M; Zhou, B; Aizawa, T; Karashima, H; Hashimoto, T
2006-02-01
Different methods of lithology prediction from geophysical data have been developed in the last 15 years. The geophysical logs used for predicting lithology are the conventional logs: sonic, neutron-neutron, gamma (total natural gamma), and density (backscattered gamma-gamma). Prompt gamma neutron activation analysis (PGNAA) is another established geophysical logging technique for in situ element analysis of rocks in boreholes. The work described in this paper was carried out to investigate the application of PGNAA to lithology interpretation. The data interpretation was conducted using the automatic interpretation program LogTrans, which is based on statistical analysis. Limited testing suggests that PGNAA logging data can be used to predict lithology; a success rate of 73% for lithology prediction was achieved from PGNAA logging data alone. PGNAA can also be used in conjunction with the conventional geophysical logs to enhance lithology prediction.
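LogTrans itself is proprietary and its algorithm is not given in the abstract; as a loose illustration of statistical lithology classification from multi-log data, the sketch below assigns each depth sample to the lithology whose training centroid is nearest in standardized log space (all names and numbers are invented):

```python
import numpy as np

def fit(X, y):
    # X: (samples, logs) array of log responses; y: lithology label per sample.
    # Standardize each log, then store one centroid per lithology class.
    mu, sd = X.mean(axis=0), X.std(axis=0)
    Z = (X - mu) / sd
    centroids = {c: Z[y == c].mean(axis=0) for c in set(y)}
    return mu, sd, centroids

def predict(X, mu, sd, centroids):
    # Assign each sample to the class with the nearest centroid.
    Z = (X - mu) / sd
    classes = list(centroids)
    D = np.stack([np.linalg.norm(Z - centroids[c], axis=1) for c in classes])
    return [classes[i] for i in D.argmin(axis=0)]
```

A success rate such as the 73% quoted above would then be the fraction of test samples whose predicted class matches the logged core description.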
A method of improving sensitivity of carbon/oxygen well logging for low porosity formation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Juntao; Zhang, Feng; Zhang, Quanying
2016-12-01
Carbon/oxygen (C/O) spectral logging has been widely used to determine residual oil saturation and to evaluate water-flooded layers. To improve the sensitivity of the technique for low-porosity formations, Gaussian and linear models are applied to fit the peaks of measured spectra and obtain the characteristic coefficients. Standard spectra of carbon and oxygen are combined to establish a new carbon/oxygen calculation method, and the robustness of the new method is cross-validated with a known mixed gamma-ray spectrum. Formation models for different porosities and saturations are built using the Monte Carlo method. Carbon/oxygen responses calculated by the conventional energy-window method and by the new method are compared for oil saturation under low-porosity conditions. The results show that the new method can reduce the effect of gamma rays produced by interactions between neutrons and other elements on the carbon/oxygen ratio, and can therefore significantly improve the sensitivity of carbon/oxygen well logging to oil saturation, especially in low-porosity conditions.
Industrial application of semantic process mining
NASA Astrophysics Data System (ADS)
Espen Ingvaldsen, Jon; Atle Gulla, Jon
2012-05-01
Process mining relates to the extraction of non-trivial and useful information from information system event logs. It is a new research discipline that has evolved significantly since the early work on idealistic process logs. Over the last years, process mining prototypes have incorporated elements from semantics and data mining and targeted visualisation techniques that are more user-friendly to business experts and process owners. In this article, we present a framework for evaluating different aspects of enterprise process flows and address practical challenges of state-of-the-art industrial process mining. We also explore the inherent strengths of the technology for more efficient process optimisation.
Sulfide Melts and Chalcophile Element Behavior in High Temperature Systems
NASA Astrophysics Data System (ADS)
Wood, B. J.; Kiseeva, K.
2016-12-01
We recently found that partition coefficients (Di) of many weakly and moderately chalcophile elements (e.g., Cd, Zn, Co, Cr, Pb, Sb, In) between sulfide and silicate melts are simple functions of the FeO content of the silicate liquid: log Di = A - B log[FeO], where [FeO] is the FeO concentration in the silicate, A and B are constants, and the latter is related to the valency of the element of interest. In contrast, some strongly chalcophile (e.g., Cu, Ni, Ag) and lithophile elements (e.g., Mn) show marked deviations from linearity on a plot of log Di vs log[FeO]. More recent experiments show that linear behavior is confined to elements whose affinities for S and O are similar to those of Fe. In the case of elements more strongly lithophile than Fe (Ti, U, REE, Zr, Nb, Ta, Mn), a plot of log Di versus log[FeO] describes a U-shape, with the element partitioning strongly into the sulfide at very low FeO and again at very high FeO content of the silicate melt. In contrast, strongly chalcophile elements (Cu, Ni, Ag) describe an n-shape on the plot of log D vs log[FeO]. The result is that lithophile elements such as Nb become more "chalcophile" than Cu at very low and very high FeO contents of the silicate melt. The reasons for this surprising behavior are, firstly, that at very low FeO contents the silicate melt dissolves substantial amounts of sulfur, which drives down the activity of FeO and, by mass action, "pulls" the lithophile element into the sulfide. At high FeO contents of the silicate, the sulfide itself starts to dissolve substantial amounts of oxygen, and lithophile elements follow the oxygen into the sulfide. Given the principles which we have established, we are able to describe the patterns of chalcophile element behavior during partial melting and fractional crystallisation on Earth and also on bodies such as Mercury and Mars, which are, respectively, strongly reduced relative to Earth and more oxidised than Earth.
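The linear relation log Di = A - B log[FeO] can be evaluated directly; the constants A and B below are made-up placeholders, not fitted values from the study:

```python
import math

def partition_coefficient(feo_wt_pct, A=1.0, B=0.5):
    """Sulfide/silicate-melt partition coefficient Di for an element whose
    S/O affinity is Fe-like, from log Di = A - B*log[FeO].
    feo_wt_pct: FeO concentration of the silicate melt (wt%); A, B: hypothetical."""
    log_d = A - B * math.log10(feo_wt_pct)
    return 10 ** log_d
```

With these placeholder constants, Di falls from 10 at 1 wt% FeO to 1 at 100 wt% FeO, i.e. the element becomes less chalcophile as the melt gets more FeO-rich, which is the linear regime described above (the U- and n-shaped cases require extra terms).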
Senftle, F.E.; Moxham, R.M.; Tanner, A.B.
1972-01-01
The recent availability of borehole logging sondes employing a source of neutrons and a Ge(Li) detector opens up the possibility of analyzing either decay or capture gamma rays. The most efficient method for a given element can be predicted by calculating the decay-to-capture count ratio for the most prominent peaks in the respective spectra. From a practical point of view, such a calculation must be slanted toward short irradiation and count times at each station in a borehole. A simplified method of computation is shown, and the decay-to-capture count ratio has been calculated and tabulated for the optimum value in the decay mode irrespective of the irradiation time, and also for a ten-minute irradiation time. Based on analysis of a single peak in each spectrum, the results indicate the preferred technique and the best decay or capture peak to observe for those elements of economic interest.
Collett, T.S.; Wendlandt, R.F.
2000-01-01
The analyses of downhole log data from Ocean Drilling Program (ODP) boreholes on the Blake Ridge at Sites 994, 995, and 997 indicate that the Schlumberger geochemical logging tool (GLT) may yield useful gas hydrate reservoir data. In neutron spectroscopy downhole logging, each element has a characteristic gamma ray that is emitted from a given neutron-element interaction. Specific elements can be identified by their characteristic gamma-ray signature, with the intensity of emission related to the atomic elemental concentration. By combining elemental yields from neutron spectroscopy logs, reservoir parameters including porosities, lithologies, formation fluid salinities, and hydrocarbon saturations (including gas hydrate) can be calculated. Carbon and oxygen elemental data from the GLT were used to determine gas hydrate saturations at all three sites (Sites 994, 995, and 997) drilled on the Blake Ridge during Leg 164. Detailed analyses of the carbon and oxygen content of various sediments and formation fluids were used to construct specialized carbon/oxygen ratio (COR) fan charts for a series of hypothetical gas hydrate accumulations. For more complex geologic systems, a modified version of the standard three-component COR hydrocarbon saturation equation was developed and used to calculate gas hydrate saturations on the Blake Ridge. The COR-calculated gas hydrate saturations (ranging from about 2% to 14% bulk volume gas hydrate) from the Blake Ridge compare favorably to the gas hydrate saturations derived from electrical resistivity log measurements.
Application of Nuclear Well Logging Techniques to Lunar Resource Assessment
NASA Technical Reports Server (NTRS)
Albats, P.; Groves, J.; Schweitzer, J.; Tombrello, T.
1992-01-01
The use of neutron and gamma ray measurements for the analysis of material composition has become well established in the last 40 years. Schlumberger has pioneered the use of this technology for logging wells drilled to produce oil and gas, and for this purpose has developed neutron generators that allow measurements to be made in deep (5000 m) boreholes under adverse conditions. We also make ruggedized neutron and gamma ray detector packages that can be used to make reliable measurements on the drill collar of a rotating drill string while the well is being drilled, where the conditions are severe. Modern nuclear methods used in logging measure rock formation parameters like bulk density and porosity, fluid composition, and element abundances by weight including hydrogen concentration. The measurements are made with high precision and accuracy. These devices (well logging sondes) share many of the design criteria required for remote sensing in space; they must be small, light, rugged, and able to perform reliably under adverse conditions. We see a role for the adaptation of this technology to lunar or planetary resource assessment missions.
Mineral inversion for element capture spectroscopy logging based on optimization theory
NASA Astrophysics Data System (ADS)
Zhao, Jianpeng; Chen, Hui; Yin, Lu; Li, Ning
2017-12-01
Understanding the mineralogical composition of a formation is an essential step in the petrophysical evaluation of petroleum reservoirs. Geochemical logging tools can provide quantitative measurements of a wide range of elements. In this paper, element capture spectroscopy (ECS) was taken as an example and an optimization method was adopted to solve the mineral inversion problem for ECS. This method used the converting relationship between elements and minerals as response equations, took into account the statistical uncertainty of the element measurements, and established an optimization function for ECS. The objective function value and reconstructed elemental logs were used to check the robustness and reliability of the inversion method. Finally, the inverted mineral results showed good agreement with X-ray diffraction laboratory data. The accurate conversion of elemental dry weights to mineral dry weights forms the foundation for subsequent applications based on ECS.
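The inversion step can be sketched as a small constrained least-squares problem. The response matrix below is purely illustrative (the paper's actual response equations and the mineral set are not given in the abstract); non-negative least squares stands in for the paper's optimization function:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical response matrix: rows = measured elements (Si, Ca, Fe),
# columns = minerals (quartz, illite, calcite); each entry is the element
# dry-weight fraction contributed by one unit of that mineral (invented numbers).
R = np.array([
    [0.467, 0.252, 0.000],   # Si from quartz and illite
    [0.000, 0.000, 0.400],   # Ca from calcite
    [0.000, 0.047, 0.000],   # Fe from illite
])

def invert_minerals(element_dry_weights):
    # Find mineral dry-weight fractions m >= 0 minimizing ||R m - e||.
    m, residual = nnls(R, np.asarray(element_dry_weights))
    return m, residual
```

Reconstructing the elemental logs as `R @ m` and comparing them with the measured elements is the consistency check the abstract describes.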
Forensic Comparison of Soil Samples Using Nondestructive Elemental Analysis.
Uitdehaag, Stefan; Wiarda, Wim; Donders, Timme; Kuiper, Irene
2017-07-01
Soil can play an important role in forensic cases in linking suspects or objects to a crime scene by comparing samples from the crime scene with samples derived from items. This study uses an adapted ED-XRF analysis (sieving instead of grinding, to prevent destruction of microfossils) to produce elemental composition data for 20 elements. Different data processing techniques and statistical distances were evaluated using data from 50 samples and the log-likelihood-ratio cost (Cllr). The best-performing combination, Canberra distance, relative data, and square root values, is used to construct a discriminative model. Examples of the spatial resolution of the method in crime scenes are shown for three locations, and sampling strategy is discussed. Twelve test cases were analyzed, and the results showed that the method is applicable. The study shows how the combination of an analysis technique, a database, and a discriminative model can be used to compare multiple soil samples quickly. © 2016 American Academy of Forensic Sciences.
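The best-performing combination named above is easy to sketch: normalize each sample to relative concentrations, take square roots, then compare profiles with the Canberra distance (the element values below are invented):

```python
import math

def canberra(u, v):
    # Canberra distance: sum over elements of |u_i - v_i| / (|u_i| + |v_i|),
    # skipping terms where both values are zero.
    return sum(abs(a - b) / (abs(a) + abs(b))
               for a, b in zip(u, v) if a != 0 or b != 0)

def preprocess(concentrations):
    # Relative data (normalize to unit sum) followed by a square-root transform.
    total = sum(concentrations)
    return [math.sqrt(c / total) for c in concentrations]

# Comparing two hypothetical 4-element soil profiles:
d = canberra(preprocess([120, 30, 5, 1]), preprocess([115, 35, 4, 1]))
```

In the study's discriminative model, such distances are converted to likelihood ratios and scored with Cllr; that calibration step is not shown here.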
Shielding Effectiveness in a Two-Dimensional Reverberation Chamber Using Finite-Element Techniques
NASA Technical Reports Server (NTRS)
Bunting, Charles F.
2006-01-01
Reverberation chambers are attaining an increased importance in the determination of electromagnetic susceptibility of avionics equipment. Given the nature of the variable boundary condition, the ability of a given source to couple energy into certain modes, and the passband characteristic due to the chamber Q, the fields are typically characterized by statistical means. The emphasis of this work is to apply finite-element techniques at cutoff to the analysis of a two-dimensional structure to examine shielding-effectiveness issues in a reverberating environment. Simulated mechanical stirring is used to obtain the appropriate statistical field distribution. The shielding effectiveness (SE) in a simulated reverberating environment is compared to measurements in a reverberation chamber. A log-normal distribution for the SE is observed, with implications for system designers. The work is intended to provide further refinement in the consideration of SE in a complex electromagnetic environment.
Sample Introduction Using the Hildebrand Grid Nebulizer for Plasma Spectrometry
1988-01-01
The grid nebulizer was evaluated for flow injection analysis (FIA) with ICP-OES detection. Detection limits, linear dynamic ranges, precision, and peak widths were determined for elements in methanol and acetonitrile solutions. [Figures: log concentration versus log peak area for Mn, Cd, Zn, Au, and Ni in methanol.]
Tables for estimating skidder tire wear
Cleveland J. Biller; Cleveland J. Biller
1970-01-01
This book of tables was prepared to help the logging operator estimate how much a particular logging job will wear the tires on his rubber-wheeled skidders. This is an important element in the cost of a logging operation. The logging operator can translate these estimates of tire wear into a production cost.
12. LOG FOUNDATION ELEMENTS OF THE SAWMILL ADJACENT TO THE CANAL, LOOKING EAST. BARREN AREA IN FOREGROUND IS DECOMPOSING SAWDUST. DIRT PILE IN BACKGROUND IS THE EDGE OF THE SUMMIT COUNTY LANDFILL. - Snake River Ditch, Headgate on north bank of Snake River, Dillon, Summit County, CO
Log-Log Convexity of Type-Token Growth in Zipf's Systems
NASA Astrophysics Data System (ADS)
Font-Clos, Francesc; Corral, Álvaro
2015-06-01
It is traditionally assumed that Zipf's law implies the power-law growth of the number of different elements with the total number of elements in a system—the so-called Heaps' law. We show that a careful definition of Zipf's law leads to the violation of Heaps' law in random systems, with growth curves that have a convex shape in log-log scale. These curves fulfill universal data collapse that only depends on the value of Zipf's exponent. We observe that real books behave very much in the same way as random systems, despite the presence of burstiness in word occurrence. We advance an explanation for this unexpected correspondence.
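A toy reproduction of the random-system setting, with invented parameters: draw tokens from a Zipf distribution with exponent gamma over a large vocabulary and track the number of distinct types V(N) as the text length N grows; plotting log V(N) against log N exposes the convex shape.

```python
import random

def zipf_type_token_curve(gamma=2.0, vocab=10_000, n_tokens=5_000, seed=1):
    # Sample n_tokens word ranks with P(rank) proportional to rank**-gamma,
    # and record the running number of distinct types (the Heaps curve).
    rng = random.Random(seed)
    weights = [rank ** -gamma for rank in range(1, vocab + 1)]
    tokens = rng.choices(range(vocab), weights=weights, k=n_tokens)
    seen, curve = set(), []
    for t in tokens:
        seen.add(t)
        curve.append(len(seen))
    return curve  # plot log(curve) vs log(1..N) to inspect convexity
```

Pure Heaps' law would make this curve a straight line in log-log scale; the paper's point is that a careful reading of Zipf's law predicts the convex departure instead.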
Hussain, Mahbub; Ahmed, Syed Munaf; Abderrahman, Walid
2008-01-01
A multivariate statistical technique, cluster analysis, was used to assess the logged surface water quality at an irrigation project at Al-Fadhley, Eastern Province, Saudi Arabia. The principal idea behind using the technique was to utilize all available hydrochemical variables in the quality assessment, including trace elements and other ions which are not considered in conventional techniques for water quality assessment like Stiff and Piper diagrams. Furthermore, the area belongs to an irrigation project where water contamination associated with the use of fertilizers, insecticides, and pesticides is expected. This quality assessment study was carried out on a total of 34 surface/logged water samples. To gain a greater insight into the seasonal variation of water quality, 17 samples were collected from each of the summer and winter seasons. The collected samples were analyzed for a total of 23 water quality parameters, including pH, TDS, conductivity, alkalinity, sulfate, chloride, bicarbonate, nitrate, phosphate, bromide, fluoride, calcium, magnesium, sodium, potassium, arsenic, boron, copper, cobalt, iron, lithium, manganese, molybdenum, nickel, selenium, mercury, and zinc. Cluster analysis in both Q and R modes was used. Q-mode analysis resulted in three distinct water types for both the summer and winter seasons, and also showed the spatial as well as temporal variation in water quality. R-mode cluster analysis led to the conclusion that there are two major sources of contamination for the surface/shallow groundwater in the area: agricultural chemicals (fertilizers, micronutrients, pesticides, and insecticides), and non-point natural sources.
Usuda, Kan; Kono, Koichi; Dote, Tomotaro; Shimizu, Hiroyasu; Tominaga, Mika; Koizumi, Chisato; Nakase, Emiko; Toshina, Yumi; Iwai, Junko; Kawasaki, Takashi; Akashi, Mitsuya
2002-04-01
In a previous article, we showed a log-normal distribution of boron and lithium in human urine. This type of distribution is common in both biological and nonbiological applications. It can be observed when the effects of many independent variables are combined, each of which may have any underlying distribution. Although elemental excretion depends on many variables, the one-compartment open model following a first-order process can be used to explain the elimination of elements. The rate of excretion is proportional to the amount of any given element present; that is, the same percentage of an existing element is eliminated per unit time, and the element concentration is represented by a deterministic negative power function of time in the elimination time-course. Sampling is of a stochastic nature, so the dataset of time variables in the elimination phase when the sample was obtained is expected to show a Normal distribution. The time variable appears as an exponent of the power function, so a concentration histogram is that of an exponential transformation of Normally distributed time. This is the reason why the element concentration shows a log-normal distribution. The distribution is determined not by the element concentration itself, but by the time variable that defines the pharmacokinetic equation.
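The argument can be checked numerically. First-order elimination gives C(t) = C0·exp(-k·t); if the sampling times t are Normally distributed, then log C = log C0 - k·t is Normal, so C is log-normal. All constants below (C0, k, and the sampling-time distribution) are invented for illustration:

```python
import math
import random

def simulate_concentrations(c0=100.0, k=0.3, n=10_000, seed=42):
    # Sample times from a hypothetical Normal distribution (mean 8 h, sd 2 h)
    # and apply first-order elimination to each.
    rng = random.Random(seed)
    times = [rng.gauss(8.0, 2.0) for _ in range(n)]
    return [c0 * math.exp(-k * t) for t in times]

concs = simulate_concentrations()
logs = [math.log(c) for c in concs]
mean_log = sum(logs) / len(logs)
# The log-concentrations are Normal with mean log(C0) - k * mean(t).
```

A histogram of `concs` is right-skewed while a histogram of `logs` is symmetric, which is exactly the log-normal signature the article reports for urinary boron and lithium.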
NASA Astrophysics Data System (ADS)
Scharnweber, Tobias; van der Maaten, Ernst; Heinrich, Ingo; Buras, Allan; van der Maaten Theunissen, Marieke; Wilmking, Martin
2014-05-01
In contrast to extreme environments with low human impact, where often one specific (climatic) factor is limiting tree growth, dendrochronological research in the temperate zone has to cope with a wide variety of climatic and non-climatic drivers. Sophisticated statistical tools, like various detrending and filtering techniques, allow for a rather precise analysis of high-frequency (annual) climate-growth relationships. However, as almost all forests in the temperate zone are to some degree influenced by human activities, it is difficult to separate anthropogenic from climatic influence on the lower time-frequencies of decades to centuries. Footprints of human activity in time series of tree-ring parameters might be caused directly through forest utilization (logging) or indirectly through environmental changes such as eutrophication or atmospheric pollution. The former can be elucidated by traditional dendrochronological techniques based on ring parameters; evaluation of the latter requires additional proxies such as dendrochemical data. For the interpretation of long-term trends and the calibration of tree-ring based reconstructions it is therefore necessary to study tree growth in forest environments as undisturbed as possible. Comparison with dendrochronological time series from managed forests might then allow separation of climatic from anthropogenic signals. Here, we present long-term growth trends for the broadleaved tree species common beech, pedunculate oak and sycamore maple, from two protected old-growth forests in northern Germany (one with a documented last logging activity dating back to 1527), and compare those with well-replicated regional chronologies from other, mostly managed forests. Our results indicate that several low-frequency trends found in many regional chronologies are likely caused by synchronous periods of heavy logging, for example during the years following World War II, and do not relate to climatic drivers.
In addition, elemental wood composition of trees growing on an island relatively isolated from agricultural depositions or direct atmospheric pollution is compared to elemental concentrations in the wood of trees from a forest surrounded by intensive agriculture in the vicinity of Greifswald, a medium-sized town in Germany. The aim is to detect historical changes in soil chemistry attributable to either atmospheric depositions or groundwater input of nitrogen or sulphur. Therefore, high-resolution (50 µm) X-ray fluorescence (XRF) analysis is carried out and species-specific annual chronologies of relative concentrations of the most abundant elements as well as of different indicative element-ratios are built. We discuss our findings in the light of ongoing soil acidification that might be responsible for some of the detected trends (e.g. decrease in base cations like Ca or Mn), while considering possible radial translocation processes in the wood that might blur the obtained dendrochemical data.
Application of borehole geophysics to water-resources investigations
Keys, W.S.; MacCary, L.M.
1971-01-01
This manual is intended to be a guide for hydrologists using borehole geophysics in ground-water studies. The emphasis is on the application and interpretation of geophysical well logs, and not on the operation of a logger. It describes in detail those logging techniques that have been utilized within the Water Resources Division of the U.S. Geological Survey, and those used in petroleum investigations that have potential application to hydrologic problems. Most of the logs described can be made by commercial logging service companies, and many can be made with small water-well loggers. The general principles of each technique and the rules of log interpretation are the same, regardless of differences in instrumentation. Geophysical well logs can be interpreted to determine the lithology, geometry, resistivity, formation factor, bulk density, porosity, permeability, moisture content, and specific yield of water-bearing rocks, and to define the source, movement, and chemical and physical characteristics of ground water. Numerous examples of logs are used to illustrate applications and interpretation in various ground-water environments. The interrelations between various types of logs are emphasized, and the following aspects are described for each of the important logging techniques: Principles and applications, instrumentation, calibration and standardization, radius of investigation, and extraneous effects.
An analysis technique for testing log grades
Carl A. Newport; William G. O' Regan
1963-01-01
An analytical technique that may be used in evaluating log-grading systems is described. It also provides a means of comparing two or more grading systems, or a proposed change with the system from which it was developed. The total volume and computed value of lumber from each sample log are the basic data used.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-15
...] Logging Operations; Extension of the Office of Management and Budget's (OMB) Approval of Information... Logging Operations (29 CFR 1910.266). DATES: Comments must be submitted (postmarked, sent, or received) by... following elements: Safe work practices, including the use, operation, and maintenance of tools, machines...
Determining the Oxygen Fugacity of Lunar Pyroclastic Glasses Using Vanadium Valence - An Update
NASA Technical Reports Server (NTRS)
Karner, J. M.; Sutton, S. R.; Papike, J. J.; Shearer, C. K.; Jones, J. H.; Newville, M.
2004-01-01
We have been developing an oxygen barometer based on the valence state of V (V(2+), V(3+), V(4+), and V(5+)) in solar system basaltic glasses. The V valence is determined by synchrotron micro x-ray absorption near edge structure (XANES), which uses x-ray absorption associated with core-electronic transitions (absorption edges) to reveal a pre-edge peak whose intensity is directly proportional to the valence state of an element. XANES has advantages over other techniques that determine elemental valence because measurements can be made non-destructively in air and in situ on conventional thin sections at a micrometer spatial resolution with elemental sensitivities of approx. 100 ppm. Recent results show that fO2 values derived from the V valence technique are consistent with fO2 estimates determined by other techniques for materials that crystallized above the IW buffer. The fO2's determined by V valence (IW-3.8 to IW-2) for the lunar pyroclastic glasses, however, are on the order of 1 to 2.8 log units below previous estimates. Furthermore, the calculated fO2's decrease with increasing TiO2 contents from the A17 VLT to the A17 Orange glasses. In order to investigate these results further, we have synthesized lunar green and orange glasses and examined them by XANES.
Fast and Accurate Simulation Technique for Large Irregular Arrays
NASA Astrophysics Data System (ADS)
Bui-Van, Ha; Abraham, Jens; Arts, Michel; Gueuning, Quentin; Raucy, Christopher; Gonzalez-Ovejero, David; de Lera Acedo, Eloy; Craeye, Christophe
2018-04-01
A fast full-wave simulation technique is presented for the analysis of large irregular planar arrays of identical 3-D metallic antennas. The solution method relies on the Macro Basis Functions (MBF) approach and an interpolatory technique to compute the interactions between MBFs. The Harmonic-polynomial (HARP) model is established for the near-field interactions in a modified system of coordinates. For extremely large arrays made of complex antennas, two approaches assuming a limited radius of influence for mutual coupling are considered: one is based on a sparse-matrix LU decomposition and the other one on a tessellation of the array in the form of overlapping sub-arrays. The computation of all embedded element patterns is sped up with the help of the non-uniform FFT algorithm. Extensive validations are shown for arrays of log-periodic antennas envisaged for the low-frequency SKA (Square Kilometer Array) radio-telescope. The analysis of SKA stations with such a large number of elements has not been treated yet in the literature. Validations include comparison with results obtained with commercial software and with experiments. The proposed method is particularly well suited to array synthesis, in which several orders of magnitude can be saved in terms of computation time.
NASA Astrophysics Data System (ADS)
Esponda, M.; Piraino, F.; Stanga, C.; Mezzino, D.
2017-08-01
This paper presents an integrated approach between digital documentation workflows and historical research in order to document log houses, outstanding examples of vernacular architecture in Quebec, focusing on their geometrical-dimensional characteristics as well as on the intangible elements associated with these historical structures. The 18 log houses selected in the Laurentians represent the material culture of how settlers adapted to the harsh Quebec environment at the end of the nineteenth century. The essay describes results obtained by Professor Mariana Esponda (Carleton University) in 2015; the digital documentation was carried out through the grant New Paradigm/New Tools for Architectural Heritage in Canada, supported by an SSHRC Training Program (May-August 2016). The workflow of the research started with digital documentation, accomplished with laser scanning techniques, followed by onsite observations and archival research. This led to the creation of an 'abacus', a first step in the development of a territorial-historical database of the log houses, potentially updatable by other researchers. Another important part of the documentation of these buildings has been the development of Historic Building Information Models, fundamental to analyzing the geometry of the logs and understanding how these constructions were built. The realization of HBIMs was a first step in the modeling of irregular shapes such as those of the logs; different Levels of Detail were adopted in order to show how the models can be used for different purposes. In the future, they can potentially be used for the creation of a virtual tour app for the storytelling of these buildings.
Comparison of various techniques for calibration of AIS data
NASA Technical Reports Server (NTRS)
Roberts, D. A.; Yamaguchi, Y.; Lyon, R. J. P.
1986-01-01
The Airborne Imaging Spectrometer (AIS) samples a region which is strongly influenced by decreasing solar irradiance at longer wavelengths and strong atmospheric absorptions. Four techniques, the Log Residual, the Least Upper Bound Residual, the Flat Field Correction, and calibration using field reflectance measurements, were investigated as means for removing these two features. Of the four techniques, field reflectance calibration proved to be superior in terms of noise and normalization. Of the other three techniques, the Log Residual was superior when applied to areas which did not contain one dominant cover type. In heavily vegetated areas, the Log Residual proved to be ineffective. After removing anomalously bright data values, the Least Upper Bound Residual proved to be almost as effective as the Log Residual in sparsely vegetated areas and much more effective in heavily vegetated areas. Of all the techniques, the Flat Field Correction was the noisiest.
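The Log Residual can be sketched compactly: in log space, subtract each pixel's spectral mean (brightness) and each band's scene mean (illumination and atmosphere), which removes any purely multiplicative pixel × band structure. This is a generic sketch of the correction, not the exact AIS processing used in the paper:

```python
import numpy as np

def log_residual(cube):
    # cube: (pixels, bands) array of positive radiance values.
    logc = np.log(cube)
    pixel_mean = logc.mean(axis=1, keepdims=True)  # spectral mean per pixel
    band_mean = logc.mean(axis=0, keepdims=True)   # scene mean per band
    # Add back the grand mean so the double subtraction is balanced.
    return logc - pixel_mean - band_mean + logc.mean()
```

If the scene is dominated by one cover type, the band means absorb that cover's spectrum and the residuals flatten out, which is consistent with the technique's failure in heavily vegetated areas noted above.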
Logging debris matters: better soil, fewer invasive plants
John Kirkland; Timothy B. Harrington; David H. Peter; Robert A. Slesak; Stephen H. Schoenholtz
2012-01-01
The logging debris that remains after timber harvest traditionally has been seen as a nuisance. It can make subsequent tree planting more difficult and become fuel for wildfire. It is commonly piled, burned, or taken off site. Logging debris, however, contains significant amounts of carbon and nitrogen, elements critical to soil productivity. Its physical presence in...
Log production in Washington and Oregon: an historical perspective.
Brian R. Wall
1972-01-01
In the history of the Pacific Northwest, log production and conversion have been major economic activities. The long-term trends in timber harvesting have been upward, and most of the harvest has come from large old-growth forest inventories. National and international demands for timber have been a major element in putting upward pressure on log production levels....
The Spontaneous Ray Log: A New Aid for Constructing Pseudo-Synthetic Seismograms
NASA Astrophysics Data System (ADS)
Quadir, Adnan; Lewis, Charles; Rau, Ruey-Juin
2018-02-01
Conventional synthetic seismograms for hydrocarbon exploration combine the sonic and density logs, whereas pseudo-synthetic seismograms are constructed with a density log plus a resistivity, neutron, gamma ray, or, rarely, a spontaneous potential log. Herein, we introduce a new technique for constructing a pseudo-synthetic seismogram by combining the gamma ray (GR) and self-potential (SP) logs to produce the spontaneous ray (SR) log. Three wells, each of which consisted of more than 1000 m of carbonates, sandstones, and shales, were investigated; each well was divided into 12 Groups based on formation tops, and the Pearson product-moment correlation coefficient (PCC) was calculated for each "Group" from each of the GR, SP, and SR logs. The highest-PCC log curves for each Group were then combined to produce a single log whose values were cross-plotted against the reference well's sonic ITT values to determine a linear transform for producing a pseudo-sonic (PS) log and, ultimately, a pseudo-synthetic seismogram. Nash-Sutcliffe efficiency (NSE) values for the pseudo-sonic logs of the three wells ranged from 78 to 83%. The technique was tested on three wells, one of which was used as a blind test well, with satisfactory results. The PCC value between the composite PS (SR) log with low-density correction and the conventional sonic (CS) log was 86%. Because spontaneous potential and gamma ray logs occur commonly in many of the hydrocarbon basins of the world, this inexpensive and straightforward technique could hold significant promise in areas that need alternate ways to create pseudo-synthetic seismograms for seismic reflection interpretation.
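The core selection-and-transform step described above - choose the candidate log best correlated with the sonic, then fit a linear mapping to pseudo-sonic values - can be sketched as follows. This is a simplified illustration, not the authors' workflow; the log names, dict structure, and least-squares fit are assumptions:

```python
import numpy as np

def best_log_pseudo_sonic(sonic, candidates):
    """Pick the candidate log best correlated with the sonic ITT (by |Pearson r|)
    and fit a linear transform mapping it to pseudo-sonic values.

    `candidates` maps a log name to a 1-D array sampled at the same depths as
    `sonic`.  Returns (best_name, predict_fn).
    """
    best_name = max(
        candidates,
        key=lambda k: abs(np.corrcoef(candidates[k], sonic)[0, 1]),
    )
    x = candidates[best_name]
    slope, intercept = np.polyfit(x, sonic, 1)  # least-squares linear transform
    return best_name, (lambda log: slope * log + intercept)
```

In the paper this selection is done per Group, with the winning curves spliced into one composite log before the transform is fitted; the sketch collapses that to a single interval for clarity.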
NASA Astrophysics Data System (ADS)
van der Linden, Joost H.; Narsilio, Guillermo A.; Tordesillas, Antoinette
2016-08-01
We present a data-driven framework to study the relationship between fluid flow at the macroscale and the internal pore structure, across the micro- and mesoscales, in porous, granular media. Sphere packings with varying particle size distribution and confining pressure are generated using the discrete element method. For each sample, a finite element analysis of the fluid flow is performed to compute the permeability. We construct a pore network and a particle contact network to quantify the connectivity of the pores and particles across the mesoscopic spatial scales. Machine learning techniques for feature selection are employed to identify sets of microstructural properties and multiscale complex network features that optimally characterize permeability. We find a linear correlation (in log-log scale) between permeability and the average closeness centrality of the weighted pore network. With the pore network links weighted by the local conductance, the average closeness centrality represents a multiscale measure of efficiency of flow through the pore network in terms of the mean geodesic distance (or shortest path) between all pore bodies in the pore network. Specifically, this study objectively quantifies a hypothesized link between high permeability and efficient shortest paths that thread through relatively large pore bodies connected to each other by high conductance pore throats, embodying connectivity and pore structure.
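The closeness-centrality measure highlighted above can be computed with standard network tooling. A brief sketch using networkx, assuming a pore network whose edges carry a 'conductance' attribute; the attribute name and the inversion of conductance into a distance-like weight are illustrative choices, not the authors' exact formulation:

```python
import networkx as nx

def mean_closeness(pore_network):
    """Average closeness centrality of a conductance-weighted pore network.

    Each edge's conductance is converted to a length-like weight (high
    conductance -> short distance), so closeness reflects how efficiently
    flow can reach all pores along shortest (geodesic) paths.
    """
    for u, v, d in pore_network.edges(data=True):
        d["resistance"] = 1.0 / d["conductance"]
    cc = nx.closeness_centrality(pore_network, distance="resistance")
    return sum(cc.values()) / pore_network.number_of_nodes()
```

The log-log linear correlation reported in the abstract would then be sought between this scalar and the permeability computed from the finite element flow analysis, one value per sample.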
Lawler, J. E.; Sneden, C.; Nave, G.; Den Hartog, E. A.; Emrahoğlu, N.; Cowan, J. J.
2017-01-01
New emission branching fraction (BF) measurements for 183 lines of the second spectrum of chromium (Cr II) and new radiative lifetime measurements from laser-induced fluorescence for 8 levels of Cr+ are reported. The goals of this study are to improve transition probability measurements in Cr II and to reconcile solar and stellar Cr abundance values based on Cr I and Cr II lines. Eighteen spectra from three Fourier transform spectrometers, supplemented with ultraviolet spectra from a high-resolution echelle spectrometer, are used in the BF measurements. Radiative lifetimes from this study and earlier publications are used to convert the BFs into absolute transition probabilities. These new laboratory data are applied to determine the Cr abundance log ε in the Sun and the metal-poor star HD 84937. The mean result in the Sun is 〈log ε(Cr II)〉 = 5.624 ± 0.009 compared to 〈log ε(Cr I)〉 = 5.644 ± 0.006, on a scale with the hydrogen abundance log ε(H) = 12 and with the uncertainty representing only line-to-line scatter. A Saha (ionization balance) test on the photosphere of HD 84937 is also performed, yielding 〈log ε(Cr II)〉 = 3.417 ± 0.006 and 〈log ε(Cr I, lower level excitation potential E.P. > 0 eV)〉 = 3.374 ± 0.011 for this dwarf star. We find a correlation of Cr with the iron-peak element Ti, suggesting an associated nucleosynthetic production. Four iron-peak elements (Cr along with Ti, V, and Sc) appear to have a similar (or correlated) production history; other iron-peak elements appear not to be associated with Cr. PMID:28579650
NASA Astrophysics Data System (ADS)
El-Khadragy, A. A.; Shazly, T. F.; AlAlfy, I. M.; Ramadan, M.; El-Sawy, M. Z.
2018-06-01
An exploration method has been developed that uses surface and aerial gamma-ray spectral measurements to prospect for petroleum in stratigraphic and structural traps. The Gulf of Suez is an important region for studying hydrocarbon potentiality in Egypt. The thorium normalization technique was applied to the sandstone reservoirs in the region to delineate zones of hydrocarbon potential using the three spectrometric radioactive gamma-ray logs (eU, eTh, and K% logs). The method was applied to the recorded gamma-ray spectrometric logs of the Rudeis and Kareem Formations in the Ras Ghara oil field, Gulf of Suez, Egypt. The conventional well logs (gamma-ray, resistivity, neutron, density, and sonic logs) were analyzed to determine the net pay zones in the study area. The agreement ratios between the thorium normalization technique and the results of the well-log analyses are high, so the thorium normalization technique can be used as a guide to hydrocarbon accumulation in the studied reservoir rocks.
Parallel algorithms for computation of the manipulator inertia matrix
NASA Technical Reports Server (NTRS)
Amin-Javaheri, Masoud; Orin, David E.
1989-01-01
The development of an O(log2 N) parallel algorithm for the manipulator inertia matrix is presented. It is based on the most efficient serial algorithm, which uses the composite rigid-body method. Recursive doubling is used to reformulate the linear recurrence equations required to compute the diagonal elements of the matrix, resulting in O(log2 N) levels of computation. Computation of the off-diagonal elements involves N linear recurrences of varying size, and a new method, which avoids redundant computation of position and orientation transforms for the manipulator, is developed. The O(log2 N) algorithm is presented in both equation and graphic forms, which clearly show the parallelism inherent in the algorithm.
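Recursive doubling, as used above to parallelize the linear recurrences, can be illustrated with a small sequential simulation of the combining schedule: each recurrence element carries an affine map, and at step d the maps at distance 2^d apart are composed, so N elements finish in ceil(log2 N) combining steps. This is a generic sketch of the technique, not the manipulator-specific algorithm:

```python
import numpy as np

def recursive_doubling(a, b, x0):
    """Solve x[i] = a[i]*x[i-1] + b[i] (with x[-1] = x0) by recursive doubling.

    Element i holds an affine map (A[i], B[i]); composing with the element
    2^d positions back at each step extends the map until it reaches x0,
    giving all x[i] after ceil(log2 N) steps.  The inner loop would run in
    parallel on processing elements; here it is executed sequentially.
    """
    A = np.array(a, dtype=float)
    B = np.array(b, dtype=float)
    n, d = len(A), 1
    while d < n:
        A_new, B_new = A.copy(), B.copy()
        for i in range(d, n):                  # parallel on real hardware
            A_new[i] = A[i] * A[i - d]         # compose the two affine maps
            B_new[i] = A[i] * B[i - d] + B[i]
        A, B, d = A_new, B_new, d * 2
    return A * x0 + B                          # x[i] for every i at once
```

For example, a = [2, 3, 0.5, 1], b = [1, -1, 2, 0], x0 = 1 yields [3, 8, 6, 6], matching the serial recurrence, but in two combining steps instead of four sequential ones.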
Liu, Juntao; Zhang, Feng; Wang, Xinguang; Han, Fei; Yuan, Zhelong
2014-12-01
Formation porosity can be determined using the ratio of boron capture gamma-ray counts in the near and far detectors of a pulsed neutron-gamma element logging tool. The thermal neutron distribution, boron capture gamma spectroscopy, and porosity response for formations with different water salinity and wellbore diameter characteristics were simulated using the Monte Carlo method. We found that a boron lining improves the signal-to-noise ratio and that the boron capture gamma-ray counting ratio has a higher sensitivity for determining porosity than the total capture gamma. Copyright © 2014 Elsevier Ltd. All rights reserved.
An integrated 3D log processing optimization system for small sawmills in central Appalachia
Wenshu Lin; Jingxin Wang
2013-01-01
An integrated 3D log processing optimization system was developed to perform 3D log generation, opening face determination, headrig log sawing simulation, flitch edging and trimming simulation, cant resawing, and lumber grading. A circular cross-section model, together with 3D modeling techniques, was used to reconstruct 3D virtual logs. Internal log defects (knots)...
A new method for correlation analysis of compositional (environmental) data - a worked example.
Reimann, C; Filzmoser, P; Hron, K; Kynčlová, P; Garrett, R G
2017-12-31
Most data in environmental sciences and geochemistry are compositional. Already the unit used to report the data (e.g., μg/l, mg/kg, wt%) implies that the analytical results for each element are not free to vary independently of the other measured variables. This is often neglected in statistical analysis, where a simple log-transformation of the single variables is insufficient to put the data into an acceptable geometry. This is also important for bivariate data analysis and for correlation analysis, for which the data need to be appropriately log-ratio transformed. A new approach based on the isometric log-ratio (ilr) transformation, leading to so-called symmetric coordinates, is presented here. Summarizing the correlations in a heat-map gives a powerful tool for bivariate data analysis. Here an application of the new method using a data set from a regional geochemical mapping project based on soil O and C horizon samples is demonstrated. Differences to 'classical' correlation analysis based on log-transformed data are highlighted. The fact that some expected strong positive correlations appear and remain unchanged even following a log-ratio transformation has probably led to the misconception that the special nature of compositional data can be ignored when working with trace elements. The example dataset is employed to demonstrate that using 'classical' correlation analysis and plotting XY diagrams, scatterplots, based on the original or simply log-transformed data can easily lead to severe misinterpretations of the relationships between elements. Copyright © 2017 Elsevier B.V. All rights reserved.
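The underlying idea of a log-ratio transformation can be illustrated with the centred log-ratio (clr), a close relative of the symmetric ilr coordinates used in the paper (the symmetric/pivot coordinates are a further orthonormal construction not reproduced in this sketch):

```python
import numpy as np

def clr(X):
    """Centred log-ratio transform of a compositional data matrix.

    Each row of X is a composition (strictly positive parts).  Taking the log
    of each part relative to the row's geometric mean moves the data off the
    simplex, so ordinary correlation analysis becomes meaningful; each clr row
    sums to zero by construction.
    """
    logs = np.log(X)
    return logs - logs.mean(axis=1, keepdims=True)
```

A correlation heat-map computed on clr (or, as in the paper, symmetric ilr) coordinates can differ sharply from one computed on simply log-transformed concentrations, which is exactly the misinterpretation the abstract warns against.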
Petrophysical evaluation of subterranean formations
Klein, James D; Schoderbek, David A; Mailloux, Jason M
2013-05-28
Methods and systems are provided for evaluating petrophysical properties of subterranean formations and comprehensively evaluating hydrate presence through a combination of computer-implemented log modeling and analysis. Certain embodiments include the steps of running a number of logging tools in a wellbore to obtain a variety of wellbore data and logs, and evaluating and modeling the log data to ascertain various petrophysical properties. Examples of suitable logging techniques that may be used in combination with the present invention include, but are not limited to, sonic logs, electrical resistivity logs, gamma ray logs, neutron porosity logs, density logs, NMR logs, or any combination or subset thereof.
Borehole geophysics applied to ground-water investigations
Keys, W.S.
1990-01-01
The purpose of this manual is to provide hydrologists, geologists, and others who have the necessary background in hydrogeology with the basic information needed to apply the most useful borehole-geophysical-logging techniques to the solution of problems in ground-water hydrology. Geophysical logs can provide information on the construction of wells and on the character of the rocks and fluids penetrated by those wells, as well as on changes in the character of these factors over time. The response of well logs is caused by petrophysical factors, by the quality, temperature, and pressure of interstitial fluids, and by ground-water flow. Qualitative and quantitative analysis of analog records and computer analysis of digitized logs are used to derive geohydrologic information. This information can then be extrapolated vertically within a well and laterally to other wells using logs. The physical principles by which the mechanical and electronic components of a logging system measure properties of rocks, fluids, and wells, as well as the principles of measurement, must be understood if geophysical logs are to be interpreted correctly. Planning a logging operation involves selecting the equipment and the logs most likely to provide the needed information. Information on well construction and geohydrology is needed to guide this selection. Quality control of logs is an important responsibility of both the equipment operator and the log analyst and requires both calibration and well-site standardization of equipment. Logging techniques that are widely used in ground-water hydrology or that have significant potential for application to this field include spontaneous potential, resistance, resistivity, gamma, gamma spectrometry, gamma-gamma, neutron, acoustic velocity, acoustic televiewer, caliper, and fluid temperature, conductivity, and flow.
The following topics are discussed for each of these techniques: principles and instrumentation, calibration and standardization, volume of investigation, extraneous effects, and interpretation and applications.
Borehole geophysics applied to ground-water investigations
Keys, W.S.
1988-01-01
The purpose of this manual is to provide hydrologists, geologists, and others who have the necessary training with the basic information needed to apply the most useful borehole-geophysical-logging techniques to the solution of problems in ground-water hydrology. Geophysical logs can provide information on the construction of wells and on the character of the rocks and fluids penetrated by those wells, in addition to changes in the character of these factors with time. The response of well logs is caused by petrophysical factors; by the quality, temperature, and pressure of interstitial fluids; and by ground-water flow. Qualitative and quantitative analysis of the analog records and computer analysis of digitized logs are used to derive geohydrologic information. This information can then be extrapolated vertically within a well and laterally to other wells using logs. The physical principles by which the mechanical and electronic components of a logging system measure properties of rocks, fluids, and wells, and the principles of measurement, need to be understood to correctly interpret geophysical logs. Planning the logging operation involves selecting the equipment and the logs most likely to provide the needed information. Information on well construction and geohydrology is needed to guide this selection. Quality control of logs is an important responsibility of both the equipment operator and log analyst and requires both calibration and well-site standardization of equipment. Logging techniques that are widely used in ground-water hydrology or that have significant potential for application to this field include spontaneous potential, resistance, resistivity, gamma, gamma spectrometry, gamma-gamma, neutron, acoustic velocity, acoustic televiewer, caliper, and fluid temperature, conductivity, and flow.
The following topics are discussed for each of these techniques: principles and instrumentation, calibration and standardization, volume of investigation, extraneous effects, and interpretation and applications.
NASA Astrophysics Data System (ADS)
Asfahani, Jamal
2017-08-01
An alternative approach using nuclear neutron-porosity and electrical resistivity well logging with long (64 inch) and short (16 inch) normal techniques is proposed to estimate the porosity and the hydraulic conductivity (K) of the basaltic aquifers in Southern Syria. The method is applied to the available logs of the Kodana well in Southern Syria. The K value obtained by this technique appears reasonable and comparable with the hydraulic conductivity of 3.09 m/day obtained from the pumping test carried out at the Kodana well. The proposed alternative well logging methodology seems promising and could be practiced in basaltic environments to estimate hydraulic conductivity. However, more detailed research is still required to make the proposed technique fully reliable in basaltic environments.
In analyses supporting the development of numeric nutrient criteria, multiple statistical techniques can be used to extract critical values from stressor-response relationships. However, there is little guidance for choosing among techniques, and the extent to which log-transfor...
Wood density-moisture profiles in old-growth Douglas-fir and western hemlock.
W.Y. Pong; Dale R. Waddell; Michael B. Lambert
1986-01-01
Accurate estimation of the weight of each load of logs is necessary for safe and efficient aerial logging operations. The prediction of green density (lb/ft3) as a function of height is a critical element in the accurate estimation of tree bole and log weights. Two sampling methods, disk and increment core (Bergstrom xylodensimeter), were used to measure the density-...
The goal of this volume is to compare and assess various techniques for understanding fracture patterns at a site at Pease International Tradeport, NH, and to give an overview of the site as a whole. Techniques included are: core logging, geophysical logging, radar studies, and...
Logging while fishing: An alternate method to cut and thread fishing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tollefsen, E.; Crary, S.; Flores, B.
1996-12-31
New technology has been introduced to allow completion of the wireline logging program after the tool string has become lodged in the wellbore. Charges associated with extracting a stuck tool are substantial. These charges result from the nonproductive time during the fishing trip, an associated wiper trip, and re-logging the well. The ability to continue the logging program while retrieving the logging string from the wellbore is needed. Logging While Fishing (LWF) is a hybrid of existing technologies combined with a new sub capable of severing a cable remotely. This new method comprises cut and thread fishing, drillpipe-conveyed logging, and bridled tool techniques. Utilizing these techniques, it is possible to complete wireline logging operations while removing a stuck tool from the wellbore. Completing logging operations using this hybrid method will save operating companies time and money. Other benefits, depending on the situation, include reduced fishing time and an increased level of safety. This application has been demonstrated on jobs in the Gulf of Mexico, North Sea, Venezuela, and Southeast Asia.
Reduction of lithologic-log data to numbers for use in the digital computer
Morgan, C.O.; McNellis, J.M.
1971-01-01
The development of a standardized system for conveniently coding lithologic-log data for use in the digital computer has long been needed. The technique suggested involves a reduction of the original written alphanumeric log to a numeric log by use of computer programs. This numeric log can then be retrieved as a written log, interrogated for pertinent information, or analyzed statistically. © 1971 Plenum Publishing Corporation.
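Such a reduction of written lithologic descriptions to numeric codes might look like the following sketch. The coding scheme shown is hypothetical, invented for illustration; it is not the actual system the authors describe:

```python
# Hypothetical coding scheme: rock-type term -> numeric code, illustrating the
# reduction of a written lithologic log to machine-readable numbers.
LITHOLOGY_CODES = {"shale": 1, "sandstone": 2, "limestone": 3, "dolomite": 4}
CODE_TO_TERM = {v: k for k, v in LITHOLOGY_CODES.items()}

def encode_log(intervals):
    """[(top_ft, bottom_ft, 'sandstone'), ...] -> [(top, bottom, 2), ...]"""
    return [(top, bot, LITHOLOGY_CODES[term]) for top, bot, term in intervals]

def decode_log(coded):
    """Retrieve the written log back from its numeric form."""
    return [(top, bot, CODE_TO_TERM[code]) for top, bot, code in coded]
```

Because the mapping is invertible, the numeric log can be "retrieved as a written log" exactly as the abstract describes, while the numeric form supports statistical queries.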
Robert H. Hilderbrand; A. Dennis Lemly; C. Andrew Dolloff; Kelly L. Harpster
1998-01-01
Log length exerted a critical influence in stabilizing large woody debris (LWD) pieces added as an experimental stream restoration technique. Logs longer than the average bank-full channel width (5.5 m) were significantly less likely to be displaced than logs shorter than this width. The longest log in stable log groups was significantly longer than the longest log in...
Analysis of calibration materials to improve dual-energy CT scanning for petrophysical applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ayyalasomavaiula, K.; McIntyre, D.; Jain, J.
2011-01-01
Dual-energy CT scanning is a rapidly emerging imaging technique employed in non-destructive evaluation of various materials. Although CT (computerized tomography) has been used for characterizing rocks and for visualizing and quantifying multiphase flow through rocks for over 25 years, most scanning is done at a voltage setting above 100 kV to take advantage of the Compton scattering (CS) effect, which responds to density changes. Below 100 kV the photoelectric effect (PE) is dominant, which responds to the effective atomic number (Zeff), directly related to the photoelectric factor. Using the combination of the two effects helps in better characterization of reservoir rocks. The most common technique for dual-energy CT scanning relies on homogeneous calibration standards to produce the most accurate decoupled data. However, the use of calibration standards with impurities increases the probability of error in the reconstructed data and results in poor rock characterization. This work combines ICP-OES (inductively coupled plasma optical emission spectroscopy) and LIBS (laser-induced breakdown spectroscopy) analytical techniques to quantify the type and level of impurities in a set of commercially purchased calibration standards used in dual-energy scanning. The Zeff data on the calibration standards with and without impurity data were calculated using the weighted linear combination of the various elements present and used in calculating Zeff with the dual-energy technique. Results show a 2 to 5% difference in predicted Zeff values, which may affect the corresponding log calibrations. The effect that these techniques have on improving material identification data is discussed and analyzed. The workflow developed in this paper will translate to more accurate material identification estimates for unknown samples and improve calibration of well logging tools.
Balloon logging with the inverted skyline
NASA Technical Reports Server (NTRS)
Mosher, C. F.
1975-01-01
There is a gap in aerial logging techniques that has to be filled. A simple, safe, sizeable system must be developed before aerial logging becomes effective and accepted in the logging industry. This paper presents such a system, designed on simple principles, with realistic cost and ecological benefits.
Logging while fishing technique results in substantial savings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tollefsen, E.; Everett, M.
1996-12-01
During wireline logging operations, tools occasionally become stuck in the borehole and require fishing. A typical fishing job can take anywhere from 1-1/2 to 4 days. In the Gulf of Mexico, a fishing job can easily cost between $100,000 and $500,000. These costs result from nonproductive time during the fishing trip, an associated wiper trip, and relogging the well. Logging while fishing (LWF) technology is a patented system capable of retrieving a stuck fish and completing the logging run during the same pipe descent. Completing logging operations using the LWF method saves time and money. The technique also provides well information where data may not otherwise have been obtained. Other benefits include reduced fishing time and an increased level of safety.
Chalcophile element partitioning in highly oxidised and highly reduced bodies.
NASA Astrophysics Data System (ADS)
Kiseeva, K.; Wood, B. J.
2015-12-01
In our recent studies [1-3] we showed that the partitioning of many chalcophile elements can be described by a simple relationship as a function of the FeO content of the silicate liquid: log D_i ≈ A - (n/2) log[FeO], where A is a constant, n is a constant related to the valency of element i, and [FeO] is the concentration of FeO in the silicate melt. For many chalcophile and moderately chalcophile elements (e.g., Zn, Cr, Pb, Sb, In), the fitted slope n depends only on the valency of the element. More lithophile elements (e.g., Ti, Nb, Ce, Ga) exhibit concave-upwards behavior on a plot of log D versus log[FeO] owing to their strong interaction with oxygen in sulphide, which increases with the FeO content of the silicate liquid. Strongly chalcophile elements, such as Cu, Ag, and Ni, show the opposite trend (concave downwards), and their D decreases both at high (>10-12 wt%) and very low (<1 wt%) FeO contents of the silicate melt. These changes correlate with increasing S content of the silicate melt (up to 11 wt%) as the FeO content of the silicate melt declines to ~0.3 wt%. An experiment at 1.5 GPa and 1420 °C with 4 wt% S and 0.28 wt% FeO in the silicate melt gives D_Cu (sulf/sil) ~ 84, about 6 times lower than D_Cu (sulf/sil) at identical P-T conditions but at 8 wt% FeO in the silicate melt. Our new experimental data on Re partitioning between sulphide and silicate melt in the CMAS+FeO system show that Re behaves similarly to the highly chalcophile elements and exhibits concave-downwards behaviour on the log D versus log[FeO] diagram. With the highest D_Re (sulf/sil) of around 1.5-2.0 × 10^4 at 1.5-6.0 wt% FeO in the silicate melt, D_Re (sulf/sil) declines to values of 50-150 at ~0.5 wt% and >~15 wt% FeO in the silicate melt, respectively. This means that at highly reducing conditions Re is similarly or less chalcophile than some of the highly lithophile elements, such as Ta (D ≈ 9), Nb (D ≈ 600), and Ti (D ≈ 6) [3].
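The relationship log D_i ≈ A - (n/2) log[FeO] is straightforward to evaluate once A and n have been fitted for an element. A minimal sketch; the constants passed in below are placeholders, not the fitted values from [1-3]:

```python
import math

def partition_coefficient(A, n, feo_wt_pct):
    """Evaluate log D_i = A - (n/2) * log10([FeO]) and return D_i.

    A is an element-specific fit constant and n relates to the element's
    valency; [FeO] is in wt% of the silicate melt.  Values supplied by the
    caller here are illustrative, not experimental fits.
    """
    log_d = A - 0.5 * n * math.log10(feo_wt_pct)
    return 10 ** log_d
```

Note this simple form captures only the straight-line regime; the concave-up and concave-down deviations described in the abstract for lithophile and strongly chalcophile elements require extra terms.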
The results mean that in oxidised bodies like Mars and reduced bodies like Mercury, most "lithophile" elements partition more strongly into sulphide than Re and Cu. [1] Kiseeva E. S., Wood B. J. (2013). EPSL 383, p. 68-81. [2] Kiseeva E. S., Wood B. J. (2015). EPSL 424, p. 280-294. [3] Wood B. J., Kiseeva E. S. (2015). AmMin (in press).
Treatment of singularities in a middle-crack tension specimen
NASA Technical Reports Server (NTRS)
Shivakumar, K. N.; Raju, I. S.
1990-01-01
A three-dimensional finite-element analysis of a middle-crack tension specimen subjected to mode I loading was performed to study the stress singularity along the crack front. The specimen was modeled using 20-node isoparametric elements with collapsed nonsingular elements at the crack front. The displacements and stresses from the analysis were used to estimate the power of singularities, by a log-log regression analysis, along the crack front. Analyses showed that finite-sized cracked bodies have two singular stress fields. Because of two singular stress fields near the free surface and the classical square root singularity elsewhere, the strain energy release rate appears to be an appropriate parameter all along the crack front.
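The log-log regression used above to estimate the power of the stress singularity amounts to fitting a straight line to log(stress) versus log(distance from the crack front). A minimal sketch on synthetic inputs, not the authors' finite-element data:

```python
import numpy as np

def singularity_power(r, stress):
    """Estimate p in stress ~ C * r**(-p) near a crack front.

    Linear regression of log(stress) on log(r): the slope is -p.  The
    classical square-root singularity corresponds to p ~ 0.5; near a free
    surface the fitted power differs, which is how the two singular fields
    described above are detected.
    """
    slope, _ = np.polyfit(np.log(r), np.log(stress), 1)
    return -slope
```

Applying this fit at successive positions along the crack front reproduces the kind of power-of-singularity profile the abstract describes.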
Linear-log counting-rate meter uses transconductance characteristics of a silicon planar transistor
NASA Technical Reports Server (NTRS)
Eichholz, J. J.
1969-01-01
Counting rate meter compresses a wide range of data values, or decades of current. Silicon planar transistor, operating in the zero collector-base voltage mode, is used as a feedback element in an operational amplifier to obtain the log response.
Sá, Rui Carlos; Henderson, A Cortney; Simonson, Tatum; Arai, Tatsuya J; Wagner, Harrieth; Theilmann, Rebecca J; Wagner, Peter D; Prisk, G Kim; Hopkins, Susan R
2017-07-01
We have developed a novel functional proton magnetic resonance imaging (MRI) technique to measure the regional ventilation-perfusion (V̇A/Q̇) ratio in the lung. We conducted a comparison study of this technique in healthy subjects (n = 7, age = 42 ± 16 yr, forced expiratory volume in 1 s = 94% predicted) by comparing data measured using MRI to that obtained from the multiple inert gas elimination technique (MIGET). Regional ventilation measured in a sagittal lung slice using specific ventilation imaging was combined with proton density measured using a fast gradient-echo sequence to calculate regional alveolar ventilation, registered with perfusion images acquired using arterial spin labeling, and divided on a voxel-by-voxel basis to obtain the regional V̇A/Q̇ ratio. LogSDV̇ and LogSDQ̇, measures of heterogeneity derived from the standard deviation (log scale) of the ventilation and perfusion vs. V̇A/Q̇ ratio histograms, respectively, were calculated. On a separate day, subjects underwent study with MIGET, and LogSDV̇ and LogSDQ̇ were calculated from MIGET data using the 50-compartment model. MIGET LogSDV̇ and LogSDQ̇ were normal in all subjects. LogSDQ̇ was highly correlated between MRI and MIGET (R = 0.89, P = 0.007); the intercept was not significantly different from zero (-0.062, P = 0.65) and the slope did not significantly differ from identity (1.29, P = 0.34). MIGET and MRI measures of LogSDV̇ were well correlated (R = 0.83, P = 0.02); the intercept differed from zero (0.20, P = 0.04) and the slope deviated from the line of identity (0.52, P = 0.01). We conclude that in normal subjects there is reasonable agreement between MIGET measures of heterogeneity and those from proton MRI measured in a single slice of lung. NEW & NOTEWORTHY We report a comparison of a new proton MRI technique to measure regional V̇A/Q̇ ratio against the multiple inert gas elimination technique (MIGET). 
The study reports good relationships between measures of heterogeneity derived from MIGET and those derived from MRI. Although currently limited to a single-slice acquisition, these data suggest that single sagittal slice measures of V̇A/Q̇ ratio provide an adequate means to assess heterogeneity in the normal lung. Copyright © 2017 the American Physiological Society.
The design and implementation of web mining in web sites security
NASA Astrophysics Data System (ADS)
Li, Jian; Zhang, Guo-Yin; Gu, Guo-Chang; Li, Jian-Li
2003-06-01
Backdoors and information leaks in Web servers can be detected by applying Web mining techniques to abnormal Web log and Web application log data, enhancing the security of Web servers and avoiding the damage caused by illegal access. First, a system for discovering the patterns of information leakage in CGI scripts from Web log data is proposed. Second, those patterns are provided to system administrators so they can modify their code and enhance Web site security. The following aspects are described: one is to combine the Web application log with the Web log to extract more information, so that Web data mining can discover information that a firewall or intrusion detection system cannot find; another is an operation module for the Web site that enhances its security. For cluster server sessions, a density-based clustering technique is used to reduce resource cost and obtain better efficiency.
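The density-based clustering mentioned for server sessions is in the DBSCAN family. As a rough illustration (the per-session features and thresholds here are hypothetical, not from the paper), a minimal DBSCAN could look like:

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns one cluster label per point (-1 = noise)."""
    def neighbors(i):
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps ** 2]

    labels = [None] * len(points)
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # provisional noise
            continue
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:                # expand the cluster from core points
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:
                seeds.extend(jn)
        cluster += 1
    return labels

# Hypothetical session features: (request count, error rate)
sessions = [(5, 0.0), (6, 0.1), (5, 0.05), (200, 0.9)]
print(dbscan(sessions, eps=2.0, min_pts=2))
```

An outlying session such as the last one falls out as noise (-1), which is the kind of abnormal access pattern the approach aims to surface.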
Extracting the Textual and Temporal Structure of Supercomputing Logs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jain, S; Singh, I; Chandra, A
2009-05-26
Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable source of information about their operational status and health. However, their massive size, complexity, and lack of standard format make it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
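One common way to recover the syntactic structure of log messages, in the spirit described above, is to mask variable fields so that messages collapse into shared templates. This sketch is illustrative only, not the authors' algorithm:

```python
import re
from collections import defaultdict

def template(msg):
    """Reduce a log message to its syntactic skeleton by masking
    variable fields (hex ids first, then decimal numbers)."""
    msg = re.sub(r'0x[0-9a-fA-F]+', '<HEX>', msg)
    msg = re.sub(r'\d+', '<NUM>', msg)
    return msg

def cluster(messages):
    """Group messages that share the same template."""
    groups = defaultdict(list)
    for m in messages:
        groups[template(m)].append(m)
    return groups

logs = [
    "node 12 fan speed 3200 rpm",
    "node 7 fan speed 2900 rpm",
    "ECC error at 0xdeadbeef",
]
g = cluster(logs)
print(len(g))  # two distinct templates
```

Temporal correlation between such groups can then be measured by counting how often their messages occur within a short window of one another.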
Locating knots by industrial tomography- A feasibility study
Fred W. Taylor; Francis G. Wagner; Charles W. McMillin; Ira L. Morgan; Forrest F. Hopkins
1984-01-01
Industrial photon tomography was used to scan four southern pine logs and one red oak log. The logs were scanned at 16 cross-sectional slice planes located 1 centimeter apart along their longitudinal axes. Tomographic reconstructions were made from the scan data collected at these slice planes, and a cursory image analysis technique was developed to locate the log...
Paul E. Aho; Gary Fiddler; Mike. Srago
1983-01-01
Logging-damage surveys and tree-dissection studies were made in commercially thinned, naturally established young-growth true fir stands in the Lassen National Forest in northern California. Significant damage occurred to residual trees in stands logged by conventional methods. Logging damage was substantially lower in stands thinned using techniques designed to reduce...
Analysis of Hospital Processes with Process Mining Techniques.
Orellana García, Arturo; Pérez Alfonso, Damián; Larrea Armenteros, Osvaldo Ulises
2015-01-01
Process mining allows for discovering, monitoring, and improving the processes identified in information systems from their event logs. In hospital environments, process analysis has been a crucial factor for cost reduction, control and proper use of resources, better patient care, and achieving service excellence. This paper presents a new component for event log generation in the Hospital Information System (HIS) developed at the University of Informatics Sciences. The event logs obtained are used for analysis of hospital processes with process mining techniques. The proposed solution aims to generate high-quality event logs in the system. The analyses performed allowed functions in the system to be redefined and a proper flow of information to be proposed. The study exposed the need to incorporate process mining techniques in hospital systems to analyze process execution. Moreover, we illustrate its application in making clinical and administrative decisions for the management of hospital activities.
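A basic building block of the process mining analyses described is the directly-follows relation extracted from an event log. A minimal sketch, using hypothetical hospital activities rather than the HIS's actual log schema:

```python
from collections import Counter, defaultdict

def directly_follows(event_log):
    """Count how often activity a is directly followed by activity b
    within the same case -- the basic relation behind most
    process-discovery algorithms."""
    traces = defaultdict(list)
    for case_id, activity in event_log:   # assumed already time-ordered
        traces[case_id].append(activity)
    df = Counter()
    for acts in traces.values():
        for a, b in zip(acts, acts[1:]):
            df[(a, b)] += 1
    return df

# Hypothetical event log: (case id, activity)
log = [(1, "admit"), (1, "triage"), (1, "treat"),
       (2, "admit"), (2, "treat")]
print(directly_follows(log)[("admit", "triage")])
```

Discovery algorithms then turn these counts into a process model, and deviations from the expected flow point to functions worth redefining.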
Bio-logging of physiological parameters in higher marine vertebrates
NASA Astrophysics Data System (ADS)
Ponganis, Paul J.
2007-02-01
Bio-logging of physiological parameters in higher marine vertebrates had its origins in the field of bio-telemetry in the 1960s and 1970s. The development of microprocessor technology allowed its first application to bio-logging investigations of Weddell seal diving physiology in the early 1980s. Since that time, with the use of increased memory capacity, new sensor technology, and novel data processing techniques, investigators have examined heart rate, temperature, swim speed, stroke frequency, stomach function (gastric pH and motility), heat flux, muscle oxygenation, respiratory rate, diving air volume, and oxygen partial pressure (P) during diving. Swim speed, heart rate, and body temperature have been the most commonly studied parameters. Bio-logging investigation of pressure effects has only been conducted with the use of blood samplers and nitrogen analyses on animals diving at isolated dive holes. The advantages/disadvantages and limitations of recording techniques, probe placement, calibration techniques, and study conditions are reviewed.
NASA Astrophysics Data System (ADS)
Mukherjee, Bappa; Roy, P. N. S.
The identification of prospective and dry zones from well log data is of major importance, and accuracy in identifying potential zones is a crucial issue in hydrocarbon exploration. The problem has received considerable attention and many conventional techniques have been proposed. The purpose of this study is to recognize the hydrocarbon- and non-hydrocarbon-bearing portions of a reservoir using a non-conventional technique: wavelet-based fractal analysis (WBFA) applied to wire-line log data, which distinguishes pre-defined hydrocarbon (HC) and non-hydrocarbon (NHC) zones by the self-affine nature of their signals. The feasibility of the proposed technique is tested on the most commonly used logs: self-potential, gamma ray, resistivity and porosity responses. These logs were obtained from industry to delineate several HC and NHC zones in all wells of the study region, which belongs to the upper Assam basin. For a given log response, the HC-bearing zones were found to be situated mainly in a variety of sandstone lithologies, which leads to higher Hurst exponents, while the NHC zones corresponded to lithologies with higher shale content and lower Hurst exponents. The proposed technique can reduce the chance of misinterpretation in conventional reservoir characterization.
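The Hurst exponent underlying this discussion can be estimated from the scaling of lagged increments of a log curve. This is a generic estimator for illustration, not the authors' WBFA implementation:

```python
import math
import random

def hurst(signal, lags=range(2, 20)):
    """Estimate the Hurst exponent H from the scaling law
    sd(increments at lag) ~ lag**H, via a log-log least-squares fit."""
    xs, ys = [], []
    for lag in lags:
        diffs = [signal[i + lag] - signal[i] for i in range(len(signal) - lag)]
        m = sum(diffs) / len(diffs)
        sd = math.sqrt(sum((d - m) ** 2 for d in diffs) / len(diffs))
        if sd > 0:
            xs.append(math.log(lag))
            ys.append(math.log(sd))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope

# A random walk (cumulative sum of white noise) should give H near 0.5
random.seed(1)
walk = [0.0]
for _ in range(5000):
    walk.append(walk[-1] + random.gauss(0, 1))
print(round(hurst(walk), 2))
```

Uncorrelated data give H near 0.5; smoother, more persistent log responses, as reported for the sandstone HC zones, give higher values.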
NASA Astrophysics Data System (ADS)
Thakur, Punam; Xiong, Yongliang; Borkowski, Marian; Choppin, Gregory R.
2014-05-01
The dissociation constants of ethylenediaminetetraacetic acid (H4EDTA) and the stability constants of Am3+, Cm3+ and Eu3+ with EDTA4- have been determined at 25 °C over a range of concentrations varying from 0.1 to 6.60 m NaClO4, using potentiometric titration and an extraction technique, respectively. The formation of only the 1:1 complex, M(EDTA)-, where M = Am3+, Cm3+ and Eu3+, was observed under the experimental conditions. The observed ionic strength dependencies of the dissociation constants and the stability constants have been described successfully over the entire ionic strength range using the Pitzer model. The thermodynamic stability constants log β°₁ = 20.55 ± 0.18 for Am3+, log β°₁ = 20.43 ± 0.20 for Cm3+ and log β°₁ = 20.65 ± 0.19 for Eu3+ were calculated by extrapolation of the data to zero ionic strength in an NaClO4 medium. In addition, a log β°₁ of 20.05 ± 0.40 for Am3+ was obtained by simultaneously modeling data in both NaCl and NaClO4 media. For all stability constants, the Pitzer model gives an excellent representation of the data using the interaction parameters β(0), β(1), and Cϕ determined in this work. The improved model presented here enables researchers to model accurately the potential mobility of actinides(III) and light rare earth elements at ionic strengths up to 6.60 m in low-temperature environments in the presence of EDTA.
Development of a 3D log sawing optimization system for small sawmills in central Appalachia, US
Wenshu Lin; Jingxin Wang; Edward Thomas
2011-01-01
A 3D log sawing optimization system was developed to perform log generation, opening face determination, sawing simulation, and lumber grading using 3D modeling techniques. Heuristic and dynamic programming algorithms were used to determine opening face and grade sawing optimization. Positions and shapes of internal log defects were predicted using a model developed by...
Evaluation of residual oil saturation after waterflood in a carbonate reservoir
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verma, M.K.; Boucherit, M.; Bouvier, L.
Four different approaches, including special core analysis (SCAL), log-inject-log, thermal-decay-time (TDT) logs, and material balance, were used to narrow the range of residual oil saturation (ROS) after waterflood, S_orw, in a carbonate reservoir in Qatar to between 23% and 27%. An equation was developed that relates S_orw to connate-water saturation, S_wi, and porosity. This paper presents the results of S_orw determinations with four different techniques: core waterflood followed by centrifuging, log-inject-log, TDT logging, and material balance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deo, M.D.; Morgan, C.D.
1999-04-28
The objective of the project is to increase oil production and reserves through improved reservoir characterization and completion techniques in the Uinta Basin, Utah. To accomplish this objective, a two-year geologic and engineering characterization of the Bluebell field was conducted. The study evaluated surface and subsurface data, currently used completion techniques, and common production problems. It was determined that advanced cased- and open-hole logs could be effective in identifying productive beds, and that stage-interval (about 500 ft [150 m] per stage) and bed-scale isolation completion techniques could result in improved well performance. In the first demonstration well (the Michelle Ute well discussed in the previous technical report), dipole shear anisotropy and dual-burst thermal decay time (TDT) logs were run before the treatment and an isotope tracer log was run after it. The logs were very helpful in characterizing the remaining hydrocarbon potential in the well, but mechanical failure led to a poor recompletion and no significant improvement in the well's oil production.
Technique for Solving Electrically Small to Large Structures for Broadband Applications
NASA Technical Reports Server (NTRS)
Jandhyala, Vikram; Chowdhury, Indranil
2011-01-01
Fast iterative algorithms are often used for solving Method of Moments (MoM) systems having a large number of unknowns, to determine current distribution and other parameters. The most commonly used fast methods include the fast multipole method (FMM), the precorrected fast Fourier transform (PFFT), and low-rank QR compression methods. These methods reduce the O(N²) memory and time requirements to O(N log N) by compressing the dense MoM system so as to exploit the physics of Green's function interactions. FFT-based techniques for solving such problems are efficient for space-filling and uniform structures, but their performance degrades substantially for non-uniformly distributed structures because of the inherent need to employ a uniform global grid. FMM or QR techniques are better suited than FFT techniques; however, neither the FMM nor the QR technique can be used at all frequencies. This method has been developed to efficiently solve for a desired parameter of a system or device that can include both electrically large FMM elements and electrically small QR elements. The system or device is set up as an oct-tree structure that can include regions of both the FMM type and the QR type. The system is enclosed with a cube at the 0th level, and this cube is split into eight child cubes, forming the 1st level; the splitting is repeated recursively for cubes at successive levels until a desired number of levels is created. For each cube thus formed, neighbor lists and interaction lists are maintained. An iterative solver is then used to determine a first matrix-vector product for any electrically large elements as well as a second matrix-vector product for any electrically small elements included in the structure. These matrix-vector products for the electrically large and small elements are combined, and a net delta for the combination is determined.
The iteration continues until the net delta is within predefined limits. The matrix-vector products last obtained are then used to solve for the desired parameter, and the solution is presented to the user in a tangible form, for example on a display.
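The recursive cube-splitting step can be sketched as follows (a generic point octree for illustration, not the patented implementation; neighbor and interaction lists are omitted):

```python
def build_octree(points, center, half, max_pts=1, depth=0, max_depth=8):
    """Recursively split a cube into 8 children until each leaf holds
    at most max_pts points -- the oct-tree underlying FMM/QR solvers.

    center: cube center (x, y, z); half: half the cube's edge length.
    """
    if len(points) <= max_pts or depth == max_depth:
        return {"center": center, "half": half, "points": points, "children": []}
    children = []
    for dx in (-0.5, 0.5):
        for dy in (-0.5, 0.5):
            for dz in (-0.5, 0.5):
                # Child cube center; each child has half-size half/2
                c = (center[0] + dx * half,
                     center[1] + dy * half,
                     center[2] + dz * half)
                pts = [p for p in points
                       if abs(p[0] - c[0]) <= half / 2
                       and abs(p[1] - c[1]) <= half / 2
                       and abs(p[2] - c[2]) <= half / 2]
                children.append(build_octree(pts, c, half / 2,
                                             max_pts, depth + 1, max_depth))
    return {"center": center, "half": half, "points": points, "children": children}

pts = [(0.1, 0.1, 0.1), (0.9, 0.9, 0.9), (0.85, 0.9, 0.95)]
tree = build_octree(pts, center=(0.5, 0.5, 0.5), half=0.5)
print(len(tree["children"]))  # 8
```

In the solver described above, each cube at each level would additionally carry its neighbor and interaction lists, and be flagged as FMM-type (electrically large) or QR-type (electrically small).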
NASA Astrophysics Data System (ADS)
Lawler, James E.; Sneden, Chris; Nave, Gillian; Den Hartog, Elizabeth; Emrahoglu, Nuri; Cowan, John J.
2017-01-01
New laser-induced fluorescence (LIF) data for eight levels of singly ionized chromium (Cr) and emission branching fraction (BF) measurements for 183 lines of the second spectrum of chromium (Cr II) are reported. A goal of this study is to reconcile solar and stellar Cr abundance values based on Cr I and Cr II lines. Analyses of eighteen spectra from three Fourier transform spectrometers, supplemented with ultraviolet spectra from a high-resolution echelle spectrometer, yield the BF measurements. Radiative lifetimes from LIF measurements are used to convert the BFs to absolute transition probabilities. These new laboratory data are applied to determine the Cr abundance log ε in the Sun and in the metal-poor star HD 84937. The mean result in the Sun is
Reduced-impact logging: challenges and opportunities
F.E. Putz; P. Sist; T. Fredericksen; D. Dykstra
2008-01-01
Over the past two decades, sets of timber harvesting guidelines designed to mitigate the deleterious environmental impacts of tree felling, yarding, and hauling have become known as "reduced-impact logging" (RIL) techniques. Although none of the components of RIL are new, concerns about destructive logging practices and worker safety in the tropics stimulated...
NASA Astrophysics Data System (ADS)
Benjumea, Beatriz; Macau, Albert; Gabàs, Anna; Figueras, Sara
2016-04-01
We combine geophysical well logging and passive seismic measurements to characterize the near-surface geology of an area located in Hontomin, Burgos (Spain). This area presents some near-surface challenges for a geophysical study. The irregular topography is characterized by limestone outcrops and areas of unconsolidated sediments. Additionally, the near-surface geology includes an upper layer of pure limestones overlying marly limestones and marls (Upper Cretaceous). These materials lie on top of Lower Cretaceous siliciclastic sediments (sandstones, clays, gravels). In any case, a layer with reduced velocity is expected. The geophysical data sets used in this study include sonic and gamma-ray logs at two boreholes and passive seismic measurements: three arrays and 224 seismic stations for applying the horizontal-to-vertical amplitude spectral ratio (H/V) method. Well-logging data define two significant changes in the P-wave-velocity log within the Upper Cretaceous layer and one more at the Upper to Lower Cretaceous contact; this technique has also been used for refining the geological interpretation. The passive seismic measurements provide a map of sediment thickness, with a maximum of around 40 m, and shear-wave velocity profiles from the array technique. A comparison between the seismic velocities from well logging and from the array measurements defines the resolution limits of the passive seismic techniques and aids their interpretation. This study shows how these low-cost techniques can provide useful information about near-surface complexity that can be used for designing a geophysical field survey or for seismic processing steps such as statics or imaging.
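The H/V method referred to above takes the ratio of the horizontal to vertical amplitude spectra of ambient noise at each station. A toy sketch of the core computation, using synthetic signals and a naive single-bin DFT (not the processing chain used in the study):

```python
import cmath
import math

def dft_amp(signal, k):
    """Amplitude of the k-th DFT bin (naive DFT; fine for a sketch)."""
    n = len(signal)
    return abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                   for t in range(n)))

def hv_ratio(north, east, vertical, k):
    """H/V amplitude spectral ratio at frequency bin k, combining the
    two horizontal components as a quadratic mean."""
    h = math.sqrt((dft_amp(north, k) ** 2 + dft_amp(east, k) ** 2) / 2)
    return h / dft_amp(vertical, k)

# Synthetic example: horizontals have twice the vertical amplitude at bin 4
n = 256
v = [math.sin(2 * math.pi * 4 * i / n) for i in range(n)]
ns = [2 * x for x in v]
ew = [2 * x for x in v]
print(round(hv_ratio(ns, ew, v, k=4), 2))
```

In practice the spectra are smoothed and averaged over many noise windows, and the frequency of the H/V peak is inverted for sediment thickness.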
NASA Astrophysics Data System (ADS)
Hönes, Katharina; Stangl, Felix; Sift, Michael; Hessling, Martin
2015-07-01
The Ulm University of Applied Sciences is investigating a technique using visible optical radiation (405 nm and 460 nm) to inactivate health-hazardous bacteria in water. A conceivable application could be point-of-use disinfection in developing countries for a safe drinking water supply. Another possible field of application is providing sterile water in medical institutions such as hospitals or dental surgeries, where contaminated pipework or long-term disuse often results in higher germ concentrations. Optical radiation for disinfection presently relies mostly on UV wavelength ranges, while the possibility of bacterial inactivation with visible light has so far been generally disregarded. One advantage of visible light is that, instead of mercury arc lamps, light-emitting diodes can be used, which are commercially available and therefore cost-efficient in the visible spectrum. Furthermore, they have a considerably longer life span than UV-C LEDs and, in contrast to mercury arc lamps, are non-hazardous. Above all, there are specific germs, such as Bacillus subtilis, that are resistant to inactivation at UV-C wavelengths. Owing to the entirely different inactivation mechanism, even higher disinfection rates are reached for them than for Escherichia coli as a standard laboratory germ. At 460 nm, a reduction of three log levels was achieved with Bacillus subtilis and half a log level with Escherichia coli, both at a dose of about 300 J/cm². At the more efficient wavelength of 405 nm, four and a half log levels were reached with Bacillus subtilis and one and a half log levels with Escherichia coli, again both at a dose of about 300 J/cm². In addition, the optical setup employed, which delivered homogeneous illumination and avoids the need for a stirring technique to compensate irregularities, was an important improvement over previously published setups.
Optical simulation in ZEMAX® confirmed that the designed optical element provides a homogeneous distribution with a maximum variation of ±10%.
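The log-level and dose figures quoted above combine in simple ways. A sketch with illustrative numbers (the irradiance and exposure time are hypothetical, not the study's values):

```python
import math

def log_reduction(n0, n):
    """Log10 reduction in viable count: 3.0 means 99.9% inactivation."""
    return math.log10(n0 / n)

def dose(irradiance_mw_cm2, seconds):
    """Radiant dose in J/cm^2 from irradiance (mW/cm^2) and exposure time."""
    return irradiance_mw_cm2 / 1000.0 * seconds

# Hypothetical numbers for illustration only:
print(log_reduction(1e6, 1e3))       # 10^6 -> 10^3 CFU/mL is 3 log levels
print(round(dose(10.0, 30000), 1))   # 10 mW/cm^2 for ~8.3 h gives 300 J/cm^2
```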
Impact of logging on aboveground biomass stocks in lowland rain forest, Papua New Guinea.
Bryan, Jane; Shearman, Phil; Ash, Julian; Kirkpatrick, J B
2010-12-01
Greenhouse-gas emissions resulting from logging are poorly quantified across the tropics. There is a need for robust measurement of rain forest biomass and the impacts of logging from which carbon losses can be reliably estimated at regional and global scales. We used a modified Bitterlich plotless technique to measure aboveground live biomass at six unlogged and six logged rain forest areas (coupes) across two approximately 3000-ha regions at the Makapa concession in lowland Papua New Guinea. "Reduced-impact logging" is practiced at Makapa. We found the mean unlogged aboveground biomass in the two regions to be 192.96 +/- 4.44 Mg/ha and 252.92 +/- 7.00 Mg/ha (mean +/- SE), which was reduced by logging to 146.92 +/- 4.58 Mg/ha and 158.84 +/- 4.16 Mg/ha, respectively. Killed biomass was not a fixed proportion but varied with unlogged biomass: 24% was killed in the lower-biomass region and 37% in the higher-biomass region. Across the two regions logging resulted in a mean aboveground carbon loss of 35 +/- 2.8 Mg/ha. The plotless technique proved efficient at estimating mean aboveground biomass and logging damage. We conclude that substantial bias is likely to occur within biomass estimates derived from single unreplicated plots.
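The killed-biomass percentages and the mean carbon loss follow from simple arithmetic on the regional means reported above (carbon taken as roughly 50% of dry biomass, a conventional assumption):

```python
def killed_fraction(unlogged, logged):
    """Fraction of aboveground biomass killed by logging."""
    return (unlogged - logged) / unlogged

# Regional mean aboveground biomass from the study (Mg/ha)
low = killed_fraction(192.96, 146.92)
high = killed_fraction(252.92, 158.84)

# Mean carbon loss across the two regions, taking carbon as ~50% of biomass
carbon_loss = 0.5 * ((192.96 - 146.92) + (252.92 - 158.84)) / 2
print(round(low * 100), round(high * 100), round(carbon_loss))
```

This reproduces the 24%, 37%, and roughly 35 Mg/ha figures reported in the abstract.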
Interlake production established using quantitative hydrocarbon well-log analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lancaster, J.; Atkinson, A.
1988-07-01
Production was established in a new pay zone of the basal Interlake Formation adjacent to production in Midway field in Williams County, North Dakota. Hydrocarbon saturation, computed from hydrocarbon well-log (mud-log) data, and computed permeability encouraged the operator to run casing and test this zone. Using drilling rig parameters, drilling mud properties, hydrocarbon-show data from the mud log, drilled rock and porosity descriptions, and wireline log porosity, this new technique computes oil saturation (as a percentage of porosity) and permeability to the invading filtrate using the Darcy equation. The Leonardo Fee well was drilled to test the Devonian Duperow, the Silurian upper Interlake, and the Ordovician Red River. The upper two objectives were penetrated downdip from Midway production and there were no hydrocarbon shows. The Red River was determined to be tight, based on sample examination by well-site personnel. The basal Interlake, however, liberated hydrocarbon shows that were analyzed with this new technology. The results of this evaluation accurately predicted that the well would be a commercial success when placed in production. Where geophysical log analysis might be questionable, this new evaluation technique may provide answers about anticipated oil saturation and producibility. The encouraging estimates of hydrocarbon saturation and permeability produced by this technique may be largely responsible for the well being in production today.
Postfire logging: is it beneficial to a forest?
Sally Duncan
2002-01-01
Public debate on postfire logging has intensified in recent years, particularly since passage of the "salvage rider" in 1995, directing accelerated harvest of dead trees in the western United States. Supporters of postfire logging argue that it is part of a suite of restoration techniques, and that removal of timber means reduction of fuels for...
Walsh, David O; Turner, Peter
2014-05-27
Technologies including NMR logging apparatus and methods are disclosed. Example NMR logging apparatus may include surface instrumentation and one or more downhole probes configured to fit within an earth borehole. The surface instrumentation may comprise a power amplifier, which may be coupled to the downhole probes via one or more transmission lines, and a controller configured to cause the power amplifier to generate a NMR activating pulse or sequence of pulses. Impedance matching means may be configured to match an output impedance of the power amplifier through a transmission line to a load impedance of a downhole probe. Methods may include deploying the various elements of disclosed NMR logging apparatus and using the apparatus to perform NMR measurements.
Simulations of Ground Motion in Southern California based upon the Spectral-Element Method
NASA Astrophysics Data System (ADS)
Tromp, J.; Komatitsch, D.; Liu, Q.
2003-12-01
We use the spectral-element method to simulate ground motion generated by recent well-recorded small earthquakes in Southern California. Simulations are performed using a new sedimentary basin model that is constrained by hundreds of petroleum industry well logs and more than twenty thousand kilometers of seismic reflection profiles. The numerical simulations account for 3D variations of seismic wave speeds and density, topography and bathymetry, and attenuation. Simulations for several small recent events demonstrate that the combination of a detailed sedimentary basin model and an accurate numerical technique facilitates the simulation of ground motion at periods of 2 seconds and longer inside the Los Angeles basin and 6 seconds and longer elsewhere. Peak ground displacement, velocity and acceleration maps illustrate that significant amplification occurs in the basin. Centroid-Moment Tensor mechanisms are obtained based upon Pnl and surface waveforms and numerically calculated 3D Frechet derivatives. We use a combination of waveform and waveform-envelope misfit criteria, and facilitate pure double-couple or zero-trace moment-tensor inversions.
A.W. Sump
1947-01-01
A plentiful supply of pine and cedar logs provided the early settlers of this country with a cheap and durable material for the construction of their homes and farm buildings. Only the axe and the ingenuity of the pioneer were needed to erect a shelter against the elements of nature. Early in the 19th century, the circular saw came into use resulting in a change in...
Protecting log cabins from decay
R. M. Rowell; J. M. Black; L. R. Gjovik; W. C. Feist
1977-01-01
This report answers the questions most often asked of the Forest Service on the protection of log cabins from decay, and on practices for the exterior finishing and maintenance of existing cabins. Causes of stain and decay are discussed, as are some basic techniques for building a cabin that will minimize decay. Selection and handling of logs, their preservative...
ERIC Educational Resources Information Center
Kheng, Yeoh Khar
2017-01-01
Purpose: This study is part of a Scholarship of Teaching and Learning (SoTL) grant to examine written reflective learning logs among students studying BPME 3073 Entrepreneurship at UUM. Method: The data collection technique was researcher-directed textual data obtained through reflective learning logs from one hundred forty students. A…
Internal log scanning: Research to reality
Daniel L. Schmoldt
2000-01-01
Improved log breakdown into lumber has been an active research topic since the 1960s. Demonstrated economic gains have driven the search for a cost-effective method to scan logs internally, from which it is assumed one can choose a better breakdown strategy. X-ray computed tomography (CT) has been widely accepted as the most promising internal imaging technique....
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-29
... the respondents, including the use of automated collection techniques or other forms of information...: OMB Control Number: 3060-0360. Title: Section 80.409, Station Logs. Form No.: N/A. Type of Review... for filing suits upon such claims. Section 80.409(d), Ship Radiotelegraph Logs: Logs of ship stations...
Subsurface Formation Evaluation on Mars: Application of Methods from the Oil Patch
NASA Astrophysics Data System (ADS)
Passey, Q. R.
2006-12-01
The ability to drill 10- to 100-meter-deep wellbores on Mars would allow for evaluation of shallow subsurface formations, enabling the extension of current interpretations of the geologic history of the planet; moreover, subsurface access is likely to provide direct evidence to determine whether water or permafrost is present. Methodologies for evaluating sedimentary rocks using drill holes and in situ sample and data acquisition are well developed here on Earth. Existing well log instruments can measure K, Th, and U from natural spectral gamma-ray emission; compressional and shear acoustic velocities; electrical resistivity and dielectric properties; bulk density (Cs-137 or Co-60 source); photoelectric absorption of gamma rays (sensitive to the atomic number); hydrogen index from epithermal and thermal neutron scattering and capture; free hydrogen in water molecules from nuclear magnetic resonance; formation capture cross section; temperature; pressure; and elemental abundances (C, O, Si, Ca, H, Cl, Fe, S, and Gd) using 14 MeV pulsed neutron activation, with more elements possible with supercooled Ge detectors. Additionally, high-resolution wellbore images are possible using a variety of optical, electrical, and acoustic imaging tools. In the oil industry, these downhole measurements are integrated to describe potential hydrocarbon reservoir properties: lithology, mineralogy, porosity, depositional environment, sedimentary and structural dip, sedimentary features, fluid type (oil, gas, or water), and fluid amount (i.e., saturation). In many cases it is possible to determine the organic-carbon content of hydrocarbon source rocks from logs (if the total organic carbon content is 1 wt% or greater), and more accurate instruments could likely be developed.
Since Martian boreholes will likely be drilled without using opaque drilling fluids (as generally used in terrestrial drilling), additional instruments can be used such as high resolution direct downhole imaging and other surface contact measurements (such as IR spectroscopy and x-ray fluorescence). However, such wellbores would require modification of some instruments since conventional drilling fluids often provide the coupling of the instrument sensors to the formation (e.g., sonic velocity and galvanic resistivity measurements). The ability to drill wellbores on Mars opens up new opportunities for exploration but also introduces additional technical challenges. Currently it is not known if all existing terrestrial logging instruments can be miniaturized sufficiently for a shallow Mars wellbore, but the existing well logging techniques and instruments provide a solid framework on which to build a Martian subsurface evaluation program.
NASA Astrophysics Data System (ADS)
Gholizadeh Doonechaly, N.; Rahman, S. S.
2012-05-01
Simulation of naturally fractured reservoirs offers significant challenges due to the lack of a methodology that can fully utilize field data. To date, several methods have been proposed to characterize naturally fractured reservoirs. Among them is the unfolding/folding method, which offers some degree of accuracy in estimating the probability of the existence of fractures in a reservoir. There are also statistical approaches that integrate all levels of field data to simulate the fracture network; this approach, however, depends on the availability of data sources, such as seismic attributes, core descriptions and well logs, which are often difficult to obtain field-wide. In this study a hybrid tectono-stochastic simulation is proposed to characterize a naturally fractured reservoir. A finite element based model is used to simulate the tectonic event of folding and unfolding of a geological structure. A nested neuro-stochastic technique is used to develop the inter-relationships among the data while utilizing a sequential Gaussian approach to analyze field data along with fracture probability data. This approach is able to overcome the commonly experienced discontinuity of data in both horizontal and vertical directions. The hybrid technique is used to generate a discrete fracture network of a specific Australian gas reservoir, Palm Valley in the Northern Territory. The results of this study are of significant benefit in accurately describing fluid flow simulation and well placement for maximal hydrocarbon recovery.
The spectroscopic indistinguishability of red giant branch and red clump stars
NASA Astrophysics Data System (ADS)
Masseron, T.; Hawkins, K.
2017-01-01
Context. Stellar spectroscopy provides useful information on the physical properties of stars such as effective temperature, metallicity and surface gravity. However, those photospheric characteristics are often hampered by systematic uncertainties. The joint spectro-sismo project (APOGEE+Kepler, aka APOKASC) of field red giants has revealed a puzzling offset between the surface gravities (log g) determined spectroscopically and those determined using asteroseismology, which is largely dependent on the stellar evolutionary status. Aims: Therefore, in this letter, we aim to shed light on the spectroscopic source of the offset. Methods: We used the APOKASC sample to analyse the dependencies of the log g discrepancy as a function of stellar mass and stellar evolutionary status. We discuss and study the impact of some neglected abundances on spectral analysis of red giants, such as He and carbon isotopic ratio. Results: We first show that, for stars at the bottom of the red giant branch where the first dredge-up had occurred, the discrepancy between spectroscopic log g and asteroseismic log g depends on stellar mass. This seems to indicate that the log g discrepancy is related to CN cycling. Among the CN-cycled elements, we demonstrate that the carbon isotopic ratio (12C /13C) has the largest impact on stellar spectrum. In parallel, we observe that this log g discrepancy shows a similar trend as the 12C /13C ratios as expected by stellar evolution theory. Although we did not detect a direct spectroscopic signature of 13C, other corroborating evidences suggest that the discrepancy in log g is tightly correlated to the production of 13C in red giants. Moreover, by running the data-driven algorithm (the Cannon) on a synthetic grid trained on the APOGEE data, we try to evaluate more quantitatively the impact of various 12C /13C ratios. 
Conclusions: While we have demonstrated that 13C indeed impacts all parameters, the size of the impact is smaller than the observed offset in log g. If further tests confirm that 13C is not the main element responsible for the log g problem, the number of spectroscopic effects remaining to be investigated is now relatively limited (if any).
29 CFR 1910.266 - Logging operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... employee against contact with a running chain saw. Sharp, calk-soled boots or other slip-resistant type... (C) Each moving element such as, but not limited to blades, buckets, saws and shears, shall be... moving elements such as, but not limited to, blades, buckets, saws and shears, after the machine is shut...
Frederick Boltz; Douglas R. Carter; Thomas P. Holmes; Rodrigo Pereira
2001-01-01
Reduced-impact logging (RIL) techniques are designed to improve the efficiency of timber harvesting while mitigating its adverse effects on the forest ecosystem. Research on RIL in select tropical forest regions has demonstrated clear ecological benefits relative to conventional logging (CL) practices, while evidence for the financial competitiveness of RIL is less conclusive. We...
Boat-Wave-Induced Bank Erosion on the Kenai River, Alaska
2008-03-01
Common stabilization techniques consist of root wads, spruce tree revetments, coir logs, and riprap. [Figure captions: Figure 50, Type 1 bank with coir log habitat restoration; Figure 51, Type 1 bank with willow plantings/ladder access habitat restoration (ERDC TR-08-5, p. 75).]
Break-even zones for cable yarding by log size
Chris B. LeDoux
1984-01-01
The use of cable logging to extract small pieces of residue wood may result in low rates of production and a high cost per unit of wood produced. However, the logging manager can improve yarding productivity and break-even in cable residue removal operations by using the proper planning techniques. In this study, break-even zones for specific young-growth stands were...
Discrete event simulation tool for analysis of qualitative models of continuous processing systems
NASA Technical Reports Server (NTRS)
Malin, Jane T. (Inventor); Basham, Bryan D. (Inventor); Harris, Richard A. (Inventor)
1990-01-01
An artificial intelligence design and qualitative modeling tool is disclosed for creating computer models and simulating continuous activities, functions, and/or behavior using developed discrete event techniques. Conveniently, the tool is organized in four modules: a library design module, a model construction module, a simulation module, and an experimentation and analysis module. The library design module supports the building of library knowledge, including component classes and elements pertinent to a particular domain of continuous activities, functions, and behavior being modeled. The continuous behavior is defined discretely with respect to invocation statements, effect statements, and time delays. The functionality of the components is defined in terms of variable cluster instances, independent processes, and modes, further defined in terms of mode transition processes and mode dependent processes. Model construction utilizes the hierarchy of libraries and connects them with appropriate relations. The simulation executes a specialized initialization routine and executes events in a manner that includes selective inherency of characteristics through a time and event schema until the event queue in the simulator is emptied. The experimentation and analysis module supports analysis through the generation of appropriate log files and graphics developments, and includes the ability to compare log files.
NASA Astrophysics Data System (ADS)
Pathak, P. N.; Mohapatra, M.; Godbole, S. V.
2013-11-01
The UREX process has been proposed for the selective extraction of U(VI) and Tc(VII) from nitric acid medium (∼1 M HNO3) using tri-n-butyl phosphate (TBP) as the extractant, while retaining Pu, Np and fission products in the aqueous phase. The feasibility of using luminescence spectroscopy as a technique to understand the complexation of trivalent f-element cations, viz. Eu(III) and Tb(III), with acetohydroxamic acid (AHA) in nitric acid medium has been examined. The luminescence lifetime for the 1 × 10-3 M Eu(III)-AHA complex system decreased with increasing AHA concentration, from 116 ± 0.2 μs (no AHA) to 1.6 ± 0.1 μs (0.1 M AHA), which was attributed to dynamic quenching. The corrected fluorescence intensities were used to calculate the stability constant (log K) for the formation of the 1:1 Eu3+-AHA complex as 1.42 ± 0.64 under the conditions of this study. By contrast, the Tb(III)-AHA system at pH 3 (HNO3) did not show any significant variation in the lifetime of the excited state (364 ± 9 μs), suggesting the absence of dynamic quenching. The spectral changes in the Tb(III)-AHA system showed the formation of a 1:1 complex (log K: 1.72 ± 0.21). These studies suggest that the extent of AHA complexation with the rare earth elements will be insignificant compared to that of the tetravalent metal ions Pu(IV) and Np(IV) under UREX process conditions.
WE-A-BRE-01: Debate: To Measure or Not to Measure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moran, J; Miften, M; Mihailidis, D
2014-06-15
Recent studies have highlighted some of the limitations of patient-specific pre-treatment IMRT QA measurements with respect to assessing plan deliverability. Pre-treatment QA measurements are frequently performed with detectors in phantoms that do not involve any patient heterogeneities, or with an EPID without a phantom. Other techniques have been developed where measurement results are used to recalculate the patient-specific dose volume histograms. Measurements continue to play a fundamental role in understanding the initial and continued performance of treatment planning and delivery systems. Less attention has been focused on the role of computational techniques in a QA program, such as calculation with independent dose calculation algorithms or recalculation of the delivery with machine log files or EPID measurements. This session will explore the role of pre-treatment measurements compared to other methods such as computational and transit dosimetry techniques. The efficiency and practicality of the two approaches will also be presented and debated. The speakers will present a history of IMRT quality assurance and debate each other regarding which types of techniques are needed today and for future quality assurance. Examples will be shared of situations where overall quality needed to be assessed with calculation techniques in addition to measurements. Elements where measurements continue to be crucial, such as a thorough end-to-end test, will be discussed. Operational details that can reduce the gamma tool's effectiveness and accuracy for patient-specific pre-treatment IMRT/VMAT QA will be described. Finally, a vision for the future of IMRT and VMAT plan QA will be discussed from a safety perspective.
Learning Objectives: Understand the advantages and limitations of measurement and calculation approaches for pre-treatment measurements for IMRT and VMAT planning. Learn about the elements of a balanced quality assurance program involving modulated techniques. Learn how to use tools and techniques such as an end-to-end test to enhance your IMRT and VMAT QA program.
Furuhama, A; Hasunuma, K; Aoki, Y
2015-01-01
In addition to molecular structure profiles, descriptors based on physicochemical properties are useful for explaining the eco-toxicities of chemicals. In a previous study we reported that a criterion based on the difference between the partition coefficient (log POW) and distribution coefficient (log D) values of chemicals enabled us to identify aromatic amines and phenols for which interspecies relationships with strong correlations could be developed for fish-daphnid and algal-daphnid toxicities. The chemicals that met the log D-based criterion were expected to have similar toxicity mechanisms (related to membrane penetration). Here, we investigated the applicability of log D-based criteria to the eco-toxicity of other kinds of chemicals, including aliphatic compounds. At pH 10, use of a log POW - log D > 0 criterion and omission of outliers resulted in the selection of more than 100 chemicals whose acute fish toxicities or algal growth inhibition toxicities were almost equal to their acute daphnid toxicities. The advantage of log D-based criteria is that they allow for simple, rapid screening and prioritizing of chemicals. However, inorganic molecules and chemicals containing certain structural elements cannot be evaluated, because calculated log D values are unavailable.
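The criterion hinges on the gap between log POW (partitioning of the neutral species) and log D (the pH-dependent distribution coefficient). The abstract does not state how the authors' log D values were calculated, so the sketch below assumes the textbook relation for a monoprotic acid, log D = log POW - log10(1 + 10^(pH - pKa)), with illustrative input values:

```python
import math

def log_d_monoprotic_acid(log_pow: float, pka: float, ph: float) -> float:
    """log D at a given pH, assuming only the neutral acid form partitions:
    log D = log POW - log10(1 + 10**(pH - pKa))."""
    return log_pow - math.log10(1.0 + 10.0 ** (ph - pka))

def meets_criterion(log_pow: float, pka: float, ph: float = 10.0) -> bool:
    """Screening criterion from the abstract: log POW - log D > 0,
    i.e. the chemical is appreciably ionized at the chosen pH."""
    return (log_pow - log_d_monoprotic_acid(log_pow, pka, ph)) > 1e-9

# Phenol-like acid (illustrative values): half-ionized at pH = pKa = 10,
# so log POW - log D = log10(2) > 0 and the criterion is met.
print(meets_criterion(log_pow=1.46, pka=10.0))
```

A compound with no ionizable group (effectively pKa far above the working pH) has log D equal to log POW and is excluded by the criterion, which matches the abstract's note that such chemicals cannot be evaluated this way.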
Proposed standard-weight (W(s)) equations for kokanee, golden trout and bull trout
Hyatt, M.H.; Hubert, W.A.
2000-01-01
We developed standard-weight (W(s)) equations for kokanee (lacustrine Oncorhynchus nerka), golden trout (O. aguabonita), and bull trout (Salvelinus confluentus) using the regression-line-percentile technique. The W(s) equation for kokanee of 120-550 mm TL is log10 W(s) = -5.062 + 3.033 log10 TL, when W(s) is in grams and TL is total length in millimeters; the English-unit equivalent is log10 W(s) = -3.458 + 3.033 log10 TL, when W(s) is in pounds and TL is total length in inches. The W(s) equation for golden trout of 120-530 mm TL is log10 W(s) = -5.088 + 3.041 log10 TL, with the English-unit equivalent being log10 W(s) = -3.473 + 3.041 log10 TL. The W(s) equation for bull trout of 120-850 mm TL is log10 W(s) = -5.327 + 3.115 log10 TL, with the English-unit equivalent being log10 W(s) = -3.608 + 3.115 log10 TL.
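The three metric-unit equations above share the form log10 Ws = a + b·log10 TL, so they invert directly to Ws = 10^(a + b·log10 TL). A minimal sketch (coefficients copied from the abstract; the function and dictionary names are our own):

```python
import math

# Metric-unit coefficients (a, b) from the abstract: log10 Ws = a + b*log10 TL,
# with Ws in grams and TL (total length) in millimeters.
COEFFS = {
    "kokanee":      (-5.062, 3.033),  # valid for 120-550 mm TL
    "golden trout": (-5.088, 3.041),  # valid for 120-530 mm TL
    "bull trout":   (-5.327, 3.115),  # valid for 120-850 mm TL
}

def standard_weight(species: str, tl_mm: float) -> float:
    """Standard weight Ws (grams) for a given total length (mm)."""
    a, b = COEFFS[species]
    return 10.0 ** (a + b * math.log10(tl_mm))
```

For example, a 300 mm kokanee has Ws = 10^(-5.062 + 3.033·log10 300) ≈ 283 g.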
SpaceOps 2012 Plus 2: Social Tools to Simplify ISS Flight Control Communications and Log Keeping
NASA Technical Reports Server (NTRS)
Cowart, Hugh S.; Scott, David W.
2014-01-01
A paper written for the SpaceOps 2012 Conference (Simplify ISS Flight Control Communications and Log Keeping via Social Tools and Techniques) identified three innovative concepts for real-time flight control communications tools based on social mechanisms: a) Console Log Tool (CoLT), a log-keeping application at Marshall Space Flight Center's (MSFC) Payload Operations Integration Center (POIC) that provides "anywhere" access, comment, and notification features similar to those found in Social Networking Systems (SNS); b) Cross-Log Communication via Social Techniques, a concept from Johnson Space Center's (JSC) Mission Control Center Houston (MCC-H) that would use microblogging's @tag and #tag protocols to make information/requests visible and/or discoverable in logs owned by @Destination addressees; and c) Communications Dashboard (CommDash), an MSFC concept for a Facebook-like interface to visually integrate and manage basic console log content, text chat streams analogous to voice loops, text chat streams dedicated to particular conversations, generic and position-specific status displays/streams, and a graphically based hailing display. CoLT was deployed operationally at nearly the same time as SpaceOps 2012, the Cross-Log Communications idea is currently waiting for a champion to carry it forward, and CommDash was approved as a NASA Information Technology (IT) Labs project. This paper discusses lessons learned from two years of actual CoLT operations, updates CommDash prototype development status, discusses the potential for using Cross-Log Communications in MCC-H and/or POIC environments, and considers other ways of synergizing console applications.
Laser optogalvanic spectroscopy of molecules
NASA Technical Reports Server (NTRS)
Webster, C. R.; Rettner, C. T.
1983-01-01
In laser optogalvanic (LOG) spectroscopy, a tunable laser is used to probe the spectral characteristics of atomic or molecular species within an electrical discharge in a low pressure gas. Optogalvanic signals arise when the impedance of the discharge changes in response to the absorption of laser radiation. The technique may, therefore, be referred to as impedance spectroscopy. This change in impedance may be monitored as a change in the voltage across the discharge tube. LOG spectra are recorded by scanning the wavelength of a chopped CW dye laser while monitoring the discharge voltage with a lock-in amplifier. LOG signals are obtained if the laser wavelength matches a transition in a species present in the discharge (or flame), and if the absorption of energy in the laser beam alters the impedance of the discharge. Infrared LOG spectroscopy of molecules has been demonstrated and may prove to be the most productive application in the field of optogalvanic techniques.
Mass Storage Performance Information System
NASA Technical Reports Server (NTRS)
Scheuermann, Peter
2000-01-01
The purpose of this task is to develop a data warehouse to enable system administrators and their managers to gather information by querying the data logs of the MDSDS. Currently detailed logs capture the activity of the MDSDS internal to the different systems. The elements to be included in the data warehouse are requirements analysis, data cleansing, database design, database population, hardware/software acquisition, data transformation, query and report generation, and data mining.
Jang, Jae-Wook; Yun, Jaesung; Mohaisen, Aziz; Woo, Jiyoung; Kim, Huy Kang
2016-01-01
Mass-market mobile security threats have increased recently due to the growth of mobile technologies and the popularity of mobile devices. Accordingly, techniques have been introduced for identifying, classifying, and defending against mobile threats utilizing static, dynamic, on-device, and off-device techniques. Static techniques are easy to evade, while dynamic techniques are expensive; on-device techniques are prone to evasion, while off-device techniques require an always-on connection. To address some of those shortcomings, we introduce Andro-profiler, a hybrid behavior-based analysis and classification system for mobile malware. Andro-profiler's main goals are efficiency, scalability, and accuracy. To that end, Andro-profiler classifies malware by exploiting behavior profiles extracted from integrated system logs, including system calls. Andro-profiler executes a malicious application on an emulator in order to generate the integrated system logs, and creates human-readable behavior profiles by analyzing them. By comparing the behavior profile of a malicious application with a representative behavior profile for each malware family using a weighted similarity matching technique, Andro-profiler detects the application and classifies it into a malware family. The experimental results demonstrate that Andro-profiler is scalable, performs well in detecting and classifying malware with accuracy greater than 98%, outperforms the existing state-of-the-art work, and is capable of identifying 0-day mobile malware samples.
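The abstract does not specify which weighted similarity metric Andro-profiler uses, so the sketch below only illustrates the general idea with a weighted cosine similarity over behavior-feature counts; the function names, feature names, and threshold are all assumptions, not the paper's implementation:

```python
import math

def weighted_similarity(profile: dict, family_rep: dict, weights: dict) -> float:
    """Weighted cosine similarity between two behavior profiles,
    each a dict mapping a behavior feature to its observed count.
    Features missing from `weights` default to weight 1.0."""
    keys = set(profile) | set(family_rep)
    dot = sum(weights.get(k, 1.0) * profile.get(k, 0) * family_rep.get(k, 0)
              for k in keys)
    na = math.sqrt(sum(weights.get(k, 1.0) * profile.get(k, 0) ** 2 for k in keys))
    nb = math.sqrt(sum(weights.get(k, 1.0) * family_rep.get(k, 0) ** 2 for k in keys))
    return dot / (na * nb) if na and nb else 0.0

def classify(profile: dict, families: dict, weights: dict,
             threshold: float = 0.5) -> str:
    """Assign the profile to the most similar family representative,
    or report it as unmatched if no family clears the threshold."""
    best, score = max(((name, weighted_similarity(profile, rep, weights))
                       for name, rep in families.items()),
                      key=lambda pair: pair[1])
    return best if score >= threshold else "unmatched"
```

A profile compared against itself scores 1.0, and disjoint profiles score 0.0, so the threshold trades off detection rate against misclassification in the obvious way.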
Design and Evaluation of Log-To-Dimension Manufacturing Systems Using System Simulation
Wenjie Lin; D. Earl Kline; Philip A. Araman; Janice K. Wiedenbeck
1995-01-01
In a recent study of alternative dimension manufacturing systems that produce green hardwood dimension directly from logs, it was observed that for Grade 2 and 3 red oak logs, up to 78 and 76 percent, respectively, of the log scale volume could be converted into clear dimension parts. The potential high yields suggest that this processing system can be a promising technique for...
Marco W Lentini; Johan C Zweede; Thomas P Holmes
2010-01-01
Sound forest management practices have been seen as a promising strategy for reconciling forest conservation with rural economic development in Amazônia. However, the implementation of Reduced Impact Logging (RIL) techniques in the field has been incipient, while most Amazonian timber production is generated through predatory and illegal logging. Despite several...
Permeability Estimation Directly From Logging-While-Drilling Induced Polarization Data
NASA Astrophysics Data System (ADS)
Fiandaca, G.; Maurya, P. K.; Balbarini, N.; Hördt, A.; Christiansen, A. V.; Foged, N.; Bjerg, P. L.; Auken, E.
2018-04-01
In this study, we present the prediction of permeability from time domain spectral induced polarization (IP) data, measured in boreholes on undisturbed formations using the El-log logging-while-drilling technique. We collected El-log data and hydraulic properties on unconsolidated Quaternary and Miocene deposits in boreholes at three locations at a field site in Denmark, characterized by different electrical water conductivity and chemistry. The high vertical resolution of the El-log technique matches the lithological variability at the site, minimizing ambiguity in the interpretation originating from resolution issues. The permeability values were computed from IP data using a laboratory-derived empirical relationship presented in a recent study for saturated unconsolidated sediments, without any further calibration. A very good correlation, within 1 order of magnitude, was found between the IP-derived permeability estimates and those derived using grain size analyses and slug tests, with similar depth trends and permeability contrasts. Furthermore, the effect of water conductivity on the IP-derived permeability estimations was found negligible in comparison to the permeability uncertainties estimated from the inversion and the laboratory-derived empirical relationship.
Well Monitoring System For EGS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Normann, Randy; Glowka, Dave; Normann, Charles
This grant is a collection of projects designed to move aircraft high-temperature electronics technology into the geothermal industry. Randy Normann is the lead. He licensed the HT83SNL00 chip from Sandia National Labs. This chip enables aircraft-developed electronics to work within a geothermal well logging tool. However, additional elements are needed to achieve commercially successful logging tools. These elements are offered by a strong list of industrial partners on this grant: Electrochemical Systems Inc. for high-temperature rechargeable batteries, Frequency Management Systems for a 300 °C digital clock, Sandia National Labs for expertise in high-temperature solder, and Honeywell Solid-State Electronics Center for reprogrammable high-temperature memory. During the course of this project, MagiQ Technologies was added for high-temperature fiber optics.
Application of work sampling technique to analyze logging operations.
Edwin S. Miyata; Helmuth M. Steinhilb; Sharon A. Winsauer
1981-01-01
Discusses the advantages and disadvantages of various time study methods for determining efficiency and productivity in logging. The work sampling method is compared with the continuous time-study method. Gives the feasibility, capability, and limitation of the work sampling method.
Robert J. Ross; Susan W. Willits; William Von Segen; Terry Black; Brian K. Brashaw; Roy F. Pellerin
1999-01-01
Longitudinal stress wave nondestructive evaluation (NDE) techniques have been used in a variety of applications in the forest products industry. Recently, it has been shown that they can significantly aid in the assessment of log quality, particularly when they are used to predict performance of structural lumber obtained from a log. The purpose of the research...
Effect of HF Heating Array Directivity Pattern on the Frequency Response of Generated ELF/VLF.
1983-01-01
[List-of-figures fragment: radiators; HF heating array; HF heating array element; view of top elements looking down at pyramid; non-planar log-periodic antenna semi-structure dimensions; power gain vs. ...; orientation of 4- and 8-element arrays; comparison of experimental and theoretical patterns; directive...]
Wegmann, Markus; Michen, Benjamin; Luxbacher, Thomas; Fritsch, Johannes; Graule, Thomas
2008-03-01
The purpose of this study was to test the feasibility of modifying commercial microporous ceramic bacteria filters to promote adsorption of viruses. The internal surface of the filter medium was coated with ZrO2 nanopowder via dip-coating and heat treatment in order to impart a filter surface charge opposite to that of the target viruses. Streaming potential measurements revealed a shift in the isoelectric point from pH <3 to between pH 5.5 and 9. While the base filter elements generally exhibited only 75% retention with respect to MS2 bacteriophages, the modified elements achieved a 7-log removal (99.99999%) of these virus-like particles. The coating process also increased the specific surface area of the filters from approximately 2 m2/g to between 12.5 and 25.5 m2/g, thereby also potentially increasing their adsorption capacity. The results demonstrate that, given more development effort, the chosen manufacturing process has the potential to yield effective virus filters with throughputs superior to those of current virus filtration techniques.
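The "7-log removal (99.99999%)" figure above follows from the standard log reduction value, LRV = log10(C_in/C_out). A small sketch of the conversion (function names are ours, not from the paper):

```python
import math

def log_removal_value(c_in: float, c_out: float) -> float:
    """LRV = log10(C_in / C_out) for influent/effluent particle
    concentrations or counts."""
    return math.log10(c_in / c_out)

def percent_removal(lrv: float) -> float:
    """Percentage removal corresponding to a given LRV."""
    return 100.0 * (1.0 - 10.0 ** (-lrv))
```

On this scale, the base filters' 75% retention corresponds to an LRV of only log10(1/0.25) ≈ 0.6, which makes the jump to a 7-log removal after coating the headline result.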
NASA Astrophysics Data System (ADS)
Schneiderwind, S.; Mason, J.; Wiatr, T.; Papanikolaou, I.; Reicherter, K.
2015-09-01
Two normal faults on the island of Crete and mainland Greece were studied to create and test an innovative workflow to make palaeoseismic trench logging more objective, and to visualise the sedimentary architecture within the trench wall in 3-D. This is achieved by combining classical palaeoseismic trenching techniques with multispectral approaches. A conventional trench log was first compared to results of iso-cluster analysis of a true colour photomosaic representing the spectrum of visible light. Passive data collection disadvantages (e.g. illumination) were addressed by complementing the dataset with an active near-infrared backscatter image from t-LiDAR measurements. The multispectral analysis shows that distinct layers can be identified, and it compares well with the conventional trench log. Accordingly, adjacent stratigraphic units could be distinguished by their particular multispectral composition signatures. Based on the trench log, a 3-D interpretation of GPR data collected on the vertical trench wall was then possible. This is highly beneficial for measuring representative layer thicknesses, displacements and geometries at depth within the trench wall; thus, misinterpretation due to cutting effects is minimised. Sedimentary feature geometries related to earthquake magnitude can be used to improve the accuracy of seismic hazard assessments. This manuscript therefore combines multiparametric approaches and shows: (i) how a 3-D visualisation of palaeoseismic trench stratigraphy and logging can be accomplished by combining t-LiDAR and GPR techniques, and (ii) how a multispectral digital analysis can offer additional advantages and a higher objectivity in the interpretation of palaeoseismic and stratigraphic information. The multispectral datasets are stored, allowing unbiased input for future (re-)investigations.
Bhattacharya, S.; Doveton, J.H.; Carr, T.R.; Guy, W.R.; Gerlach, P.M.
2005-01-01
Small independent operators produce most of the Mississippian carbonate fields in the United States mid-continent, where a lack of integrated characterization studies precludes maximization of hydrocarbon recovery. This study uses integrative techniques to leverage extant data in an Osagian and Meramecian (Mississippian) cherty carbonate reservoir in Kansas. Available data include petrophysical logs of varying vintages, a limited number of cores, and production histories from each well. A consistent set of assumptions was used to extract well-level porosity and initial saturations from logs of different types and vintages to build a geomodel. Lacking regularly recorded well shut-in pressures, an iterative technique based on material balance formulations was used to estimate an average reservoir-pressure decline that matched available drillstem test data and validated the log-analysis assumptions. Core plugs representing the principal reservoir petrofacies provide critical inputs for characterization and simulation studies. However, assigning plugs among multiple reservoir petrofacies is difficult in complex (carbonate) reservoirs. In a bottom-up approach, raw capillary pressure (Pc) data were plotted on the Super-Pickett plot, and log- and core-derived saturation-height distributions were reconciled to group plugs by facies, to identify core plugs representative of the principal reservoir facies, and to discriminate facies in the logged interval. Pc data from representative core plugs were used for effective pay evaluation to estimate water cut from completions in infill and producing wells, and to guide selective perforations for economic exploitation of mature fields. The results from this study were used to drill 22 infill wells. The techniques demonstrated here can be applied in other fields and reservoirs. Copyright © 2005. The American Association of Petroleum Geologists. All rights reserved.
NASA Astrophysics Data System (ADS)
Maity, Debotyam
This study is aimed at an improved understanding of unconventional reservoirs which include tight reservoirs (such as shale oil and gas plays), geothermal developments, etc. We provide a framework for improved fracture zone identification and mapping of the subsurface for a geothermal system by integrating data from different sources. The proposed ideas and methods were tested primarily on data obtained from North Brawley geothermal field and the Geysers geothermal field apart from synthetic datasets which were used to test new algorithms before actual application on the real datasets. The study has resulted in novel or improved algorithms for use at specific stages of data acquisition and analysis including improved phase detection technique for passive seismic (and teleseismic) data as well as optimization of passive seismic surveys for best possible processing results. The proposed workflow makes use of novel integration methods as a means of making best use of the available geophysical data for fracture characterization. The methodology incorporates soft computing tools such as hybrid neural networks (neuro-evolutionary algorithms) as well as geostatistical simulation techniques to improve the property estimates as well as overall characterization efficacy. The basic elements of the proposed characterization workflow involves using seismic and microseismic data to characterize structural and geomechanical features within the subsurface. We use passive seismic data to model geomechanical properties which are combined with other properties evaluated from seismic and well logs to derive both qualitative and quantitative fracture zone identifiers. The study has resulted in a broad framework highlighting a new technique for utilizing geophysical data (seismic and microseismic) for unconventional reservoir characterization. 
It provides an opportunity to optimally develop the resources in question by incorporating data from different sources and using their temporal and spatial variability as a means to better understand the reservoir behavior. As part of this study, we have developed the following elements, which are discussed in the subsequent chapters:
1. An integrated characterization framework for unconventional settings with adaptable workflows for all stages of data processing, interpretation and analysis.
2. A novel autopicking workflow for noisy passive seismic data, used for improved accuracy in event picking as well as for improved velocity model building.
3. An improved passive seismic survey design optimization framework for better data collection and improved property estimation.
4. Extensive post-stack seismic attribute studies incorporating robust schemes applicable in complex reservoir settings.
5. Uncertainty quantification and analysis to better quantify property estimates over and above the qualitative interpretations made, and to validate observations independently with quantified uncertainties to prevent erroneous interpretations.
6. Property mapping from microseismic data, including stress and anisotropic weakness estimates, for integrated reservoir characterization and analysis.
7. Integration of results (seismic, microseismic and well logs) from analysis of individual data sets for integrated interpretation using a predefined integration framework and soft computing tools.
METHOD AND APPARATUS FOR TESTING THE PRESENCE OF SPECIFIC ATOMIC ELEMENTS IN A SUBSTANCE
Putman, J.L.
1960-01-26
Detection of specific atomic elements in a substance, and particularly the applicability to well logging, is discussed. The principal novelty resides in the determination of several of the auxiliary energy peaks, in addition to the main energy peak, of the gamma-ray energy spectrum of a substance, and comparison of such peaks to the spectrum of the specific atomic element being tested for, thus resulting in identification of that element. The invention facilitates the identification of specific elements even in the presence of other elements having similar gamma energy spectra as to their main energy peaks.
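The matching idea, scoring each candidate element by how many of its reference peaks (main plus auxiliary) appear in the measured spectrum, can be sketched as follows; the tolerance and the peak-energy table are illustrative placeholders, not data from the patent:

```python
def match_element(observed_peaks, reference_spectra, tol=0.05):
    """Score each candidate element by how many of its reference peaks
    (main + auxiliary, energies in MeV) appear in the observed spectrum
    within +/- tol, and return the best-matching element."""
    def hits(ref_peaks):
        return sum(any(abs(obs - ref) <= tol for obs in observed_peaks)
                   for ref in ref_peaks)
    return max(reference_spectra, key=lambda el: hits(reference_spectra[el]))

# Illustrative reference table (peak energies are placeholders)
REFERENCE = {
    "H":  [2.22],
    "Si": [3.54, 4.93],
    "Fe": [7.63, 7.65, 6.02],
}
```

Using auxiliary peaks as well as the main peak is what disambiguates elements whose main peaks nearly coincide, which is the point the patent abstract makes.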
Redox state of the Archean mantle: Evidence from V partitioning in 3.5-2.4 Ga komatiites
NASA Astrophysics Data System (ADS)
Nicklas, Robert W.; Puchtel, Igor S.; Ash, Richard D.
2018-02-01
Oxygen fugacity of the mantle is a crucial thermodynamic parameter that controls such fundamental processes as planetary differentiation, mantle melting, and possible core-mantle exchange. Constraining the evolution of the redox state of the mantle is of paramount importance for understanding the chemical evolution of major terrestrial reservoirs, including the core, mantle, and atmosphere. In order to evaluate the secular evolution of the redox state of the mantle, oxygen fugacities of six komatiite systems, ranging in age from 3.48 to 2.41 Ga, were determined using high-precision partitioning data of the redox-sensitive element vanadium between liquidus olivine, chromite and komatiitic melt. The calculated oxygen fugacities range from -0.11 ± 0.30 ΔFMQ log units in the 3.48 Ga Komati system to +0.43 ± 0.26 ΔFMQ log units in the 2.41 Ga Vetreny system. Although there is a slight hint in the data for an increase in the oxygen fugacity of the mantle between 3.48 and 2.41 Ga, these values generally overlap within their respective uncertainties; they are also largely within the range of oxygen fugacity estimates for modern MORB lavas of +0.60 ± 0.30 ΔFMQ log units that we obtained using the same technique. Our results are consistent with the previous findings that argued for little change in the mantle oxygen fugacity since the early Archean and indicate that the mantle had reached its nearly-present day redox state by at least 3.48 Ga.
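The ΔFMQ values above express log fO2 relative to the fayalite-magnetite-quartz buffer at the same temperature and pressure. A minimal sketch of that conversion, assuming the commonly cited Frost (1991) buffer parameterization (T in kelvin, P in bars), which is our choice and not necessarily the calibration the authors used:

```python
def log_fo2_fmq(t_kelvin: float, p_bar: float = 1.0) -> float:
    """log fO2 of the FMQ buffer (Frost, 1991 parameterization; assumed here)."""
    return -25096.3 / t_kelvin + 8.735 + 0.110 * (p_bar - 1.0) / t_kelvin

def delta_fmq(log_fo2_sample: float, t_kelvin: float, p_bar: float = 1.0) -> float:
    """Oxygen fugacity in log units relative to FMQ at the same T and P."""
    return log_fo2_sample - log_fo2_fmq(t_kelvin, p_bar)
```

For example, at 1473 K and 1 bar the buffer sits near log fO2 ≈ -8.3, so a melt recording log fO2 = -7.9 there would plot at about ΔFMQ +0.4, comparable to the Vetreny value quoted above.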
A study on directional resistivity logging-while-drilling based on self-adaptive hp-FEM
NASA Astrophysics Data System (ADS)
Liu, Dejun; Li, Hui; Zhang, Yingying; Zhu, Gengxue; Ai, Qinghui
2014-12-01
Numerical simulation of resistivity logging-while-drilling (LWD) tool response provides guidance for designing novel logging instruments and interpreting real-time logging data. In this paper, based on a self-adaptive hp-finite element method (hp-FEM) algorithm, we analyze LWD tool response against model parameters and briefly illustrate the geosteering capabilities of directional resistivity LWD. Numerical simulation results indicate that changing the source spacing has an obvious influence on the investigation depth and detection precision of the resistivity LWD tool, and that changing the frequency can improve the resolution between low-resistivity and high-resistivity formations. The simulation results also indicate that the self-adaptive hp-FEM algorithm has good convergence speed and calculation accuracy, and that it is suitable for simulating the response of resistivity LWD tools to guide geosteering.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiong, Yongliang; Thakur, Punam; Borkowski, Marian
2015-07-30
The dissociation constants of oxalic acid (Ox) and the stability constants of Am³⁺, Cm³⁺, and Eu³⁺ with Ox²⁻ have been determined at 25 °C over a concentration range of 0.1 to 6.60 m NaClO₄, using potentiometric titration and extraction techniques, respectively. The experimental data support the formation of complexes M(Ox)ₙ³⁻²ⁿ, where M = Am³⁺, Cm³⁺, or Eu³⁺ and n = 1 or 2. The dissociation-constant and stability-constant values measured as a function of NaClO₄ concentration were used to estimate the Pitzer parameters for the respective interactions of Am³⁺, Cm³⁺, and Eu³⁺ with Ox. Furthermore, the stability-constant data for Am³⁺-Ox measured in NaClO₄ and in NaCl solutions from the literature were fitted simultaneously to refine the existing actinide-oxalate complexation model, which can be used universally in the safety assessment of radioactive waste disposal. The thermodynamic stability constants log β⁰₁₀₁ = 6.30 ± 0.06 and log β⁰₁₀₂ = 10.84 ± 0.06 for Am³⁺ were obtained by simultaneously fitting data in NaCl and NaClO₄ media. Additionally, log β⁰₁₀₁ = 6.72 ± 0.08 and log β⁰₁₀₂ = 11.05 ± 0.09 for Cm³⁺, and log β⁰₁₀₁ = 6.67 ± 0.08 and log β⁰₁₀₂ = 11.15 ± 0.09 for Eu³⁺, were calculated by extrapolation of the data to zero ionic strength in NaClO₄ medium only. For all stability constants, the Pitzer model gives an excellent representation of the data using the interaction parameters β⁽⁰⁾, β⁽¹⁾, and CΦ determined in this work. The thermodynamic model developed in this work will be useful for accurately modeling the potential solubility of trivalent actinides and early lanthanides to an ionic strength of 6.60 m in low-temperature environments in the presence of Ox. The work is also applicable to accurately modeling the transport of rare earth elements in various environments under surface conditions.
Effectiveness of streambank-stabilization techniques along the Kenai River, Alaska
Dorava, Joseph M.
1999-01-01
The Kenai River in south-central Alaska is the State's most popular sport fishery and an economically important salmon river that generates as much as $70 million annually. Boatwake-induced streambank erosion and the associated damage to riparian and riverine habitat present a potential threat to this fishery. Bank-stabilization techniques commonly in use along the Kenai River were selected for evaluation of their effectiveness at attenuating boatwakes and retarding streambank erosion. Spruce trees cabled to the bank and biodegradable man-made logs (called 'bio-logs') pinned to the bank were tested because they are commonly used techniques along the river. These two techniques were compared for their ability to reduce wake heights that strike the bank and to reduce erosion of bank material, as well as for the amount and quality of habitat they provide for juvenile Chinook salmon. Additionally, an engineered bank-stabilization project was evaluated because this method of bank protection is being encouraged by managers of the river. During a test that included 20 controlled boat passes, the spruce trees and the bio-log provided a similar reduction in boatwake height and bank erosion; however, the spruce trees provided a greater amount of protective habitat than the bio-log. The engineered bank-stabilization project eroded less during nine boat passes and provided more protective cover than the adjacent unprotected natural bank. Features of the bank-stabilization techniques, such as tree limbs and willow plantings that extended into the water from the bank, attenuated the boatwakes, which helped reduce erosion. These features also provided protective cover to juvenile salmon.
NASA Technical Reports Server (NTRS)
Fijany, Amir
1993-01-01
In this paper, parallel O(log N) algorithms are developed for the dynamic simulation of a single closed-chain rigid multibody system, specialized to the case of a robot manipulator in contact with the environment.
NASA Astrophysics Data System (ADS)
Asoodeh, Mojtaba; Bagheripour, Parisa; Gholami, Amin
2015-06-01
Free-fluid porosity and rock permeability, undoubtedly the most critical parameters of a hydrocarbon reservoir, can be obtained by processing the nuclear magnetic resonance (NMR) log. Unlike conventional well logs (CWLs), however, NMR logging is very expensive and time-consuming. Therefore, the idea of synthesizing the NMR log from CWLs holds great appeal for reservoir engineers. For this purpose, three optimization strategies are followed. First, an artificial neural network (ANN) is optimized by virtue of a hybrid genetic algorithm-pattern search (GA-PS) technique; then fuzzy logic (FL) is optimized by means of GA-PS; and finally an alternating conditional expectations (ACE) model is constructed, using the concept of a committee machine, to combine the outputs of the optimized and non-optimized FL and ANN models. Results indicated that optimization of the traditional ANN and FL models using the GA-PS technique significantly enhances their performance. Furthermore, the ACE committee of the aforementioned models produces more accurate and reliable results than any single model performing alone.
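The committee-machine step can be illustrated with a minimal sketch. The paper's combiner is an ACE model; here it is replaced, purely for illustration, by a simple inverse-error weighting of the member models (the function names and all numbers are assumptions, not from the paper):

```python
def inverse_error_weights(mse_per_model):
    """Weight each committee member by the inverse of its mean-squared
    validation error, normalized so the weights sum to 1."""
    inv = [1.0 / mse for mse in mse_per_model]
    total = sum(inv)
    return [w / total for w in inv]

def committee_predict(predictions, weights):
    """Committee output: weighted combination of the member predictions."""
    return sum(w * p for w, p in zip(weights, predictions))

# Two hypothetical members (e.g., an optimized ANN and an optimized FL model):
# the member with half the error receives twice the weight.
w = inverse_error_weights([0.5, 1.0])
```

A member with lower validation error thus dominates the combined prediction, which is the basic rationale for any committee of reservoir-property estimators.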
Wang, Liang; Mao, Zhiqiang; Sun, Zhongchun; Luo, Xingping; Song, Yong; Liu, Zhen
2013-01-01
In the Junggar Basin, northwest China, many high gamma-ray (GR) sandstone reservoirs are found and are routinely misinterpreted as mudstone non-reservoirs, with negative implications for oil and gas exploration and exploitation. The recognition principles, genesis, and log-evaluation techniques for high-GR sandstone reservoirs are therefore systematically studied. The studies show that sandstone reservoirs with an apparent shale content greater than 50% and a GR value higher than 110 API can be regarded as high-GR sandstone reservoirs. The high GR response is mainly and directly caused by abnormally high uranium enrichment, not by tuff, feldspar, or clay minerals. Because of the formation's high water sensitivity and poor borehole quality, conventional logs cannot recognize these reservoirs or evaluate their physical properties. Nuclear magnetic resonance (NMR) logs are therefore proposed, and proved useful, for reservoir recognition and physical-property evaluation.
Active Neutron and Gamma-Ray Instrumentation for In Situ Planetary Science Applications
NASA Technical Reports Server (NTRS)
Parsons, A.; Bodnarik, J.; Evans, L.; Floyd, A.; Lim, L.; McClanahan, T.; Namkung, M.; Nowicki, S.; Schweitzer, J.; Starr, R.;
2011-01-01
We describe the development of an instrument capable of detailed in situ bulk geochemical analysis of the surface of planets, moons, asteroids, and comets. This instrument technology uses a pulsed neutron generator to excite the solid materials of a planet and measures the resulting neutron and gamma-ray emission with its detector system. These time-resolved neutron and gamma-ray data provide detailed information about the bulk elemental composition, chemical context, and density distribution of the soil within 50 cm of the surface. While active neutron scattering and neutron-induced gamma-ray techniques have been used extensively for terrestrial nuclear well logging applications, our goal is to apply these techniques to surface instruments for use on any solid solar system body. Experiments at NASA Goddard Space Flight Center use a prototype neutron-induced gamma-ray instrument, and the resulting data show the promise of this technique to become a versatile, robust workhorse technology for planetary science and the exploration of any solid body in the solar system. The detection of neutrons at the surface also provides useful information about the material. This paper focuses on the data provided by the gamma-ray detector.
Proposed standard-weight equations for brook trout
Hyatt, M.W.; Hubert, W.A.
2001-01-01
Weight and length data were obtained for 113 populations of brook trout Salvelinus fontinalis across the species' geographic range in North America to estimate a standard-weight (Ws) equation for this species. Estimation was done by applying the regression-line-percentile technique to fish of 120-620 mm total length (TL). The proposed metric-unit (g and mm) equation is log10Ws = -5.186 + 3.103 log10TL; the English-unit (lb and in) equivalent is log10Ws = -3.483 + 3.103 log10TL. No systematic length bias was evident in the relative-weight values calculated from these equations.
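The proposed metric equation can be applied directly; a minimal sketch follows (the function names and the relative-weight helper are mine, not from the paper):

```python
import math

def standard_weight_g(total_length_mm):
    """Proposed metric standard weight Ws (g) for brook trout:
    log10(Ws) = -5.186 + 3.103 * log10(TL), valid for 120-620 mm TL."""
    if not (120 <= total_length_mm <= 620):
        raise ValueError("equation was estimated for fish of 120-620 mm TL")
    return 10 ** (-5.186 + 3.103 * math.log10(total_length_mm))

def relative_weight(weight_g, total_length_mm):
    """Relative weight Wr = 100 * observed weight / standard weight."""
    return 100.0 * weight_g / standard_weight_g(total_length_mm)
```

For a 300 mm TL fish, the equation gives a standard weight of about 317 g; a fish of that length weighing less has Wr below 100.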
Pathak, P N; Mohapatra, M; Godbole, S V
2013-11-01
The UREX process has been proposed for the selective extraction of U(VI) and Tc(VII) from nitric acid medium (∼1 M HNO3) using tri-n-butyl phosphate (TBP) as the extractant, while retaining Pu, Np, and fission products in the aqueous phase. The feasibility of using luminescence spectroscopy to understand the complexation of trivalent f-element cations, namely Eu(III) and Tb(III), with acetohydroxamic acid (AHA) in nitric acid medium has been examined. The luminescence lifetime of the 1×10⁻³ M Eu(III)-AHA system decreased with increasing AHA concentration, from 116 ± 0.2 μs (no AHA) to 1.6 ± 0.1 μs (0.1 M AHA), which was attributed to dynamic quenching. The corrected fluorescence intensities were used to calculate the stability constant (log K) for the formation of the 1:1 Eu(III)-AHA complex as 1.42 ± 0.64 under the conditions of this study. By contrast, the Tb(III)-AHA system at pH 3 (HNO3) did not show any significant variation in the excited-state lifetimes (364 ± 9 μs), suggesting the absence of dynamic quenching. The spectral changes in the Tb(III)-AHA system showed the formation of a 1:1 complex (log K: 1.72 ± 0.21). These studies suggest that the extent of AHA complexation with the rare earth elements will be insignificant compared with the tetravalent metal ions Pu(IV) and Np(IV) under UREX process conditions.
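A single-point version of the 1:1 stability-constant estimate can be sketched as below. It uses the standard static-binding relation F0/F = 1 + K[L]; the paper's actual fit over the full AHA concentration range may differ, so treat this as an illustrative simplification (the function name is mine):

```python
import math

def log_k_1to1(f0, f, ligand_molar):
    """Single-point 1:1 association constant from corrected intensities,
    using the static-binding relation F0/F = 1 + K*[L]:
        K = (F0/F - 1) / [L]
    Returns log10(K)."""
    k = (f0 / f - 1.0) / ligand_molar
    return math.log10(k)
```

For instance, a halving of corrected intensity at 0.1 M ligand implies K = 10, i.e. log K = 1, of the same order as the values reported above.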
Spectral Analysis of the sdO Standard Star Feige 34
NASA Astrophysics Data System (ADS)
Latour, M.; Chayer, P.; Green, E. M.; Fontaine, G.
2017-03-01
We present our current work on the spectral analysis of the hot sdO star Feige 34. We combine high S/N optical spectra and fully-blanketed non-LTE model atmospheres to derive its fundamental parameters (Teff, log g) and helium abundance. Our best fits indicate Teff = 63 000 K, log g = 6.0 and log N(He)/N(H) = -1.8. We also use available ultraviolet spectra (IUE and FUSE) to measure metal abundances. We find the star to be enriched in iron and nickel by a factor of ten with respect to the solar values, while lighter elements have subsolar abundances. The FUSE spectrum suggests that the spectral lines could be broadened by rotation.
Geophysical examination of coal deposits
NASA Astrophysics Data System (ADS)
Jackson, L. J.
1981-04-01
Geophysical techniques for the solution of mining problems and as an aid to mine planning are reviewed. Techniques of geophysical borehole logging are discussed. The responses of the coal seams to logging tools are easily recognized on the logging records. Cores for laboratory analysis are cut from selected sections of the borehole. In addition, information about the density and chemical composition of the coal may be obtained. Surface seismic reflection surveys using two-dimensional arrays of seismic sources and detectors detect faults with throws as small as 3 m at depths of 800 m. In geologically disturbed areas, good results have been obtained from three-dimensional surveys. Smaller faults as far as 500 m in advance of the working face may be detected using in-seam seismic surveying conducted from a roadway or working face. Small disturbances are detected by pulsed radar and continuous-wave electromagnetic methods, either from within boreholes or from underground. Other geophysical techniques, which exploit the electrical, magnetic, gravitational, and geothermal properties of rocks, are described.
Relative trace-element concern indexes for eastern Kentucky coals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, S.L.
Coal trace elements that could affect environmental quality were studied in 372 samples (collected and analyzed by the Kentucky Geological Survey and the United States Geological Survey) from 36 coal beds in eastern Kentucky. Relative trace-element concern indexes are defined as weighted sums of standardized (subtract the mean; divide by the standard deviation) concentrations. Index R is calculated from uranium and thorium, index 1 from elements of minor concern (antimony, barium, bromine, chlorine, cobalt, lithium, manganese, sodium, and strontium), index 2 from elements of moderate concern (chromium, copper, fluorine, nickel, vanadium, and zinc), and index 4 from elements of greatest concern (arsenic, boron, cadmium, lead, mercury, molybdenum, and selenium). The numerals indicate the weights, except that index R is weighted by 1 and index 124 is the unweighted sum of indexes 1, 2, and 4. Contour mapping of the indexes is valid because all indexes have non-nugget-effect variograms. Index 124 is low west of Lee and Bell counties and in Pike County. Index 124 is high in the area bounded by Boyd, Menifee, Knott, and Martin counties and in Owsley, Clay, and Leslie counties. Coal from some areas of eastern Kentucky is less likely to cause environmental problems than that from other areas. Positive correlations of all indexes with the centered log ratios of ash, and negative correlations with the centered log ratios of carbon, hydrogen, nitrogen, oxygen, and sulfur, indicate that trace elements of concern are predominantly associated with ash. Beneficiation probably would reduce the indexes significantly.
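The index construction described above (standardize each element's concentrations across samples, then form a weighted sum) can be sketched as follows; the numbers in the example are invented, and only the 1/2/4 weighting mirrors the index names:

```python
import statistics

def standardize(values):
    """Standardize one element's concentrations across all samples:
    subtract the mean, divide by the (sample) standard deviation."""
    m = statistics.mean(values)
    s = statistics.stdev(values)
    return [(x - m) / s for x in values]

def concern_index(z_scores, weight):
    """Concern-index contribution for one sample: the weighted sum of its
    standardized concentrations (weight 1, 2, or 4 by level of concern;
    index 124 would then be the unweighted sum of indexes 1, 2, and 4)."""
    return weight * sum(z_scores)
```

Because each element is centered and scaled before summing, elements with very different absolute concentrations contribute on a common footing, which is what makes the indexes mappable.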
Using Log Linear Analysis for Categorical Family Variables.
ERIC Educational Resources Information Center
Moen, Phyllis
The Goodman technique of log linear analysis is ideal for family research, because it is designed for categorical (non-quantitative) variables. Variables are dichotomized (for example, married/divorced, childless/with children) or otherwise categorized (for example, level of permissiveness, life cycle stage). Contingency tables are then…
Barnard, Ralston W.; Jensen, Dal H.
1982-01-01
Uranium formations are assayed by prompt fission neutron logging techniques. The uranium in the formation is proportional to the ratio of epithermal counts to thermal or epithermal dieaway. Various calibration factors enhance the accuracy of the measurement.
ERIC Educational Resources Information Center
Miles, Donna
2001-01-01
In response to high numbers of preventable fatal accidents in the logging industry, the Occupational Safety and Health Administration (OSHA) developed a week-long logger safety training program that includes hands-on learning of safety techniques in the woods. Reaching small operators has been challenging; outreach initiatives in Maine, North…
Dargó, Gergő; Boros, Krisztina; Péter, László; Malanga, Milo; Sohajda, Tamás; Szente, Lajos; Balogh, György T
2018-05-05
The present study aimed to develop a medium-throughput screening technique for investigating cyclodextrin (CD)-active pharmaceutical ingredient (API) complexes. Dual-phase potentiometric lipophilicity measurement, the gold-standard technique, was combined with the partition coefficient method (plotting the reciprocal of the partition coefficients of the APIs as a function of CD concentration). A general equation was derived for determining the stability constants of 1:1 CD-API complexes (K1:1,CD) based solely on the changes in the partition coefficients of the neutral species (logP(o/w) - logP(app)), without measuring the actual API concentrations. The experimentally determined logP value (-1.64) of 6-deoxy-6-[(5/6)-fluoresceinylthioureido]-HPBCD (FITC-NH-HPBCD) was used to estimate the logP value (≈ -2.5 to -3) of (2-hydroxypropyl)-β-cyclodextrin (HPBCD). The results suggested that the amount of HPBCD in the octanol phase can be considered inconsequential. The decrease in octanol volume due to octanol-CD complexation was also taken into account, and a corrected octanol-water phase ratio was therefore introduced. The K1:1,CD values obtained by the developed method showed good agreement with the results of other, orthogonal methods.
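The core relation, recovering K1:1 from the shift in apparent partition coefficient alone, reduces to one line under the common assumption that the CD and the CD-API complex remain in the aqueous phase. This single-point sketch is a simplification of the paper's regression over CD concentration, and the function name is mine:

```python
def k11_from_logp_shift(logp_true, logp_app, cd_molar):
    """Single-point estimate of the 1:1 stability constant from the drop
    in apparent partition coefficient at CD concentration [CD]:
        P_app = P / (1 + K * [CD])  =>  K = (10**(logP - logP_app) - 1) / [CD]
    Assumes the CD and the CD-API complex stay entirely aqueous."""
    return (10 ** (logp_true - logp_app) - 1.0) / cd_molar
```

For example, a drop of log10(2) ≈ 0.301 in logP at 0.01 M CD corresponds to K ≈ 100 M⁻¹; fitting 1/P_app against [CD] over several concentrations, as in the paper, gives the same constant with an uncertainty estimate.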
NASA Astrophysics Data System (ADS)
Al-Ziayyir, Haitham; Hodgetts, David
2015-04-01
The main reservoir in the Rumaila/West Qurna oilfields is the Zubair Formation, of Hauterivian and Barremian age. This siliciclastic formation extends across central and southern Iraq. This study attempts to improve understanding of the architectural elements and their control on fluid-flow paths within the Zubair Formation. A significant source of uncertainty in the Zubair Formation is the control on hydrodynamic pressure distribution; the reasons for pressure variation in the Zubair are not well understood. This work aims to reduce this uncertainty by providing more detailed knowledge of reservoir architecture, the distribution of barriers and baffles, and reservoir compartmentalization. To characterize the stratigraphic architecture of the Zubair Formation, high-resolution reservoir models that incorporate dynamic and static data were built. Facies modelling is accomplished by means of stochastic modelling techniques. The work is based on a large data set collected from the Rumaila oilfields. These data, comprising conventional logs of varying vintages, NMR logs, cores from six wells, and pressure data, were used to perform geological and petrophysical analyses. Flow simulation studies have also been applied to examine the impact of architecture on recovery. Understanding of the geology and reservoir performance can be greatly improved by an efficient, quick, and viable integrated approach to analysis, interpretation, and modelling.
Wilson, John T.; Mandell, Wayne A.; Paillet, Frederick L.; Bayless, E. Randall; Hanson, Randall T.; Kearl, Peter M.; Kerfoot, William B.; Newhouse, Mark W.; Pedler, William H.
2001-01-01
Three borehole flowmeters and hydrophysical logging were used to measure ground-water flow in carbonate bedrock at sites in southeastern Indiana and on the west-central border of Kentucky and Tennessee. The three flowmeters make point measurements of the direction and magnitude of horizontal flow, and hydrophysical logging measures the magnitude of horizontal flow over an interval. The directional flowmeters evaluated include a horizontal heat-pulse flowmeter, an acoustic Doppler velocimeter, and a colloidal borescope flowmeter. Each method was used to measure flow in selected zones where previous geophysical logging had indicated water-producing beds, bedding planes, or other permeable features that made conditions favorable for horizontal-flow measurements. Background geophysical logging indicated that ground-water production from the Indiana test wells was characterized by inflow from a single, 20-foot-thick limestone bed. The Kentucky/Tennessee test wells produced water from one or more bedding planes where geophysical logs indicated the bedding planes had been enlarged by dissolution. Two of the three test wells at the latter site contained measurable vertical flow between two or more bedding planes under ambient hydraulic head conditions. Field measurements and data analyses for each flow-measurement technique were completed by a developer of the technology or by a contractor with extensive experience in the application of that specific technology. Comparison of the horizontal-flow measurements indicated that the three point-measurement techniques rarely measured the same velocities and flow directions at the same measurement stations. Repeat measurements at selected depth stations also failed to consistently reproduce either flow direction, flow magnitude, or both. At a few test stations, two of the techniques provided similar flow magnitude or direction but usually not both.
Some of this variability may be attributed to naturally occurring changes in hydraulic conditions during the 1-month study period in August and September 1999. The actual velocities and flow directions are unknown; therefore, it is uncertain which technique provided the most accurate measurements of horizontal flow in the boreholes and which measurements were most representative of flow in the aquifers. The horizontal heat-pulse flowmeter consistently yielded flow magnitudes considerably less than those provided by the acoustic Doppler velocimeter and colloidal borescope. The design of the horizontal heat-pulse flowmeter compensates for the local acceleration of ground-water velocity in the open borehole. The magnitude of the velocities estimated from the hydrophysical logging were comparable to those of the horizontal heat-pulse flowmeter, presumably because the hydrophysical logging also effectively compensates for the effect of the borehole on the flow field and averages velocity over a length of borehole rather than at a point. The acoustic Doppler velocimeter and colloidal borescope have discrete sampling points that allow for measuring preferential flow velocities that can be substantially higher than the average velocity through a length of borehole. The acoustic Doppler velocimeter and colloidal borescope also measure flow at the center of the borehole where the acceleration of the flow field should be greatest. Of the three techniques capable of measuring direction and magnitude of horizontal flow, only the acoustic Doppler velocimeter measured vertical flow. The acoustic Doppler velocimeter consistently measured downward velocity in all test wells. This apparent downward flow was attributed, in part, to particles falling through the water column as a result of mechanical disturbance during logging. Hydrophysical logging yielded estimates of vertical flow in the Kentucky/Tennessee test wells. 
In two of the test wells, the hydrophysical logging involved deliberate isolation of water-producing bedding planes with a packer to ensure that small horizontal flow could be quantified without the presence of vertical flow. The presence of vertical flow in the Kentucky/Tennessee test wells may preclude the definitive measurement of horizontal flow without the use of effective packer devices. None of the point-measurement techniques used a packer, but each technique used baffle devices to help suppress the vertical flow. The effectiveness of these baffle devices is not known; therefore, the effect of vertical flow on the measurements cannot be quantified. The general lack of agreement among the point-measurement techniques in this study highlights the difficulty of using measurements at a single depth point in a borehole to characterize the average horizontal flow in a heterogeneous aquifer. The effective measurement of horizontal flow may depend on the precise depth at which measurements are made, and the measurements at a given depth may vary over time as hydraulic head conditions change. The various measurements also demonstrate that the magnitude and possibly the direction of horizontal flow are affected by the presence of the open borehole. Although there is a lack of agreement among the measurement techniques, these results could mean that effective characterization of horizontal flow in heterogeneous aquifers might be possible if data from many depth stations and from repeat measurements can be averaged over an extended time period. Complications related to vertical flow in the borehole highlight the importance of using background logging methods like vertical flowmeters or hydrophysical logging to characterize the borehole environment before horizontal-flow measurements are attempted. If vertical flow is present, a packer device may be needed to acquire definitive measurements of horizontal flow.
Because hydrophysical logging provides a complete depth profile of the borehole, a strength of this technique is in identifying horizontal- and vertical-flow zones in a well. Hydrophysical logging may be most applicable as a screening method. Horizontal-flow zones identified with the hydrophysical logging then could be evaluated with one of the point-measurement techniques for quantifying preferential flow zones and flow directions. Additional research is needed to determine how measurements of flow in boreholes relate to flow in bedrock aquifers. The flowmeters may need to be evaluated under controlled laboratory conditions to determine which of the methods accurately measure ground-water velocities and flow directions. Additional research also is needed to investigate variations in flow direction with time, daily changes in velocity, velocity corrections for fractured bedrock aquifers and unconsolidated aquifers, and directional differences in individual wells for hydraulically separated flow zones.
NASA Astrophysics Data System (ADS)
Hofmann, A. W.
2006-12-01
Delta Niobium or Delta VICE? Niobium is one of a few chemical elements that can be used to discriminate between melts derived from upwelling mantle, represented by MORBs and OIBs, and those derived from subduction and continental crust environments. The Nb/U ratio was introduced because these two elements appear to partition nearly identically in upwelling environments, but very differently (from one another) in subduction and continental environments (Hofmann et al., 1986). Fitton et al. (1997, 2003) have taken a radically different approach, using log(Nb/Y)-log(Zr/Y) correlations that appear to discriminate between MORB and OIB (or plume) environments. MORB correlations are parallel to, and at lower Nb/Y ratios than, Iceland basalt correlations. This is expressed by a discrimination parameter defined as Delta Nb = 1.74 + log(Nb/Y) - 1.92 log(Zr/Y). N-MORB have negative Delta Nb values, whereas Iceland and other OIBs have positive values. Fitton et al. interpret this in terms of a niobium deficiency in MORB that is balanced by a Nb excess in OIBs. This interpretation conflicts with evidence based on Nb/U ratios (Hofmann et al., 1986) that MORB and OIB are parts of a common reservoir, which is different from, and complementary to, the continental crust. Both parts of this MORB-OIB reservoir are characterized by higher-than-primitive Nb/U and Nb/Th ratios, whereas continental crust has dramatically lower Nb/U and Nb/Th ratios. The use of VICE/MICE (very-incompatible-element to moderately-incompatible-element) ratios, such as Nb/Y, obscures this. The significance of the VICE/MICE plot becomes clear if one replaces Nb by other VICEs in the log(Nb/Y)-log(Zr/Y) plot. This shows that any of these VICEs yields a topology similar to that of Nb/Y. Thus, for a given Zr/Y ratio, depleted MORB have consistently lower Ba/Y, Th/Y, and La/Y ratios than do Iceland basalts, even the most incompatible-element-depleted Iceland picrites.
This is caused by a less extreme depletion of Icelandic picrites (and tholeiites) in VICEs relative to Y, causing their spidergram patterns to be flatter than those of depleted MORB, which "drop off" more steeply. The important point is that there is no special Nb effect. The difference between Iceland-style and MORB-style depletion might therefore be called "Delta VICE" rather than "Delta Niobium." This assessment of Nb geochemistry is confirmed by new compilations for several MORB and OIB suites, which show that in some of these suites Nb and Th yield the most uniform ratios, whereas Nb/U is more uniform in others (including Iceland). Similarly, Ta is similar to Nb in some suites, whereas it is slightly more compatible than Nb in others. These slight regional differences are of minor consequence in the present context. The cause of the greater depletion in VICEs relative to MICEs in N-MORB, compared with the most depleted Icelandic lavas, is most likely related to the roles of garnet and clinopyroxene during source depletion processes.
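The Delta Nb discriminant quoted above is a one-line computation; this sketch simply encodes the published formula. The concentrations in the comment are illustrative N-MORB-like values I supply for the example, not data from this abstract:

```python
import math

def delta_nb(nb_ppm, y_ppm, zr_ppm):
    """Fitton et al. discriminant:
    Delta Nb = 1.74 + log10(Nb/Y) - 1.92 * log10(Zr/Y).
    Positive values fall in the Iceland/OIB array, negative in the N-MORB array."""
    return 1.74 + math.log10(nb_ppm / y_ppm) - 1.92 * math.log10(zr_ppm / y_ppm)

# Illustrative N-MORB-like concentrations (ppm): Nb = 2.3, Y = 28, Zr = 74
# yield a negative Delta Nb, as expected for depleted MORB.
```

Because the formula uses only ratios to Y, it is insensitive to the absolute degree of enrichment, which is exactly the property the abstract's Delta-VICE argument turns on.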
Barnard, R.W.; Jensen, D.H.
1980-11-05
Uranium formations are assayed by prompt fission neutron logging techniques. The uranium content of the formation is proportional to the ratio of epithermal counts to the thermal or epithermal dieaway. Various calibration factors enhance the accuracy of the measurement.
Hubble Space Telescope near-ultraviolet spectroscopy of the bright CEMP-no star BD+44°493
DOE Office of Scientific and Technical Information (OSTI.GOV)
Placco, Vinicius M.; Beers, Timothy C.; Smith, Verne V.
2014-07-20
We present an elemental-abundance analysis, in the near-ultraviolet (NUV) spectral range, for the extremely metal-poor star BD+44°493, a ninth-magnitude subgiant with [Fe/H] = –3.8 and enhanced carbon, based on data acquired with the Space Telescope Imaging Spectrograph on the Hubble Space Telescope. This star is the brightest example of a class of objects that, unlike the great majority of carbon-enhanced metal-poor (CEMP) stars, does not exhibit over-abundances of heavy neutron-capture elements (CEMP-no). In this paper, we validate the abundance determinations for a number of species that were previously studied in the optical region, and obtain strong upper limits for beryllium and boron, as well as for neutron-capture elements from zirconium to platinum, many of which are not accessible from ground-based spectra. The boron upper limit we obtain for BD+44°493, log ε (B) < –0.70, the first such measurement for a CEMP star, is the lowest yet found for very and extremely metal-poor stars. In addition, we obtain even lower upper limits on the abundances of beryllium, log ε (Be) < –2.3, and lead, log ε (Pb) < –0.23 ([Pb/Fe] < +1.90), than those reported by previous analyses in the optical range. Taken together with the previously measured low abundance of lithium, the very low upper limits on Be and B suggest that BD+44°493 was formed at a very early time, and that it could well be a bona-fide second-generation star. Finally, the Pb upper limit strengthens the argument for non-s-process production of the heavy-element abundance patterns in CEMP-no stars.
Rapid Microarray Detection of DNA and Proteins in Microliter Volumes with SPR Imaging Measurements
Seefeld, Ting Hu; Zhou, Wen-Juan; Corn, Robert M.
2011-01-01
A four-chamber microfluidic biochip is fabricated for the rapid detection of multiple proteins and nucleic acids from microliter volume samples with the technique of surface plasmon resonance imaging (SPRI). The 18 mm × 18 mm biochip consists of four 3 μL microfluidic chambers attached to an SF10 glass substrate, each of which contains three individually addressable SPRI gold thin film microarray elements. The twelve-element (4 × 3) SPRI microarray consists of gold thin film spots (1 mm2 area; 45 nm thickness), each in an individually addressable 0.5 μL volume microchannel. Microarrays of single-stranded DNA and RNA (ssDNA and ssRNA, respectively) are fabricated by chemical and/or enzymatic attachment reactions in these microchannels; the SPRI microarrays are then used to detect femtomole amounts (nanomolar concentrations) of DNA and proteins (single-stranded DNA binding protein and thrombin, via aptamer-protein bioaffinity interactions). The ssRNA microarray elements were also used for the ultrasensitive detection of zeptomole amounts (femtomolar concentrations) of DNA via the technique of RNase H-amplified SPRI. Enzymatic removal of ssRNA from the surface due to the hybridization adsorption of target ssDNA is detected as a reflectivity decrease in the SPR imaging measurements. The observed reflectivity loss was proportional to the log of the target ssDNA concentration, with a detection limit of 10 fM or 30 zeptomoles (18,000 molecules). This enzymatically amplified ssDNA detection method is not limited by diffusion of ssDNA to the interface, and thus is extremely fast, requiring only 200 seconds in the microliter volume format. PMID:21488682
Al-Qadiri, Hamzah M; Ovissipour, Mahmoudreza; Al-Alami, Nivin; Govindan, Byju N; Shiroodi, Setareh Ghorban; Rasco, Barbara
2016-05-01
Bactericidal activity of neutral electrolyzed water (NEW), quaternary ammonium (QUAT), and lactic acid-based solutions was investigated using a manual spraying technique against Salmonella Typhimurium, Escherichia coli O157:H7, Campylobacter jejuni, Listeria monocytogenes and Staphylococcus aureus that were inoculated onto the surface of scarred polypropylene and wooden food cutting boards. Antimicrobial activity was also examined when using cutting boards in preparation of raw chopped beef, chicken tenders or salmon fillets. Viable counts of survivors were determined as log10 CFU/100 cm2 within 0 (untreated control), 1, 3, and 5 min of treatment at ambient temperature. Within the first minute of treatment, NEW and QUAT solutions caused more than 3 log10 bacterial reductions on polypropylene surfaces whereas less than 3 log10 reductions were achieved on wooden surfaces. After 5 min of treatment, more than 5 log10 reductions were achieved for all bacterial strains inoculated onto polypropylene surfaces. Using NEW and QUAT solutions within 5 min reduced Gram-negative bacteria by 4.58 to 4.85 log10 compared to more than 5 log10 reductions in Gram-positive bacteria inoculated onto wooden surfaces. Lactic acid treatment was significantly less effective (P < 0.05) compared to NEW and QUAT treatments. A decline in antimicrobial effectiveness was observed (0.5 to <2 log10 reductions were achieved within the first minute) when both cutting board types were used to prepare raw chopped beef, chicken tenders or salmon fillets. © 2016 Institute of Food Technologists®
High-voltage supply for neutron tubes in well-logging applications
Humphreys, D.R.
1982-09-15
A high voltage supply is provided for a neutron tube used in well logging. The biased pulse supply of the invention combines DC and full pulse techniques and produces a target voltage comprising a substantial negative DC bias component on which is superimposed a pulse whose negative peak provides the desired negative voltage level for the neutron tube. The target voltage is preferably generated using voltage doubling techniques and employing a voltage source which generates bipolar pulse pairs having an amplitude corresponding to the DC bias level.
High voltage supply for neutron tubes in well logging applications
Humphreys, D. Russell
1989-01-01
A high voltage supply is provided for a neutron tube used in well logging. The "biased pulse" supply of the invention combines DC and "full pulse" techniques and produces a target voltage comprising a substantial negative DC bias component on which is superimposed a pulse whose negative peak provides the desired negative voltage level for the neutron tube. The target voltage is preferably generated using voltage doubling techniques and employing a voltage source which generates bipolar pulse pairs having an amplitude corresponding to the DC bias level.
Paillet, Frederick; Hite, Laura; Carlson, Matthew
1999-01-01
Time domain surface electromagnetic soundings, borehole induction logs, and other borehole logging techniques are used to construct a realistic model for the shallow subsurface hydraulic properties of unconsolidated sediments in south Florida. Induction logs are used to calibrate surface induction soundings in units of pore water salinity by correlating water sample specific electrical conductivity with the electrical conductivity of the formation over the sampled interval for a two‐layered aquifer model. Geophysical logs are also used to show that a constant conductivity layer model is appropriate for the south Florida study. Several physically independent log measurements are used to quantify the dependence of formation electrical conductivity on such parameters as salinity, permeability, and clay mineral fraction. The combined interpretation of electromagnetic soundings and induction logs was verified by logging three validation boreholes, confirming quantitative estimates of formation conductivity and thickness in the upper model layer, and qualitative estimates of conductivity in the lower model layer.
High-performance computing on GPUs for resistivity logging of oil and gas wells
NASA Astrophysics Data System (ADS)
Glinskikh, V.; Dudaev, A.; Nechaev, O.; Surodina, I.
2017-10-01
We developed and implemented into software an algorithm for high-performance simulation of electrical logs from oil and gas wells using heterogeneous computing. The numerical solution of the 2D forward problem is based on the finite-element method and the Cholesky decomposition for solving a system of linear algebraic equations (SLAE). Software implementations of the algorithm were made using NVIDIA CUDA technology and computing libraries, allowing us to perform the decomposition of the SLAE and find its solution on the central processing unit (CPU) and the graphics processing unit (GPU). The calculation time is analyzed as a function of the matrix size and the number of its non-zero elements. We estimated the computing speed on the CPU and GPU, including high-performance heterogeneous CPU-GPU computing. Using the developed algorithm, we simulated resistivity data in realistic models.
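The core numerical step the abstract names, Cholesky factorization of the finite-element SLAE followed by two triangular solves, can be sketched on the CPU with NumPy. The matrix below is a hypothetical symmetric positive-definite stand-in, not the authors' FEM stiffness matrix, and the GPU path (e.g. via cuSOLVER) is omitted:

```python
import numpy as np

# Hypothetical stand-in for the FEM stiffness matrix of the 2-D resistivity
# forward problem: any symmetric positive-definite (SPD) system A x = b.
rng = np.random.default_rng(0)
n = 200
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)      # SPD (and well-conditioned) by construction
b = rng.standard_normal(n)

# Cholesky decomposition A = L L^T, then two triangular solves.
L = np.linalg.cholesky(A)
y = np.linalg.solve(L, b)        # forward substitution: L y = b
x = np.linalg.solve(L.T, y)      # back substitution:   L^T x = y

print(np.allclose(A @ x, b))     # True
```

In a production code the factorization would exploit the sparsity of the FEM matrix; the dense version here only illustrates the algebra.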
Multicriteria evaluation of simulated logging scenarios in a tropical rain forest.
Huth, Andreas; Drechsler, Martin; Köhler, Peter
2004-07-01
Forest growth models are useful tools for investigating the long-term impacts of logging. In this paper, the results of the rain forest growth model FORMIND were assessed by a multicriteria decision analysis. The main processes covered by FORMIND include tree growth, mortality, regeneration and competition. Tree growth is calculated based on a carbon balance approach. Trees compete for light and space; dying large trees fall down and create gaps in the forest. Sixty-four different logging scenarios for an initially undisturbed forest stand at Deramakot (Malaysia) were simulated. The scenarios differ regarding the logging cycle, logging method, cutting limit and logging intensity. We characterise the impacts with four criteria describing the yield, canopy opening and changes in species composition. Multicriteria decision analysis was used for the first time to evaluate the scenarios and identify the efficient ones. Our results plainly show that reduced-impact logging scenarios are more 'efficient' than the others, since in these scenarios forest damage is minimised without significantly reducing yield. Nevertheless, there is a trade-off between yield and achieving a desired ecological state of the logged forest; the ecological state of the logged forests can only be improved by reducing yields and lengthening the logging cycles. Our study also demonstrates that high cutting limits or low logging intensities cannot compensate for the high level of damage caused by conventional logging techniques.
Two-dimensional finite element heat transfer model of softwood. Part II, Macrostructural effects
Hongmei Gu; John F. Hunt
2006-01-01
A two-dimensional finite element model was used to study the effects of structural features on transient heat transfer in softwood lumber with various orientations. Transient core temperature was modeled for lumber samples "cut" from various locations within a simulated log. The effects of ring orientation, earlywood to latewood (E/L) ratio, and ring density were...
[Post-logging organic matter recovery in forest ecosystems of eastern Baikal region].
Vedrova, E F; Mukhortova, L V; Ivanov, V V; Krivobokov, L V; Boloneva, M V
2010-01-01
The dynamics of organic matter accumulated in the soil and main vegetation elements was analyzed for post-logging forest ecosystem succession series in the eastern Baikal region. The phytomass was found to allocate up to 63 and 50% of carbon in undisturbed Scots pine and fir stands, respectively. The post-logging phytomass contribution to the total carbon pool appeared to decrease down to 16% in Scots pine and 6% in fir stands. In Scots pine stands, carbon storage was determined to account for almost 70% of the initial carbon 60 years after logging. In 50- to 55-year-old fir stands, carbon recovered only 10% of its initial pool. Soil carbon recorded in recently logged Scots pine and fir sites appeared to be 5 and 16 times that accumulated in the phytomass, respectively. The ratio between phytomass carbon and soil organic matter recovered back to the prelogging level in Scots pine stands by the age of 50-60 years. While phytomass carbon also increased in fir stands of the same age, it did not reach the level of the control stand.
NASA Astrophysics Data System (ADS)
Kim, Taeyoun; Hwang, Seho; Jang, Seonghyung
2017-01-01
When finding the "sweet spot" of a shale gas reservoir, it is essential to estimate the brittleness index (BI) and total organic carbon (TOC) of the formation. In particular, the BI is one of the key factors in determining the crack propagation and crushing efficiency for hydraulic fracturing. There are several methods for estimating the BI of a formation, but most of them are empirical equations that are specific to particular rock types. We estimated the mineralogical BI based on the elemental capture spectroscopy (ECS) log and the elastic BI based on well log data, and we propose a new method for predicting S-wave velocity (VS) using the mineralogical and elastic BI. The TOC is related to the gas content of shale gas reservoirs. Since it is difficult to perform core analysis over all intervals of a shale gas reservoir, we derived empirical equations for the Horn River Basin, Canada, as well as a TOC log, using a linear relation between core-tested TOC and well log data. In addition, two empirical equations have been suggested for VS prediction based on the density and gamma ray logs used for TOC analysis. By applying the empirical equations proposed from the perspective of BI and TOC to another well's log data and then comparing the predicted VS log with the real VS log, the validity of the empirical equations suggested in this paper has been tested.
Grumetto, Lucia; Russo, Giacomo; Barbato, Francesco
2016-08-01
The affinity indexes for phospholipids (log kW(IAM)) for 42 compounds were measured by high performance liquid chromatography (HPLC) on two different phospholipid-based stationary phases (immobilized artificial membrane, IAM), i.e., IAM.PC.MG and IAM.PC.DD2. The polar/electrostatic interaction forces between analytes and membrane phospholipids (Δlog kW(IAM)) were calculated as the differences between the experimental values of log kW(IAM) and those expected for isolipophilic neutral compounds having polar surface area (PSA) = 0. The values of passage through a porcine brain lipid extract (PBLE) artificial membrane for 36 of the 42 compounds considered, measured by the so-called PAMPA-BBB technique, were taken from the literature (P0(PAMPA-BBB)). The values of blood-brain barrier (BBB) passage measured in situ, P0(in situ), for 38 of the 42 compounds considered, also taken from the literature, represent the permeability of the neutral forms in "efflux minimized" rodent models. The present work was aimed at verifying the soundness of Δlog kW(IAM) in describing the potential of passage through the BBB as compared to data achieved by the PAMPA-BBB technique. First, the values of log P0(PAMPA-BBB) (32 data points) were found to be significantly related to the n-octanol lipophilicity values of the neutral forms (log P(N)) (r2 = 0.782), whereas no significant relationship (r2 = 0.246) was found with the lipophilicity values of the mixtures of ionized and neutral forms existing at the experimental pH 7.4 (log D(7.4)), or with either log kW(IAM) or Δlog kW(IAM) values. log P0(PAMPA-BBB) related moderately to log P0(in situ) values (r2 = 0.604). The latter did not relate to either the n-octanol lipophilicity indexes (log P(N) and log D(7.4)) or the phospholipid affinity indexes (log kW(IAM)).
In contrast, significant inverse linear relationships were observed between log P0(in situ) (38 data points) and Δlog kW(IAM) values for all the compounds but ibuprofen and chlorpromazine, which behaved as moderate outliers (r2 = 0.656 and r2 = 0.757 for values achieved on IAM.PC.MG and IAM.PC.DD2, respectively). Since log P0(in situ) refers to the "intrinsic permeability" of the analytes regardless of their ionization degree, no correction of Δlog kW(IAM) values for ionization was needed. Furthermore, log P0(in situ) values were found to be roughly linearly related to log BB values (i.e., the logarithm of the ratio of brain concentration to blood concentration measured in vivo) for all the analytes except those predominantly present at the experimental pH 7.4 as anions. These results suggest that, at least for the data set considered, Δlog kW(IAM) parameters are more effective than log P0(PAMPA-BBB) at predicting log P0(in situ) values for all the analytes. Furthermore, ionization appears to affect the BBB passage of acids (yielding anions) differently, and much more markedly, than that of the other ionizable compounds.
Gradually truncated log-normal in USA publicly traded firm size distribution
NASA Astrophysics Data System (ADS)
Gupta, Hari M.; Campanha, José R.; de Aguiar, Daniela R.; Queiroz, Gabriel A.; Raheja, Charu G.
2007-03-01
We study the statistical distribution of firm size for USA and Brazilian publicly traded firms through the Zipf plot technique. Sale size is used to measure firm size. The Brazilian firm size distribution is given by a log-normal distribution without any adjustable parameter. However, we also need to consider different parameters of log-normal distribution for the largest firms in the distribution, which are mostly foreign firms. The log-normal distribution has to be gradually truncated after a certain critical value for USA firms. Therefore, the original hypothesis of proportional effect proposed by Gibrat is valid with some modification for very large firms. We also consider the possible mechanisms behind this distribution.
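The Zipf plot technique the authors use amounts to plotting log(size) against log(rank) for firms sorted in descending order. A minimal sketch with a synthetic log-normal sample (hypothetical parameters, not the authors' fitted values):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical firm "sales" drawn from a log-normal distribution:
sales = rng.lognormal(mean=4.0, sigma=1.5, size=5000)

# Zipf plot data: rank firms by size in descending order,
# then take logs of both rank and size.
sizes = np.sort(sales)[::-1]
ranks = np.arange(1, sizes.size + 1)
log_rank, log_size = np.log(ranks), np.log(sizes)

# A pure log-normal gives a concave curve on this plot; the "gradually
# truncated" log-normal described for USA firms would fall below that
# curve beyond some critical size. Sanity check of the ranking:
print(np.all(np.diff(log_size) <= 0))  # True
```

Plotting `log_size` versus `log_rank` (e.g. with matplotlib) reproduces the diagnostic curve on which the truncation is judged.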
NASA Astrophysics Data System (ADS)
Alizadeh, Bahram; Najjari, Saeid; Kadkhodaie-Ilkhchi, Ali
2012-08-01
Intelligent and statistical techniques were used to extract hidden organic facies from well log responses in the giant South Pars Gas Field, Persian Gulf, Iran. Data from the Mid-Cretaceous Kazhdomi Formation and the Permo-Triassic Kangan-Dalan Formations were used for this purpose. Initially, the GR, SGR, CGR, THOR, POTA, NPHI and DT logs were applied to model the relationship between wireline logs and Total Organic Carbon (TOC) content using Artificial Neural Networks (ANN). The correlation coefficient (R2) between the measured and ANN-predicted TOC equals 89%. The performance of the model is measured by the Mean Squared Error function, which does not exceed 0.0073. Using the Cluster Analysis technique and creating a binary hierarchical cluster tree, the constructed TOC column of each formation was clustered into 5 organic facies according to their geochemical similarity. Later, a second model with an accuracy of 84% was created by ANN to determine the specified clusters (facies) directly from well logs for quick cluster recognition in other wells of the studied field. Each created facies was correlated to its appropriate burial history curve. Hence, each facies of a formation can be scrutinized separately and directly from its well logs, demonstrating the time and depth of oil or gas generation. Therefore, potential production zones of the Kazhdomi probable source rock and the Kangan-Dalan reservoir formation can be identified while well logging operations (especially in LWD cases) are in progress. This could reduce uncertainty, save considerable time and cost for oil companies, and aid in the successful implementation of exploration and exploitation plans.
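The log-to-TOC regression step can be sketched with a small feed-forward network. The data below are a synthetic stand-in for the seven log responses named in the abstract (GR, SGR, CGR, THOR, POTA, NPHI, DT), with TOC made a noisy function of them purely for illustration; the network size and training setup are assumptions, not the authors' configuration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: 500 depth samples x 7 log channels.
rng = np.random.default_rng(42)
X = rng.standard_normal((500, 7))
toc = 2.0 + 0.8 * X[:, 0] - 0.5 * X[:, 5] + 0.1 * rng.standard_normal(500)

# Scale inputs, then fit a small ANN (one hidden layer of 10 neurons).
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0),
)
model.fit(X[:400], toc[:400])

# Held-out R^2 plays the role of the abstract's correlation measure.
r2 = model.score(X[400:], toc[400:])
print(f"held-out R^2: {r2:.2f}")
```

The subsequent facies step in the paper feeds the predicted TOC column into hierarchical clustering (e.g. `scipy.cluster.hierarchy`), which is omitted here.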
NASA Astrophysics Data System (ADS)
Keller, M. M.; d'Oliveira, M. N.; Takemura, C. M.; Vitoria, D.; Araujo, L. S.; Morton, D. C.
2012-12-01
Selective logging, the removal of several valuable timber trees per hectare, is an important land use in the Brazilian Amazon and may degrade forests through long-term changes in structure, loss of forest carbon and loss of species diversity. As with deforestation, the annual area affected by selective logging has declined significantly in the past decade. Nonetheless, this land use affects several thousand km2 per year in Brazil. We studied a 1000 ha area of the Antimary State Forest (FEA) in the State of Acre, Brazil (9.304°S, 68.281°W), that has a basal area of 22.5 m2 ha-1 and an above-ground biomass of 231 Mg ha-1. Logging intensity was low, approximately 10 to 15 m3 ha-1. We collected small-footprint airborne lidar data over the study area using an Optech ALTM 3100EA, once each in 2010 and 2011. The study area contained both recent and older logging that used both conventional and technologically advanced logging techniques. Lidar return density averaged over 20 m-2 for both collection periods, with estimated horizontal and vertical precision of 0.30 and 0.15 m. A relative density model comparing returns in the 0-1 m elevation range to returns in the 1-5 m elevation range revealed the pattern of roads and skid trails. These patterns were confirmed by a ground-based GPS survey. A GIS model of the road and skid network was built using lidar and ground data. We tested and compared two pattern recognition approaches used to automate logging detection. Both commercial eCognition segmentation and a Frangi filter algorithm identified the road and skid trail network when compared against the GIS model. We report on the effectiveness of these two techniques.
Schacht, Veronika J; Grant, Sharon C; Escher, Beate I; Hawker, Darryl W; Gaus, Caroline
2016-06-01
Partitioning of super-hydrophobic organic contaminants (SHOCs) to dissolved or colloidal materials such as surfactants can alter their behaviour by enhancing apparent aqueous solubility. Relevant partition constants are, however, challenging to quantify with reasonable accuracy. Partition constants to colloidal surfactants can be measured by introducing a polymer (PDMS) as third phase with known PDMS-water partition constant in combination with the mass balance approach. We quantified partition constants of PCBs and PCDDs (log KOW 5.8-8.3) between water and sodium dodecyl sulphate monomers (KMO) and micelles (KMI). A refined, recently introduced swelling-based polymer loading technique allowed highly precise (4.5-10% RSD) and fast (<24 h) loading of SHOCs into PDMS, and due to the miniaturisation of batch systems equilibrium was reached in <5 days for KMI and <3 weeks for KMO. SHOC losses to experimental surfaces were substantial (8-26%) in monomer solutions, but had a low impact on KMO (0.10-0.16 log units). Log KMO for PCDDs (4.0-5.2) were approximately 2.6 log units lower than respective log KMI, which ranged from 5.2 to 7.0 for PCDDs and 6.6-7.5 for PCBs. The linear relationship between log KMI and log KOW was consistent with more polar and moderately hydrophobic compounds. Apparent solubility increased with increasing hydrophobicity and was highest in micelle solutions. However, this solubility enhancement was also considerable in monomer solutions, up to 200 times for OCDD. Given the pervasive presence of surfactant monomers in typical field scenarios, these data suggest that low surfactant concentrations may be effective long-term facilitators for subsurface transport of SHOCs. Copyright © 2016 Elsevier Ltd. All rights reserved.
A proven record in changing attitudes about MWD logs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cantrell, L.; Paxson, K.B.; Keyser, W.L.
1993-07-01
Measurement while drilling (MWD) logs for quantitative reservoir characterization were evaluated during drilling of Gulf of Mexico flexure trend projects, Kilauea (Green Canyon Blocks 6 and 50) and Tick (Garden Banks Block 189). Comparisons confirmed that MWD can be used as an accurate replacement for wireline logging when borehole size is not a limiting factor. Texaco MWD experience evolved from 'last resort' to primary formation evaluation logging, which resulted in rig-time and associated cost savings. Difficult wells are now drilled and evaluated with confidence, geopressure is safely monitored, conventional core interval tops are selected, and geologic interpretations and operational decisions are made before wells reach TD. This paper reviews the performance, accuracy, and limitations of the MWD systems and compares the results to standard geophysical well logging techniques. Four case histories are presented.
Empirical Mode Decomposition of Geophysical Well-log Data of Bombay Offshore Basin, Mumbai, India
NASA Astrophysics Data System (ADS)
Siddharth Gairola, Gaurav; Chandrasekhar, Enamundram
2016-04-01
Geophysical well-log data manifest the nonlinear behaviour of the respective physical properties of the heterogeneous subsurface layers as a function of depth. Therefore, nonlinear data analysis techniques must be implemented to quantify the degree of heterogeneity in the subsurface lithologies. One such nonlinear, data-adaptive technique is the empirical mode decomposition (EMD) technique, which decomposes the data into oscillatory signals of different wavelengths called intrinsic mode functions (IMF). In the present study, EMD has been applied to the gamma-ray log and neutron porosity log of two different wells, Well B and Well C, located in the western offshore basin of India, to perform heterogeneity analysis and compare the results with those obtained by multifractal studies of the same data sets. By establishing a relationship between the IMF number (m) and the mean wavelength associated with each IMF (Im), a heterogeneity index (ρ) associated with the subsurface layers can be determined using the relation Im = kρ^m, where k is a constant. The ρ values bear an inverse relation with the heterogeneity of the subsurface: smaller ρ values designate higher heterogeneity and vice versa. The ρ values estimated for different limestone payzones identified in the wells clearly show that Well C has a higher degree of heterogeneity than Well B. This correlates well with the estimated Vshale values for the limestone reservoir zone, which show higher shale content in Well C than in Well B. The ρ values determined for different payzones of both wells will be used to quantify the degree of heterogeneity in different wells. The multifractal behaviour of each IMF of both logs of both wells will be compared with one another and discussed in terms of their heterogeneity indices.
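The relation Im = kρ^m is linear in log space, log(Im) = log(k) + m·log(ρ), so ρ and k can be recovered from a straight-line fit over the IMF number. A minimal sketch with hypothetical mean IMF wavelengths (the EMD step itself, which would produce them from a real log, is omitted):

```python
import numpy as np

def heterogeneity_index(mean_wavelengths):
    """Fit log(Im) = log(k) + m*log(rho) over IMF numbers m = 1..M
    and return (rho, k), per the relation Im = k * rho**m."""
    m = np.arange(1, len(mean_wavelengths) + 1)
    slope, intercept = np.polyfit(m, np.log(mean_wavelengths), 1)
    return np.exp(slope), np.exp(intercept)

# Hypothetical mean IMF wavelengths that double with each mode
# (so the true rho = 2 and k = 1):
Im = [2.0, 4.0, 8.0, 16.0, 32.0]
rho, k = heterogeneity_index(Im)
print(round(rho, 3), round(k, 3))  # 2.0 1.0
```

On real data the fit would be applied to the mean wavelengths of the IMFs extracted from each log, and a smaller fitted ρ would indicate a more heterogeneous interval, following the inverse relation stated above.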
MID Plot: a new lithology technique. [Matrix identification plot]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clavier, C.; Rust, D.H.
1976-01-01
Lithology interpretation by the Litho-Porosity (M-N) method has been used for years, but is evidently too cumbersome and ambiguous for widespread acceptance as a field technique. To set aside these objections, another method has been devised. Instead of the log-derived parameters M and N, the MID Plot uses quasi-physical quantities, (ρma)a and (Δtma)a, as its porosity-independent variables. These parameters, taken from suitably scaled Neutron-Density and Sonic-Neutron crossplots, define a unique matrix mineral or mixture for each point on the logs. The matrix points on the MID Plot thus remain constant in spite of changes in mud filtrate, porosity, or neutron tool type (all of which significantly affect the M-N Plot). This new development is expected to bring welcome relief in areas where lithology identification is a routine part of log analysis.
The Economics of Reduced Impact Logging in the American Tropics: A Review of Recent Initiatives
Frederick Boltz; Thomas P. Holmes; Douglas R. Carter
1999-01-01
Programs aimed at developing and implementing reduced-impact logging (RIL) techniques are currently underway in important forest regions of Latin America, given the importance of timber production in the American tropics to national and global markets. RIL efforts focus upon planning and extraction methods which lessen harvest impact on residual commercial timber...
A Clustering Methodology of Web Log Data for Learning Management Systems
ERIC Educational Resources Information Center
Valsamidis, Stavros; Kontogiannis, Sotirios; Kazanidis, Ioannis; Theodosiou, Theodosios; Karakos, Alexandros
2012-01-01
Learning Management Systems (LMS) collect large amounts of data. Data mining techniques can be applied to analyse their web data log files. The instructors may use this data for assessing and measuring their courses. In this respect, we have proposed a methodology for analysing LMS courses and students' activity. This methodology uses a Markov…
Nondestructive Methods for Detecting Defects in Softwood Logs
Kristin C. Schad; Daniel L. Schmoldt; Robert J. Ross
1996-01-01
Wood degradation and defects, such as voids and knots, affect the quality and processing time of lumber. The ability to detect internal defects in the log can save mills time and processing costs. In this study, we investigated three nondestructive evaluation techniques for detecting internal wood defects. Sound wave transmission, x-ray computed tomography, and impulse...
Techniques for the wheeled-skidder operator
Robert L. Hartman; Harry G. Gibson; Harry G. Gibson
1970-01-01
How much production a logger gets from a logging job may depend heavily on his skidder operators. They are key men on any logging job. This is one conclusion that forestry engineers at the USDA Forest Service's Forestry Sciences Laboratory at Morgantown, West Virginia, came to after studying the operation of wheeled skidders in mountainous Appalachian terrain....
CT Imaging, Data Reduction, and Visualization of Hardwood Logs
Daniel L. Schmoldt
1996-01-01
Computed tomography (CT) is a mathematical technique that, combined with noninvasive scanning such as x-ray imaging, has become a powerful tool for nondestructively testing materials prior to use or evaluating materials prior to processing. In the current context, hardwood lumber processing can benefit greatly from knowing what a log looks like prior to initial breakdown....
Log amplifier with pole-zero compensation
Brookshier, W.
1985-02-08
A logarithmic amplifier circuit provides pole-zero compensation for improved stability and response time over 6-8 decades of input signal frequency. The amplifier circuit includes a first operational amplifier with a first feedback loop which includes a second, inverting operational amplifier in a second feedback loop. The compensated output signal is provided by the second operational amplifier, with the log elements, i.e., resistors, and the compensating capacitors in each of the feedback loops having equal values so that each break point or pole is offset by a compensating break point or zero.
Log amplifier with pole-zero compensation
Brookshier, William
1987-01-01
A logarithmic amplifier circuit provides pole-zero compensation for improved stability and response time over 6-8 decades of input signal frequency. The amplifier circuit includes a first operational amplifier with a first feedback loop which includes a second, inverting operational amplifier in a second feedback loop. The compensated output signal is provided by the second operational amplifier with the log elements, i.e., resistors, and the compensating capacitors in each of the feedback loops having equal values so that each break point or pole is offset by a compensating break point or zero.
A systems approach to solder joint fatigue in spacecraft electronic packaging
NASA Technical Reports Server (NTRS)
Ross, R. G., Jr.
1991-01-01
Differential-expansion-induced fatigue resulting from temperature cycling is a leading cause of solder joint failures in spacecraft. Achieving high-reliability flight hardware requires that each element of the fatigue issue be addressed carefully. This includes defining the complete thermal-cycle environment to be experienced by the hardware, developing electronic packaging concepts that are consistent with the defined environments, and validating the completed designs with a thorough qualification and acceptance test program. This paper describes a useful systems approach to solder fatigue based principally on the fundamental log-strain versus log-cycles-to-failure behavior of fatigue. This fundamental behavior has been used to integrate diverse ground-test and flight operational thermal-cycle environments into a unified electronics design approach. Each element of the approach reflects both the physics of the mechanisms that control solder fatigue and the practical realities of the hardware build, test, delivery, and application cycle.
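The log-strain versus log-cycles-to-failure behavior the paper builds on is a power law (a Coffin-Manson-type form). A hedged sketch with illustrative constants, not the paper's values:

```python
# Hedged sketch: strain_range = c * N_f**exponent, so
# N_f = (strain_range / c)**(1 / exponent). The coefficient c and the
# fatigue exponent below are illustrative assumptions, not solder data.
def cycles_to_failure(strain_range, c=0.3, exponent=-0.5):
    return (strain_range / c) ** (1.0 / exponent)

# On a log-log plot this is a straight line, so scaling the strain range
# scales life by a fixed power: halving strain multiplies life by
# 2**(-1/exponent) = 4 for exponent = -0.5.
n1 = cycles_to_failure(0.02)
n2 = cycles_to_failure(0.01)
print(round(n2 / n1, 6))  # 4.0
```

This straight-line behavior in log-log coordinates is what lets diverse thermal-cycle environments (different amplitudes and cycle counts) be folded into a single equivalent-damage design requirement.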
NASA Astrophysics Data System (ADS)
Mansouri, E.; Feizi, F.; Karbalaei Ramezanali, A. A.
2015-07-01
Ground magnetic anomaly separation using the reduction-to-the-pole (RTP) technique and the fractal concentration-area (C-A) method has been applied to the Qoja-Kandi prospecting area in NW Iran. The ground magnetic data were acquired in a geophysical survey conducted for magnetic element exploration. First, the RTP technique was applied to recognize underground magnetic anomalies. The RTP anomalies were classified into different populations, but on this basis alone the determination of drilling points was complicated. Next, the C-A method was applied to the RTP magnetic anomalies (RTP-MA) to delineate magnetic susceptibility concentrations. This identification increased the resolution of the drilling-point determination and decreased the drilling risk, which matters given the economic costs of underground prospecting. In this study, the results of C-A modeling on the RTP-MA are compared with data from 8 boreholes. The results show a good correlation between anomalies derived via the C-A method and the borehole log reports. Two boreholes were drilled in a magnetic susceptibility concentration, identified by the multifractal modeling analyses, of between 63,533.1 and 66,296 nT. Drilling results show appropriate magnetite thickness with grades greater than 20% Fe total. Anomalies are also associated with andesite units hosting iron mineralization.
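The C-A method rests on counting the area (number of grid cells) above each concentration threshold and looking for straight-line segments on a log-log plot of threshold versus area. A minimal sketch on a synthetic grid (the distribution and its parameters are illustrative assumptions):

```python
import numpy as np

def concentration_area(values, thresholds):
    """C-A method core: A(s) = number of grid cells with value >= s.
    Straight-line segments of (log s, log A) separate anomaly
    populations; their breakpoints give classification thresholds."""
    values = np.asarray(values, dtype=float)
    return np.array([(values >= s).sum() for s in thresholds])

# Synthetic RTP magnetic grid (nT); distribution parameters are illustrative
rng = np.random.default_rng(0)
grid = rng.lognormal(mean=10.9, sigma=0.05, size=10_000)
thresholds = np.quantile(grid, [0.5, 0.9, 0.99])
areas = concentration_area(grid, thresholds)
```

A(s) is non-increasing in s by construction; in practice the breakpoints of the fitted log-log segments, rather than fixed quantiles, define the anomaly classes.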
Konaté, Ahmed Amara; Ma, Huolin; Pan, Heping; Qin, Zhen; Ahmed, Hafizullah Abba; Dembele, N'dji Dit Jacques
2017-10-01
A deep well that penetrates the Ultra High Pressure (UHP) metamorphic rocks is rare and consequently offers a unique chance to study these rocks. One such borehole, the Chinese Continental Scientific Drilling Main Hole, is located in the southern part of Donghai County in the Sulu UHP metamorphic belt of Eastern China. This study reports the results obtained from the analysis of oxide log data. A geochemical logging tool provides in situ gamma-ray spectroscopy measurements of major and trace elements in the borehole. The dry-weight-percent oxide concentration logs obtained for this study were SiO2, K2O, TiO2, H2O, CO2, Na2O, Fe2O3, FeO, CaO, MnO, MgO, P2O5 and Al2O3. Cross-plot and Principal Component Analysis methods were applied for lithology characterization and mineralogy description, respectively. Cross-plot analysis allows lithological variations to be characterized. Principal Component Analysis shows that the oxide logs can be summarized by two components related to the feldspar and hydrous minerals. This study has shown that geochemical logging data are accurate and adequate enough to be highly useful in the analysis of UHP metamorphic rocks.
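The Principal Component Analysis step can be sketched generically: standardize the oxide logs and take the SVD. The synthetic two-factor data below mimics the feldspar/hydrous structure the study reports, but is purely illustrative:

```python
import numpy as np

def principal_components(data, n_components=2):
    """PCA via SVD of the standardized data matrix (rows = depth samples,
    columns = oxide logs); returns component loadings and explained
    variance ratios."""
    z = (data - data.mean(axis=0)) / data.std(axis=0)
    u, s, vt = np.linalg.svd(z, full_matrices=False)
    return vt[:n_components], (s**2 / np.sum(s**2))[:n_components]

# Synthetic logs: two latent factors ("feldspar", "hydrous") drive 4 oxides
rng = np.random.default_rng(3)
feldspar = rng.normal(size=300)
hydrous = rng.normal(size=300)
logs = np.column_stack([
    60 + 5.0 * feldspar,   # SiO2-like
    3 + 1.2 * feldspar,    # K2O-like
    2 + 0.8 * hydrous,     # H2O-like
    1 + 0.5 * hydrous,     # CO2-like
]) + rng.normal(0, 0.05, (300, 4))
loadings, ratios = principal_components(logs)
```

With two latent factors driving the columns, the first two components capture nearly all the variance, which is the kind of two-component summary the abstract reports for the real oxide logs.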
Analytical methods in multivariate highway safety exposure data estimation
DOT National Transportation Integrated Search
1984-01-01
Three general analytical techniques which may be of use in extending, enhancing, and combining highway accident exposure data are discussed. The techniques are log-linear modelling, iterative proportional fitting, and expectation maximization.
Standard weight (Ws) equations for four rare desert fishes
Didenko, A.V.; Bonar, Scott A.; Matter, W.J.
2004-01-01
Standard weight (Ws) equations have been used extensively to examine body condition in sport fishes. However, development of these equations for nongame fishes has only recently been emphasized. We used the regression-line-percentile technique to develop standard weight equations for four rare desert fishes: flannelmouth sucker Catostomus latipinnis, razorback sucker Xyrauchen texanus, roundtail chub Gila robusta, and humpback chub G. cypha. The Ws equation for flannelmouth suckers of 100-690 mm total length (TL) was developed from 17 populations: log10Ws = -5.180 + 3.068 log10TL. The Ws equation for razorback suckers of 110-885 mm TL was developed from 12 populations: log10Ws = -4.886 + 2.985 log10TL. The Ws equation for roundtail chub of 100-525 mm TL was developed from 20 populations: log10Ws = -5.065 + 3.015 log10TL. The Ws equation for humpback chub of 120-495 mm TL was developed from 9 populations: log10Ws = -5.278 + 3.096 log10TL. These equations meet criteria for acceptable standard weight indexes and can be used to calculate relative weight, an index of body condition.
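These equations are applied by computing Ws for a fish's measured length and then the relative weight Wr = 100 W / Ws. The 400-mm length and 600-g observed weight below are hypothetical examples, not data from the study:

```python
import math

def standard_weight_g(total_length_mm: float, intercept: float, slope: float) -> float:
    """Ws from a log10-log10 standard weight equation:
    log10(Ws) = intercept + slope * log10(TL)."""
    return 10 ** (intercept + slope * math.log10(total_length_mm))

def relative_weight(weight_g: float, ws_g: float) -> float:
    """Relative weight Wr = 100 * W / Ws; values near 100 indicate
    average body condition."""
    return 100.0 * weight_g / ws_g

# Flannelmouth sucker equation from the abstract: log10Ws = -5.180 + 3.068 log10TL
ws = standard_weight_g(400, -5.180, 3.068)  # Ws for a hypothetical 400-mm fish
wr = relative_weight(600.0, ws)             # hypothetical observed weight of 600 g
```

A Wr well below 100 would flag a fish in poorer-than-average condition for its length.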
info-gibbs: a motif discovery algorithm that directly optimizes information content during sampling.
Defrance, Matthieu; van Helden, Jacques
2009-10-15
Discovering cis-regulatory elements in genome sequence remains a challenging issue. Several methods rely on the optimization of some target scoring function. The information content (IC) or relative entropy of the motif has proven to be a good estimator of transcription factor DNA binding affinity. However, these information-based metrics are usually used as a posteriori statistics rather than during the motif search process itself. We introduce here info-gibbs, a Gibbs sampling algorithm that efficiently optimizes the IC or the log-likelihood ratio (LLR) of the motif while keeping computation time low. The method compares well with existing methods like MEME, BioProspector, Gibbs or GAME on both synthetic and biological datasets. Our study shows that motif discovery techniques can be enhanced by directly focusing the search on the motif IC or the motif LLR. http://rsat.ulb.ac.be/rsat/info-gibbs
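The information content being optimized can be computed directly from a motif count matrix. This sketch assumes a uniform 0.25 background and a small hypothetical motif; it shows the score, not the Gibbs sampling itself:

```python
import math

def information_content(counts, bg=0.25):
    """Total information content (bits) of a motif count matrix:
    sum over positions of sum_b f_b * log2(f_b / bg), where f_b are
    the per-position base frequencies and bg a uniform background."""
    ic = 0.0
    for column in counts:          # one column per motif position (A, C, G, T)
        total = sum(column)
        for c in column:
            if c > 0:
                f = c / total
                ic += f * math.log2(f / bg)
    return ic

# Hypothetical 2-position motif from 8 aligned sites
counts = [
    [8, 0, 0, 0],   # fully conserved position: 2 bits
    [4, 4, 0, 0],   # two equiprobable bases: 1 bit
]
print(information_content(counts))  # 3.0
```

In info-gibbs this quantity (or the LLR) is the objective the sampler climbs, rather than a statistic computed after the search.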
Hyperspectral image reconstruction for x-ray fluorescence tomography
Gürsoy, Doǧa; Biçer, Tekin; Lanzirotti, Antonio; ...
2015-01-01
A penalized maximum-likelihood estimation is proposed to perform hyperspectral (spatio-spectral) image reconstruction for X-ray fluorescence tomography. The approach minimizes a Poisson-based negative log-likelihood of the observed photon counts and uses a penalty term that has the effect of encouraging local continuity of model parameter estimates in both the spatial and spectral dimensions simultaneously. The performance of the reconstruction method is demonstrated with experimental data acquired from a seed of Arabidopsis thaliana collected at the 13-ID-E microprobe beamline at the Advanced Photon Source. The resulting element distribution estimates with the proposed approach show significantly better reconstruction quality than conventional analytical inversion approaches, and allow for a high data compression factor which can reduce data acquisition times remarkably. In particular, this technique provides the capability to tomographically reconstruct full energy-dispersive spectra without introducing reconstruction artifacts that would impair the interpretation of results.
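The objective described (Poisson negative log-likelihood plus a continuity-encouraging penalty) can be sketched as follows. The quadratic difference penalty and the tiny identity forward model are simplifying assumptions for illustration, not the authors' exact spatio-spectral formulation:

```python
import numpy as np

def penalized_poisson_nll(x, counts, forward, beta):
    """Poisson negative log-likelihood, sum(lam - y*log(lam)), plus a
    quadratic penalty on differences of neighboring parameters that
    encourages local continuity (a simplified stand-in for the paper's
    spatio-spectral penalty)."""
    lam = forward @ x                          # expected photon counts
    nll = np.sum(lam - counts * np.log(lam + 1e-12))
    return nll + beta * np.sum(np.diff(x) ** 2)

# Tiny synthetic example: 3 parameters, identity forward model
forward = np.eye(3)
x = np.array([1.0, 1.2, 1.1])
counts = np.array([1.0, 1.0, 1.0])
val = penalized_poisson_nll(x, counts, forward, beta=0.5)
```

A larger beta trades data fidelity for smoother estimates, which is what suppresses artifacts at low photon counts.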
On the Rapid Computation of Various Polylogarithmic Constants
NASA Technical Reports Server (NTRS)
Bailey, David H.; Borwein, Peter; Plouffe, Simon
1996-01-01
We give algorithms for the computation of the d-th digit of certain transcendental numbers in various bases. These algorithms can be easily implemented (multiple precision arithmetic is not needed), require virtually no memory, and feature run times that scale nearly linearly with the order of the digit desired. They make it feasible to compute, for example, the billionth binary digit of log(2) or pi on a modest workstation in a few hours run time. We demonstrate this technique by computing the ten billionth hexadecimal digit of pi, the billionth hexadecimal digits of pi-squared, log(2) and log-squared(2), and the ten billionth decimal digit of log(9/10). These calculations rest on the observation that very special types of identities exist for certain numbers like pi, pi-squared, log(2) and log-squared(2). These are essentially polylogarithmic ladders in an integer base. A number of these identities that we derive in this work appear to be new, for example a critical identity for pi.
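The digit-extraction idea can be sketched for pi itself, using the Bailey-Borwein-Plouffe formula with modular exponentiation so that no high-precision arithmetic or large memory is needed:

```python
def pi_hex_digit(n: int) -> int:
    """n-th hexadecimal digit of pi after the point, via the
    Bailey-Borwein-Plouffe digit-extraction formula. Only modular
    exponentiation and O(1) memory are needed, which is what makes
    isolated billionth-digit computations feasible."""
    def series(j: int) -> float:
        # fractional part of sum_k 16^(n-1-k) / (8k + j)
        s = 0.0
        for k in range(n):
            s = (s + pow(16, n - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
        k, tail = n, 0.0
        while True:  # rapidly convergent tail, k >= n
            term = 16.0 ** (n - 1 - k) / (8 * k + j)
            if term < 1e-17:
                break
            tail += term
            k += 1
        return (s + tail) % 1.0

    frac = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    return int(frac * 16)

# pi = 3.243F6A88... in hexadecimal
print([pi_hex_digit(i) for i in range(1, 7)])  # [2, 4, 3, 15, 6, 10]
```

Computing the digit at position n costs O(n) modular exponentiations and essentially no memory, consistent with the nearly linear run-time scaling the abstract describes.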
Impact of Uncertainty on the Porous Media Description in the Subsurface Transport Analysis
NASA Astrophysics Data System (ADS)
Darvini, G.; Salandin, P.
2008-12-01
In the modelling of flow and transport phenomena in naturally heterogeneous media, the spatial variability of hydraulic properties, typically the hydraulic conductivity, is generally described by a variogram of constant sill and spatial correlation. While some analyses reported in the literature discuss spatial inhomogeneity related to a trend in the mean hydraulic conductivity, the effect on flow and transport of an inexact definition of the spatial statistical properties of the media has, as far as we know, never been taken into account. The relevance of this topic is manifest: it is related to the uncertainty in the definition of the spatial moments of the hydraulic log-conductivity from a (usually) small number of data, as well as to the modelling of flow and transport processes by the Monte Carlo technique, whose numerical fields have poor ergodic properties and are not strictly statistically homogeneous. In this work we investigate the effects of mean log-conductivity (logK) field behaviours that differ from the constant one due to different sources of inhomogeneity: i) a deterministic trend; ii) a deterministic sinusoidal pattern; iii) a random behaviour deriving from the hierarchical sedimentary architecture of porous formations; and iv) a conditioning procedure on available measurements of the hydraulic conductivity. These mean log-conductivity behaviours are superimposed on a correlated, weakly fluctuating logK field. The time evolution of the spatial moments of the plume driven by a statistically inhomogeneous steady-state random velocity field is analyzed in a 2-D finite domain, taking into account different sizes of the injection area. The problem is approached both by a classical Monte Carlo procedure and by the stochastic finite element method (SFEM). With the latter, the moments are obtained by space-time integration of the velocity field covariance structure derived according to a first-order Taylor series expansion.
Two different goals are foreseen: 1) from the results it will be possible to distinguish, in all the cases considered, the contribution to plume dispersion of the uncertainty in the statistics of the medium's hydraulic properties; and 2) we will try to highlight the loss of performance that seems to affect first-order approaches for transport phenomena taking place in hierarchical architectures of porous formations.
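The kind of inhomogeneous logK field described above (a deterministic trend superimposed on a correlated, weakly fluctuating field) can be sketched in one dimension. The exponential covariance, Cholesky sampling, and all parameter values are illustrative assumptions, not the authors' setup:

```python
import numpy as np

def logk_field(n, dx, corr_len, sigma, trend_slope, seed=0):
    """1-D synthetic log-conductivity: a deterministic linear trend in the
    mean plus a correlated zero-mean Gaussian fluctuation with exponential
    covariance, sampled via Cholesky factorization of the covariance."""
    x = np.arange(n) * dx
    cov = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))  # jitter for stability
    rng = np.random.default_rng(seed)
    return x, trend_slope * x + L @ rng.standard_normal(n)

x, logk = logk_field(n=200, dx=0.5, corr_len=5.0, sigma=0.3, trend_slope=0.02)
```

In a Monte Carlo study, many such realizations would be generated and the plume's spatial moments averaged across them; the finite domain and limited number of realizations are precisely what degrade ergodicity.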
Conjunctive patches subspace learning with side information for collaborative image retrieval.
Zhang, Lining; Wang, Lipo; Lin, Weisi
2012-08-01
Content-Based Image Retrieval (CBIR) has attracted substantial attention during the past few years for its potential practical applications to image management. A variety of Relevance Feedback (RF) schemes have been designed to bridge the semantic gap between the low-level visual features and the high-level semantic concepts for an image retrieval task. Various Collaborative Image Retrieval (CIR) schemes aim to utilize the user historical feedback log data with similar and dissimilar pairwise constraints to improve the performance of a CBIR system. However, existing subspace learning approaches with explicit label information cannot be applied for a CIR task, although subspace learning techniques play a key role in various computer vision tasks, e.g., face recognition and image classification. In this paper, we propose a novel subspace learning framework, i.e., Conjunctive Patches Subspace Learning (CPSL) with side information, for learning an effective semantic subspace by exploiting the user historical feedback log data for a CIR task. The CPSL can effectively integrate the discriminative information of labeled log images, the geometrical information of labeled log images, and the weakly similar information of unlabeled images to learn a reliable subspace. We formally formulate this problem as a constrained optimization problem and then present a new subspace learning technique to exploit the user historical feedback log data. Extensive experiments on both synthetic data sets and a real-world image database demonstrate the effectiveness of the proposed scheme in improving the performance of a CBIR system by exploiting the user historical feedback log data.
Mercury accumulation in periphyton of eight river ecosystems
Bell, A.H.; Scudder, B.C.
2007-01-01
In 2003, the U.S. Geological Survey (USGS) National Water-Quality Assessment (NAWQA) program and U.S. Environmental Protection Agency studied total mercury (THg) and methylmercury (MeHg) concentrations in periphyton at eight rivers in the United States in coordination with a larger USGS study on mercury cycling in rivers. Periphyton samples were collected using trace element clean techniques and NAWQA sampling protocols in spring and fall from targeted habitats (streambed surface-sediment, cobble, or woody snags) at each river site. A positive correlation was observed between concentrations of THg and MeHg in periphyton (r2 = 0.88, in log-log space). Mean MeHg and THg concentrations in surface-sediment periphyton were significantly higher (1,333 ng/m2 for MeHg and 53,980 ng/m2 for THg) than cobble (64 ng/m2 for MeHg and 1,192 ng/m2 for THg) or woody snag (71 ng/m2 for MeHg and 1,089 ng/m2 for THg) periphyton. Concentrations of THg in surface-sediment periphyton had a strong positive correlation with concentrations of THg in sediment (dry weight). The ratio of MeHg:THg in surface-sediment periphyton increased with the ratio of MeHg:THg in sediment. These data suggest periphyton may play a key role in mercury bioaccumulation in river ecosystems. © 2007 American Water Resources Association.
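The reported log-log correlation can be reproduced in miniature: take log10 of both concentration series and square the Pearson correlation. The values below are illustrative, loosely echoing the means quoted in the abstract, not the study's raw data:

```python
import numpy as np

def loglog_r2(x, y):
    """Squared Pearson correlation between log10(x) and log10(y),
    i.e. r^2 'in log-log space' as reported in the abstract."""
    r = np.corrcoef(np.log10(x), np.log10(y))[0, 1]
    return r * r

# Illustrative THg/MeHg pairs (ng/m2), loosely echoing the quoted means
thg = np.array([1192.0, 1089.0, 53980.0, 900.0, 40000.0])
mehg = np.array([64.0, 71.0, 1333.0, 50.0, 1100.0])
r2 = loglog_r2(thg, mehg)
```

Working in log space is natural here because the concentrations span orders of magnitude across habitats.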
Cheng, Jinjin; Ding, Changfeng; Li, Xiaogang; Zhang, Taolin; Wang, Xingxiang
2015-01-01
The effects of soil rare earth element (REE) content on navel orange quality and safety in rare earth ore areas have gained great attention. This study investigated the transfer characteristics of REE from soil to navel orange pulp (Citrus sinensis Osbeck cv. Newhall) and examined the effects of soil REE on internal fruit quality in Xinfeng County, Jiangxi province, China. Path analysis showed that soil REE, pH, cation exchange capacity (CEC), and Fe oxide (Feox) significantly affected pulp REE concentrations. A Freundlich-type prediction model for pulp REE was established: log[REEpulp] = -1.036 + 0.272 log[REEsoil] - 0.056 pH - 0.360 log[CEC] + 0.370 log[Feox] (n = 114, R2 = 0.60). From the prediction model, it was inferred that even when soil REE and Feox were as high as 1038 mg kg-1 and 96.4 g kg-1, respectively, and pH and CEC were as low as 3.75 and 5.08 cmol kg-1, respectively, pulp REE concentrations remained much lower than the food limit standard. Additionally, soil REE levels were significantly correlated with selected fruit quality indicators, including titratable acidity (r = 0.52, P < 0.01), total soluble solids (r = 0.48, P < 0.01) and vitamin C (r = 0.56, P < 0.01). Generally, under routine water and fertilization management, the cultivation of navel oranges in rare earth ore areas of south China with soil REE ranging from 38.6 to 546 mg kg-1 showed improved internal fruit quality.
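The fitted model can be evaluated directly. The sketch below assumes the logs in the equation are base 10 (as is conventional for such regressions) and plugs in the worst-case soil values quoted in the abstract:

```python
import math

def predicted_pulp_ree(ree_soil, ph, cec, fe_ox):
    """Pulp REE (mg/kg) from the abstract's Freundlich-type model,
    assuming base-10 logarithms throughout:
    log[REEpulp] = -1.036 + 0.272 log[REEsoil] - 0.056 pH
                   - 0.360 log[CEC] + 0.370 log[Feox]."""
    log_pulp = (-1.036 + 0.272 * math.log10(ree_soil) - 0.056 * ph
                - 0.360 * math.log10(cec) + 0.370 * math.log10(fe_ox))
    return 10 ** log_pulp

# Worst case quoted in the abstract: high soil REE and Feox, low pH and CEC
ree = predicted_pulp_ree(ree_soil=1038, ph=3.75, cec=5.08, fe_ox=96.4)
```

Even for this extreme combination the predicted pulp concentration is on the order of 1 mg/kg, consistent with the abstract's conclusion that pulp REE stays well below the food limit standard.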
An analysis of production and costs in high-lead yarding.
Magnus E. Tennas; Robert H. Ruth; Carl M. Berntsen
1955-01-01
In recent years loggers and timber owners have needed better information for estimating logging costs in the Douglas-fir region. Brandstrom's comprehensive study, published in 1933 (1), has long been used as a guide in making cost estimates. But the use of new equipment and techniques and an overall increase in logging costs have made it increasingly difficult to...
Using nonlinear quantile regression to estimate the self-thinning boundary curve
Quang V. Cao; Thomas J. Dean
2015-01-01
The relationship between tree size (quadratic mean diameter) and tree density (number of trees per unit area) has been a topic of research and discussion for many decades. Starting with Reineke in 1933, the maximum size-density relationship, on a log-log scale, has been assumed to be linear. Several techniques, including linear quantile regression, have been employed...
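The log-log boundary fit can be illustrated with a toy quantile regression. The sketch below uses a brute-force grid search over the pinball (check) loss rather than the linear-programming or nonlinear formulations used in practice, and the synthetic stand data and grids are assumptions for illustration only:

```python
import numpy as np

def pinball_loss(residuals, tau):
    """Quantile-regression check loss: tau*r for r >= 0, (tau-1)*r for r < 0."""
    return np.where(residuals >= 0, tau * residuals, (tau - 1) * residuals).sum()

def quantile_line(log_d, log_n, tau=0.99):
    """Fit log(N) = a + b*log(D) at quantile tau by brute-force grid search
    over (a, b): a toy stand-in for proper quantile regression."""
    best, best_ab = np.inf, (None, None)
    for b in np.linspace(-3.0, 0.0, 61):
        for a in np.linspace(0.0, 10.0, 201):
            loss = pinball_loss(log_n - (a + b * log_d), tau)
            if loss < best:
                best, best_ab = loss, (a, b)
    return best_ab

# Synthetic stands kept below a self-thinning boundary log N = 6 - 1.6 log D
rng = np.random.default_rng(1)
log_d = rng.uniform(1.0, 3.5, 300)
log_n = 6.0 - 1.6 * log_d - rng.exponential(0.4, 300)
a, b = quantile_line(log_d, log_n, tau=0.99)
```

Fitting a high quantile (here tau = 0.99) tracks the upper boundary of the size-density scatter rather than its mean, which is exactly why quantile regression suits the self-thinning problem.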
Heavy-Element Abundances in Blue Compact Galaxies
NASA Astrophysics Data System (ADS)
Izotov, Yuri I.; Thuan, Trinh X.
1999-02-01
We present high-quality ground-based spectroscopic observations of 54 supergiant H II regions in 50 low-metallicity blue compact galaxies with oxygen abundances 12+logO/H between 7.1 and 8.3. We use the data to determine abundances for the elements N, O, Ne, S, Ar, and Fe. We also analyze Hubble Space Telescope (HST) Faint Object Spectrograph archival spectra of 10 supergiant H II regions to derive C and Si abundances in a subsample of seven BCGs. The main result of the present study is that none of the heavy element-to-oxygen abundance ratios studied here (C/O, N/O, Ne/O, Si/O, S/O, Ar/O, Fe/O) depend on oxygen abundance for BCGs with 12+logO/H<=7.6 (Z<=Zsolar/20). This constancy implies that all of these heavy elements have a primary origin and are produced by the same massive (M>=10 Msolar) stars responsible for O production. The dispersion of the ratios C/O and N/O in these galaxies is found to be remarkably small, being only +/-0.03 and +/-0.02 dex, respectively. This very small dispersion is strong evidence against any time-delayed production of C and primary N in the lowest metallicity BCGs (secondary N production is negligible at these low metallicities). The absence of a time-delayed production of C and N is consistent with the scenario that galaxies with 12+logO/H<=7.6 are now undergoing their first burst of star formation, and that they are therefore young, with ages not exceeding 40 Myr. If very low metallicity BCGs are indeed young, this would argue against the commonly held belief that C and N are produced by intermediate-mass (3 Msolar<=M<=9 Msolar) stars at very low metallicities, as these stars would not have yet completed their evolution in these lowest metallicity galaxies. In higher metallicity BCGs (7.6<12+logO/H<8.2), the abundance ratios Ne/O, Si/O, S/O, Ar/O, and Fe/O retain the same constant value they had at lower metallicities. By contrast, there is an increase of C/O and N/O along with their dispersions at a given O. 
We interpret this increase as due to the additional contribution of C and primary N production in intermediate-mass stars, on top of that by high-mass stars. The above results lead to the following timeline for galaxy evolution: (1) all objects with 12+logO/H<=7.6 began to form stars less than 40 Myr ago; (2) after 40 Myr, all galaxies have evolved so that 12+logO/H>7.6; (3) by the time intermediate-mass stars have evolved and released their nucleosynthetic products (100-500 Myr), all galaxies have become enriched to 7.6<12+logO/H<8.2. The delayed release of primary N at these metallicities greatly increases the scatter in N/O; (4) later, when galaxies get enriched to 12+logO/H>8.2, secondary N production becomes important. BCGs show the same O/Fe overabundance with respect to the Sun (~0.4 dex) as Galactic halo stars, suggesting the same chemical enrichment history. We compare heavy-element yields derived from the observed abundance ratios with theoretical yields for massive stars and find generally good agreement. However, the theoretical models are unable to reproduce the observed N/O and Fe/O. Further theoretical developments are necessary, in particular to solve the problem of primary nitrogen production in low-metallicity massive stars. We discuss the apparent discrepancy between the abundance ratios N/O measured in BCGs and those in high-redshift damped Lyα galaxies, which are up to 1 order of magnitude smaller. We argue that this large discrepancy may arise from the unknown physical conditions of the gas responsible for the metallic absorption lines in high-redshift damped Lyα systems. While it is widely assumed that the absorbing gas is neutral, we propose that it could be ionized. In this case, ionization correction factors can boost N/O in damped Lyα galaxies into the range of those measured in BCGs.
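The metallicity threshold can be checked arithmetically: on the 12+logO/H scale, a difference of 1.3 dex from an assumed solar value of 8.9 (an approximate, epoch-dependent reference, not stated in the abstract) corresponds to a factor of 20:

```python
def oxygen_ratio_to_solar(twelve_log_oh: float, solar: float = 8.9) -> float:
    """O/H relative to solar on the 12+log(O/H) scale; the solar value
    of 8.9 is an assumed, epoch-dependent reference, not from the paper."""
    return 10 ** (twelve_log_oh - solar)

# The lowest-metallicity class, 12+log(O/H) <= 7.6, sits 1.3 dex below
# the assumed solar value, i.e. about Zsolar/20:
ratio = oxygen_ratio_to_solar(7.6)
print(round(1 / ratio))  # 20
```

This is why the abstract equates 12+logO/H<=7.6 with Z<=Zsolar/20.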
Compacting biomass waste materials for use as fuel
NASA Astrophysics Data System (ADS)
Zhang, Ou
Every year, biomass waste materials are produced in large quantity. The combustibles in biomass waste materials make up over 70% of the total waste. How to utilize these waste materials is important to the nation and the world. The purpose of this study is to test optimum processes and conditions for compacting a number of biomass waste materials to form a densified solid fuel for use at coal-fired power plants or ordinary commercial furnaces. Successful use of such fuel as a substitute for or in cofiring with coal not only solves a solid waste disposal problem but also reduces the release of gases from burning coal which cause health problems, acid rain, and global warming. The unique punch-and-die process developed at the Capsule Pipeline Research Center, University of Missouri-Columbia, was used for compacting the solid wastes, including waste paper, plastics (both film and hard products), textiles, leaves, and wood. The compaction was performed to produce strong compacts (biomass logs) at room temperature without binder and without preheating. The compaction conditions important to the commercial production of densified biomass fuel logs, including compaction pressure, pressure holding time, back pressure, moisture content, particle size, binder effects, and mold conditions, were studied and optimized. The properties of the biomass logs were evaluated in terms of physical, mechanical, and combustion characteristics. It was found that the compaction pressure and the initial moisture content of the biomass material play critical roles in producing high-quality biomass logs. Under optimized compaction conditions, biomass waste materials can be compacted into high-quality logs with a density of 0.8 to 1.2 g/cm3. The logs made from the combustible wastes have a heating value in the range of 6,000 to 8,000 Btu/lb, which is only slightly (10 to 30%) less than that of subbituminous coal.
To evaluate the feasibility of cofiring biomass logs with coal, burn tests were conducted in a stoker boiler. A separate test was carried out by burning biomass logs alone in an outdoor hot-water furnace heating a building. Based on a previous coal compaction study, the process of biomass compaction was studied numerically with a non-linear finite element code. A constitutive model with sufficient generality was adapted for biomass material to deal with pore contraction during compaction. A contact-node algorithm was applied to implement the effect of mold-wall friction in the finite element program. Numerical analyses were made to investigate the pressure distribution in a die normal to the axis of compaction, and to investigate the density distribution in a biomass log after compaction. The results of the analyses agree generally well with the theoretical analysis of coal log compaction, although assumptions had to be made about the variation of the elastic modulus of the material and the Poisson's ratio during the compaction cycle.
Chemical composition of δ Scuti stars: 1. AO CVn, CP Boo, KW Aur
NASA Astrophysics Data System (ADS)
Galeev, A. I.; Ivanova, D. V.; Shimansky, V. V.; Bikmaev, I. F.
2012-11-01
We used high-resolution echelle spectra acquired with the 1.5-m Russian-Turkish Telescope to determine the fundamental atmospheric parameters and abundances of 30 chemical elements for three δ Scuti stars: AO CVn, CP Boo, and KW Aur. The chemical compositions we find for these stars are similar to those of Am-star atmospheres, though some anomalies of up to 0.6-0.7 dex are observed for light and heavy elements. We consider the effect of the adopted stellar parameters (effective temperature, log g, microturbulent velocity) and of the amplitude of pulsational variations on the derived elemental abundances.
Transverse vibration techniques : logs to structural systems
Robert J. Ross
2008-01-01
Transverse vibration as a nondestructive testing and evaluation technique was first examined in the early 1960s. Initial research and development efforts focused on clear wood, lumber, and laminated products. Out of those efforts, tools were developed that are used today to assess lumber properties. Recently, use of this technique has been investigated for evaluating a...
Dale R. Waddell; Michael B. Lambert; W.Y. Pong
1984-01-01
The performance of the Bergstrom xylodensimeter, designed to measure the green density of wood, was investigated and compared with a technique that derived green densities from wood disk samples. In addition, log and bole weights of old-growth Douglas-fir and western hemlock were calculated by various formulas and compared with lifted weights measured with a load cell...
ERIC Educational Resources Information Center
Treves, Richard; Viterbo, Paolo; Haklay, Mordechai
2015-01-01
Research into virtual field trips (VFTs) started in the 1990s but, only recently, the maturing technology of devices and networks has made them viable options for educational settings. By considering an experiment, the learning benefits of logging the movement of students within a VFT are shown. The data are visualized by two techniques:…
Assessing wood quality of borer-infested red oak logs with a resonance acoustic technique
Xiping Wang; Henry E. Stelzer; Jan Wiedenbeck; Patricia K. Lebow; Robert J. Ross
2009-01-01
Large numbers of black oak (Quercus velutina Lam.) and scarlet oak (Quercus coccinea Muenchh.) trees are declining and dying in the Missouri Ozark forest as a result of oak decline. Red oak borer-infested trees produce low-grade logs that become extremely difficult to merchandize as the level of insect attack increases. The objective of this study was to investigate...
ERIC Educational Resources Information Center
Cho, Moon-Heum; Yoo, Jin Soung
2017-01-01
Many researchers who are interested in studying students' online self-regulated learning (SRL) have heavily relied on self-reported surveys. Data mining is an alternative technique that can be used to discover students' SRL patterns from large data logs saved on a course management system. The purpose of this study was to identify students' online…
Cable yarding residue after thinning young stands: a break-even simulation
Chris B. LeDoux
1984-01-01
The use of cable logging to extract small pieces of residue wood may result in low rates of production and a high cost per unit of wood produced. However, the logging manager can improve yarding productivity and break even in cable residue removal operations by using the proper planning techniques. In this study, breakeven zones for specific young-growth stands were...
Environmental corrections of a dual-induction logging while drilling tool in vertical wells
NASA Astrophysics Data System (ADS)
Kang, Zhengming; Ke, Shizhen; Jiang, Ming; Yin, Chengfang; Li, Anzong; Li, Junjian
2018-04-01
With the development of Logging While Drilling (LWD) technology, dual-induction LWD logging is widely applied not only in deviated and horizontal wells but also in vertical wells. Accordingly, it is necessary to simulate the response of LWD tools in vertical wells for logging interpretation. In this paper, the investigation characteristics of a dual-induction LWD tool and the effects of the tool structure, the skin effect, and the drilling environment are simulated by the three-dimensional (3D) finite element method (FEM). In order to closely simulate the actual situation, the real structure of the tool is taken into account. The results demonstrate that the influence of the background value of the tool structure can be eliminated: the values obtained after deducting the tool-structure background agree quantitatively with the analytical solution in homogeneous formations. The effect of measurement frequency can be effectively eliminated with a skin-effect correction chart. In addition, the measurement environment (borehole size, mud resistivity, shoulder beds, layer thickness, and invasion) affects the true resistivity. To eliminate these effects, borehole correction charts, shoulder-bed correction charts, and tornado charts are computed based on the real tool structure. Using these correction charts, well logging data can be corrected automatically by a suitable interpolation method, which is convenient and fast. Verified against actual logging data in vertical wells, this method can recover the true resistivity of the formation.
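The automatic chart-based correction amounts to interpolating tabulated correction factors. The chart values below are hypothetical, and np.interp's linear interpolation stands in for whatever scheme the interpretation software actually uses:

```python
import numpy as np

def borehole_correction(measured, diameter_in, chart_diams, chart_factors):
    """Correct a raw resistivity reading by linearly interpolating a
    correction-factor chart tabulated against borehole diameter."""
    return measured * np.interp(diameter_in, chart_diams, chart_factors)

# Hypothetical chart: the correction factor grows with hole size
chart_diams = np.array([6.0, 8.0, 10.0, 12.0])     # inches
chart_factors = np.array([1.00, 1.03, 1.08, 1.15])
rt = borehole_correction(20.0, 9.0, chart_diams, chart_factors)  # 20 ohm-m raw
```

Shoulder-bed and invasion (tornado) corrections follow the same pattern, just with charts indexed by more than one environmental variable.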
Investigation of methods and approaches for collecting and recording highway inventory data.
DOT National Transportation Integrated Search
2013-06-01
Many techniques for collecting highway inventory data have been used by state and local agencies in the U.S. These techniques include field inventory, photo/video log, integrated GPS/GIS mapping systems, aerial photography, satellite imagery, vir...
Flood frequency analysis using optimization techniques : final report.
DOT National Transportation Integrated Search
1992-10-01
This study consists of three parts. In the first part, a comprehensive investigation was made to find an improved estimation method for the log-Pearson type 3 (LP3) distribution by using optimization techniques. Ninety sets of observed Louisiana floo...
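As context for the LP3 estimation problem, the classical method-of-moments step (the baseline the optimization techniques aim to improve on) fits the mean, standard deviation, and skew of log10 flows. The flow values below are synthetic, not the Louisiana data:

```python
import numpy as np

def lp3_moments(flows):
    """Method-of-moments statistics for the log-Pearson type 3 (LP3)
    distribution: mean, standard deviation, and (bias-corrected) skew
    of log10 annual peak flows."""
    y = np.log10(np.asarray(flows, dtype=float))
    n = len(y)
    mean, std = y.mean(), y.std(ddof=1)
    skew = n / ((n - 1) * (n - 2)) * np.sum(((y - mean) / std) ** 3)
    return mean, std, skew

flows = [1200.0, 3400.0, 560.0, 7800.0, 2100.0, 990.0, 4300.0]  # synthetic peaks
m, s, g = lp3_moments(flows)
```

The skew estimate in particular is unstable for short records, which is one motivation for replacing moment estimators with optimization-based fits.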
NASA Astrophysics Data System (ADS)
Cao, Xiangyu; Le Doussal, Pierre; Rosso, Alberto; Santachiara, Raoul
2018-04-01
We study transitions in log-correlated random energy models (logREMs) that are related to the violation of a Seiberg bound in Liouville field theory (LFT): the binding transition and the termination point transition (a.k.a. pre-freezing). By means of the LFT-logREM mapping, replica symmetry breaking, and traveling-wave equation techniques, we unify both transitions in a two-parameter diagram, which describes the free-energy large deviations of logREMs with a deterministic background log potential, or equivalently, the joint moments of the free energy and Gibbs measure in logREMs without background potential. Under the LFT-logREM mapping, the transitions correspond to the competition of discrete and continuous terms in a four-point correlation function. Our results provide a statistical interpretation of a peculiar nonlocality of the operator product expansion in LFT. The results are rederived by a traveling-wave equation calculation, which shows that the features of LFT responsible for the transitions are reproduced in a simple model of diffusion with absorption. We also examine the problem by a replica symmetry breaking analysis, which complements the previous methods and reveals a rich large-deviation structure of the free energy of logREMs with a deterministic background log potential. Many results are verified in the integrable circular logREM by a replica-Coulomb gas integral approach. The related problem of the common length (overlap) distribution is also considered. We provide a traveling-wave equation derivation of the LFT predictions announced in a preceding work.
"Geo-statistics methods and neural networks in geophysical applications: A case study"
NASA Astrophysics Data System (ADS)
Rodriguez Sandoval, R.; Urrutia Fucugauchi, J.; Ramirez Cruz, L. C.
2008-12-01
The study focuses on the Ebano-Panuco basin of northeastern Mexico, which is being explored for hydrocarbon reservoirs. These reservoirs are in limestones, and there is interest in determining porosity and permeability in the carbonate sequences. The porosity maps presented in this study are estimated by applying multiattribute and neural network techniques, which combine geophysical logs and 3-D seismic data by means of statistical relationships. Multiattribute analysis is a process for predicting a volume of any underground petrophysical measurement from well-log and seismic data. The data consist of a series of target logs from wells that tie a 3-D seismic volume; the target logs are neutron porosity logs. From the 3-D seismic volume a series of sample attributes is calculated. The objective of this study is to derive a relationship between a set of attributes and the target log values, with the selected set determined by a process of forward stepwise regression. The analysis can be linear or nonlinear. In the linear mode the method consists of a series of weights derived by least-squares minimization. In the nonlinear mode, a neural network is trained using the selected attributes as inputs; in this case we used a probabilistic neural network (PNN). The method is applied to a real data set from PEMEX. For better reservoir characterization, the porosity distribution was estimated using both techniques. The case showed a continuous improvement in the prediction of porosity from the multiattribute to the neural network analysis. The improvement is in both the training and the validation, which are important indicators of the reliability of the results. The neural network showed an improvement in resolution over the multiattribute analysis, and the final maps provide a more realistic picture of the porosity distribution.
Paillet, Frederick L.; Hess, Alfred E.
1995-01-01
Two relatively new geophysical logging techniques, the digitally enhanced borehole acoustic televiewer and the heat-pulse flowmeter, were tested from 1987 to 1991 at two sites in Hawaii: Waipahu on the island of Oahu, and Pahoa on the island of Hawaii. Although these data were obtained in an effort to test and improve these two logging techniques, the measurements are of interest to hydrologists studying the aquifers in Hawaii. This report presents a review of the measurements conducted during this effort and summarizes the data obtained in a form designed to make that data available to hydrologists studying the movement of ground water in Hawaiian aquifers. Caliper logs obtained at the Waipahu site indicate the distribution of openings in interbed clinker zones between relatively dense and impermeable basalt flows. The flowmeter data indicate the pattern of flow induced along seven observation boreholes that provide conduits between interbed zones in the vicinity of the Mahoe Pumping Station at the Waipahu site. The televiewer image logs obtained in some of the Waipahu Mahoe boreholes do not show any significant vertical or steeply dipping fractures that might allow communication across the dense interior of basalt flows. Acoustic televiewer logs obtained at the Pahoa site show that a number of steeply dipping fractures and dikes cut across basalt flows. Although flow under ambient hydraulic-head conditions in the Waipahu Mahoe Observation boreholes is attributed to hydraulic gradients associated with pumping from a nearby pumping station, flow in the Waipio Deep Observation borehole on Oahu and flow in the Scientific Observation borehole on Hawaii are attributed to the effects of natural recharge and downward decreasing hydraulic heads associated with that recharge.
Barreto, Jackson; Barboni, Mirella T S; Feitosa-Santana, Claudia; Sato, João R; Bechara, Samir J; Ventura, Dora F; Alves, Milton Ruiz
2010-08-01
To compare intraocular straylight measurements and contrast sensitivity after wavefront-guided LASIK (WFG LASIK) in one eye and wavefront-guided photorefractive keratectomy (WFG PRK) in the fellow eye for myopia and myopic astigmatism correction. A prospective, randomized study of 22 eyes of 11 patients who underwent simultaneous WFG LASIK and WFG PRK (contralateral eye). Both groups were treated with the NIDEK Advanced Vision Excimer Laser System, and a microkeratome was used for flap creation in the WFG LASIK group. High and low contrast visual acuity, wavefront analysis, contrast sensitivity, and retinal straylight measurements were performed preoperatively and at 3, 6, and 12 months postoperatively. A third-generation straylight meter, C-Quant (Oculus Optikgeräte GmbH), was used for measuring intraocular straylight. Twelve months postoperatively, mean uncorrected distance visual acuity was -0.06 +/- 0.07 logMAR in the WFG LASIK group and -0.10 +/- 0.10 logMAR in the WFG PRK group. Mean preoperative intraocular straylight was 0.94 +/- 0.12 log(s) for the WFG LASIK group and 0.96 +/- 0.11 log(s) for the WFG PRK group. After 12 months, the mean straylight value was 1.01 +/- 0.10 log(s) for the WFG LASIK group and 0.97 +/- 0.12 log(s) for the WFG PRK group. No difference was found between techniques after 12 months (P = .306). No significant difference in photopic and mesopic contrast sensitivity between groups was noted. Intraocular straylight showed no statistically significant increase 1 year after WFG LASIK and WFG PRK. Higher order aberrations increased significantly after surgery for both groups. Nevertheless, WFG LASIK and WFG PRK yielded excellent visual acuity and contrast sensitivity performance without significant differences between techniques.
Acoustic waveform logging--Advances in theory and application
Paillet, F.L.; Cheng, C.H.; Pennington, W.D.
1992-01-01
Full-waveform acoustic logging has made significant advances in both theory and application in recent years, and these advances have greatly increased the capability of log analysts to measure the physical properties of formations. Advances in theory provide the analytical tools required to understand the properties of measured seismic waves, and to relate those properties to such quantities as shear and compressional velocity and attenuation, and primary and fracture porosity and permeability of potential reservoir rocks. The theory demonstrates that all parts of recorded waveforms are related to various modes of propagation, even in the case of dipole and quadrupole source logging. However, the theory also indicates that these mode properties can be used to design velocity and attenuation picking schemes, and shows how source frequency spectra can be selected to optimize results in specific applications. Synthetic microseismogram computations are an effective tool in waveform interpretation theory; they demonstrate how shear arrival picks and mode attenuation can be used to compute shear velocity and intrinsic attenuation, and formation permeability for monopole, dipole and quadrupole sources. Array processing of multi-receiver data offers the opportunity to apply even more sophisticated analysis techniques. Synthetic microseismogram data is used to illustrate the application of the maximum-likelihood method, semblance cross-correlation, and Prony's method analysis techniques to determine seismic velocities and attenuations. The interpretation of acoustic waveform logs is illustrated by reviews of various practical applications, including synthetic seismogram generation, lithology determination, estimation of geomechanical properties in situ, permeability estimation, and design of hydraulic fracture operations.
Sweep visually evoked potentials and visual findings in children with West syndrome.
de Freitas Dotto, Patrícia; Cavascan, Nívea Nunes; Berezovsky, Adriana; Sacai, Paula Yuri; Rocha, Daniel Martins; Pereira, Josenilson Martins; Salomão, Solange Rios
2014-03-01
West syndrome (WS) is a type of early childhood epilepsy characterized by progressive neurological development deterioration that includes vision. To demonstrate the clinical importance of grating visual acuity threshold (GVA) measurement by the sweep visually evoked potential technique (sweep-VEP) as a reliable tool for evaluation of the visual cortex status in WS children. This is a retrospective study of the best-corrected binocular GVA and ophthalmological features of WS children referred to the Laboratory of Clinical Electrophysiology of Vision of UNIFESP from 1998 to 2012 (Committee on Ethics in Research of UNIFESP n° 0349/08). The GVA deficit was calculated by subtracting the binocular GVA score (logMAR units) of each patient from the median values of age norms from our own lab and classified as mild (0.1-0.39 logMAR), moderate (0.40-0.80 logMAR) or severe (>0.81 logMAR). Associated ophthalmological features were also described. Data from 30 WS children (age from 6 to 108 months, median = 14.5 months, mean ± SD = 22.0 ± 22.1 months; 19 male) were analyzed. The majority presented severe GVA deficit (0.15-1.44 logMAR; mean ± SD = 0.82 ± 0.32 logMAR; median = 0.82 logMAR), poor visual behavior, high prevalence of strabismus and great variability in ocular positioning. The GVA deficit did not vary according to gender (P = .8022), WS type (P = .908), birth age (P = .2881), perinatal oxygenation (P = .7692), visual behavior (P = .8789), ocular motility (P = .1821), nystagmus (P = .2868), risk of drug-induced retinopathy (P = .4632) and participation in early visual stimulation therapy (P = .9010). The sweep-VEP technique is a reliable tool to classify visual system impairment in WS children, in agreement with the poor visual behavior exhibited by them. Copyright © 2013 European Paediatric Neurology Society. Published by Elsevier Ltd. All rights reserved.
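The deficit computation and severity bands described above reduce to simple arithmetic. The sketch below is illustrative only (the function names are mine, and the 0.81 logMAR boundary is treated inclusively to close the small gap the abstract's bands leave between 0.80 and 0.81):

```python
def gva_deficit(patient_logmar, norm_median_logmar):
    """GVA deficit: patient's binocular grating acuity (logMAR) minus
    the age-norm median (logMAR); larger values mean worse acuity."""
    return patient_logmar - norm_median_logmar

def classify_deficit(deficit):
    """Severity bands from the abstract: mild 0.10-0.39, moderate
    0.40-0.80, severe >0.81 logMAR (0.81 itself treated as severe)."""
    if deficit >= 0.81:
        return "severe"
    if deficit >= 0.40:
        return "moderate"
    if deficit >= 0.10:
        return "mild"
    return "within normal range"

# The cohort's median deficit of 0.82 logMAR falls in the severe band:
band = classify_deficit(0.82)
```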
Vermeulen, Roel; Coble, Joseph B; Yereb, Daniel; Lubin, Jay H; Blair, Aaron; Portengen, Lützen; Stewart, Patricia A; Attfield, Michael; Silverman, Debra T
2010-10-01
Diesel exhaust (DE) has been implicated as a potential lung carcinogen. However, the exact components of DE that might be involved have not been clearly identified. In the past, nitrogen oxides (NO(x)) and carbon oxides (CO(x)) were measured most frequently to estimate DE, but since the 1990s, the most commonly accepted surrogate for DE has been elemental carbon (EC). We developed quantitative estimates of historical exposure levels of respirable elemental carbon (REC) for an epidemiologic study of mortality, particularly lung cancer, among diesel-exposed miners by back-extrapolating 1998-2001 REC exposure levels using historical measurements of carbon monoxide (CO). The choice of CO was based on the availability of historical measurement data. Here, we evaluated the relationship of REC with CO and other current and historical components of DE from side-by-side area measurements taken in underground operations of seven non-metal mining facilities. The Pearson correlation coefficient of the natural log-transformed (Ln)REC measurements with the Ln(CO) measurements was 0.4. The correlation of REC with the other gaseous, organic carbon (OC), and particulate measurements ranged from 0.3 to 0.8. Factor analyses indicated that the gaseous components, including CO, together with REC, loaded most strongly on a presumed 'Diesel exhaust' factor, while the OC and particulate agents loaded predominantly on other factors. In addition, the relationship between Ln(REC) and Ln(CO) was approximately linear over a wide range of REC concentrations. The fact that CO correlated with REC, loaded on the same factor, and increased linearly in log-log space supported the use of CO in estimating historical exposure levels to DE.
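The core computation in the exposure evaluation above — a Pearson correlation between natural-log-transformed REC and CO measurements — can be sketched in plain Python. The numbers below are invented stand-ins, not the study's measurements:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical side-by-side area measurements (arbitrary units):
rec = [12.0, 30.0, 55.0, 90.0, 180.0]   # respirable elemental carbon
co = [0.8, 1.5, 2.1, 4.0, 6.5]          # carbon monoxide

# Correlate in log-log space, as in the study design:
r_ln = pearson_r([math.log(v) for v in rec],
                 [math.log(v) for v in co])
```

An approximately linear Ln(REC)-Ln(CO) relationship, as the study reports, shows up as a correlation near 1 on such log-transformed data.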
Choi, Ung-Kyu; Kim, Mi-Hyang; Lee, Nan-Hee
2007-11-01
This study was conducted to find the optimum extraction condition of Gold-Thread for antibacterial activity against Streptococcus mutans using the evolutionary operation (EVOP)-factorial design technique. Higher antibacterial activity was achieved at a higher extraction temperature (R2 = -0.79) and a longer extraction time (R2 = -0.71). Antibacterial activity was not affected by the ethanol concentration of the extraction solvent (R2 = -0.12). The maximum antibacterial activity of clove against S. mutans determined by the EVOP-factorial technique was obtained at 80 degrees C extraction temperature, 26 h extraction time, and 50% ethanol concentration. The population of S. mutans decreased from 6.110 log CFU/ml in the initial set to 4.125 log CFU/ml in the third set.
QUANTIFICATION OF IN-SITU GAS HYDRATES WITH WELL LOGS.
Collett, Timothy S.; Godbole, Sanjay P.; Economides, Christine
1984-01-01
This study evaluates in detail the expected theoretical log responses and the actual log responses within one stratigraphically controlled hydrate horizon in six wells spaced throughout the Kuparuk Oil Field. Detailed examination of the neutron porosity and sonic velocity responses within the horizon is included. In addition, the theoretical effect of the presence of hydrates on the neutron porosity and sonic velocity devices has been examined in order to correct for such an effect on the calculation of formation properties such as porosity and hydrate saturation. Also presented in the paper is a technique which allows the conclusive identification of a potential hydrate occurrence.
Asfahani, J; Ahmad, Z; Ghani, B Abdul
2018-07-01
An approach based on self-organizing map (SOM) artificial neural networks is proposed for interpreting nuclear and electrical well logging data. The well logging measurements of the Kodana well in southern Syria have been interpreted by applying the proposed approach. A lithological cross-section model of the basaltic environment has been derived, and four different kinds of basalt have consequently been distinguished: hard massive basalt, hard basalt, pyroclastic basalt, and altered basalt products (clay). The results obtained by SOM artificial neural networks are in good agreement with previously published results obtained by other techniques. The SOM approach was applied successfully in the case study of the Kodana well logging data and can therefore be recommended as a suitable and effective approach for handling large volumes of well logging data with the higher number of variables required for lithological discrimination purposes. Copyright © 2018 Elsevier Ltd. All rights reserved.
Borehole petrophysical chemostratigraphy of Pennsylvanian black shales in the Kansas subsurface
Doveton, J.H.; Merriam, D.F.
2004-01-01
Pennsylvanian black shales in Kansas have been studied on outcrop for decades as the core unit of the classic Midcontinent cyclothem. These shales appear to be highstand condensed sections in the sequence stratigraphic paradigm. Nuclear log suites provide several petrophysical measurements of rock chemistry that are a useful data source for chemostratigraphic studies of Pennsylvanian black shales in the subsurface. Spectral gamma-ray logs partition natural radioactivity between contributions by U, Th, and K sources. Elevated U contents in black shales can be related to reducing depositional environments, whereas the K and Th contents are indicators of clay-mineral abundance and composition. The photoelectric factor log measurement is a direct function of aggregate atomic number and so is affected by clay-mineral volume, clay-mineral iron content, and other black shale compositional elements. Neutron porosity curves are primarily a response to hydrogen content. Although good quality logs are available for many black shales, borehole washout features invalidate readings from the nuclear contact devices, whereas black shales thinner than tool resolution will be averaged with adjacent beds. Statistical analysis of nuclear log data between black shales in successive cyclothems allows systematic patterns of their chemical and petrophysical properties to be discriminated in both space and time. ?? 2004 Elsevier B.V. All rights reserved.
Designing from minimum to optimum functionality
NASA Astrophysics Data System (ADS)
Bannova, Olga; Bell, Larry
2011-04-01
This paper discusses a multifaceted strategy to link NASA Minimal Functionality Habitable Element (MFHE) requirements to a compatible growth plan, leading forward to evolutionary, deployable habitats, including outpost development stages. The discussion begins by reviewing fundamental geometric features inherent in small-scale, vertical and horizontal, pressurized module configuration options to characterize their applicability to stringent MFHE constraints. A scenario is proposed to incorporate a vertical-core MFHE concept into an expanded architecture, providing continuity of structural form and a logical path from "minimum" to "optimum" design of a habitable module. The paper describes how habitation and logistics accommodations could be pre-integrated into a common Hab/Log Module that serves both habitation and logistics functions. This is offered as a means to reduce unnecessary redundant development costs and to avoid EVA-intensive on-site adaptation and retrofitting requirements for augmented crew capacity. An evolutionary version of the hard-shell Hab/Log design would have an expandable middle section to afford larger living and working accommodations. In conclusion, the paper illustrates that a number of cargo missions referenced for NASA's 4.0.0 Lunar Campaign Scenario could be eliminated altogether to expedite progress and reduce budgets. The plan concludes with a vertical growth geometry that provides versatile and efficient site development opportunities using a combination of hard Hab/Log modules and a hybrid expandable "CLAM" (Crew Lunar Accommodations Module) element.
Limb bone allometry during postnatal ontogeny in non-avian dinosaurs
Kilbourne, Brandon M; Makovicky, Peter J
2010-01-01
Although the interspecific scaling of tetrapods is well understood, remarkably little work has been done on the ontogenetic scaling within tetrapod species, whether fossil or recent. Here the ontogenetic allometry of the femur, humerus, and tibia was determined for 23 species of non-avian dinosaur by regressing log-transformed length against log-transformed circumference for each bone using reduced major axis bivariate regression. The femora of large theropod species became more robust during ontogeny, whereas growth in the femora of sauropodomorphs and most ornithischians was not significantly different from isometry. Hadrosaur hindlimb elements became significantly more gracile during ontogeny. Scaling constants were higher in all theropods than in any non-theropod taxa. Such clear taxonomically correlated divisions were not evident in the ontogenetic allometry of the tibia and hindlimb bones did not scale uniformly within larger taxonomic groups. For taxa in which the ontogenetic allometry of the humerus was studied, only Riojasaurus incertus exhibited a significant departure from isometry. Using independent contrasts, the regression of femoral allometry against the log of adult body mass was found to have a significant negative correlation but such a relationship could not be established for other limb elements or growth parameters, mainly due to the small sample size. The intraspecific scaling patterns observed in dinosaurs and other amniotes do not support earlier hypotheses that intraspecific scaling differs between endothermic and ectothermic taxa. PMID:20557400
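The regression approach described above — fitting log-transformed length against log-transformed circumference by reduced major axis (RMA) regression — can be sketched as follows. This is a minimal textbook RMA implementation, not the authors' code, and the bone measurements are invented for illustration:

```python
import math

def rma_regression(x, y):
    """Reduced major axis (RMA) regression: the slope is the ratio of
    standard deviations sd(y)/sd(x), signed by the correlation, and the
    line passes through the bivariate mean."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((v - mx) ** 2 for v in x)
    syy = sum((v - my) ** 2 for v in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    r = sxy / math.sqrt(sxx * syy)
    slope = math.copysign(math.sqrt(syy / sxx), r)
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical femoral measurements (mm), log10-transformed as in the study:
circ = [math.log10(c) for c in (40, 55, 75, 100, 130)]
length = [math.log10(l) for l in (150, 200, 260, 330, 410)]
slope, intercept = rma_regression(circ, length)
```

Departure of the fitted slope from the isometric expectation is what distinguishes robust from gracile ontogenetic growth in such analyses.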
Scaling of near-wall flows in quasi-two-dimensional turbulent channels.
Samanta, D; Ingremeau, F; Cerbus, R; Tran, T; Goldburg, W I; Chakraborty, P; Kellay, H
2014-07-11
The law of the wall and the log law rule the near-wall mean velocity profile of three-dimensional turbulent flows. These well-known laws, which are validated by legions of experiments and simulations, may be universal. Here, using a soap-film channel, we report the first experimental test of these laws in quasi-two-dimensional turbulent channel flows under two disparate turbulent spectra. We find that despite the differences with three-dimensional flows, the laws prevail, albeit with notable distinctions: the two parameters of the log law are markedly distinct from their three-dimensional counterpart; further, one parameter (the von Kármán constant) is independent of the spectrum whereas the other (the offset of the log law) depends on the spectrum. Our results suggest that the classical theory of scaling in wall-bounded turbulence is incomplete wherein a key missing element is the link with the turbulent spectrum.
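For reference, the log law tested above has the classical form u+ = (1/κ) ln y+ + B. The sketch below uses the canonical three-dimensional values of κ and B as illustrative defaults; they are not the quasi-two-dimensional values measured in the paper:

```python
import math

def log_law(y_plus, kappa=0.41, B=5.0):
    """Mean-velocity log law of the wall: u+ = (1/kappa) * ln(y+) + B.
    kappa is the von Karman constant and B the offset; the defaults are
    the commonly quoted three-dimensional values, for illustration."""
    return math.log(y_plus) / kappa + B

# Dimensionless mean velocity at y+ = 100 under the canonical constants:
u_plus = log_law(100.0)
```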
Preliminary geological investigation of AIS data at Mary Kathleen, Queensland, Australia
NASA Technical Reports Server (NTRS)
Huntington, J. F.; Green, A. A.; Craig, M. D.; Cocks, T. D.
1986-01-01
The Airborne Imaging Spectrometer (AIS) was flown over granitic, volcanic, and calc-silicate terrain around the Mary Kathleen Uranium Mine in Queensland in a test of its mineralogical mapping capabilities. An analysis strategy and restoration and enhancement techniques were developed to process the 128-band AIS data. A preliminary analysis of one of three AIS flight lines shows that the data contain considerable spectral variation but are also contaminated by second-order leakage of radiation from the near-infrared region. This makes the recognition of expected spectral absorption shapes very difficult; the effect appears worst in terrains containing considerable vegetation. Techniques that attempt to predict this supplementary radiation, coupled with the log-residual analytical technique, show that expected mineral absorption spectra can be derived. These techniques suggest that, with additional refinement of the correction procedures, the Australian AIS data may be revised. Application of the log-residual analysis method has proved very successful on the Cuprite, Nevada, data set, highlighting the alunite, kaolinite, and SiOH mineralogy.
Teaching an Old Log New Tricks with Machine Learning.
Schnell, Krista; Puri, Colin; Mahler, Paul; Dukatz, Carl
2014-03-01
To most people, the log file would not be considered an exciting area in technology today. However, these relatively benign, slowly growing data sources can drive large business transformations when combined with modern-day analytics. Accenture Technology Labs has built a new framework that helps to expand existing vendor solutions to create new methods of gaining insights from these benevolent information springs. This framework provides a systematic and effective machine-learning mechanism to understand, analyze, and visualize heterogeneous log files. These techniques enable an automated approach to analyzing log content in real time, learning relevant behaviors, and creating actionable insights applicable in traditionally reactive situations. Using this approach, companies can now tap into a wealth of knowledge residing in log file data that is currently being collected but underutilized because of its overwhelming variety and volume. By using log files as an important data input into the larger enterprise data supply chain, businesses have the opportunity to enhance their current operational log management solution and generate entirely new business insights-no longer limited to the realm of reactive IT management, but extending from proactive product improvement to defense from attacks. As we will discuss, this solution has immediate relevance in the telecommunications and security industries. However, the most forward-looking companies can take it even further. How? By thinking beyond the log file and applying the same machine-learning framework to other log file use cases (including logistics, social media, and consumer behavior) and any other transactional data source.
First detection of hydrogen in the β Pictoris gas disk
NASA Astrophysics Data System (ADS)
Wilson, P. A.; Lecavelier des Etangs, A.; Vidal-Madjar, A.; Bourrier, V.; Hébrard, G.; Kiefer, F.; Beust, H.; Ferlet, R.; Lagrange, A.-M.
2017-03-01
The young and nearby star β Pictoris (β Pic) is surrounded by a debris disk composed of dust and gas known to host a myriad of evaporating exocomets, planetesimals and at least one planet. At an edge-on inclination, as seen from Earth, this system is ideal for debris disk studies, providing an excellent opportunity to use absorption spectroscopy to study the planet-forming environment. Using the Cosmic Origins Spectrograph (COS) instrument on the Hubble Space Telescope (HST) we observe the most abundant element in the disk, hydrogen, through the H I Lyman α (Ly-α) line. We present a new technique to decrease the contamination of the Ly-α line by geocoronal airglow in COS spectra. This Airglow Virtual Motion (AVM) technique allows us to shift the Ly-α line of the astrophysical target away from the contaminating airglow emission, revealing more of the astrophysical line profile. This new AVM technique, together with subtraction of an airglow emission map, allows us to analyse the shape of the β Pic Ly-α emission line profile and from it calculate the column density of neutral hydrogen surrounding β Pic. The column density of hydrogen in the β Pic stable gas disk at the stellar radial velocity is measured to be log(NH/1 cm2) < 18.5. The Ly-α emission line profile is found to be asymmetric and we propose that this is caused by H I falling in towards the star with a bulk radial velocity of 41 ± 6 km s-1 relative to β Pic and a column density of log(NH/1 cm2) = 18.6 ± 0.1. The high column density of hydrogen relative to the hydrogen content of CI chondrite meteorites indicates that the bulk of the hydrogen gas does not come from the dust in the disk. This column density reveals a hydrogen abundance much lower than solar, which excludes the possibility that the detected hydrogen could be a remnant of the protoplanetary disk or gas expelled by the star.
We hypothesise that the hydrogen gas observed falling towards the star arises from the dissociation of water originating from evaporating exocomets.
Felmy, Heather M.; Bennett, Kevin T.; Clark, Sue B.
2017-05-12
To gain insight on the role of mixed solvents on the thermodynamic driving forces for the complexation between trivalent f-elements and organic ligands, solution phase thermodynamic parameters were determined for Eu(III) complexation with 2-hydroxyisobutyric acid (HIBA) and 2-aminoisobutyric acid (AIBA) in mixed methanol (MeOH)-water and N,N-dimethylformamide (DMF)-water solvents. Included in this study were the determination of mixed solvent autoprotolysis constants (pKα) as well as the thermodynamic formation constants: log β, ΔG, ΔH, and ΔS, for ligand protonation and Eu(III)-ligand complexation utilizing potentiometry and calorimetry techniques. The results presented are conditional thermodynamic values determined at an ionic strength of 1.0 M NaClO4 and a temperature of 298 K. It was found that moving from an aqueous solution to a binary aqueous-organic solvent affected all solution equilibria to some degree and that the extent of change depended on both the type of mixed solvent and the ligand in each study. Here, the ability to understand and predict these changes in thermodynamic values as a function of solvent composition provides important information about the chemistry of the trivalent f-elements.
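The tabulated quantities are linked by the standard relations ΔG = −RT ln β = −RT ln(10)·log β and ΔG = ΔH − TΔS. A small sketch at the study's temperature of 298 K follows; the example log β value is hypothetical, not one of the reported constants:

```python
import math

R = 8.314   # gas constant, J / (mol K)
T = 298.0   # temperature used in the study, K

def delta_g_from_log_beta(log_beta):
    """Gibbs energy of complexation (J/mol) from a formation constant:
    dG = -RT ln(beta) = -RT ln(10) * log10(beta)."""
    return -R * T * math.log(10) * log_beta

def delta_s(delta_h, delta_g):
    """Entropy of complexation (J/mol/K) from dG = dH - T*dS."""
    return (delta_h - delta_g) / T

# Hypothetical formation constant log beta = 2.5:
dg = delta_g_from_log_beta(2.5)
```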
NASA Technical Reports Server (NTRS)
Karner, J. M.; Papike, J. J.; Shearer, C. K.; McKay, G.; Le, L.; Burger, P.
2007-01-01
Several studies, using different oxybarometers, have suggested that the variation of fO2 in martian basalts spans about 3 log units, from approx. IW-1 to IW+2. The relatively oxidized basalts (e.g., pyroxene-phyric Shergotty) are enriched in incompatible elements, while the relatively reduced basalts (e.g., olivine-phyric Y980459) are depleted in incompatible elements. A popular interpretation of these observations is that the martian mantle contains two reservoirs: (1) oxidized and enriched, and (2) reduced and depleted. The basalts are thus thought to represent mixing between these two reservoirs. Recently, Shearer et al. determined the fO2 of primitive olivine-phyric basalt Y980459 to be IW+0.9 using the partitioning of V between olivine and melt. In applying this technique to other basalts, Shearer et al. concluded that the martian mantle shergottite source was depleted and varied only slightly in fO2 (IW to IW+1). Thus the more oxidized, enriched basalts had assimilated a crustal component on their path to the martian surface. In this study we attempt to address the above debate on martian mantle fO2 using the partitioning of Cr and V into pyroxene in pyroxene-phyric basalt QUE 94201.
Hyatt, M.W.; Hubert, W.A.
2001-01-01
We developed a standard-weight (Ws) equation for brown trout (Salmo trutta) in lentic habitats by applying the regression-line-percentile technique to samples from 49 populations in North America. The proposed Ws equation is log10 Ws = -5.422 + 3.194 log10 TL, when Ws is in grams and TL is total length in millimeters. The English-unit equivalent is log10 Ws = -3.592 + 3.194 log10 TL, when Ws is in pounds and TL is total length in inches. The equation is applicable for fish of 140-750 mm TL. Proposed length-category standards to evaluate fish within populations are: stock, 200 mm (8 in); quality, 300 mm (12 in); preferred, 400 mm (16 in); memorable, 500 mm (20 in); and trophy, 600 mm (24 in).
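The proposed metric-unit Ws equation can be applied directly. The sketch below (function names and the example length are mine, not from the study) computes standard weight and the usual relative-weight condition index, Wr = 100·W/Ws:

```python
import math

def standard_weight_g(total_length_mm):
    """Standard weight (g) for lentic brown trout from the proposed
    metric-unit equation: log10 Ws = -5.422 + 3.194 * log10 TL.
    The equation is applicable for fish of 140-750 mm TL."""
    if not 140 <= total_length_mm <= 750:
        raise ValueError("Ws equation applies only to 140-750 mm TL")
    return 10 ** (-5.422 + 3.194 * math.log10(total_length_mm))

def relative_weight(weight_g, total_length_mm):
    """Relative weight, Wr = 100 * W / Ws, the condition index that
    standard-weight equations are typically used to compute."""
    return 100.0 * weight_g / standard_weight_g(total_length_mm)

# Standard weight for a 300 mm (quality-length) brown trout:
ws_300 = standard_weight_g(300)
```

For a 300 mm fish the equation gives a standard weight of roughly 309 g; a fish weighing exactly that amount has Wr = 100 by construction.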
NASA Astrophysics Data System (ADS)
Longo, M.; Keller, M.; Scaranello, M. A., Sr.; dos-Santos, M. N.; Xu, Y.; Huang, M.; Morton, D. C.
2017-12-01
Logging and understory fires are major drivers of tropical forest degradation, reducing carbon stocks and changing forest structure, composition, and dynamics. In contrast to deforested areas, sites that are disturbed by logging and fires retain some, albeit severely altered, forest structure and function. In this study we simulated selective logging using the Ecosystem Demography Model (ED-2) to investigate the impact of a broad range of logging techniques, harvest intensities, and recurrence cycles on the long-term dynamics of Amazon forests, including the magnitude and duration of changes in forest flammability following timber extraction. Model results were evaluated using eddy covariance towers at logged sites at the Tapajos National Forest in Brazil and data on long-term dynamics reported in the literature. ED-2 is able to reproduce both the fast (<5 yr) recovery of water and energy fluxes observed at the flux towers and the typical, field-observed decadal time scales for biomass recovery when no additional logging occurs. Preliminary results using the original ED-2 fire model show that the canopy cover loss of forests under high-intensity, conventional logging causes sufficient drying to support more intense fires. These results indicate that under intense degradation, forests may shift to novel disturbance regimes, severely reducing carbon stocks and inducing long-term changes in forest structure and composition from recurrent fires.
Helmet and shoulder pad removal in football players with unstable cervical spine injuries.
Dahl, Michael C; Ananthakrishnan, Dheera; Nicandri, Gregg; Chapman, Jens R; Ching, Randal P
2009-05-01
Football, one of the country's most popular team sports, is associated with the largest overall number of sports-related catastrophic cervical spine injuries in the United States (Mueller, 2007). Patient handling can be hindered by the protective sports equipment worn by the athlete, and improper stabilization of these patients can exacerbate neurologic injury. Because of the lack of consensus on the best method for equipment removal, a study was performed comparing three techniques: full body levitation, upper torso tilt, and log roll. These techniques were performed on an intact and a lesioned cervical spine cadaveric model simulating conditions in the emergency department. The levitation technique was found to produce motion in the anterior and right lateral directions. The tilt technique resulted in motions in the posterior and left lateral directions, and the log roll technique generated motions in the right lateral direction and showed the largest increase in instability when comparing the intact and lesioned specimens. These findings suggest that each method of equipment removal displays unique weaknesses that the practitioner should take into account, possibly on a patient-by-patient basis.
NASA Technical Reports Server (NTRS)
Waggoner, J. T.; Phinney, D. E. (Principal Investigator)
1981-01-01
Foreign Commodity Production Forecasting testing activities through June 1981 are documented. A log of test reports is presented. Standard documentation sets are included for each test. The documentation elements presented in each set are summarized.
ERIC Educational Resources Information Center
Schaffer, Nancy K.
1976-01-01
The development of a muralmaking project during a three-week summer institute in open education in Manhattan is described. Photographs and videotape provide a record to share with teachers and schools who wish to use muralmaking as a curriculum element. Preparation is recounted; participants' logs and community reactions are quoted. (AJ)
Robert Ross; John W. Forsman; John R. Erickson; Allen M. Brackley
2014-01-01
Stress-wave nondestructive evaluation (NDE) techniques are used widely in the forest products industry, from the grading of wood veneer to inspection of timber structures. Inspection professionals frequently use stress-wave NDE techniques to locate internal voids and decayed or deteriorated areas in large timbers. Although these techniques have proven useful, little...
Spectroscopy Made Easy: A New Tool for Fitting Observations with Synthetic Spectra
NASA Technical Reports Server (NTRS)
Valenti, J. A.; Piskunov, N.
1996-01-01
We describe a new software package that may be used to determine stellar and atomic parameters by matching observed spectra with synthetic spectra generated from parameterized atmospheres. A nonlinear least squares algorithm is used to solve for any subset of allowed parameters, which include atomic data (log gf and van der Waals damping constants), model atmosphere specifications (Teff, log g), elemental abundances, and radial, turbulent, and rotational velocities. LTE synthesis software handles discontiguous spectral intervals and complex atomic blends. As a demonstration, we fit 26 Fe I lines in the NSO Solar Atlas (Kurucz et al.), determining various solar and atomic parameters.
NASA Technical Reports Server (NTRS)
Asner, Gregory P.; Keller, Michael M.; Silva, Jose Natalino; Zweede, Johan C.; Pereira, Rodrigo, Jr.
2002-01-01
Major uncertainties exist regarding the rate and intensity of logging in tropical forests worldwide; these uncertainties severely limit economic, ecological, and biogeochemical analyses of these regions. Recent sawmill surveys in the Amazon region of Brazil show that the area logged is nearly equal to the total area deforested annually, but conversion of survey data to forest area, forest structural damage, and biomass estimates requires multiple assumptions about logging practices. Remote sensing could provide an independent means to monitor logging activity and to estimate the biophysical consequences of this land use. Previous studies have demonstrated that the detection of logging in Amazon forests is difficult, and no studies have developed either the quantitative physical basis or the remote sensing approaches needed to estimate the effects of various logging regimes on forest structure. A major reason for these limitations has been a lack of sufficient, well-calibrated optical satellite data, which in turn has impeded the development and use of physically-based, quantitative approaches for detection and structural characterization of forest logging regimes. We propose to use data from the EO-1 Hyperion imaging spectrometer to greatly increase our ability to estimate the presence and structural attributes of selective logging in the Amazon Basin. Our approach is based on four "biogeophysical indicators" not yet derived simultaneously from any satellite sensor: 1) green canopy leaf area index; 2) degree of shadowing; 3) presence of exposed soil; and 4) non-photosynthetic vegetation material. Airborne, field, and modeling studies have shown that the optical reflectance continuum (400-2500 nm) contains sufficient information to derive estimates of each of these indicators. Our ongoing studies in the eastern Amazon basin also suggest that these four indicators are sensitive to logging intensity.
Satellite-based estimates of these indicators should provide a means to quantify both the presence and degree of structural disturbance caused by various logging regimes. Our quantitative assessment of Hyperion hyperspectral and ALI multi-spectral data for the detection and structural characterization of selective logging in Amazonia will benefit from data collected through an ongoing project run by the Tropical Forest Foundation, within which we have developed a study of the canopy and landscape biophysics of conventional and reduced-impact logging. We will add to our base of forest structural information in concert with an EO-1 overpass. Using a photon transport model inversion technique that accounts for non-linear mixing of the four biogeophysical indicators, we will estimate these parameters across a gradient of selective logging intensity provided by conventional and reduced-impact logging sites. We will also compare our physically-based approach to both conventional (e.g., NDVI) and novel (e.g., SWIR-channel) vegetation indices as well as to linear mixture modeling methods. We will cross-compare these approaches using the Hyperion and ALI imagers to determine the strengths and limitations of these two sensors for applications of forest biophysics. This effort will yield the first physically-based, quantitative analysis of the detection and intensity of selective logging in Amazonia, comparing hyperspectral and improved multi-spectral approaches as well as inverse modeling, linear mixture modeling, and vegetation index techniques.
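The non-linear photon transport inversion proposed above is beyond a short sketch, but the linear mixture modeling named as a comparison method reduces to a least-squares solve for endmember fractions. In the sketch below the endmember spectra are random stand-ins, not real spectral libraries:

```python
import numpy as np

rng = np.random.default_rng(2)
n_bands = 50

# Columns stand for the four indicators: green canopy, shade,
# exposed soil, and non-photosynthetic vegetation.
endmembers = rng.uniform(0.0, 1.0, size=(n_bands, 4))

# Synthesize a pixel as a known mixture, then recover the fractions
# by ordinary least squares (noiseless case).
true_fractions = np.array([0.5, 0.2, 0.2, 0.1])
pixel = endmembers @ true_fractions

fractions, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
```

Real unmixing would add noise, a sum-to-one constraint, and non-negativity, but the core solve is this one line.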
NASA Astrophysics Data System (ADS)
Silversides, Katherine L.; Melkumyan, Arman
2017-03-01
Machine learning techniques such as Gaussian Processes can be used to identify stratigraphically important features in geophysical logs. The marker shales in the banded iron formation hosted iron ore deposits of the Hamersley Ranges, Western Australia, form distinctive signatures in the natural gamma logs. The identification of these marker shales is important for stratigraphic identification of unit boundaries for the geological modelling of the deposit. Each machine learning technique has unique properties that affect the results. For Gaussian Processes (GPs), the output values are inclined towards the mean value, particularly when there is not sufficient information in the library. The impact that these inclinations have on the classification can vary depending on the parameter values selected by the user. Therefore, when applying machine learning techniques, care must be taken to fit the technique to the problem correctly. This study focuses on optimising the settings and choices for training a GP system to identify a specific marker shale. We show that the final results converge even when different, but equally valid, starting libraries are used for the training. To analyse the impact on feature identification, GP models were trained so that the output was inclined towards a positive, neutral, or negative output. For this type of classification, the best results were obtained when the pull was towards a negative output. We also show that the GP output can be adjusted by using a standard deviation coefficient that changes the balance between certainty and accuracy in the results.
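The pull towards the prior mean described above can be sketched with a minimal zero-mean GP regressor in numpy. The "natural gamma" windows, labels, and kernel settings below are all synthetic, illustrative assumptions, not data or parameters from the study:

```python
import numpy as np

def rbf(A, B, length_scale=1.0):
    """Squared-exponential kernel between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

rng = np.random.default_rng(0)
depth = np.linspace(0.0, 1.0, 20)

def window(has_marker):
    """Synthetic gamma-log window; the marker shale adds a peak."""
    noise = rng.normal(0.0, 0.05, depth.size)
    peak = np.exp(-((depth - 0.5) ** 2) / 0.01) if has_marker else 0.0
    return noise + peak

X = np.array([window(i % 2 == 0) for i in range(40)])
y = np.where(np.arange(40) % 2 == 0, 1.0, -1.0)  # marker present -> +1

K = rbf(X, X) + 1e-2 * np.eye(len(X))            # kernel matrix + noise term
alpha = np.linalg.solve(K, y)

def predict(x_new):
    """GP posterior mean; far from the library it reverts to the prior mean 0."""
    return rbf(x_new[None, :], X) @ alpha

score_marker = predict(window(True))[0]   # near +1
score_blank = predict(window(False))[0]   # near -1
```

An input unlike anything in the training library produces near-zero kernel values, so the prediction collapses towards the prior mean — exactly the inclination the study tunes for.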
Visualization of usability and functionality of a professional website through web-mining.
Jones, Josette F; Mahoui, Malika; Gopa, Venkata Devi Pragna
2007-10-11
Functional interface design requires understanding of the information system structure and the user. Web logs record user interactions with the interface, and thus provide some insight into user search behavior and the efficiency of the search process. The present study uses a data-mining approach with techniques such as association rules, clustering, and classification to visualize the usability and functionality of a digital library through in-depth analyses of web logs.
Deschênes, Philippe; Chano, Frédéric; Dionne, Léa-Laurence; Pittet, Didier; Longtin, Yves
2017-08-01
The efficacy of the World Health Organization (WHO)-recommended handwashing technique against Clostridium difficile is uncertain, and whether it could be improved remains unknown. Also, the benefit of using a structured technique instead of an unstructured technique remains unclear. This study was a prospective comparison of 3 techniques (unstructured, WHO, and a novel technique dubbed WHO shortened repeated [WHO-SR]) to remove C difficile. Ten participants were enrolled and performed each technique. Hands were contaminated with 3 × 10^6 colony-forming units (CFU) of a nontoxigenic strain containing 90% spores. Efficacy was assessed using the whole-hand method. The relative efficacy of each technique and of a structured (either WHO or WHO-SR) vs an unstructured technique were assessed by Mann-Whitney U test and Wilcoxon signed-rank test. The median effectiveness of the unstructured, WHO, and WHO-SR techniques in log10 CFU reduction was 1.30 (interquartile range [IQR], 1.27-1.43), 1.71 (IQR, 1.34-1.91), and 1.70 (IQR, 1.54-2.42), respectively. The WHO-SR technique was significantly more efficacious than the unstructured technique (P = .01). Washing hands with a structured technique was more effective than washing with an unstructured technique (median, 1.70 vs 1.30 log10 CFU reduction, respectively; P = .007). A structured washing technique is more effective than an unstructured technique against C difficile. Copyright © 2017 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
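The reported efficacies are log10 reductions in colony-forming units (CFU); the arithmetic can be sketched as follows (the worked numbers come from the abstract, applied illustratively):

```python
import math

def log10_reduction(cfu_before: float, cfu_after: float) -> float:
    """Log10 reduction in colony-forming units after handwashing."""
    return math.log10(cfu_before) - math.log10(cfu_after)

# A 1.30-log reduction (the unstructured-technique median) of the
# 3 x 10^6 CFU inoculum would leave roughly 1.5 x 10^5 CFU.
remaining = 3e6 / 10 ** 1.30
```

So the apparently small gap between 1.30 and 1.70 log reductions corresponds to about 2.5 times fewer surviving organisms.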
Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model
NASA Astrophysics Data System (ADS)
Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley
2017-05-01
Disease mapping contains a set of statistical techniques that produce maps of rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measuring the relative risk of a disease is the Standardized Morbidity Ratio (SMR), the ratio of the observed to the expected number of cases in an area; it carries the greatest uncertainty when the disease is rare or the geographical area is small. Therefore, Bayesian models or statistical smoothing based on the log-normal model are introduced, which may solve the SMR problem. This study estimates the relative risk of bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and the log-normal model, which were fitted to data using WinBUGS software. The study starts with a brief review of these models, beginning with the SMR method and followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates than the classical method and can overcome the SMR problem when there are no observed bladder cancer cases in an area.
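The SMR itself is a simple ratio; a sketch with illustrative counts also shows the failure mode the log-normal model addresses: zero observed cases force the estimate to zero regardless of the expected count.

```python
def smr(observed: int, expected: float) -> float:
    """Standardized Morbidity Ratio: observed / expected cases in an area."""
    if expected <= 0:
        raise ValueError("expected count must be positive")
    return observed / expected

# Illustrative areas as (observed, expected) pairs, not the Libyan data.
areas = {"A": (12, 8.0), "B": (3, 6.0), "C": (0, 2.5)}
risks = {name: smr(o, e) for name, (o, e) in areas.items()}
# Area C gets relative risk 0 even though 2.5 cases were expected,
# which is the instability that smoothing models are meant to fix.
```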
HD 66051, an eclipsing binary hosting a highly peculiar, HgMn-related star.
Niemczura, Ewa; Hümmerich, Stefan; Castelli, Fiorella; Paunzen, Ernst; Bernhard, Klaus; Hambsch, Franz-Josef; Hełminiak, Krzysztof
2017-07-19
HD 66051 is an eclipsing system with an orbital period of about 4.75 d that exhibits out-of-eclipse variability with the same period. New multicolour photometric observations confirm the longevity of the secondary variations, which we interpret as a signature of surface inhomogeneities on one of the components. Using archival and newly acquired high-resolution spectra, we have performed a detailed abundance analysis. The primary component is a slowly rotating late B-type star (Teff = 12500 ± 200 K, log g = 4.0, v sin i = 27 ± 2 km s^-1) with a highly peculiar composition reminiscent of the singular HgMn-related star HD 65949, which seems to be its closest analogue. Some light elements, such as He, C, Mg, and Al, are depleted, while Si and P are enhanced. Except for Ni, all the iron-group elements, as well as most of the heavy elements, and in particular the rare-earth elements, are overabundant. The secondary component was estimated to be a slowly rotating A-type star (Teff ~ 8000 K, log g = 4.0, v sin i ~ 18 km s^-1). The unique configuration of HD 66051 opens up intriguing possibilities for future research, which might eventually and significantly contribute to the understanding of such diverse phenomena as atmospheric structure, mass transfer, magnetic fields, photometric variability, and the origin of chemical anomalies observed in HgMn stars and related objects.
29 CFR 1960.69 - Retention and updating of old forms.
Code of Federal Regulations, 2010 CFR
2010-07-01
....69 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR (CONTINUED) BASIC PROGRAM ELEMENTS FOR FEDERAL EMPLOYEE OCCUPATIONAL SAFETY AND HEALTH... continue to provide access to the data as though these forms were the OSHA Form 300 Log and Form 301...
Publications - GMC 376 | Alaska Division of Geological & Geophysical Surveys
Title: NWE Drill Logs for the Orange Hill Property, Nabesna Quadrangle, Alaska: 1973 and 1974 Drill Holes No. 112 through No. 123. Authors: Northwest Explorations.
Publications - GMC 389 | Alaska Division of Geological & Geophysical Surveys
Title: Core photographs, assay results, and 1988 drill logs from the Cominco DDH-1 through DDH-4 boreholes, Shadow Prospect, Tyonek Quadrangle, Alaska. Authors: Millrock.
Modeling cometary photopolarimetric characteristics with Sh-matrix method
NASA Astrophysics Data System (ADS)
Kolokolova, L.; Petrov, D.
2017-12-01
Cometary dust is dominated by particles of complex shape and structure, which are often considered as fractal aggregates. Rigorous modeling of light scattering by such particles, even using parallelized codes and NASA supercomputer resources, consumes considerable computer time and memory. We present a new approach to modeling cometary dust that is based on the Sh-matrix technique (e.g., Petrov et al., JQSRT, 112, 2012). This method builds on the T-matrix technique (e.g., Mishchenko et al., JQSRT, 55, 1996) and was developed after it was found that the shape-dependent factors could be separated from the size- and refractive-index-dependent factors and presented as a shape matrix, or Sh-matrix. Size and refractive-index dependences are incorporated through analytical operations on the Sh-matrix to produce the elements of the T-matrix. The Sh-matrix method keeps all the advantages of the T-matrix method, including analytical averaging over particle orientation. Moreover, the surface integrals describing the Sh-matrix elements themselves can be solved analytically for particles of any shape. This makes the Sh-matrix approach an effective technique for simulating light scattering by particles of complex shape and surface structure. In this paper, we model cometary dust as an ensemble of Gaussian random particles whose shape is described by a log-normal distribution of radius length and direction (Muinonen, EMP, 72, 1996). By changing one parameter of this distribution, the correlation angle, from 0 to 90 deg., we can model a variety of particles from spheres to particles of random complex shape. We survey the angular and spectral dependencies of intensity and polarization resulting from light scattering by such particles, studying how they depend on particle shape, size, and composition (including porous particles to simulate aggregates) to find the best fit to the cometary observations.
2009-09-01
A power-law equation (H = kF^c) is shown to fit the Knoop data quite well for the materials tested: aluminum oxynitride (AlON), silicon carbide, aluminum oxide, and boron carbide. A plot of log10(HK) vs. log10(F) yielded easily comparable straight lines, whose slope and intercept characterize each material; a representative fit gave HK = 24.183 F^-0.0699 (R² = 0.97).
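The power-law fit reduces to a straight line in log-log space. A short sketch with synthetic data (the constants are illustrative round numbers, not the report's fitted values):

```python
import numpy as np

# Synthetic Knoop-style data obeying H = k * F**c.
k_true, c_true = 24.0, -0.07
F = np.array([0.5, 1.0, 2.0, 5.0, 10.0])   # indentation load, N
H = k_true * F ** c_true                   # hardness, GPa

# log10 H = log10 k + c * log10 F, so a linear fit recovers both constants:
# the slope is c and 10**intercept is k.
slope, intercept = np.polyfit(np.log10(F), np.log10(H), 1)
k_fit, c_fit = 10 ** intercept, slope
```

With noiseless data the fit recovers k and c exactly; with real indentation data the residuals around the line give the R² quoted above.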
Paillet, F.L.
1995-01-01
Hydraulic properties of heterogeneous fractured aquifers are difficult to characterize, and such characterization usually requires equipment-intensive and time-consuming applications of hydraulic testing in situ. Conventional coring and geophysical logging techniques provide useful and reliable information on the distribution of bedding planes, fractures, and solution openings along boreholes, but it is often unclear how these locally permeable features are organized into larger-scale zones of hydraulic conductivity. New borehole flow-logging equipment provides techniques designed to identify hydraulically active fractures intersecting boreholes, and to indicate how these fractures might be connected to larger-scale flow paths in the surrounding aquifer. Potential complications in interpreting flowmeter logs include: 1) ambient hydraulic conditions that mask the detection of hydraulically active fractures; 2) inability to maintain quasi-steady drawdowns during aquifer tests, which causes temporal variations in flow intensity to be confused with inflows during pumping; and 3) effects of uncontrolled background variations in hydraulic head, which also complicate the interpretation of inflows during aquifer tests. Application of these techniques is illustrated by the analysis of cross-borehole flowmeter data from an array of four bedrock boreholes in granitic schist at the Mirror Lake, New Hampshire, research site. Only two days of field operations were required to unambiguously identify the few fractures or fracture zones that contribute most inflow to boreholes in the CO borehole array during pumping. Such information was critical in the interpretation of water-quality data. This information also permitted the setting of the available string of two packers in each borehole so as to return the aquifer as close to pre-drilling conditions as possible with the available equipment.
Spent Fuel Test-Climax: core logging for site investigation and instrumentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilder, D.G.; Yow, J.L. Jr.; Thorpe, R.K.
1982-05-28
As an integral part of the Spent Fuel Test-Climax, 5150 ft (1570 m) of granite core was obtained. This core was diamond drilled in various sizes, mainly 38-mm and 76-mm diameters. The core was taken with single-tube core barrels and was unoriented. Techniques used to drill and log this core are discussed, as well as techniques to orient the core. Of the 5150 ft (1570 m) of core, more than 3645 ft (1111 m) was retained and logged in some detail. As a result of the core logging, geologic discontinuities were identified, and joint frequency and spacing were characterized. Discontinuities identified included several joint sets, shear zones, and faults. Correlations based on coring alone were generally found to be impossible, even for the more prominent features. The only feature properly correlated from the exploratory drilling was the fault system at the end of the facility, but it was not identified from the exploratory core as a fault. Identification of discontinuities was later helped by underground mapping that identified several different joint sets with different characteristics. It was found that joint frequency varied from 0.3 to 1.1 joints per foot of core for open fractures and from 0.3 to 3.3 joints per foot for closed or healed fractures. Histograms of fracture spacing indicate that there is likely a random distribution of spacing superimposed upon uniformly spaced fractures. It was found that a low-angle joint set had a persistent mean orientation. These joints were healed and had pervasive wall-rock alteration, which made identification of joints in this set possible. The recognition of a joint set with known attitude allowed orientation of much of the core. This orientation technique was found to be effective. 10 references, 25 figures, 4 tables.
Partitioning of fluorotelomer alcohols to octanol and different sources of dissolved organic carbon.
Carmosini, Nadia; Lee, Linda S
2008-09-01
Interest in the environmental fate of fluorotelomer alcohols (FTOHs) has spurred efforts to understand their equilibrium partitioning behavior. Experimentally determined partition coefficients for FTOHs between soil/water and air/water have been reported, but direct measurements of partition coefficients for dissolved organic carbon (DOC)/water (K(doc)) and octanol/water (K(ow)) have been lacking. Here we measured the partitioning of 8:2 and 6:2 FTOH between one or more types of DOC and water using enhanced solubility or dialysis bag techniques, and also quantified K(ow) values for 4:2 to 8:2 FTOH using a batch equilibration method. The range in measured log K(doc) values for 8:2 FTOH using the enhanced solubility technique with DOC derived from two soils, two biosolids, and three reference humic acids is 2.00-3.97, with the lowest values obtained for the biosolids and an average across all other DOC sources (biosolid DOC excluded) of 3.54 +/- 0.29. For 6:2 FTOH and Aldrich humic acid, a log K(doc) value of 1.96 +/- 0.45 was measured using the dialysis technique. These average values are approximately 1 to 2 log units lower than previously, indirectly estimated K(doc) values. Overall, the affinity for DOC tends to be slightly lower than that for particulate soil organic carbon. Measured log K(ow) values for 4:2 (3.30 +/- 0.04), 6:2 (4.54 +/- 0.01), and 8:2 FTOH (5.58 +/- 0.06) were in good agreement with previously reported estimates. Using relationships between experimentally measured partition coefficients and C-atom chain length, we estimated K(doc) and K(ow) values for shorter and longer chain FTOHs, respectively, that we were unable to measure experimentally.
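The chain-length extrapolation described in the last sentence amounts to a linear fit of log K(ow) against fluorinated chain length. The sketch below uses the measured values quoted in the abstract; the 10:2 extrapolation is purely illustrative and is not a value reported by the study:

```python
import numpy as np

# Measured log Kow from the abstract: 4:2 (3.30), 6:2 (4.54), 8:2 (5.58) FTOH.
chain_length = np.array([4.0, 6.0, 8.0])
log_kow = np.array([3.30, 4.54, 5.58])

slope, intercept = np.polyfit(chain_length, log_kow, 1)

def log_kow_estimate(n_carbons: float) -> float:
    """Linear extrapolation; only meaningful near the measured 4-8 range."""
    return slope * n_carbons + intercept

estimate_10_2 = log_kow_estimate(10.0)  # hypothetical 10:2 FTOH
```

Each additional pair of fluorinated carbons adds roughly 1.1 log units here (slope about 0.57 per carbon), consistent with the strongly hydrophobic trend the measured values show.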
Stumm, Frederick; Chu, Anthony; Monti, Jack
2004-01-01
Advanced borehole-geophysical techniques were used to assess the geohydrology of crystalline bedrock in 20 boreholes on the southern part of Manhattan Island, N.Y., in preparation for construction of a third water tunnel for New York City. The borehole-logging techniques included natural gamma, single-point resistance, short-normal resistivity, mechanical and acoustic caliper, magnetic susceptibility, borehole-fluid temperature and resistivity, borehole-fluid specific conductance, dissolved oxygen, pH, redox, heat-pulse flowmeter (at selected boreholes), borehole deviation, acoustic and optical televiewer, and borehole radar (at selected boreholes). Hydraulic head and specific-capacity test data were collected from 29 boreholes. The boreholes penetrated gneiss, schist, and other crystalline bedrock that has an overall southwest- to northwest-dipping foliation. Most of the fractures penetrated are nearly horizontal or have moderate- to high-angle northwest or eastward dip azimuths. Foliation dip within the potential tunnel-construction zone is northwestward and southeastward in the proposed North Water-Tunnel, northwestward to southwestward in the proposed Midtown Water-Tunnel, and northwestward to westward in the proposed South Water-Tunnel. Fracture population dip azimuths are variable. Heat-pulse flowmeter logs obtained under pumping and nonpumping (ambient) conditions, together with other geophysical logs, indicate transmissive fracture zones in each borehole. The 60-megahertz directional borehole-radar logs delineated the location and orientation of several radar reflectors that did not intersect the projection of the borehole. Fracture indexes range from 0.12 to 0.93 fractures per foot of borehole. Analysis of specific-capacity tests from each borehole indicated that transmissivity ranges from 2 to 459 feet squared per day; the highest transmissivity is at the Midtown Water-Tunnel borehole (E35ST-D).
(F)UV Spectral Analysis of 15 Hot, Hydrogen-Rich Central Stars of PNe
NASA Astrophysics Data System (ADS)
Ziegler, Marc
2013-07-01
The aim of this thesis was the precise determination of basic stellar parameters and metal abundances for a sample of 15 ionizing stars of gaseous nebulae. Strategic lines of metals for the expected parameter range are located in the ultraviolet (UV) and far-ultraviolet (FUV) range. Thus, high-resolution, high-S/N UV and FUV observations obtained with the Hubble Space Telescope (HST) and the Far Ultraviolet Spectroscopic Explorer (FUSE) were used for the analysis. For the calculation of the necessary spectral energy distributions the Tübingen NLTE Model-Atmosphere Package (TMAP) was used. The model atmospheres included most elements from H - Ni in order to account for line-blanketing effects. For each object a small grid of model atmospheres was calculated. As the interstellar medium (ISM) imprints its influence on the Space Telescope Imaging Spectrograph (STIS) and especially the FUSE range, the program OWENS was employed to calculate the interstellar absorption features. Both the photospheric model spectral energy distribution (SED) and the ISM models were combined to enable the identification of most of the observed absorption lines. The analyzed sample covers a range of 70 kK < Teff < 136 kK and surface gravities of log (g/cm/sec^2) = 5.4 - 7.4, thus representing different stages of stellar evolution. For a large number of elements, abundances were determined for the first time in these objects. Lines of C, N, O, F, Ne, Si, P, S, and Ar allowed the corresponding abundances to be determined. Lines of Ca, Sc, Ti, and V could not be found in any of the objects. Only a few objects were rich in Cr, Mn, Fe, Co, and Ni lines. Most of the analyzed stars exhibited only lines of Fe (ionization stages V - VIII) from the iron-group elements. No signs of gravitational settling (the gravitational force exceeds the radiation pressure and elements begin to sink from the atmosphere into deeper layers) were found.
This is expected, as the surface gravities of the sample are still too small for gravitational settling to begin. For the elements C, N, O, Si, P, and S we find increasing abundances with increasing log(Teff^4/g), while the abundances of Ar and Fe decrease. The latter is unexpected, as the higher the Teff^4/g ratio, the more the radiative force dominates the gravitational force and, thus, the elements should be kept in the atmosphere. The determined abundances were compared with previous literature values, with abundances predicted from diffusion calculations, with abundances from Asymptotic Giant Branch (AGB) nucleosynthesis calculations, and, if available, with abundances found for the corresponding nebulae. The agreement was of mixed quality. The derived Teff and log g values confirmed some literature values while others had to be revised (e.g. for LSS 1362 and NGC 1360). However, most of them agree with the previous literature values within the error limits. No difference in Teff can be found between DAO and O(H)-type stars, but O(H)-type stars have a lower log g (5.4 - 6.0) compared to the DAOs (6.5 - 7.4). The exception is the O(H)-type central star of the planetary nebula (CSPN) of Lo 1 with log g = 7.0. A comparison of the positions of each object with stellar evolutionary tracks for post-AGB stars in the log Teff - log g diagram led to the respective stellar masses. The derived mean mass of the analyzed sample (M = 0.536 ± 0.023 Msol) agrees within the error limits with the expected mean mass for these objects. In the literature M = 0.638 - 0.145 Msol can be found for DA-type white dwarfs, the immediate successors of DAO-type white dwarfs. For two objects (A 35, Sh 2-174) extremely low masses were found. For A 35 the derived mass (M_A35 = 0.523 ± 0.05 Msol) lies at the lower end of possible masses predicted for post-AGB stars.
The very low mass of Sh 2-174 (M_Sh 2-174 = 0.395 ± 0.05 Msol) points to Sh 2-174 being a post-extended horizontal branch (EHB) star and not a CSPN. If a stellar mass is too low, it is impossible for the star to reach the thermally pulsing AGB phase and, thus, to develop a planetary nebula (PN). Post-EHB stars evolve directly from the Horizontal Branch (HB) to the white dwarf (WD) cooling sequence. The low masses of A 35 and Sh 2-174 support literature works that classify the two corresponding nebulae as ionized H II regions and not as PNe.
Frequency Response of a Protein to Local Conformational Perturbations
Eren, Dilek; Alakent, Burak
2013-01-01
Signals created by local perturbations are known to propagate long distances through proteins via backbone connectivity and nonbonded interactions. In the current study, signal propagation from the flexible ligand binding loop to the rest of Protein Tyrosine Phosphatase 1B (PTP1B) was investigated using frequency response techniques. Using restrained Targeted Molecular Dynamics (TMD) potential on WPD and R loops, PTP1B was driven between its crystal structure conformations at different frequencies. Propagation of the local perturbation signal was manifested via peaks at the fundamental frequency and upper harmonics of 1/f distributed spectral density of atomic variables, such as Cα atoms, dihedral angles, or polar interaction distances. Frequency of perturbation was adjusted high enough (simulation length >∼10×period of a perturbation cycle) not to be clouded by random diffusional fluctuations, and low enough (<∼0.8 ns−1) not to attenuate the propagating signal and enhance the contribution of the side-chains to the dissipation of the signals. Employing Discrete Fourier Transform (DFT) to TMD simulation trajectories of 16 cycles of conformational transitions at periods of 1.2 to 5 ns yielded Cα displacements consistent with those obtained from crystal structures. Identification of the perturbed atomic variables by statistical t-tests on log-log scale spectral densities revealed the extent of signal propagation in PTP1B, while phase angles of the filtered trajectories at the fundamental frequency were used to cluster collectively fluctuating elements. Hydrophobic interactions were found to have a higher contribution to signal transduction between side-chains compared to the role of polar interactions. Most of in-phase fluctuating residues on the signaling pathway were found to have high identity among PTP domains, and located over a wide region of PTP1B including the allosteric site. 
Due to its simplicity and efficiency, the suggested technique may find wide application in the identification of signaling pathways in different proteins. PMID:24086121
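The core of the frequency-response analysis, detecting a peak at the driving frequency in the spectral density of a trajectory, can be sketched as follows. This is a minimal illustration on a synthetic noisy signal, not the authors' simulation pipeline; the driving period, time step, and noise level are invented for the example:

```python
import numpy as np

def spectral_density(x, dt):
    """One-sided power spectral density of a trajectory via the DFT."""
    n = len(x)
    X = np.fft.rfft(x - x.mean())
    psd = (np.abs(X) ** 2) * dt / n
    freqs = np.fft.rfftfreq(n, d=dt)
    return freqs, psd

# Driven signal: 16 cycles at period T, buried in white noise
rng = np.random.default_rng(0)
dt, T, n_cycles = 0.01, 2.0, 16            # ns per frame, ns per cycle
t = np.arange(0, n_cycles * T, dt)
x = np.sin(2 * np.pi * t / T) + rng.normal(0.0, 1.0, len(t))

freqs, psd = spectral_density(x, dt)
f_peak = freqs[np.argmax(psd[1:]) + 1]     # skip the zero-frequency bin
```

With 16 full cycles in the window, the driving frequency 1/T falls exactly on a DFT bin, so the peak stands out sharply above the noise floor, which is the same reason the abstract ties simulation length to the perturbation period.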
Practical life log video indexing based on content and context
NASA Astrophysics Data System (ADS)
Tancharoen, Datchakorn; Yamasaki, Toshihiko; Aizawa, Kiyoharu
2006-01-01
Today, multimedia information plays an important role in daily life, and people can use imaging devices to capture their visual experiences. In this paper, we present our personal Life Log system, which records personal experiences in the form of wearable video and environmental data; in addition, an efficient retrieval system is demonstrated to recall the desired media. We summarize practical video indexing techniques based on Life Log content and context, detecting talking scenes using audio/visual cues and semantic key frames from GPS data. Voice annotation is also demonstrated as a practical indexing method. Moreover, we apply body media sensors to record continuous lifestyle data and use these data to index the semantic key frames. In the experiments, we demonstrate various video indexing results that provide semantic content, and we show Life Log visualizations for examining one's personal life effectively.
NASA Astrophysics Data System (ADS)
Valdebenito, Galo; Tonon, Alessia; Iroume, Andrés; Alvarado, David; Fuentes, Carlos; Picco, Lorenzo; Lenzi, Mario
2016-04-01
To date, the study of in-stream wood in rivers has focused mainly on quantifying wood pieces deposited above the ground. However, in some river systems the presence of buried dead wood can also represent an important component of wood recruitment and budgeting dynamics. This is the case of the Blanco River (Southern Chile), severely affected by the eruption of Chaitén Volcano, which occurred between 2008 and 2009. The high pyroclastic sediment deposition and transport affected the channel and the adjacent forest, burying wood logs and standing trees. The aim of this contribution is to assess the presence and distribution of wood in two study areas (483 m2 and 1989 m2, respectively) located along the lower streambank of the Blanco River and covered by pyroclastic deposits up to 5 m thick. The study areas were surveyed using two different devices, a Terrestrial Laser Scanner (TLS) and a Ground Penetrating Radar (GPR). The first was used to scan the surface, achieving a high point cloud density (≈ 2000 points m-2) which allowed us to identify and measure the wood volume. The second was used to characterize the internal morphology of the volcanic deposits and to detect the presence and spatial distribution of buried wood up to a depth of 4 m. Preliminary results demonstrate differences in the number and volume of above-ground wood between the two study areas. In the first there were 43 wood elements (33 standing trees and 10 logs) with a total volume of 2.96 m3 (109.47 m3 km-1), whereas the second was characterized by just 7 standing trees and 11 wood pieces, for a total of 0.77 m3 (7.73 m3 km-1). The dimensions of the wood elements varied greatly according to typology: standing trees showed the higher median diameter and length (0.15 m and 2.91 m, respectively), whereas the wood logs were smaller (0.06 m and 1.12 m, respectively).
The small dimensions of the deposited wood can probably be connected to its origin, suggesting that these elements were generated by toppling and breaking of surrounding dead trees. Results obtained with the GPR confirm the ability of this instrument to localize the presence and distribution of buried wood. From the 3-D analysis it was possible to assess the spatial distribution and to estimate, as a first approach, the volume of the buried wood, which represents approximately 0.04% of the entire volcanic deposit. Further analysis will focus on additional GPR calibration with different wood sizes for a more accurate estimation of the volume. Knowledge of the overall wood amount stored in a fluvial system that can be remobilized over time represents an essential factor for better forest and river management actions.
Copy-move forgery detection utilizing Fourier-Mellin transform log-polar features
NASA Astrophysics Data System (ADS)
Dixit, Rahul; Naskar, Ruchira
2018-03-01
In this work, we address the problem of region duplication or copy-move forgery detection in digital images, along with detection of geometric transforms (rotation and rescale) and postprocessing-based attacks (noise, blur, and brightness adjustment). Detection of region duplication, following conventional techniques, becomes more challenging when an intelligent adversary brings about such additional transforms on the duplicated regions. In this work, we utilize Fourier-Mellin transform with log-polar mapping and a color-based segmentation technique using K-means clustering, which help us to achieve invariance to all the above forms of attacks in copy-move forgery detection of digital images. Our experimental results prove the efficiency of the proposed method and its superiority to the current state of the art.
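The key property exploited by Fourier-Mellin approaches is that a log-polar resampling turns rotation and scaling about the image centre into simple translations along the angular and log-radial axes. A minimal nearest-neighbour log-polar mapping is sketched below; the grid sizes and test image are arbitrary choices for illustration, not the authors' implementation:

```python
import numpy as np

def log_polar(img, n_rho=64, n_theta=64):
    """Resample an image onto a log-polar grid (nearest neighbour), so that
    rotation and scaling about the centre become translations."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    rho_max = np.log(min(cy, cx))
    rho = np.exp(np.linspace(0.0, rho_max, n_rho))        # log-spaced radii
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    ys = (cy + rho[:, None] * np.sin(theta)).round().astype(int)
    xs = (cx + rho[:, None] * np.cos(theta)).round().astype(int)
    return img[np.clip(ys, 0, h - 1), np.clip(xs, 0, w - 1)]

# Toy image: a bright square around the centre
img = np.zeros((65, 65))
img[20:45, 20:45] = 1.0
lp = log_polar(img)
```

In a full Fourier-Mellin pipeline the log-polar map is applied to the Fourier magnitude spectrum, so the resulting translations can be recovered by phase correlation.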
Environmental and Genetic Factors Explain Differences in Intraocular Scattering.
Benito, Antonio; Hervella, Lucía; Tabernero, Juan; Pennos, Alexandros; Ginis, Harilaos; Sánchez-Romera, Juan F; Ordoñana, Juan R; Ruiz-Sánchez, Marcos; Marín, José M; Artal, Pablo
2016-01-01
To study the relative impact of genetic and environmental factors on the variability of intraocular scattering within a classical twin study. A total of 64 twin pairs, 32 monozygotic (MZ) (mean age: 54.9 ± 6.3 years) and 32 dizygotic (DZ) (mean age: 56.4 ± 7.0 years), were measured after a complete ophthalmologic exam had been performed to exclude all ocular pathologies, such as cataracts, that increase intraocular scatter. Intraocular scattering was evaluated using two different techniques based on estimation of the straylight parameter log(S): a compact optical instrument based on the principle of optical integration, and a psychophysical measurement. Intraclass correlation coefficients (ICC) were used as descriptive statistics of twin resemblance, and genetic models were fitted to estimate heritability. No statistically significant difference was found between the MZ and DZ groups for age (P = 0.203), best-corrected visual acuity (P = 0.626), cataract gradation (P = 0.701), sex (P = 0.941), optical log(S) (P = 0.386), or psychophysical log(S) (P = 0.568), with only a minor difference in equivalent sphere (P = 0.008). Intraclass correlation coefficients between siblings were similar for the scatter parameters: 0.676 in MZ and 0.471 in DZ twins for optical log(S); 0.533 in MZ and 0.475 in DZ twins for psychophysical log(S). For equivalent sphere, ICCs were 0.767 in MZ and 0.228 in DZ twins. Conservative estimates of heritability for the measured scattering parameters were 0.39 and 0.20, respectively. Correlations of intraocular scatter (straylight) parameters in the groups of identical and nonidentical twins were similar. Heritability estimates were of limited magnitude, suggesting that both genetic and environmental factors determine the variance of ocular straylight in healthy middle-aged adults.
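The reported heritabilities can be roughly reproduced from the published twin correlations with Falconer's classical approximation, h² ≈ 2(r_MZ − r_DZ). Note that the authors fitted formal genetic models, so this back-of-the-envelope check gives values close to, but not identical with, the reported conservative estimates:

```python
def falconer_heritability(icc_mz, icc_dz):
    """Classical twin-study approximation: h^2 ~ 2 * (r_MZ - r_DZ)."""
    return 2.0 * (icc_mz - icc_dz)

# ICCs as reported in the abstract
h2_optical = falconer_heritability(0.676, 0.471)   # optical log(S)
h2_psycho = falconer_heritability(0.533, 0.475)    # psychophysical log(S)
```

Falconer's formula gives about 0.41 for optical log(S), in line with the reported 0.39; the model-fitted estimate of 0.20 for the psychophysical measure differs more, which is expected since model fitting weighs the data differently than this simple contrast.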
Kasurinen, Stefanie; Jalava, Pasi I; Happo, Mikko S; Sippula, Olli; Uski, Oskari; Koponen, Hanna; Orasche, Jürgen; Zimmermann, Ralf; Jokiniemi, Jorma; Hirvonen, Maija-Riitta
2017-05-01
According to the World Health Organization, particulate emissions from the combustion of solid fuels caused more than 110,000 premature deaths worldwide in 2010. Log wood combustion is the most prevalent form of residential biomass heating in developed countries, but it is unknown how the type of wood logs used in furnaces influences the chemical composition of the particulate emissions and their toxicological potential. We burned logs of birch, beech and spruce, which are commonly used as firewood in Central and Northern Europe, in a modern masonry heater, and compared them to the particulate emissions from an automated pellet boiler fired with softwood pellets. We determined the chemical composition (elements, ions, and carbonaceous compounds) of the particulate emissions with a diameter of less than 1 µm and tested their cytotoxicity, genotoxicity, inflammatory potential, and ability to induce oxidative stress in a human lung epithelial cell line. The chemical composition of the samples differed significantly, especially with regard to the carbonaceous and metal contents. The toxic effects in our tested endpoints also varied considerably between each of the three log wood combustion samples, as well as between the log wood combustion samples and the pellet combustion sample. The difference in the toxicological potential of the samples in the various endpoints indicates the involvement of different pathways of toxicity depending on the chemical composition. All three emission samples from the log wood combustions were considerably more toxic in all endpoints than the emissions from the pellet combustion. © 2016 Wiley Periodicals, Inc. Environ Toxicol 32: 1487-1499, 2017.
Implementation of polyatomic MCTDHF capability
NASA Astrophysics Data System (ADS)
Haxton, Daniel; Jones, Jeremiah; Rescigno, Thomas; McCurdy, C. William; Ibrahim, Khaled; Williams, Sam; Vecharynski, Eugene; Rouet, Francois-Henry; Li, Xiaoye; Yang, Chao
2015-05-01
The implementation of the Multiconfiguration Time-Dependent Hartree-Fock method for polyatomic molecules using a cartesian product grid of sinc basis functions will be discussed. The focus will be on two key components of the method: first, the use of a resolution-of-the-identity approximation; second, the use of established techniques for triple Toeplitz matrix algebra using fast Fourier transform over distributed memory architectures (MPI 3D FFT). The scaling of two-electron matrix element transformations is converted from O(N⁴) to O(N log N) by including these components. Here N = n³, with n the number of points on a side. We test the preliminary implementation by calculating absorption spectra of small hydrocarbons, using approximately 16-512 points on a side. This work is supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under the Early Career program, and by the offices of BES and Advanced Scientific Computing Research, under the SciDAC program.
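The O(N⁴) to O(N log N) conversion rests on the fact that a Toeplitz (translation-invariant) operator acts as a convolution, which an FFT applies in O(N log N). A one-dimensional sketch of the idea is below; the paper uses distributed 3-D FFTs over a triple Toeplitz structure, and the kernel and sizes here are arbitrary:

```python
import numpy as np

def apply_toeplitz_fft(kernel, x):
    """Apply the lower-triangular Toeplitz operator y_i = sum_{j<=i} k_{i-j} x_j
    in O(N log N) via zero-padded FFT convolution."""
    n = len(x)
    m = 2 * n  # zero-pad to avoid circular wrap-around
    y = np.fft.ifft(np.fft.fft(kernel, m) * np.fft.fft(x, m)).real
    return y[:n]

rng = np.random.default_rng(1)
n = 64
k = rng.normal(size=n)   # first column of the Toeplitz operator
x = rng.normal(size=n)

y_fft = apply_toeplitz_fft(k, x)

# Direct O(N^2) reference: explicit lower-triangular Toeplitz matrix
T = np.array([[k[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)])
y_direct = T @ x
```

The same replacement of an explicit matrix product by an FFT-based convolution, applied along each of the three grid dimensions, is what collapses the two-electron transformation cost.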
A new stochastic algorithm for inversion of dust aerosol size distribution
NASA Astrophysics Data System (ADS)
Wang, Li; Li, Feng; Yang, Ma-ying
2015-08-01
Dust aerosol size distribution is an important source of information about atmospheric aerosols, and it can be determined from multiwavelength extinction measurements. This paper describes a stochastic inverse technique based on the artificial bee colony (ABC) algorithm to invert the dust aerosol size distribution by the light extinction method. The direct problems for the size distributions of water drops and dust particles, which are the main elements of atmospheric aerosols, are solved by Mie theory and the Lambert-Beer law in the multispectral region. Then, the parameters of three widely used functions, i.e. the log-normal distribution (L-N), the Junge distribution (J-J), and the normal distribution (N-N), which provide the most useful representations of aerosol size distributions, are inverted by the ABC algorithm in the dependent model. Numerical results show that the ABC algorithm can be successfully applied to recover the aerosol size distribution with high feasibility and reliability, even in the presence of random noise.
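For reference, the log-normal (L-N) number size distribution that the inversion parameterizes has the standard form n(r) = N/(√(2π) r ln σ_g) · exp(−ln²(r/r_m)/(2 ln²σ_g)). The sketch below evaluates it and checks that it integrates back to the total number N; the parameter values are arbitrary examples, not values from the paper:

```python
import numpy as np

def lognormal_nr(r, n_total, r_m, sigma_g):
    """Log-normal (L-N) aerosol number size distribution dN/dr."""
    return (n_total / (np.sqrt(2 * np.pi) * r * np.log(sigma_g))
            * np.exp(-np.log(r / r_m) ** 2 / (2 * np.log(sigma_g) ** 2)))

r = np.logspace(-3, 2, 20000)                  # radius grid, µm
dndr = lognormal_nr(r, n_total=1.0, r_m=0.5, sigma_g=1.8)

# Trapezoidal integration over the radius grid should recover n_total
total = float(np.sum(0.5 * (dndr[1:] + dndr[:-1]) * np.diff(r)))
```

In the inversion itself, the ABC algorithm searches over (N, r_m, σ_g) so that the modelled multiwavelength extinction best matches the measurements.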
A method to evaluate hydraulic fracture using proppant detection.
Liu, Juntao; Zhang, Feng; Gardner, Robin P; Hou, Guojing; Zhang, Quanying; Li, Hu
2015-11-01
Accurate determination of the proppant placement and propped fracture height is important for evaluating and optimizing stimulation strategies. A technique using non-radioactive proppant and a pulsed neutron gamma energy spectra logging tool to determine the placement and height of propped fractures is proposed. Gd2O3 was incorporated into ceramic proppant, and a Monte Carlo method was utilized to build the logging tool and formation models. The characteristic responses of the recorded information of different logging tools to fracture widths, proppant concentrations and influencing factors were studied. The results show that Gd capture gamma rays can be used to evaluate propped fractures; this approach has higher sensitivity to changes in fracture width and traceable proppant content than the existing non-radioactive proppant evaluation techniques, and only an after-fracture measurement is needed for the new method. Changes in gas saturation and borehole size have a great impact on determining propped fractures when compensated neutron and pulsed neutron capture tools are used. A field example is presented to validate the application of the new technique. Copyright © 2015 Elsevier Ltd. All rights reserved.
Eliminating log rolling as a spine trauma order.
Conrad, Bryan P; Rossi, Gianluca Del; Horodyski, Mary Beth; Prasarn, Mark L; Alemi, Yara; Rechtine, Glenn R
2012-01-01
Currently, up to 25% of patients with spinal cord injuries may experience neurologic deterioration during the initial management of their injuries. Therefore, more effective procedures need to be established for the transportation and care of these patients to reduce the risk of secondary neurologic damage. Here, we present more acceptable methods to minimize motion in the unstable spine during the management of patients with traumatic spine injuries. This review summarizes more than a decade of research aimed at evaluating different methods of caring for patients with spine trauma. The most commonly utilized technique to transport spinal cord injured patients, the log rolling maneuver, produced more motion than placing a patient on a spine board, removing a spine board, performing continuous lateral therapy, and positioning a patient prone for surgery. Alternative maneuvers that produced less motion included the straddle lift and slide, 6+ lift and slide, scoop stretcher, mechanical kinetic therapy, mechanical transfers, and the use of the operating table to rotate the patient to the prone position for surgical stabilization. The log roll maneuver should be removed from the trauma response guidelines for patients with suspected spine injuries, as it creates significantly more motion in the unstable spine than the readily available alternatives. The only exception is the patient who is found prone, in which case the patient should be log rolled directly onto the spine board utilizing a push technique.
Bicknell, Jake E; Struebig, Matthew J; Davies, Zoe G; Baraloto, Christopher
2015-01-01
Over 20% of the world's tropical forests have been selectively logged, and large expanses are allocated for future timber extraction. Reduced-impact logging (RIL) is being promoted as best practice forestry that increases sustainability and lowers CO2 emissions from logging, by reducing collateral damage associated with timber extraction. RIL is also expected to minimize the impacts of selective logging on biodiversity, although this is yet to be thoroughly tested. We undertake the most comprehensive study to date to investigate the biodiversity impacts of RIL across multiple taxonomic groups. We quantified birds, bats and large mammal assemblage structures, using a before-after control-impact (BACI) design across 20 sample sites over a 5-year period. Faunal surveys utilized point counts, mist nets and line transects and yielded >250 species. We examined assemblage responses to logging, as well as partitions of feeding guild and strata (understorey vs. canopy), and then tested for relationships with logging intensity to assess the primary determinants of community composition. Community analysis revealed little effect of RIL on overall assemblages, as structure and composition were similar before and after logging, and between logging and control sites. Variation in bird assemblages was explained by natural rates of change over time, and not logging intensity. However, when partitioned by feeding guild and strata, the frugivorous and canopy bird ensembles changed as a result of RIL, although the latter was also associated with change over time. Bats exhibited variable changes post-logging that were not related to logging, whereas large mammals showed no change at all. Indicator species analysis and correlations with logging intensities revealed that some species exhibited idiosyncratic responses to RIL, whilst abundance change of most others was associated with time. Synthesis and applications. 
Our study demonstrates the relatively benign effect of reduced-impact logging (RIL) on birds, bats and large mammals in a neotropical forest context, and therefore, we propose that forest managers should improve timber extraction techniques more widely. If RIL is extensively adopted, forestry concessions could represent sizeable and important additions to the global conservation estate – over 4 million km2. PMID:25954054
Finite element model updating using the shadow hybrid Monte Carlo technique
NASA Astrophysics Data System (ADS)
Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.
2015-02-01
Recent research in the field of finite element model (FEM) updating advocates the adoption of Bayesian analysis techniques to deal with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the posterior distribution function, which may not be available in analytical form. This is the case in FEM updating. In such cases, sampling methods can provide good approximations of the posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the systems (the size of the parameter space). The Hybrid Monte Carlo (HMC) method offers a very important MCMC approach to dealing with higher-dimensional complex problems. The HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as to the time step used to evaluate the MD trajectory. To overcome this limitation we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm. The SHMC algorithm is a modified version of HMC designed to improve sampling for large system sizes and time steps, by sampling from a modified Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method are tested on the updating of two real structures, an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and compared to the application of the HMC algorithm to the same structures.
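The HMC move described above, a leapfrog MD trajectory guided by the gradient of the log posterior followed by a Metropolis accept/reject, can be sketched on a toy Gaussian posterior. This illustrates plain HMC only, not the shadow-Hamiltonian modification; the step size, trajectory length, and target are arbitrary choices:

```python
import numpy as np

def hmc_step(q, log_p, grad_log_p, step, n_leap, rng):
    """One Hybrid Monte Carlo move: leapfrog trajectory + Metropolis test."""
    p = rng.normal(size=q.shape)                  # resample momenta
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * step * grad_log_p(q_new)       # initial half kick
    for _ in range(n_leap):
        q_new += step * p_new                     # drift
        p_new += step * grad_log_p(q_new)         # full kick
    p_new -= 0.5 * step * grad_log_p(q_new)       # trim final kick to a half
    h_old = -log_p(q) + 0.5 * (p @ p)             # Hamiltonian before/after
    h_new = -log_p(q_new) + 0.5 * (p_new @ p_new)
    if rng.random() < np.exp(min(0.0, h_old - h_new)):
        return q_new                              # accept
    return q                                      # reject

# Toy posterior: standard 2-D Gaussian, log p(q) = -q.q/2
log_p = lambda q: -0.5 * (q @ q)
grad = lambda q: -q

rng = np.random.default_rng(0)
q = np.zeros(2)
samples = []
for _ in range(2000):
    q = hmc_step(q, log_p, grad, step=0.2, n_leap=10, rng=rng)
    samples.append(q)
var = np.var(samples, axis=0)    # should be near 1 in each dimension
```

SHMC replaces the Hamiltonian in the accept/reject test with a shadow Hamiltonian that the leapfrog integrator conserves more accurately, which is what restores high acceptance rates at larger time steps and system sizes.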
NASA Astrophysics Data System (ADS)
Owen, D. Des. R.; Pawlowsky-Glahn, V.; Egozcue, J. J.; Buccianti, A.; Bradd, J. M.
2016-08-01
Isometric log ratios of proportions of major ions, derived from intuitive sequential binary partitions, are used to characterize hydrochemical variability within and between coal seam gas (CSG) and surrounding aquifers in a number of sedimentary basins in the USA and Australia. These isometric log ratios are the coordinates corresponding to an orthonormal basis in the sample space (the simplex). The characteristic proportions of ions, as described by linear models of isometric log ratios, can be used for a mathematical-descriptive classification of water types. This is a more informative and robust method of describing water types than simply classifying a water type based on the dominant ions. The approach allows (a) compositional distinctions between very similar water types to be made and (b) large data sets with a high degree of variability to be rapidly assessed with respect to particular relationships/compositions that are of interest. A major advantage of these techniques is that major and minor ion components can be comprehensively assessed and subtle processes—which may be masked by conventional techniques such as Stiff diagrams, Piper plots, and classic ion ratios—can be highlighted. Results show that while all CSG groundwaters are dominated by Na, HCO3, and Cl ions, the proportions of other ions indicate they can evolve via different means and the particular proportions of ions within total or subcompositions can be unique to particular basins. Using isometric log ratios, subtle differences in the behavior of Na, K, and Cl between CSG water types and very similar Na-HCO3 water types in adjacent aquifers are also described. A complementary pair of isometric log ratios, derived from a geochemically-intuitive sequential binary partition that is designed to reflect compositional variability within and between CSG groundwater, is proposed. 
These isometric log ratios can be used to model a hydrochemical pathway associated with methanogenesis and/or to delineate groundwater associated with high gas concentrations.
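Each step of a sequential binary partition yields one isometric log-ratio coordinate (a "balance"): √(rs/(r+s)) times the log of the ratio of geometric means of the two groups of parts, with r and s the group sizes. A minimal sketch with invented major-ion values (not data from the paper) follows:

```python
import numpy as np

def ilr_balance(parts_num, parts_den):
    """Isometric log-ratio (balance) for one step of a sequential binary
    partition: contrasts the geometric means of two groups of parts."""
    r, s = len(parts_num), len(parts_den)
    gmean = lambda x: np.exp(np.mean(np.log(x)))
    return np.sqrt(r * s / (r + s)) * np.log(gmean(parts_num) / gmean(parts_den))

# Hypothetical major-ion composition (meq/L): Na, K, Cl, HCO3
na, k, cl, hco3 = 12.0, 0.3, 2.0, 9.0

# Balance of the cations (Na, K) against the anions (Cl, HCO3)
b1 = ilr_balance([na, k], [cl, hco3])
```

Because balances depend only on ratios of parts, they are unaffected by the total dissolved load, which is why they can separate water types that look identical on dominant-ion classifications.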
NASA Astrophysics Data System (ADS)
Schneiderwind, Sascha; Mason, Jack; Wiatr, Thomas; Papanikolaou, Ioannis; Reicherter, Klaus
2016-03-01
Two normal faults on the island of Crete and mainland Greece were studied to test an innovative workflow with the goal of obtaining a more objective palaeoseismic trench log and a 3-D view of the sedimentary architecture within the trench walls. Sedimentary feature geometries in palaeoseismic trenches are related to palaeoearthquake magnitudes, which are used in seismic hazard assessments. If the geometry of these sedimentary features can be measured more representatively, seismic hazard assessments can be improved. In this study more representative measurements of sedimentary features are achieved by combining classical palaeoseismic trenching techniques with multispectral approaches. A conventional trench log was first compared to the results of ISO (iterative self-organising) cluster analysis of a true colour photomosaic representing the spectrum of visible light. Photomosaic acquisition disadvantages (e.g. illumination) were addressed by complementing the data set with an active near-infrared backscatter signal image from t-LiDAR measurements. The multispectral analysis shows that distinct layers can be identified, and it compares well with the conventional trench log. Accordingly, adjacent stratigraphic units could be distinguished by their particular multispectral composition signatures. Based on the trench log, a 3-D interpretation of attached 2-D ground-penetrating radar (GPR) profiles collected on the vertical trench wall was then possible. This is highly beneficial for measuring representative layer thicknesses, displacements, and geometries at depth within the trench wall, so that misinterpretation due to cutting effects is minimised.
This manuscript combines multiparametric approaches and shows (i) how a 3-D visualisation of palaeoseismic trench stratigraphy and logging can be accomplished by combining t-LiDAR and GPR techniques, and (ii) how a multispectral digital analysis can offer additional advantages to interpret palaeoseismic and stratigraphic data. The multispectral data sets are stored allowing unbiased input for future (re)investigations.
Bao, James J; Liu, Xiaojing; Zhang, Yong; Li, Youxin
2014-09-15
This paper describes the development of a novel high-throughput hollow fiber membrane solvent microextraction technique for the simultaneous measurement of the octanol/water distribution coefficient (logD) of organic compounds such as drugs. The method is based on a purpose-designed system consisting of a 96-well plate modified with 96 hollow fiber membrane tubes and a matching lid with 96 center holes and 96 side holes distributed over 96 grids. Each center hole is glued to a hollow fiber membrane tube sealed at one end, which is used to separate the aqueous phase from the octanol phase. A needle, such as a microsyringe or automatic sampler, can be directly inserted into the membrane tube to deposit octanol as the acceptor phase or to withdraw the mixture of octanol and drug. Each side hole is filled with aqueous phase and can freely take in or give out solvent as the donor phase from outside the hollow fiber membranes. The logD can be calculated by measuring the drug concentration in each phase after extraction equilibrium. After a comprehensive comparison, a polytetrafluoroethylene hollow fiber with a thickness of 210 μm, an extraction time of 300 min, a temperature of 25 °C, and atmospheric pressure without stirring were selected for the high-throughput measurement. The correlation coefficient of the linear fit of the logD values of five drugs determined by our system against reference values was 0.9954, showing good accuracy. The -8.9% intra-day and -4.4% inter-day precision of logD for metronidazole indicates good precision. In addition, the logD values of eight drugs were simultaneously and successfully measured, indicating that this 96-well high-throughput method for measuring logD values is accurate, precise, reliable, and useful for high-throughput screening. Copyright © 2014 Elsevier B.V. All rights reserved.
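Once the equilibrium concentrations in the two phases have been measured, the distribution coefficient itself is a one-line calculation, logD = log10(C_octanol/C_water). A trivial sketch with invented concentrations (same units in both phases):

```python
import math

def log_d(c_octanol, c_water):
    """Octanol/water distribution coefficient from equilibrium concentrations."""
    return math.log10(c_octanol / c_water)

# Hypothetical post-equilibrium concentrations; the ratio is 100, so logD is ~2
d = log_d(c_octanol=4.7, c_water=0.047)
```

In practice the aqueous-phase concentration is often obtained by mass balance from the initial amount and the measured octanol-phase amount, but the final step is the same base-10 logarithm of the phase ratio.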
Log-less metadata management on metadata server for parallel file systems.
Liao, Jianwei; Xiao, Guoqiang; Peng, Xiaoning
2014-01-01
This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the sent metadata requests, which have been handled by the metadata server, so that the MDS does not need to log metadata changes to nonvolatile storage to achieve highly available metadata service, and gains a performance improvement in metadata processing. As the client file system backs up certain sent metadata requests in its memory, the overhead of handling these backup requests is much smaller than that incurred by a metadata server that adopts logging or journaling to provide highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and deliver better I/O data throughput, in contrast to conventional metadata management schemes, that is, logging or journaling on the MDS. Besides, a complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server has crashed or unexpectedly entered a nonoperational state.
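The recovery idea, that clients keep in-memory backups of the requests they sent so a crashed MDS can be rebuilt by replay, can be sketched as a toy model. This sketch ignores the real ordering problem (concurrent clients' requests would need sequence numbers to replay in a consistent global order), and all class and operation names are invented:

```python
class MetadataServer:
    """Toy in-memory MDS: applies requests without writing a durable log."""
    def __init__(self):
        self.table = {}
    def handle(self, req):
        op, path, value = req
        if op == "set":
            self.table[path] = value
        elif op == "unlink":
            self.table.pop(path, None)

class Client:
    """Client file system: backs up each sent request in its own memory."""
    def __init__(self, mds):
        self.mds, self.backlog = mds, []
    def send(self, req):
        self.backlog.append(req)     # keep the backup copy
        self.mds.handle(req)

mds = MetadataServer()
clients = [Client(mds) for _ in range(3)]
clients[0].send(("set", "/a", 1))
clients[1].send(("set", "/b", 2))
clients[2].send(("unlink", "/a", None))

# MDS crash: rebuild state by replaying every client's backed-up requests
recovered = MetadataServer()
for c in clients:
    for req in c.backlog:
        recovered.handle(req)
```

The point of the scheme is that the durable-write cost is shifted off the MDS critical path: clients already hold the requests in memory, so no synchronous journal write is needed per metadata update.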
FORTE antenna element and release mechanism design
NASA Technical Reports Server (NTRS)
Rohweller, David J.; Butler, Thomas A.
1995-01-01
The Fast On-Orbit Recording of Transient Events (FORTE) satellite being built by Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL) has as its most prominent feature a large deployable (11 m by 5 m) log periodic antenna to monitor emissions from electrical storms on the Earth. This paper describes the antenna and the design for the long elements and explains the dynamics of their deployment and the damping system employed. It also describes the unique paraffin-actuated reusable tie-down and release mechanism employed in the system.
Large, R.R.; Danyushevsky, L.; Hollit, C.; Maslennikov, V.; Meffre, S.; Gilbert, S.; Bull, S.; Scott, R.; Emsbo, P.; Thomas, H.; Singh, B.; Foster, J.
2009-01-01
Laser ablation ICP-MS imaging of gold and other trace elements in pyrite from four different sediment-hosted gold-arsenic deposits has revealed two distinct episodes of gold enrichment in each deposit: an early synsedimentary stage where invisible gold is concentrated in arsenian diagenetic pyrite along with other trace elements, in particular, As, Ni, Pb, Zn, Ag, Mo, Te, V, and Se; and a later hydrothermal stage where gold forms as either free gold grains in cracks in overgrowth metamorphic and/or hydrothermal pyrite or as narrow gold-arsenic rims on the outermost parts of the overgrowth hydrothermal pyrite. Compared to the diagenetic pyrites, the hydrothermal pyrites are commonly depleted in Ni, V, Zn, Pb, and Ag, with cyclic zones of Co, Ni, and As concentration. The outermost hydrothermal pyrite rims are either As-Au rich, as in moderate- to high-grade deposits such as Carlin and Bendigo, or Co-Ni rich and As-Au poor, as in moderate- to low-grade deposits such as Sukhoi Log and Spanish Mountain. The early enrichment of gold in arsenic-bearing syngenetic to diagenetic pyrite, within black shale facies of sedimentary basins, is proposed as a critical requirement for the later development of Carlin-style and orogenic gold deposits in sedimentary environments. The best grade sediment-hosted deposits appear to have had their gold climax event toward the final stages of deformation-related hydrothermal pyrite growth and fluid flow. © 2009 Society of Economic Geologists, Inc.
Origin of iron meteorite groups IAB and IIICD
NASA Technical Reports Server (NTRS)
Wasson, J. T.; Willis, J.; Wai, C. M.; Kracher, A.
1980-01-01
Several low-Ni iron meteorites previously classified in group IAB are reclassified into group IIICD because of lower Ge, Ga, W, and Ir concentrations and higher As concentrations. The low-Ni extreme of IIICD is now 62 mg/g, and that of IAB is 64 mg/g. It is proposed that the meteorites of both groups formed as individual shock melts on a chondritic parent body. The differences in the log element-log Ni slopes of the daughter irons demonstrate that there were detailed differences in the composition and size of phases in the parental material (e.g., more Ni in the sulfides or metal of IAB, or more Ge and Ir in the oxides of IIICD).
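A "log element-log Ni slope" is simply the slope of a least-squares line fitted in log-log space to an element's concentration versus Ni concentration across a group's members. A sketch with synthetic, invented data that follows an exact power law (not measurements from the paper):

```python
import numpy as np

# Invented Ni and Ge concentrations for a suite of irons, with Ge ~ Ni^(-1.5)
ni = np.array([64.0, 70.0, 80.0, 95.0, 120.0])   # Ni, mg/g
ge = 500.0 * ni ** -1.5                           # Ge, arbitrary units

# Slope of the log Ge vs. log Ni trend via a least-squares linear fit
slope, intercept = np.polyfit(np.log10(ni), np.log10(ge), 1)
```

Comparing such fitted slopes between groups is what reveals the compositional differences in the parental material that the abstract describes.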
Danskin, Wesley R.
2012-01-01
Local water agencies and the United States Geological Survey are using a combination of techniques to better understand the scant freshwater resources and the much more abundant brackish resources in coastal San Diego, California, USA. Techniques include installation of multiple-depth monitoring well sites; geologic and paleontological analysis of drill cuttings; geophysical logging to identify formations and possible seawater intrusion; sampling of pore-water obtained from cores; analysis of chemical constituents including trace elements and isotopes; and use of scoping models including a three-dimensional geologic framework model, rainfall-runoff model, regional groundwater flow model, and coastal density-dependent groundwater flow model. Results show that most fresh groundwater was recharged during the last glacial period and that the coastal aquifer has had recurring intrusions of fresh and saline water. These intrusions disguise the source, flowpaths, and history of ground water near the coast. The flow system includes a freshwater lens resting on brackish water; a 100-meter-thick flowtube of freshwater discharging under brackish estuarine water and above highly saline water; and broad areas of fine-grained coastal sediment filled with fairly uniform brackish water. Stable isotopes of hydrogen and oxygen indicate the recharged water flows through many kilometers of fractured crystalline rock before entering the narrow coastal aquifer.
Vera-Avila, Luz E; Rojo-Portillo, Tania; Covarrubias-Herrera, Rosario; Peña-Alvarez, Araceli
2013-12-17
Dispersive liquid-liquid microextraction with solidification of floating organic drop (DLLME-SFO) is one of the most interesting sample preparation techniques developed in recent years. Although several applications have been reported, the potential and limitations of this simple and rapid extraction technique have not been made sufficiently explicit. In this work, the extraction efficiency of DLLME-SFO for pollutants from different chemical families was determined. The studied compounds include 10 polycyclic aromatic hydrocarbons, 5 pesticides (chlorophenoxy herbicides and DDT), 8 phenols and 6 sulfonamides, covering a large range of polarity and hydrophobicity (log Kow 0-7 overall). After optimization of extraction conditions using 1-dodecanol as extractant, the procedure was applied for extraction of each family from 10-mL spiked water samples, only adjusting sample pH as required. Absolute recoveries for pollutants with log Kow 3-7 were >70%, and recovery values within this group (18 compounds) were independent of structure or hydrophobicity; the precision of recovery was very acceptable (RSD < 12%) and linear behavior was observed in the studied concentration range (r² > 0.995). Extraction recoveries for pollutants with log Kow 1.46-2.8 were in the range 13-62%, depending directly on individual log Kow values; however, good linearity (r² > 0.993) and precision (RSD < 6.5%) were also demonstrated for these polar solutes despite the lower recovery. DLLME-SFO with 1-dodecanol completely failed for extraction of compounds with log Kow ≤ 1 (sulfa drugs); other, more polar extraction solvents (e.g., ionic liquids) should be explored for highly hydrophilic pollutants.
NASA Astrophysics Data System (ADS)
Aliouane, Leila; Ouadfeul, Sid-Ali; Rabhi, Abdessalem; Rouina, Fouzi; Benaissa, Zahia; Boudella, Amar
2013-04-01
The main goal of this work is to compare two lithofacies segmentation techniques for a reservoir interval. The first is based on the Kohonen self-organizing map (SOM) neural network; the second is based on the Walsh transform decomposition. Application to real well-log data from two boreholes located in the Algerian Sahara shows that the self-organizing map provides more lithological detail than the lithofacies model given by the Walsh decomposition. Keywords: Comparison, Lithofacies, SOM, Walsh. References: 1) Aliouane, L., Ouadfeul, S., Boudella, A., 2011, Fractal analysis based on the continuous wavelet transform and lithofacies classification from well-logs data using the self-organizing map neural network, Arabian Journal of Geosciences, doi:10.1007/s12517-011-0459-4. 2) Aliouane, L., Ouadfeul, S., Djarfour, N., Boudella, A., 2012, Petrophysical Parameters Estimation from Well-Logs Data Using Multilayer Perceptron and Radial Basis Function Neural Networks, Lecture Notes in Computer Science, Vol. 7667, pp. 730-736, doi:10.1007/978-3-642-34500-5_86. 3) Ouadfeul, S. and Aliouane, L., 2011, Multifractal analysis revisited by the continuous wavelet transform applied in lithofacies segmentation from well-logs data, International Journal of Applied Physics and Mathematics, Vol. 1, No. 1. 4) Ouadfeul, S., Aliouane, L., 2012, Lithofacies Classification Using the Multilayer Perceptron and the Self-organizing Neural Networks, Lecture Notes in Computer Science, Vol. 7667, pp. 737-744, doi:10.1007/978-3-642-34500-5_87. 5) Weisstein, Eric W., "Fast Walsh Transform," from MathWorld--A Wolfram Web Resource, http://mathworld.wolfram.com/FastWalshTransform.html
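The fifth reference above points to the fast Walsh transform. As a minimal sketch of the transform involved (a pure-Python in-place butterfly; how the authors window and threshold the well-log curves is not stated, so only the transform itself is shown):

```python
def fwht(a):
    """Fast Walsh-Hadamard transform of a sequence whose length is a power of 2.
    Returns the unnormalized Walsh coefficients; applying the transform twice
    recovers the input scaled by the sequence length."""
    a = list(a)          # work on a copy
    n, h = len(a), 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y   # butterfly step
        h *= 2
    return a
```

A lithofacies segmentation would then compare the low-order Walsh coefficients of successive depth windows, but that pipeline is an assumption here, not the paper's exact procedure.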
Contemporary surgical trends in the management of upper tract calculi.
Oberlin, Daniel T; Flum, Andrew S; Bachrach, Laurie; Matulewicz, Richard S; Flury, Sarah C
2015-03-01
Upper tract nephrolithiasis is a common surgical condition that is treated with multiple surgical techniques, including shock wave lithotripsy, ureteroscopy and percutaneous nephrolithotomy. We analyzed case logs submitted to the American Board of Urology (ABU) by candidates for initial certification and recertification to help elucidate trends in the management of upper tract urinary calculi. Annualized case logs from 2003 to 2012 were analyzed. We used logistic regression models to assess how surgeon specific attributes affected the way that upper tract stones were treated. Cases were identified by the CPT code of the corresponding procedure. A total of 6,620 urologists in 3 certification groups recorded case logs, including 2,275 for initial certification, 2,381 for first recertification and 1,964 for second recertification. A total of 441,162 procedures were logged, of which 54.2% were ureteroscopy, 41.3% were shock wave lithotripsy and 4.5% were percutaneous nephrolithotomy. During the study period there was an increase in ureteroscopy from 40.9% to 59.6% and a corresponding decrease in shock wave lithotripsy from 54% to 36.3%. For new urologists ureteroscopy increased from 47.6% to 70.9% of all stone cases logged, and for senior clinicians ureteroscopy increased from 40% to 55%. Endourologists performed a significantly higher proportion of percutaneous nephrolithotomies than nonendourologists (10.6% vs 3.69%, p <0.0001) and a significantly smaller proportion of shock wave lithotripsies (34.2% vs 42.2%, p = 0.001). Junior and senior clinicians showed a dramatic adoption of endoscopic techniques. Treatment of upper tract calculi is an evolving field, and provider specific attributes affect how these stones are treated.
Publications - GMC 390 | Alaska Division of Geological & Geophysical Surveys
Title: Drill logs (1987) from the Cominco Upper Discovery DDH-1 and Lower Discovery DDH-1 through DDH-5 boreholes, Mt. Estelle Prospect, Tyonek Quadrangle, Alaska
Consequence Management Symposium
2001-09-01
…logical agents and their effects was deemed essential for "first responders," including emergency medical and hospital practitioners.
Planetary Nebula Abundances and Morphology: Probing the Chemical Evolution of the Milky Way
NASA Astrophysics Data System (ADS)
Stanghellini, Letizia; Guerrero, Martín Antonio; Cunha, Katia; Manchado, Arturo; Villaver, Eva
2006-11-01
This paper presents a homogeneous study of abundances in a sample of 79 northern Galactic planetary nebulae (PNe) whose morphological classes have been uniformly determined. Ionic abundances and plasma diagnostics were derived from selected optical line strengths in the literature, and elemental abundances were estimated with the ionization correction factors developed by Kingsburgh & Barlow in 1994. We compare the elemental abundances to the final yields obtained from stellar evolution models of low- and intermediate-mass stars, and we confirm that most bipolar PNe have high nitrogen and helium abundances and are the likely progeny of stars with main-sequence masses greater than 3 Msolar. We derive <Ne/O> = 0.27 and discuss the implication of such a high ratio in connection with the solar neon abundance. We determine the Galactic gradients of oxygen and neon and find Δlog(O/H)/ΔR = -0.01 dex kpc-1 and Δlog(Ne/H)/ΔR = -0.01 dex kpc-1. These flat PN gradients are irreconcilable with Galactic metallicity gradients flattening with time.
NASA Astrophysics Data System (ADS)
Shao, Xupeng
2017-04-01
Glutenite bodies are widely developed in the northern Minfeng zone of the Dongying Sag, and their litho-electric relationship is not clear. In addition, because the conventional sequence stratigraphic research method involves many subjective human factors, it has limited the deepening of regional sequence stratigraphic research. Compared with conventional methods, the wavelet transform technique based on logging data and the time-frequency analysis technique based on seismic data can divide sequence stratigraphy quantitatively. On the basis of the conventional sequence research method, this paper used these techniques to divide the fourth-order sequences of the upper Es4 in the northern Minfeng zone of the Dongying Sag. The research shows that the two techniques are essentially consistent, both dividing sequence stratigraphy quantitatively in the frequency domain. The wavelet transform technique has high resolution and is suitable for areas with wells; the seismic time-frequency analysis technique has wide applicability but low resolution, so the two techniques should be combined. The upper Es4 in the northern Minfeng zone of the Dongying Sag is a complete third-order sequence, which can be further subdivided into 5 fourth-order sequences with fining-upward depositional characteristics. Key words: Dongying Sag, northern Minfeng zone, wavelet transform technique, time-frequency analysis technique, the upper Es4, sequence stratigraphy
Hexahedral finite element mesh coarsening using pillowing technique
Staten, Matthew L [Pittsburgh, PA; Woodbury, Adam C [Provo, UT; Benzley, Steven E [Provo, UT; Shepherd, Jason F [Edgewood, NM
2012-06-05
A technique for coarsening a hexahedral mesh is described. The technique includes identifying a coarsening region within a hexahedral mesh to be coarsened. A boundary sheet of hexahedral elements is inserted into the hexahedral mesh around the coarsening region. A column of hexahedral elements is identified within the boundary sheet. The column of hexahedral elements is collapsed to create an extraction sheet of hexahedral elements contained within the coarsening region. Then, the extraction sheet of hexahedral elements is extracted to coarsen the hexahedral mesh.
Extragalactic counterparts to Einstein slew survey sources
NASA Technical Reports Server (NTRS)
Schachter, Jonathan F.; Elvis, Martin; Plummer, David; Remillard, Ron
1992-01-01
The Einstein slew survey consists of 819 bright X-ray sources, of which 636 (or 78 percent) are identified with counterparts in standard catalogs. The importance of bright X-ray surveys is stressed, and the slew survey is compared to the ROSAT all-sky survey. Statistical techniques for minimizing confusion in arcminute error circles in digitized data are discussed. The 238 slew survey active galactic nuclei, clusters, and BL Lacertae objects identified to date and their implications for log N-log S and source evolution studies are described.
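A log N-log S analysis counts the sources brighter than each flux level and examines the slope of that relation in log-log space. A minimal sketch of the standard construction (the slew survey's actual completeness corrections are not reproduced; for a uniform Euclidean population the slope is near -1.5):

```python
import math

def log_n_log_s(fluxes):
    """Cumulative source counts: for each source flux S, the number of
    sources N with flux >= S. Returns (S, N) pairs, brightest first."""
    s = sorted(fluxes, reverse=True)
    return [(flux, rank + 1) for rank, flux in enumerate(s)]

def logls_slope(pairs):
    """Least-squares slope of log10 N versus log10 S."""
    xs = [math.log10(s) for s, _ in pairs]
    ys = [math.log10(n) for _, n in pairs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

Breaks or evolution in the source population show up as departures of this slope from the Euclidean value.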
Honda, Masayuki; Matsumoto, Takehiro
2017-01-01
Several kinds of event log data produced in daily clinical activities have yet to be used for the secure and efficient improvement of hospital activities. Data warehouse systems in hospital information systems, used for the analysis of structured data such as diseases, lab tests, and medications, have also shown useful outcomes. This article focuses on two kinds of essential functions: process mining using log data and non-structured data analysis via natural language processing.
Koopman Mode Decomposition Methods in Dynamic Stall: Reduced Order Modeling and Control
2015-11-10
the flow phenomena by separating them into individual modes. The technique of Proper Orthogonal Decomposition (POD), see [Holmes: 1998], is a popular … sampled values h(k), k = 0, …, 2M-1, of the exponential sum: 1. Solve the following linear system … 2. Compute all zeros z_j ∈ D, j = 1, …, M, of the Prony polynomial, i.e., calculate all eigenvalues of the associated companion matrix, and form f_j = log z_j for j = 1, …, M, where log is the
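The two numbered steps in the snippet above are Prony's method: fit the recurrence coefficients of the Prony polynomial from the samples, then take logs of its zeros. A pure-Python sketch for the M = 2 case, with the quadratic's zeros written out directly (equivalent to the companion-matrix eigenvalues mentioned in the text):

```python
import cmath

def prony_exponents(h, M):
    """Recover f_j, where h(k) = sum_j c_j * exp(f_j * k), from 2M samples.
    Sketch for M = 2: (1) solve the Hankel linear system for the Prony
    polynomial coefficients p0, p1; (2) take the zeros z_j of
    z^2 + p1*z + p0 and form f_j = log z_j (principal branch)."""
    assert M == 2 and len(h) >= 2 * M
    # Step 1: [h0 h1; h1 h2] [p0 p1]^T = -[h2 h3]^T, solved by Cramer's rule
    a, b, c, d = h[0], h[1], h[1], h[2]
    det = a * d - b * c
    p0 = (-h[2] * d + h[3] * b) / det
    p1 = (-h[3] * a + h[2] * c) / det
    # Step 2: zeros of the Prony polynomial z^2 + p1*z + p0
    disc = cmath.sqrt(p1 * p1 - 4 * p0)
    z1, z2 = (-p1 + disc) / 2, (-p1 - disc) / 2
    return sorted((cmath.log(z1), cmath.log(z2)), key=lambda f: f.real)
```

For larger M one solves the M x M Hankel system and takes the eigenvalues of the companion matrix of the resulting polynomial; complex f_j then give frequencies and growth/decay rates of the modes.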
A log-sinh transformation for data normalization and variance stabilization
NASA Astrophysics Data System (ADS)
Wang, Q. J.; Shrestha, D. L.; Robertson, D. E.; Pokhrel, P.
2012-05-01
When quantifying model prediction uncertainty, it is statistically convenient to represent model errors that are normally distributed with a constant variance. The Box-Cox transformation is the most widely used technique to normalize data and stabilize variance, but it is not without limitations. In this paper, a log-sinh transformation is derived based on a pattern of errors commonly seen in hydrological model predictions. It is suited to applications where prediction variables are positively skewed and the spread of errors is seen to first increase rapidly, then slowly, and eventually approach a constant as the prediction variable becomes greater. The log-sinh transformation is applied in two case studies, and the results are compared with one- and two-parameter Box-Cox transformations.
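A sketch of the transformation and its inverse, assuming the common one-parameter-pair form z = (1/b) ln sinh(a + b y); the parameter values below are illustrative, not those estimated in the paper's case studies:

```python
import math

def log_sinh(y, a, b):
    """Log-sinh transform z = (1/b) * ln(sinh(a + b*y)). For small y it is
    strongly compressive (log-like); for large y the slope approaches 1,
    so the transform tends to a constant shift, matching errors whose
    spread grows quickly at first and then levels off."""
    return math.log(math.sinh(a + b * y)) / b

def log_sinh_inverse(z, a, b):
    """Inverse transform: y = (asinh(exp(b*z)) - a) / b."""
    return (math.asinh(math.exp(b * z)) - a) / b
```

In practice a and b are fitted so that the transformed model errors are approximately Gaussian with constant variance, and predictions are back-transformed with the inverse.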
John W. Bramhall
1989-01-01
In the 1950s, timber on steep granitic terrain in Trinity County, California was harvested by using the logging techniques of the time. After Trinity Dam was built in the 1960s, it became evident these techniques were not suited to quality riparian habitat and healthy anadromous fisheries. Since adoption of the Z'berg-Nejedly Forest Practice Act in 1973, efforts...
Fuzzy inference system for identification of geological stratigraphy off Prydz Bay, East Antarctica
NASA Astrophysics Data System (ADS)
Singh, Upendra K.
2011-12-01
The analysis of well logging data plays a key role in the exploration and development of hydrocarbon reservoirs. Well log parameters such as porosity, gamma ray, density, transit time and resistivity help in the classification of strata and in the estimation of the physical, electrical and acoustical properties of the subsurface lithology. Strong and conspicuous changes in some of the log parameters associated with a particular stratigraphic formation are a function of its composition and physical properties, which aid classification. However, some substrata show moderate values in the respective log parameters, making it difficult to identify the kind of strata from the standard variability ranges of the log parameters and visual inspection alone. The complexity increases further as more sensors are involved. An attempt is made to identify stratigraphy from well logs over the Prydz Bay basin, East Antarctica, using a fuzzy inference system. A model is built on a few data sets of known stratigraphy, and the model is then used to infer the lithology of a borehole from its geophysical logs, which were not used in the simulation. The fuzzy-based algorithm is first trained, validated and tested on well log data, and it finally identifies the formation lithology of a hydrocarbon reservoir system in the study area. The effectiveness of this technique is demonstrated by analysis of the results against actual lithologs and coring data of ODP Leg 188. The fuzzy results show a training performance of 82.95% and a prediction ability of 87.69%. The results are very encouraging, and the model is able to decipher even thin seams and other strata from geophysical logs. The result identifies a significant sand formation in the depth range 316.0-341.0 m, where core recovery is incomplete.
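As an illustration of the kind of rule evaluation a fuzzy inference system performs on log parameters (the membership shapes, chosen logs, and rule values below are hypothetical, not those fitted to the ODP Leg 188 data):

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 outside (a, c), peaking at 1 at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def classify(gamma, density, rules):
    """Score each lithology rule by the minimum of its memberships
    (fuzzy AND over the two logs); the best-scoring lithology wins.
    rules maps name -> ((gamma a,b,c), (density a,b,c))."""
    scores = {name: min(triangular(gamma, *g), triangular(density, *d))
              for name, (g, d) in rules.items()}
    return max(scores, key=scores.get), scores
```

A full system would add more logs, tuned membership functions, and defuzzification, but the overlap-tolerant scoring above is what lets moderate log values still be assigned a most-plausible stratum.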
Abundance Analysis of the Helium Weak Star 20-TAURI
NASA Astrophysics Data System (ADS)
Mon, M.; Hirata, R.; Sadakane, K.
An abundance analysis of the helium-weak star 20 Tauri is performed with a fully line-blanketed model atmosphere. The adopted atmospheric parameters are Teff = 12600 K and log g = 3.2. These values are lower by about 1000 K in Teff and 0.3 in log g than those used in previous investigations, and 20 Tau is the coolest star among the group of helium-weak stars. A value of log N(He)/N(H) = -1.7 is found from the average of six He I lines. Mg, Si, Ca, and Ni are underabundant, while P and Mn are overabundant. The abundances of C, Ti, Cr, and Fe coincide with the solar values within ±0.3 dex. Upper limits on the abundances of S, Sc, and Sr are estimated, and these elements are not overabundant. The observed abundance pattern in 20 Tau is quite different from those of other helium-weak stars, while it shows a mild characteristic of Mn-Hg stars.
NASA Astrophysics Data System (ADS)
Dai, Qianwei; Lin, Fangpeng; Wang, Xiaoping; Feng, Deshan; Bayless, Richard C.
2017-05-01
An integrated geophysical investigation was performed at S dam, located in the Dadu basin in China, to assess the condition of the dam curtain. The key element of the integrated approach was the flow-field fitting method, which allowed identification of the hydraulic connections between the dam foundation and surface water sources (upstream and downstream), and location of the anomalous leakage outlets in the dam foundation. Limitations of the flow-field fitting method were complemented with resistivity logging to identify internal erosion that had not yet developed into seepage pathways. The results of the flow-field fitting method and resistivity logging were consistent when compared with data provided by seismic tomography, borehole television, water injection tests, and rock quality designation.
NASA Technical Reports Server (NTRS)
Rowland, Rick, II; Vander Kaaden, Kathleen E.; McCubbin, Francis M.; Danielson, Lisa R.
2017-01-01
With the data returned from the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) mission, there are now numerous constraints on the physical and chemical properties of Mercury, including its surface composition. The high S and low FeO contents observed from MESSENGER suggest a low oxygen fugacity of the present materials on the planet's surface. Most of our understanding of elemental partitioning behavior comes from observations made on terrestrial rocks, but Mercury's oxygen fugacity is far outside the conditions of those samples, estimated at approximately 3-7 log units below the Iron-Wustite (IW) oxygen buffer, several orders of magnitude more reducing than other terrestrial bodies we have data from. With limited oxygen available, lithophile elements may instead exhibit chalcophile, halophile, or siderophile behaviors. Furthermore, very few natural samples of rocks that formed under reducing conditions (e.g., enstatite chondrites, achondrites, aubrites) are available in our collections for examination of this change in geochemical affinity. Our goal is to determine the elemental partitioning behavior of typically lithophile elements at lower oxygen fugacity as a function of temperature and pressure. Experiments were conducted at 1 GPa in a 13 mm QUICKpress piston cylinder and at 4 GPa in an 880-ton multi-anvil press, at temperatures up to 1850 °C. The compositions of starting materials for the experiments were designed so the final run products contained metal, silicate melt, and sulfide melt phases. Oxygen fugacity was controlled in the experiments by adding silicon metal to the samples, in order to utilize the Si-SiO2 buffer, which is approximately 5 log units more reducing than the IW buffer at our temperatures of interest. The target silicate melt composition was diopside (CaMgSi2O6) because measured surface compositions indicate partial melting of a pyroxene-rich mantle.
The results of our experiments will aid in our understanding of the fate of elements during the differentiation and thermal evolution of Mercury and other highly reducing planetary bodies.
NASA Astrophysics Data System (ADS)
Mansouri, E.; Feizi, F.; Karbalaei Ramezanali, A. A.
2015-10-01
Ground magnetic anomaly separation using the reduction-to-the-pole (RTP) technique and the fractal concentration-area (C-A) method has been applied to the Qoja-Kandi prospecting area in northwestern Iran. The ground magnetic survey was conducted for magnetic element exploration. First, the RTP technique was applied to recognize underground magnetic anomalies, and the RTP anomalies were classified into different populations. Determining drilling locations from the RTP technique alone was complicated for the magnetic anomalies in the center and north of the study area. Next, the C-A method was applied to the RTP magnetic anomalies (RTP-MA) to delineate magnetic susceptibility concentrations. This identification was appropriate for increasing the resolution of drilling-target determination and decreasing the drilling risk, given the economic costs of underground prospecting. In this study, the results of C-A modelling on the RTP-MA are compared with data from 8 boreholes. The results show a good correlation between anomalies derived via the C-A method and the borehole log reports. Two boreholes were drilled in magnetic susceptibility concentrations, identified by multifractal modelling, between 63 533.1 and 66 296 nT. Drilling results showed appropriate magnetite thickness with grades greater than 20 % Fe. The anomalies are associated with andesite units hosting the iron mineralization.
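The C-A method rests on the empirical concentration-area relation: the area occupied by values at or above a threshold follows power laws of the threshold, and breaks in slope on a log-log plot separate populations such as background and anomaly. A minimal sketch of the empirical curve on gridded values (the thresholding scheme and segment fitting used in the study are not reproduced):

```python
def concentration_area(values, thresholds):
    """Empirical concentration-area curve: for each threshold c, count the
    grid cells with value >= c (a cell count is a proxy for area A(>=c)).
    In C-A modelling one plots log A against log c and fits straight-line
    segments; the break points between segments give the class boundaries."""
    return [(c, sum(1 for v in values if v >= c)) for c in thresholds]
```

Here `values` would be the RTP-corrected magnetic field sampled on a grid, and the break between fitted segments would separate background susceptibility from the drill-target anomalies.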
LoyalTracker: Visualizing Loyalty Dynamics in Search Engines.
Shi, Conglei; Wu, Yingcai; Liu, Shixia; Zhou, Hong; Qu, Huamin
2014-12-01
The huge amount of user log data collected by search engine providers creates new opportunities to understand user loyalty and defection behavior at an unprecedented scale. However, it also poses a great challenge to analyze the behavior and glean insights from such large, complex data. In this paper, we introduce LoyalTracker, a visual analytics system to track user loyalty and switching behavior towards multiple search engines from the vast amount of user log data. We propose a new interactive visualization technique (flow view) based on a flow metaphor, which conveys a proper visual summary of the dynamics of user loyalty of thousands of users over time. Two other visualization techniques, a density map and a word cloud, are integrated to enable analysts to gain further insights into the patterns identified by the flow view. Case studies and interviews with domain experts are conducted to demonstrate the usefulness of our technique in understanding user loyalty and switching behavior in search engines.
Generalized image contrast enhancement technique based on Heinemann contrast discrimination model
NASA Astrophysics Data System (ADS)
Liu, Hong; Nodine, Calvin F.
1994-03-01
This paper presents a generalized image contrast enhancement technique which equalizes perceived brightness based on the Heinemann contrast discrimination model. This is a modified algorithm which presents an improvement over the previous study by Mokrane in its mathematically proven existence of a unique solution and in its easily tunable parameterization. The model uses a log-log representation of contrast luminosity between targets and the surround in a fixed luminosity background setting. The algorithm consists of two nonlinear gray-scale mapping functions which have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of gray scale distribution of the image, and can be uniquely determined once the previous three are given. Tests have been carried out to examine the effectiveness of the algorithm for increasing the overall contrast of images. It can be demonstrated that the generalized algorithm provides better contrast enhancement than histogram equalization. In fact, the histogram equalization technique is a special case of the proposed mapping.
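Since the paper treats histogram equalization as a special case of its generalized mapping, the baseline is easy to state in code. A sketch of plain histogram equalization on integer gray levels (the Heinemann-model mapping itself involves the paper's seven parameters and is not reproduced here):

```python
def equalize(image, levels=256):
    """Plain histogram equalization: map each gray level g to the scaled
    cumulative distribution function CDF(g), spreading the occupied levels
    across the full gray-scale range. `image` is a list of rows of ints."""
    flat = [p for row in image for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(flat)
    lut = [round((c / n) * (levels - 1)) for c in cdf]  # lookup table
    return [[lut[p] for p in row] for row in image]
```

The generalized technique replaces this single CDF-based lookup with two nonlinear gray-scale mappings whose parameters come from the contrast discrimination model, which is why it can outperform equalization while containing it as a special case.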
Evaluation of forest management practices through application of a biogeochemical model, PnET-BGC
NASA Astrophysics Data System (ADS)
Valipour, M.; Driscoll, C. T.; Johnson, C. E.; Campbell, J. L.; Fahey, T.; Zeng, T.
2017-12-01
Forest ecosystem response to logging disturbance varies significantly, depending on site conditions, species composition, land use history, and the method and frequency of harvesting. The long-term effects of forest cuttings are less clear due to limited information on land use history and long-term time series observations. The hydrochemical model PnET-BGC was modified and verified using field data from multiple experimentally harvested northern hardwood watersheds at the Hubbard Brook Experimental Forest (HBEF), New Hampshire, USA, including a commercial whole-tree harvest (Watershed 5), a devegetation experiment (Watershed 2; devegetation and herbicide treatment), and a commercial strip-cut (Watershed 4), to simulate the hydrology, biomass accumulation, and soil solution and stream water chemistry responses to clear-cutting. The verified model was used to investigate temporal changes in aboveground biomass accumulation and nutrient dynamics under three harvesting intensities (40%, 60%, 80%) over four rotation lengths (20, 40, 60, 80 years), with results compared with a scenario of no forest harvesting. The total ecosystem carbon pool (biomass, soil and litter) was reduced over harvesting events. The greatest decline, 40%-70%, occurred in litter, while the pool of carbon stored in aboveground biomass decreased by 30%-60% for 80% cutting levels at 40 and 20 year rotation lengths, respectively. The large pool of soil organic carbon remained relatively stable, with only minor declines over logging regimes. Stream water simulations demonstrated increased loss of major elements over cutting events. Ca2+ and NO3- were the elements most sensitive to leaching under frequent intensive logging. Accumulated leaching of Ca2+ and NO3- varied between 90-520 t Ca/ha and 40-420 t N/ha from conservative (80-year period and 40% cutting) to aggressive (20-year period and 80% cutting) cutting regimes, respectively.
Moreover, a reduction in nutrient plant uptake over logging scenarios was estimated. Model simulations indicated nutrient losses were more sensitive to harvesting rotation length than intensity.
Rowe, Steven P; Siddiqui, Adeel; Bonekamp, David
2014-07-01
To create novel radiology key image software that is easy to use for novice users, incorporates elements adapted from social networking Web sites, facilitates resident and fellow education, and can serve as the engine for departmental sharing of interesting cases and follow-up studies. Using open-source programming languages and software, radiology key image software (the key image and case log application, KICLA) was developed. This system uses a lightweight interface with the institutional picture archiving and communication systems and enables the storage of key images, image series, and cine clips. It was designed to operate with minimal disruption to the radiologists' daily workflow. Many features of the user interface have been inspired by social networking Web sites, including image organization into private or public folders, flexible sharing with other users, and integration of departmental teaching files into the system. We also review the performance, usage, and acceptance of this novel system. KICLA was implemented at our institution and achieved widespread popularity among radiologists. A large number of key images have been transmitted to the system since it became available. After this early experience period, the most commonly encountered radiologic modalities are represented. A survey distributed to users revealed that most of the respondents found the system easy to use (89%) and fast at allowing them to record interesting cases (100%). 100% of respondents also stated that they would recommend a system such as KICLA to their colleagues. The system described herein represents a significant upgrade to the Digital Imaging and Communications in Medicine teaching file paradigm, with efforts made to maximize its ease of use and the inclusion of characteristics inspired by social networking Web sites that give the system additional functionality, such as individual case logging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gunter, Dan; Lee, Jason; Stoufer, Martin
2003-03-28
The NetLogger Toolkit is designed to monitor, under actual operating conditions, the behavior of all the elements of the application-to-application communication path in order to determine exactly where time is spent within a complex system Using NetLogger, distnbuted application components are modified to produce timestamped logs of "interesting" events at all the critical points of the distributed system Events from each component are correlated, which allov^ one to characterize the performance of all aspects of the system and network in detail. The NetLogger Toolkit itself consists of four components an API and library of functions to simplify the generation ofmore » application-level event logs, a set of tools for collecting and sorting log files, an event archive system, and a tool for visualization and analysis of the log files In order to instrument an application to produce event logs, the application developer inserts calls to the NetLogger API at all the critical points in the code, then links the application with the NetLogger library All the tools in the NetLogger Toolkit share a common log format, and assume the existence of accurate and synchronized system clocks NetLogger messages can be logged using an easy-to-read text based format based on the lETF-proposed ULM format, or a binary format that can still be used through the same API but that is several times faster and smaller, with performance comparable or better than binary message formats such as MPI, XDR, SDDF-Binary, and PBIO. The NetLogger binary format is both highly efficient and self-describing, thus optimized for the dynamic message construction and parsing of application instrumentation. NetLogger includes an "activation" API that allows NetLogger logging to be turned on, off, or modified by changing an external file This IS useful for activating logging in daemons/services (e g GndFTP server). 
The NetLogger reliability API provides the ability to specify backup logging locations and to periodically try to reconnect a broken TCP pipe. A typical use for this is to store data on local disk while the network is down. An event archiver can log one or more incoming NetLogger streams to a local disk file (netlogd) or to a MySQL database (netarchd). We have found exploratory, visual analysis of the log event data to be the most useful means of determining the causes of performance anomalies. The NetLogger Visualization tool, nlv, has been developed to provide a flexible and interactive graphical representation of system-level and application-level events.
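The instrumentation pattern described above can be sketched in a few lines. This is a minimal illustration only: `log_event` is a hypothetical helper standing in for the real NetLogger API, and the record layout (JSON rather than ULM text) is an assumption for readability.

```python
import json
import sys
import time

def log_event(name, stream=sys.stdout, **fields):
    """Emit one timestamped event record at a critical point in the code.

    A synchronized system clock is assumed, as in the NetLogger design.
    """
    record = {"ts": time.time(), "event": name, **fields}
    stream.write(json.dumps(record) + "\n")

# Instrumented operation: events bracket the transfer so that post-hoc
# correlation can attribute the elapsed time to this component.
log_event("transfer.start", host="client-a", nbytes=1048576)
# ... perform the transfer ...
log_event("transfer.end", host="client-a", status="ok")
```

In the real toolkit the analogous calls would be inserted at every critical point and the application linked against the NetLogger library; the common log format is what lets the collection and visualization tools correlate events across components.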
A Census of X-Ray Gas in NGC 1068: Results from 450ks of Chandra HETG Observations
NASA Technical Reports Server (NTRS)
Kallman, T.; Evans, Daniel A.; Marshall, H.; Canizares, C.; Longinotti, A.; Nowak, M.; Schulz, N.
2013-01-01
We present models for the X-ray spectrum of the Seyfert 2 galaxy NGC 1068. These are fitted to data obtained using the High Energy Transmission Grating (HETG) on the Chandra X-ray observatory. The data show line and radiative recombination continuum (RRC) emission from a broad range of ions and elements. The models explore the importance of excitation processes for these lines including photoionization followed by recombination, radiative excitation by absorption of continuum radiation and inner shell fluorescence. The models show that the relative importance of these processes depends on the conditions in the emitting gas, and that no single emitting component can fit the entire spectrum. In particular, the relative importance of radiative excitation and photoionization/recombination differs according to the element and ion stage emitting the line. This in turn implies a diversity of values for the ionization parameter of the various components of gas responsible for the emission, ranging from log(ξ) = 1 to 3. Using this, we obtain an estimate for the total amount of gas responsible for the observed emission. The mass flux through the region included in the HETG extraction region is approximately 0.3 solar masses per year, assuming ordered flow at the speed characterizing the line widths. This can be compared with what is known about this object from other techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monson, L.M.; Lund, D.F.
1991-06-01
Five shallow gas-bearing Cretaceous intervals have been identified on the Fort Peck Reservation of northeastern Montana. They include the Lower Judith River Sandstone and shaly sandstone intervals in the Gammon, Niobrara, Greenhorn, and Mowry Formations. Stratigraphic correlations have been carried from southwestern Saskatchewan through the Bowdoin gas field to the reservation. Sparse yet widely distributed gas shows confirm this relatively untested resource. Each of these gas-bearing intervals belongs to a recognized stratigraphic cycle characterized by thick shales overlain by progradational shaly sandstones and siltstones. The bottom cycle (Skull Creek to Mowry) contains considerable nonmarine deposits, especially within the Muddy Sandstone interval, which is thickly developed in the eastern part of the reservation as a large valley-fill network. Some individual sandstone units are not continuous across the reservation. These, and those that correlate, appear to be related to paleotectonic features defined by northwest-trending lineament zones and by lineament zone intersections. Northeast-trending paleotectonic elements exert secondary influence on stratigraphic isopachs. Circular tectonic elements, which carry through to basement, also have anomalous stratigraphic expression. Conventional drilling has not been conducive to properly testing the Cretaceous gas potential on the reservation, but empirical well-log analysis suggests that gas can be identified by various crossover techniques. The Judith River Formation did produce gas for field use at East Poplar.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kristensen, L.; Dons, T.; Schioler, P.
1995-11-01
Correlation of wireline log data from the North Sea chalk reservoirs is frequently hampered by rather subtle log patterns in the chalk section due to the apparently monotonous nature of the chalk sediments, which may lead to ambiguous correlations. This study deals with a correlation technique based on an integration of biostratigraphic data, seismic interpretation, and wireline log correlation; this technique aims at producing a consistent reservoir subdivision that honors both the well data and the seismic data. This multidisciplinary approach has been used to subdivide and correlate the Maastrichtian chalk in the Dan field. The biostratigraphic subdivision is based on a new detailed dinoflagellate study of core samples from eight wells. Integrating the biostratigraphic results with three-dimensional seismic data allows recognition of four stratigraphic units within the Maastrichtian, bounded by assumed chronostratigraphic horizons. This subdivision is further refined by adding a seismic horizon and four horizons from wireline log correlations, establishing a total of nine reservoir units. The approximate chronostratigraphic nature of these units provides an improved interpretation of the depositional and structural patterns in this area. The three upper reservoir units pinch out and disappear in a northeasterly direction across the field. We interpret this stratal pattern as reflecting a relative sea level fall or regional basinal subsidence during the latest Maastrichtian, possibly combined with local synsedimentary uplift due to salt tectonics. Isochore maps indicate that the underlying six non-wedging units are unaffected by salt tectonics.
Development of Enabling Scientific Tools to Characterize the Geologic Subsurface at Hanford
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kenna, Timothy C.; Herron, Michael M.
2014-07-08
This final report to the Department of Energy provides a summary of activities conducted under our exploratory grant, funded through the U.S. DOE Subsurface Biogeochemical Research Program in the category of enabling scientific tools, covering the period from July 15, 2010 to July 14, 2013. The main goal of this exploratory project is to determine the parameters necessary to translate existing borehole log data into reservoir properties following scientifically sound petrophysical relationships. For this study, we focused on samples and Ge-based spectral gamma logging system (SGLS) data collected from wells located in the Hanford 300 Area. The main activities consisted of 1) the analysis of available core samples for a variety of mineralogical, chemical, and physical properties; 2) evaluation of selected spectral gamma logs, environmental corrections, and calibration; 3) development of algorithms and a proposed workflow that permits translation of log responses into useful reservoir properties such as lithology, matrix density, porosity, and permeability. These techniques have been successfully employed in the petroleum industry; however, the approach is relatively new when applied to subsurface remediation. This exploratory project has been successful in meeting its stated objectives. We have demonstrated that our approach can lead to an improved interpretation of existing well log data. The algorithms we developed can utilize available log data, in particular gamma and spectral gamma logs, and continued optimization will improve their application to ERSP goals of understanding subsurface properties.
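One standard petrophysical translation of the kind described, turning a bulk-density log reading into porosity, is the density-porosity relation. This is a textbook formula, not the project's specific algorithm; the matrix and fluid densities below are generic sandstone/water assumptions, not Hanford values.

```python
def density_porosity(rho_bulk, rho_matrix=2.65, rho_fluid=1.0):
    """phi = (rho_ma - rho_b) / (rho_ma - rho_f), densities in g/cm^3.

    Defaults assume a quartz matrix (2.65) and fresh-water fluid (1.0).
    """
    return (rho_matrix - rho_bulk) / (rho_matrix - rho_fluid)

# A bulk-density reading of 2.20 g/cm^3 from the log:
phi = density_porosity(2.20)
print(f"density porosity = {phi:.3f}")
```

In a real workflow the matrix density would itself be estimated from mineralogy (one of the core-sample measurements listed above), which is why calibrating logs against analyzed core matters.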
TH-AB-201-12: Using Machine Log-Files for Treatment Planning and Delivery QA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stanhope, C; Liang, J; Drake, D
2016-06-15
Purpose: To determine the segment reduction and dose resolution necessary for machine log-files to effectively replace current phantom-based patient-specific quality assurance, while minimizing computational cost. Methods: Elekta's Log File Convertor R3.2 records linac delivery parameters (dose rate, gantry angle, leaf position) every 40ms. Five VMAT plans [4 H&N, 1 Pulsed Brain] comprising 2 arcs each were delivered on the ArcCHECK phantom. Log-files were reconstructed in Pinnacle on the phantom geometry using 1/2/3/4° control point spacing and 2/3/4mm dose grid resolution. Reconstruction effectiveness was quantified by comparing 2%/2mm gamma passing rates of the original and log-file plans. Modulation complexity scores (MCS) were calculated for each beam to correlate reconstruction accuracy and beam modulation. Percent error in absolute dose for each plan-pair combination (log-file vs. ArcCHECK, original vs. ArcCHECK, log-file vs. original) was calculated for each arc and every diode greater than 10% of the maximum measured dose (per beam). By comparing standard deviations of the three plan-pair distributions, the relative noise of the ArcCHECK and log-file systems was elucidated. Results: The original plans exhibit a mean passing rate of 95.1±1.3%. The eight more modulated H&N arcs [MCS=0.088±0.014] and two less modulated brain arcs [MCS=0.291±0.004] yielded log-file pass rates most similar to the original plan when using 1°/2mm [0.05%±1.3% lower] and 2°/3mm [0.35±0.64% higher] log-file reconstructions respectively. Log-file and original plans displayed percent diode dose errors 4.29±6.27% and 3.61±6.57% higher than measurement. Excluding the phantom eliminates diode miscalibration and setup errors; log-file dose errors were 0.72±3.06% higher than the original plans, significantly less noisy.
Conclusion: For log-file reconstructed VMAT arcs, 1° control point spacing and 2mm dose resolution are recommended; however, less modulated arcs may allow less stringent reconstructions. Following the aforementioned reconstruction recommendations, the log-file technique is capable of detecting delivery errors with equivalent accuracy and less noise than ArcCHECK QA. I am funded by an Elekta Research Grant.
Active Neutron and Gamma Ray Instrumentation for In Situ Planetary Science Applications
NASA Technical Reports Server (NTRS)
Parsons, A.; Bodnarik, J.; Evans, L.; Floyd, S.; Lim, L.; McClanahan, T.; Namkung, M.; Schweitzer, J.; Starr, R.; Trombka, J.
2010-01-01
The Pulsed Neutron Generator-Gamma Ray And Neutron Detectors (PNG-GRAND) experiment is an innovative application of the active neutron-gamma ray technology so successfully used in oil field well logging and mineral exploration on Earth. The objective of our active neutron-gamma ray technology program at NASA Goddard Space Flight Center (NASA-GSFC) is to bring the PNG-GRAND instrument to the point where it can be flown on a variety of surface lander or rover missions to the Moon, Mars, Venus, asteroids, comets, and the satellites of the outer planets. Gamma-Ray Spectrometers (GRS) have been incorporated into numerous orbital planetary science missions and, especially in the case of the Mars Odyssey GRS, have contributed detailed maps of the elemental composition over the entire surface of Mars. However, orbital gamma ray measurements have low spatial sensitivity (100's of km) due to the low surface emission rates induced by cosmic rays and the consequent need to average over large surface areas. PNG-GRAND overcomes this impediment by incorporating a powerful neutron excitation source that permits high-sensitivity surface and subsurface measurements of bulk elemental compositions. PNG-GRAND combines a pulsed neutron generator (PNG) with gamma ray and neutron detectors to produce a landed instrument that determines subsurface elemental composition without needing to drill into a planet's surface, a great advantage in mission design. We are currently testing PNG-GRAND prototypes at a unique outdoor neutron instrumentation test facility recently constructed at NASA/GSFC, which consists of a 2 m x 2 m x 1 m granite structure placed outdoors in an empty field. Because an independent trace elemental analysis has been performed on the material, this granite sample is a known standard with which to compare both Monte Carlo simulations and our experimentally measured elemental composition data.
We will present data from operating PNG-GRAND in various experimental configurations on a known sample in a geometry that is identical to that on a planetary surface. We will also illustrate the use of gamma ray timing techniques to improve sensitivity and will compare the material composition results from our experiments to both an independent laboratory elemental composition analysis and MCNPX computer modeling results.
Mulware, Stephen Juma
2015-01-01
The properties of many biological materials often depend on the spatial distribution and concentration of the trace elements present in a matrix. Scientists have over the years tried various techniques, including classical physical and chemical analysis techniques, each with a different level of accuracy. However, with the development of spatially sensitive submicron beams, nuclear microprobe techniques using focused proton beams for the elemental analysis of biological materials have yielded significant success. In this paper, the basic principles of the commonly used microprobe techniques of STIM, RBS, and PIXE for trace elemental analysis are discussed, along with the details of sample preparation, detection, and data collection and analysis. Finally, an application of the techniques to the analysis of corn roots for elemental distribution and concentration is presented.
[Utilization suitability of forest resources in typical forest zone of Changbai Mountains].
Hao, Zhanqing; Yu, Deyong; Xiong, Zaiping; Ye, Ji
2004-10-01
Conservation of natural forest does not simply mean no logging. The Northeast China Forest Region has a logging quota for mature forest as part of the natural forest conservation project, so determining logging locations rationally and scientifically is very important. Recent theories of forest resources management advocate that the utilization of forest resources should adhere to the principle of sustainable use and pay attention to the ecological function of forest resources. According to the logging standards, RS and GIS techniques can be used to detect the precise location of forest resources and obtain information on forest areas and types, and thus provide more rational and scientific support for spatial choices about future utilization of forest resources. In this paper, the Lushuihe Forest Bureau was selected as a typical case in the Changbai Mountains Forest Region to assess the utilization conditions of forest resources, and some advice on spatial choices for future management of forest resources in the study area is offered.
Aligning observed and modelled behaviour based on workflow decomposition
NASA Astrophysics Data System (ADS)
Wang, Lu; Du, YuYue; Liu, Wei
2017-09-01
When business processes are largely supported by information systems, the availability of event logs generated from these systems, as well as the requirement for appropriate process models, is increasing. Business processes can be discovered, monitored and enhanced by extracting process-related information. However, some events cannot be correctly identified because of the explosion in the volume of event logs. Therefore, a new process mining technique based on a workflow decomposition method is proposed in this paper. Petri nets (PNs) are used to describe business processes, and conformance checking of event logs against process models is then investigated. A decomposition approach is proposed to divide large process models and event logs into several separate parts that can be analysed independently, while an alignment approach based on a state equation method in PN theory enhances the performance of conformance checking. Both approaches are implemented in the process mining framework ProM. The correctness and effectiveness of the proposed methods are illustrated through experiments.
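The core idea of conformance checking can be shown with a toy token-replay check on a tiny workflow net. This is a deliberately simpler technique than the alignment/state-equation approach the paper proposes, and the two-transition net below is an invented example (one token per place is assumed).

```python
# Workflow net: start -> A -> B -> end, each transition encoded as
# (input places it consumes, output places it produces).
NET = {
    "A": ({"start"}, {"p1"}),
    "B": ({"p1"}, {"end"}),
}

def replay(trace, net=NET, initial="start", final="end"):
    """Return True if every event in the trace can fire in order and the
    net ends in the final marking; False signals a deviation."""
    marking = {initial}
    for event in trace:
        inputs, outputs = net.get(event, (None, None))
        if inputs is None or not inputs <= marking:
            return False            # unknown event or missing token
        marking = (marking - inputs) | outputs
    return marking == {final}

print(replay(["A", "B"]))   # trace fits the model
print(replay(["B", "A"]))   # trace deviates: B cannot fire first
```

Real conformance checkers go further, computing an optimal alignment between log and model so that the *closest* model run is found even for non-fitting traces; the decomposition idea in the paper makes that search tractable on large nets.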
Technology for biomass feedstock production in southern forests and GHG implications
Bob Rummer; John Klepac; Jason Thompson
2012-01-01
Woody biomass production in the South can come from four distinct feedstocks - logging residues, thinnings, understory harvesting, or energywood plantations. A range of new technology has been developed to collect, process and transport biomass and a key element of technology development has been to reduce energy consumption. We examined three different woody feedstock...
Remote sensing application challenges in the Mekong region
Jeffrey Himel
2013-01-01
Forest degradation is not just one of the cornerstones of "REDD+", it is a critical element for Lao PDR and other countries where the primary driver of forest carbon loss is selective logging and small-scale conversion of forest for agriculture rather than deforestation. Unless we can reliably and accurately quantify the area of degradation using remote...
Soil productivity and harvest operations
Deborah Page-Dumroese
2007-01-01
Concern over changes in soil productivity due to forest management is often debated by forest managers and the public. One key element in the discussion is use of mechanized equipment (such as rubber-tired skidders, log forwarders, or tracked vehicles) to remove timber products from the forest. Part of the debate focuses on soil compaction, removal of nutrients when...
Life cycle performances of log wood applied for soil bioengineering constructions
NASA Astrophysics Data System (ADS)
Kalny, Gerda; Strauss-Sieberth, Alexandra; Strauss, Alfred; Rauch, Hans Peter
2016-04-01
Nowadays there is a high demand for engineering solutions that consider not only technical aspects but also ecological and aesthetic values. Soil bioengineering is a construction technique that uses biological components for hydraulic and civil engineering solutions. Soil bioengineering solutions are based on the application of living plants and other auxiliary materials, including among others log wood. This kind of construction material supports the soil bioengineering system until the plants, as living construction material, take over the stability function. It is therefore important to know about the durability and the degradation process of the wooden logs in order to retain the integral performance of a soil bioengineering system. These aspects are considered within the framework of the interdisciplinary research project "ELWIRA: Plants, wood, steel and concrete - life cycle performances as construction materials". Field investigations were therefore conducted on soil bioengineering construction material, specifically European larch wood logs, from different soil bioengineering structures at the river Wien. The drilling resistance, as a parameter for particular material characteristics of selected logs, was measured with a Rinntech Resistograph instrument at different positions on the wooden logs, covering three different exposure conditions: fully surrounded by air, with earth contact on one side, and near the water surface in wet-dry conditions. The age of the logs ranges from one year up to 20 years. Results show the progression of the drilling resistance throughout the whole cross section as an indicator for assessing soil bioengineering construction material. Logs surrounded by air showed a higher drilling resistance than logs with earth contact and those exposed to wet-dry conditions. The functional capability of the wooden logs was thus analysed and discussed in terms of different levels of degradation.
The results contribute to a sustainable and resource-conserving handling of building materials within the construction and maintenance of soil bioengineering structures.
Moore, Katie L; Lombi, Enzo; Zhao, Fang-Jie; Grovenor, Chris R M
2012-04-01
The ability to locate and quantify elemental distributions in plants is crucial to understanding plant metabolisms, the mechanisms of uptake and transport of minerals and how plants cope with toxic elements or elemental deficiencies. High-resolution secondary ion mass spectrometry (SIMS) is emerging as an important technique for the analysis of biological material at the subcellular scale. This article reviews recent work using the CAMECA NanoSIMS to determine elemental distributions in plants. The NanoSIMS is able to map elemental distributions at high resolution, down to 50 nm, and can detect very low concentrations (milligrams per kilogram) for some elements. It is also capable of mapping almost all elements in the periodic table (from hydrogen to uranium) and can distinguish between stable isotopes, which allows the design of tracer experiments. In this review, particular focus is placed upon studying the same or similar specimens with both the NanoSIMS and a wide range of complementary techniques, showing how the advantages of each technique can be combined to provide a fuller data set to address complex scientific questions. Techniques covered include optical microscopy, synchrotron techniques, including X-ray fluorescence and X-ray absorption spectroscopy, transmission electron microscopy, electron probe microanalysis, particle-induced X-ray emission and inductively coupled plasma mass spectrometry. Some of the challenges associated with sample preparation of plant material for SIMS analysis, the artefacts and limitations of the technique and future trends are also discussed.
Correlations between chromatographic parameters and bioactivity predictors of potential herbicides.
Janicka, Małgorzata
2014-08-01
Different liquid chromatography techniques, including reversed-phase liquid chromatography on Purosphere RP-18e, IAM.PC.DD2 and Cosmosil Cholester columns and micellar liquid chromatography with a Purosphere RP-8e column using buffered sodium dodecyl sulfate-acetonitrile as the mobile phase, were applied to study the lipophilic properties of 15 newly synthesized phenoxyacetic and carbamic acid derivatives, which are potential herbicides. Chromatographic lipophilicity descriptors were used to extrapolate log k parameters (log kw and log km) and log k values. Partitioning lipophilicity descriptors, i.e., log P coefficients in an n-octanol-water system, were computed from the molecular structures of the tested compounds. Bioactivity descriptors, including partition coefficients in a water-plant cuticle system and water-human serum albumin and coefficients for human skin partition and permeation, were calculated in silico by ACD/ADME software using the linear solvation energy relationship of Abraham. Principal component analysis was applied to describe similarities between various chromatographic and partitioning lipophilicities. Highly significant, predictive linear relationships were found between chromatographic parameters and bioactivity descriptors. © The Author [2013]. Published by Oxford University Press. All rights reserved.
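The extrapolation of log kw from isocratic measurements is conventionally a linear fit of log k against the organic-modifier fraction φ (the Snyder-Soczewinski relation log k = log kw - Sφ), with log kw read off as the intercept at φ = 0, i.e. pure water. A minimal sketch with invented, perfectly linear data:

```python
def fit_log_kw(phi, log_k):
    """Least-squares fit of log k = log_kw - S*phi; return the intercept."""
    n = len(phi)
    mx, my = sum(phi) / n, sum(log_k) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(phi, log_k))
             / sum((x - mx) ** 2 for x in phi))
    return my - slope * mx          # intercept at phi = 0 is log kw

# Illustrative retention data: acetonitrile fraction vs. measured log k
phi = [0.4, 0.5, 0.6, 0.7]
log_k = [1.10, 0.70, 0.30, -0.10]   # consistent with S = 4, log kw = 2.7
print(f"extrapolated log kw = {fit_log_kw(phi, log_k):.2f}")
```

Real data scatter about the line, and the extrapolation amplifies that scatter, which is one reason multiple column chemistries and micellar systems are compared in studies like this one.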
Finite-difference modeling of the electroseismic logging in a fluid-saturated porous formation
NASA Astrophysics Data System (ADS)
Guan, Wei; Hu, Hengshan
2008-05-01
In a fluid-saturated porous medium, an electromagnetic (EM) wavefield induces an acoustic wavefield due to the electrokinetic effect. A potential geophysical application of this effect is electroseismic (ES) logging, in which the converted acoustic wavefield is received in a fluid-filled borehole to evaluate the parameters of the porous formation around the borehole. In this paper, a finite-difference scheme is proposed to model the ES logging responses to a vertical low-frequency electric dipole along the borehole axis. The EM field excited by the electric dipole is first calculated separately by finite differences, and is then treated as a distributed source term in a set of extended Biot's equations for the converted acoustic wavefield in the formation. This set of equations is solved by a modified finite-difference time-domain (FDTD) algorithm that allows for the calculation of dynamic permeability, so that it is not restricted to low-frequency poroelastic wave problems. The perfectly matched layer (PML) technique, without splitting the fields, is applied to truncate the computational region. The simulated ES logging waveforms approximately agree with those obtained by the analytical method. The FDTD algorithm also applies to acoustic logging simulation in porous formations.
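The FDTD idea itself can be shown in a heavily simplified form. The sketch below is a 1-D scalar acoustic scheme with fixed (reflecting) boundaries and a Gaussian source pulse; the paper's scheme is poroelastic, driven by the EM source term, and uses PML absorbing boundaries, none of which are reproduced here.

```python
import math

nx, nt = 200, 300
c, dx = 1500.0, 1.0            # wave speed (m/s), grid spacing (m)
dt = 0.5 * dx / c              # time step chosen to satisfy CFL stability
r = (c * dt / dx) ** 2

p_prev = [0.0] * nx            # pressure at time step n-1
p = [0.0] * nx                 # pressure at time step n
src = nx // 2                  # source at the centre of the grid
peak = 0.0

for n in range(nt):
    p_next = [0.0] * nx
    # Second-order update in space and time for the interior points
    for i in range(1, nx - 1):
        p_next[i] = 2 * p[i] - p_prev[i] + r * (p[i+1] - 2 * p[i] + p[i-1])
    t = n * dt
    p_next[src] += math.exp(-((t - 0.02) / 0.005) ** 2)   # Gaussian pulse
    peak = max(peak, max(abs(v) for v in p_next))
    p_prev, p = p, p_next

print(f"peak |p| during run: {peak:.3f}")
```

The production algorithm replaces the scalar update with the extended Biot system, adds frequency-dependent (dynamic) permeability, and wraps the grid edges in PML layers so outgoing waves are absorbed rather than reflected.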
Using borehole flow data to characterize the hydraulics of flow paths in operating wellfields
Paillet, F.; Lundy, J.
2004-01-01
Understanding the flow paths in the vicinity of water well intakes is critical in the design of effective wellhead protection strategies for heterogeneous carbonate aquifers. High-resolution flow logs can be combined with geophysical logs and borehole-wall-image logs (acoustic televiewer) to identify the porous beds, solution openings, and fractures serving as conduits connecting the well bore to the aquifer. Qualitative methods of flow log analysis estimate the relative transmissivity of each water-producing zone, but do not indicate how those zones are connected to the far-field aquifer. Borehole flow modeling techniques can be used to provide quantitative estimates of both transmissivity and far-field hydraulic head in each producing zone. These data can be used to infer how the individual zones are connected with each other, and to the surrounding large-scale aquifer. Such information is useful in land-use planning and the design of well intakes to prevent entrainment of contaminants into water-supply systems. Specific examples of flow log applications in the identification of flow paths in operating wellfields are given for sites in Austin and Faribault, Minnesota. Copyright ASCE 2004.
Earthquake models using rate and state friction and fast multipoles
NASA Astrophysics Data System (ADS)
Tullis, T.
2003-04-01
The most realistic current earthquake models employ laboratory-derived non-linear constitutive laws. These are the rate and state friction laws, having both a non-linear viscous or direct effect and an evolution effect in which frictional resistance depends on time of stationary contact and has a memory of past slip velocity that fades with slip. The frictional resistance depends on the log of the slip velocity as well as the log of stationary hold time, and the fading memory involves an approximately exponential decay with slip. Due to the nonlinearity of these laws, analytical earthquake models are not attainable and numerical models are needed. The situation is even more difficult if true dynamic models are sought that deal with inertial forces and slip velocities on the order of 1 m/s, as are observed during dynamic earthquake slip. Additional difficulties that exist if the dynamic slip phase of earthquakes is modeled arise from two sources. First, many physical processes might operate during dynamic slip, but they are only poorly understood, the relative importance of the processes is unknown, and the processes are even more nonlinear than those described by the current rate and state laws. Constitutive laws describing such behaviors are still being developed. Second, treatment of inertial forces and the influence that dynamic stresses from elastic waves may have on slip on the fault requires keeping track of the history of slip on remote parts of the fault as far into the past as it takes waves to travel from there. This places even more stringent requirements on computer time. Challenges for numerical modeling of complete earthquake cycles are that both time steps and mesh sizes must be small. Time steps must be milliseconds during dynamic slip, and yet models must represent earthquake cycles 100 years or more in length; methods using adaptive step sizes are essential.
Element dimensions need to be on the order of meters, both to approximate continuum behavior adequately and to model microseismicity as well as large earthquakes. Modeling significant-sized earthquakes therefore requires millions of elements. Modeling methods like the boundary element method that involve Green's functions normally require computation times that increase with the square of the number N of elements, so using large N becomes impossible. We have adapted the Fast Multipole Method to this problem: the influences of sufficiently remote elements are grouped together, and the elements are indexed such that the computations are more efficient when run on parallel computers. Compute time varies with N log N rather than N squared. Computer programs are available that use this approach (http://www.servogrid.org/slide/GEM/PARK). Whether the multipole approach can be adapted to dynamic modeling is unclear.
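The rate- and state-dependent law described above can be written out directly in its common Dieterich-Ruina form, μ = μ0 + a·ln(V/V0) + b·ln(V0·θ/Dc), where the a term is the direct effect and the b term the evolution effect. The parameter values below are typical laboratory numbers for illustration, not values from any specific model.

```python
import math

def friction(V, theta, mu0=0.6, a=0.010, b=0.015, V0=1e-6, Dc=1e-5):
    """Dieterich-Ruina rate-and-state friction coefficient.

    V: slip velocity (m/s); theta: state variable (s);
    V0: reference velocity; Dc: characteristic slip distance (m).
    """
    return mu0 + a * math.log(V / V0) + b * math.log(V0 * theta / Dc)

# At steady state theta_ss = Dc / V, so mu depends on log slip velocity,
# with slope (a - b); here b > a, i.e. velocity weakening (stick-slip).
for V in (1e-7, 1e-6, 1e-5):
    mu_ss = friction(V, theta=1e-5 / V)
    print(f"V = {V:.0e} m/s -> steady-state mu = {mu_ss:.4f}")
```

The velocity-weakening case (b > a) is the one that permits unstable, earthquake-like slip, which is why these two small parameters control so much of the model behavior.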
Font, María; Ardaiz, Elena; Cordeu, Lucia; Cubedo, Elena; García-Foncillas, Jesús; Sanmartin, Carmen; Palop, Juan-Antonio
2006-03-15
In an attempt to discover the essential features that would allow us to explain the differences in cytotoxic activity shown by a series of symmetrical diaryl derivatives with nitrogenated functions, we have studied by molecular modelling techniques the variation in Log P and conformational behaviour, in terms of structural modifications. The Log P data--although they provide few clues concerning the observed variability in activity--suggest that an initial separation of active and inactive compounds is possible based on this parameter. The subsequent study of the conformational behaviour of the compounds, selected according to their Log P values, showed that the active compounds preferentially display an extended conformation and inactive ones are associated with a certain type of folding, with a triangular-type conformation adopted in these cases.
Spectroscopic Analyses of Neutron Capture Elements in Open Clusters
NASA Astrophysics Data System (ADS)
O'Connell, Julia E.
The evolution of elements as a function of age throughout the Milky Way disk provides strong constraints for galaxy evolution models and on star formation epochs. In an effort to provide such constraints, we conducted an investigation into r- and s-process elemental abundances for a large sample of open clusters as part of an optical follow-up to the SDSS-III/APOGEE-1 near-infrared survey. To obtain data for neutron capture abundance analysis, we conducted a long-term observing campaign spanning three years (2013-2016) using the McDonald Observatory Otto Struve 2.1-meter telescope and Sandiford Cass Echelle Spectrograph (SES, R = lambda/Delta-lambda ˜60,000). The SES provides a wavelength range of ˜1400 A, making it uniquely suited to investigate a number of other important chemical abundances as well as the neutron capture elements. For this study, we derive abundances for 18 elements covering four nucleosynthetic families (light, iron-peak, neutron capture and alpha elements) for ˜30 open clusters within 6 kpc of the Sun, with ages ranging from ˜80 Myr to ˜10 Gyr. Both equivalent width (EW) measurements and spectral synthesis methods were employed to derive abundances for all elements. Initial estimates for model stellar atmospheres (effective temperature and surface gravity) were provided by the APOGEE data set, and then re-derived for our optical spectra by removing abundance trends as a function of excitation potential and reduced width log(EW/lambda). With the exception of Ba II and Zr I, abundance analyses for all neutron capture elements were performed by generating synthetic spectra from the new stellar parameters. In order to remove molecular contamination, or blending from nearby atomic features, the synthetic spectra were modeled by a best-fit Gaussian to the observed data. Nd II shows a slight enhancement in all cluster stars, while other neutron capture elements follow solar abundance trends.
Ba II shows a large cluster-to-cluster abundance spread, consistent with other open cluster abundance studies. From log(Age) ˜8.5, this large spread as a function of age appears to replicate the findings from an earlier, much debated study by D'Orazi et al. (2009), which found a linear trend of decreasing barium abundance with increasing age.
Volatile Element Behavior During Melting and Vaporisation on Earth and Protoplanets.
NASA Astrophysics Data System (ADS)
Wood, B. J.; Norris, C. A.
2017-12-01
During accretion, the Earth and many of the smaller bodies which were added to it underwent periods of partial melting, vaporisation and re-condensation. This resulted in patterns of volatile element depletion relative to CI chondrite which are difficult to interpret. The behavior of moderately volatile elements (Pb, Cd, Zn, Cu, In, Tl, etc.) during these melting, vaporisation and condensation processes is usually approximated by the temperature of condensation from a gas of solar composition. Thus, Tl and In have low condensation temperatures and are regarded as the most volatile of this group. In order to test this volatility approximation we have studied the vaporisation behavior of 13 elements (Ag, Bi, Cd, Cr, Cu, Ga, Ge, In, Pb, Sb, Sn, Tl, Zn) from molten basalt at 1 atm pressure and oxygen fugacities between Ni-NiO and 2 log units below Fe-FeO. The relative volatilities of the elements turn out to be only weakly correlated with condensation temperature, indicating that the latter is a poor proxy for volatility on molten bodies. Cu, Zn and In, for example, all have similar volatility in the oxygen fugacity range of concern, despite the condensation temperature of Cu (1037 K at 10^-4 bar) being 500 K greater than that of In. The oxygen fugacity dependence of volatility indicates that the volatile species are, for all elements, more reduced than the melt species. We addressed the differences between condensation temperature and relative volatility in two steps. First, we used metal-silicate partitioning experiments to estimate the activity coefficients of the trace element oxides in silicate melts. We then used available thermodynamic data to compute the vapor pressures of the stable species of these 13 elements over the silicate melt at oxygen fugacities ranging from Ni-NiO to about 6 log units below Fe-FeO, which approximates the solar gas.
Thus we find that the presence of Cl and S in the solar gas, and the stable Cl and S species of In, Tl, Ga, Ge, Cd and Sn, are important contributing factors to volatility in the solar nebula. Our measured volatilities from silicate melt under reducing (S- and Cl-absent) conditions are consistent with abundances in the silicate Earth, indicating that these moderately volatile elements were added to Earth in bodies which had undergone episodes of melting and vaporisation.
Time-resolved characterization of primary emissions from residential wood combustion appliances.
Heringa, M F; DeCarlo, P F; Chirico, R; Lauber, A; Doberer, A; Good, J; Nussbaumer, T; Keller, A; Burtscher, H; Richard, A; Miljevic, B; Prevot, A S H; Baltensperger, U
2012-10-16
Primary emissions from a log wood burner and a pellet boiler were characterized by online measurements of the organic aerosol (OA) using a high-resolution time-of-flight aerosol mass spectrometer (HR-TOF-AMS) and of black carbon (BC). The OA and BC concentrations measured during the burning cycle of the log wood burner, fueled batchwise with wood logs, were highly variable and generally dominated by BC. The emissions of the pellet boiler had, besides inorganic material, a high fraction of OA and a minor contribution of BC. However, during artificially induced poor burning, BC was the dominating species with ∼80% of the measured mass. The elemental O:C ratio of the OA was generally found in the range of 0.2-0.5 during the startup phase or after reloading of the log wood burner. During the burnout or smoldering phase, O:C ratios increased up to 1.6-1.7, which is similar to the ratios found for the pellet boiler during stable burning conditions and higher than the O:C ratios observed for highly aged ambient OA. The organic emissions of both burners have a very similar H:C ratio at a given O:C ratio and therefore fall on the same line in the Van Krevelen diagram.
2011-01-01
Background: HIV-1 is characterized by increased genetic heterogeneity, which tends to hinder the reliability of detection and accuracy of HIV-1 RNA quantitation assays. Methods: In this study, the Abbott RealTime HIV-1 (Abbott RealTime) assay was compared to the Roche Cobas TaqMan HIV-1 (Cobas TaqMan) and the Siemens Versant HIV-1 RNA 3.0 (bDNA 3.0) assays, using clinical samples of various viral load levels and subtypes from Greece, where the recent epidemiology of HIV-1 infection has been characterized by increasing genetic diversity and a marked increase in subtype A genetic strains among newly diagnosed infections. Results: A high correlation was observed between the quantitative results obtained by the Abbott RealTime and the Cobas TaqMan assays. Viral load values quantified by the Abbott RealTime were on average lower than those obtained by the Cobas TaqMan, with a mean (SD) difference of -0.206 (0.298) log10 copies/ml. The mean differences according to HIV-1 subtypes between the two techniques for samples of subtype A, B, and non-A/non-B were 0.089, -0.262, and -0.298 log10 copies/ml, respectively. Overall, differences were less than 0.5 log10 for 85% of the samples, and >1 log10 in only one subtype B sample. Similarly, the Abbott RealTime and bDNA 3.0 assays yielded a very good correlation of quantitative results, whereas viral load values assessed by the Abbott RealTime were on average higher (mean (SD) difference: 0.160 (0.287) log10 copies/ml). The mean differences according to HIV-1 subtypes between the two techniques for subtype A, B and non-A/non-B samples were 0.438, 0.105 and 0.191 log10 copies/ml, respectively. Overall, the majority of samples (86%) differed by less than 0.5 log10, while none of the samples showed a deviation of more than 1.0 log10.
Conclusions: In an area of changing HIV-1 subtype pattern, the Abbott RealTime assay showed a high correlation and good agreement of results when compared both to the Cobas TaqMan and bDNA 3.0 assays, for all HIV-1 subtypes tested. All three assays could adequately determine viral load from samples of different HIV-1 subtypes. However, assay variation should be taken into account when viral load monitoring of the same individual is assessed by different systems. PMID:21219667
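The agreement metrics quoted above (mean and SD of the pairwise difference, and the fraction of samples within 0.5 log10) can be computed with a short Bland-Altman-style sketch; the sample arrays below are made up for illustration:

```python
import numpy as np

def assay_agreement(log_a, log_b):
    # Pairwise agreement of two assays on the log10 viral-load scale:
    # returns bias (mean difference), SD of differences, and the fraction
    # of samples agreeing within 0.5 log10 copies/ml.
    d = np.asarray(log_a) - np.asarray(log_b)
    return d.mean(), d.std(ddof=1), float(np.mean(np.abs(d) < 0.5))
```

The 95% limits of agreement, if desired, are bias ± 1.96 SD.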
Improved Yttrium and Zirconium Abundances in Metal-Poor Stars
NASA Astrophysics Data System (ADS)
Violante, Renata; Biemont, E.; Cowan, J. J.; Sneden, C.
2012-01-01
We present new abundances of the lighter n-capture elements yttrium (Z=39) and zirconium (Z=40) in the very metal-poor, r-process-rich stars BD+17 3248 and HD 221170. Very accurate abundances were obtained by use of the new transition probabilities for Y II published by Biémont et al. 2011, and for Zr II by Malcheva et al. 2006, and by expanding the number of transitions employed for each element. For example, in BD+17 3248, we find log ε = -0.03 ± 0.03 (σ = 0.15, from 23 lines) for Y II. For Zr II, log ε = 0.65 ± 0.03 (σ = 0.10, from 13 lines). The resulting abundance ratio is log ε(Y/Zr) = -0.68 ± 0.05. The results for HD 221170 are in accord with those of BD+17 3248. The number of lines used to form the abundance means has increased significantly since the original studies of these stars, resulting in more trustworthy abundances. These observed abundance ratios are in agreement with an r-process-only value predicted from stellar models, but are under-abundant compared to an empirical model derived from direct analyses of meteoritic material. This ambiguity should stimulate further nucleosynthetic analysis to explain this abundance ratio. We would like to extend our gratitude to NSF grant AST-0908978 and the University of Texas Astronomy Department Rex G. Baker, Jr. Endowment for their financial support of this project.
Rajasekaran, Sanguthevar
2013-01-01
The design of efficient tile sets for self-assembling rectilinear shapes is of critical importance in algorithmic self-assembly. A lower bound on the tile complexity of any deterministic self-assembly system for an n × n square is Ω(log(n)/log(log(n))) (inferred from Kolmogorov complexity). Deterministic self-assembly systems with optimal tile complexity have been designed for squares and related shapes in the past. However, designing Θ(log(n)/log(log(n))) unique tiles specific to a shape is still an intensive task in the laboratory. On the other hand, copies of a tile can be made rapidly using PCR (polymerase chain reaction) experiments. This led to the study of self-assembly under tile concentration programming models. We present two major results in this paper on the concentration programming model. First, we show how to self-assemble rectangles with a fixed aspect ratio (α:β), with high probability, using Θ(α + β) tiles. This result is much stronger than the existing results by Kao et al. (Randomized self-assembly for approximate shapes, LNCS, vol 5125. Springer, Heidelberg, 2008) and Doty (Randomized self-assembly for exact shapes. In: proceedings of the 50th annual IEEE symposium on foundations of computer science (FOCS), IEEE, Atlanta, pp 85–94, 2009), which can only self-assemble squares and rely on tiles which perform binary arithmetic. Our result is based on a technique called staircase sampling. This technique eliminates the need for sub-tiles which perform binary arithmetic, reduces the constant in the asymptotic bound, and eliminates the need for approximate frames (Kao et al. Randomized self-assembly for approximate shapes, LNCS, vol 5125. Springer, Heidelberg, 2008). Our second result applies staircase sampling to the equimolar concentration programming model (The tile complexity of linear assemblies.
In: proceedings of the 36th international colloquium automata, languages and programming: Part I on ICALP ’09, Springer-Verlag, pp 235–253, 2009), to self assemble rectangles (of fixed aspect ratio) with high probability. The tile complexity of our algorithm is Θ(log(n)) and is optimal on the probabilistic tile assembly model (PTAM)—n being an upper bound on the dimensions of a rectangle. PMID:24311993
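The asymptotic bounds quoted in this abstract can be compared numerically; the functions below evaluate only the leading terms (the hidden constants are not given in the paper, so this is purely an illustration of how slowly the square lower bound grows relative to log n):

```python
import math

def square_tile_lower_bound(n):
    # Kolmogorov-complexity lower bound for a deterministic n x n square:
    # Omega(log n / log log n), leading term only.
    return math.log(n) / math.log(math.log(n))

def rectangle_tile_count(alpha, beta):
    # Leading term of the Theta(alpha + beta) staircase-sampling bound for
    # a rectangle of fixed aspect ratio alpha:beta (constant factor omitted).
    return alpha + beta
```

For n = 2^20, the square bound is only about 5.3, versus log2(n) = 20 for a naive binary counter, which is the gap the optimal constructions exploit.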
Abundance and Morphological Effects of Large Woody Debris in Forested Basins of Southern Andes
NASA Astrophysics Data System (ADS)
Andreoli, A.; Comiti, F.; Lenzi, M. A.
2006-12-01
The Southern Andes mountain range represents an ideal location for studying large woody debris (LWD) in streams draining forested basins, thanks to the presence of both pristine and managed woodland and to the generally low level of human alteration of stream corridors. However, no published investigations have been performed so far in such a large region. The investigated sites of this research are three basins (9-13 km2 drainage area, third-order channels) covered by Nothofagus forests: two of them are located in the Southern Chilean Andes (the Tres Arroyos in the Malalcahuello National Reserve and the Rio Toro within the Malleco Natural Reserve) and one basin lies in the Argentinean Tierra del Fuego (the Buena Esperanza basin, near the city of Ushuaia). All wood pieces larger than 10 cm in diameter and 1 m in length were measured as LWD, both in the active channel and in the adjacent active floodplain. Pieces forming log jams were all measured, and the geometrical dimensions of the jams were taken. Jam type was defined based on the Abbe and Montgomery (2003) classification. Sediment stored behind log-steps and valley jams was evaluated by approximating the accumulated sediment as a solid wedge whose geometrical dimensions were measured. Additional information was recorded for each LWD piece during the field survey: type (log, rootwad, log with rootwad attached), orientation to flow, origin (floated, bank erosion, landslide, natural mortality, harvest residuals) and position (log-step, in-channel, channel-bridging, channel margins, bankfull edge). In the Tres Arroyos, the average LWD volume stored within the bankfull channel is 710 m3 ha-1. The average number of pieces is 1,004 per hectare of bankfull channel area. Log-steps represent about 22% of all steps, whereas the elevation loss due to LWD (log-steps and valley jams) results in a 27% loss of the total stream potential energy.
About 1,600 m3 of sediment (assuming a porosity of 20%) is stored in the main channel behind LWD structures, i.e. approximately 1,000 m3 per km of channel length, corresponding to approximately 150% of the annual sediment yield. In the Rio Toro, the average LWD volume and number of elements stored are much lower, respectively 117 m3 ha-1 and 215 pieces ha-1. Neither log-steps nor valley jams were observed, the longitudinal profile appears unaffected by LWD, and no sediment storage can be attributed to woody debris. The low LWD storage and impact in this channel is likely due to the general stability of its hillslopes, in contrast to the Tres Arroyos, where extensive landslides and debris flows convey a great deal of wood into the stream. Finally, in the Buena Esperanza, the average LWD volume stored in the active channel is quite low (120 m3 ha-1), but the average number of pieces is the highest with 1,397 pieces ha-1. This is due to the smaller dimensions of LWD elements delivered by trees growing in a colder climate such as that characterizing the Tierra del Fuego. The morphological influence of wood in this channel is nevertheless very important, with the presence of large valley jams and high log-steps imparting a macro-scale stepped profile to the channel, with a total energy dissipation due to LWD (log-steps and valley jams) of about 24% of the stream potential energy. The sediment stored behind log-steps and valley jams amounts to about 1,290 m3, i.e. 700 m3 km-1, but unfortunately no values of sediment yield are available for this basin.
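The wedge approximation used above for sediment stored behind log-steps reduces to a short calculation; the triangular-wedge formula and default porosity below are illustrative (the abstract assumes 20% porosity but does not give the wedge geometry):

```python
def wedge_sediment_volume(width, height, length, porosity=0.2):
    # Sediment stored behind a log-step, approximated as a solid triangular
    # wedge (0.5 * w * h * l), corrected to solid-sediment volume by
    # removing the assumed pore fraction.
    return 0.5 * width * height * length * (1.0 - porosity)
```

Summing such wedges over all log-steps and valley jams in a reach gives the per-kilometer storage figures reported in the text.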
Yearly report, Yucca Mountain project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brune, J.N.
1992-09-30
We proposed to (1) develop our data logging and analysis equipment and techniques for analyzing seismic data from the Southern Great Basin Seismic Network (SGBSN); (2) investigate the SGBSN data for evidence of seismicity patterns, depth distribution patterns, and correlations with geologic features; (3) repair and maintain our three broadband downhole digital seismograph stations at Nelson, Nevada; Troy Canyon, Nevada; and Deep Springs, California; (4) install, operate, and log data from a supersensitive microearthquake array at Yucca Mountain; and (5) analyze data from microearthquakes relative to seismic hazard at Yucca Mountain.
Simpao, Allan; Heitz, James W; McNulty, Stephen E; Chekemian, Beth; Brenn, B Randall; Epstein, Richard H
2011-02-01
Residents in anesthesia training programs throughout the world are required to document their clinical cases to help ensure that they receive adequate training. Current systems involve self-reporting, are subject to delayed updates and misreported data, and do not provide a practicable method of validation. Anesthesia information management systems (AIMS) are being used increasingly in training programs and are a logical source for verifiable documentation. We hypothesized that case logs generated automatically from an AIMS would be sufficiently accurate to replace the current manual process. We based our analysis on the data reporting requirements of the Accreditation Council for Graduate Medical Education (ACGME). We conducted a systematic review of ACGME requirements and our AIMS record, and made modifications after identifying data element and attribution issues. We studied 2 methods (parsing of free-text procedure descriptions and CPT4 procedure code mapping) to automatically determine ACGME case categories, generated AIMS-based case logs, and compared these to assignments made by manual inspection of the anesthesia records. We also assessed under- and overreporting of cases entered manually by our residents into the ACGME website. The parsing and mapping methods assigned cases to a majority of the ACGME categories with accuracies of 95% and 97%, respectively, as compared with determinations made by 2 residents and 1 attending who manually reviewed all procedure descriptions. Comparison of AIMS-based case logs with reports from the ACGME Resident Case Log System website showed that >50% of residents either underreported or overreported their total case counts by at least 5%. The AIMS database is a source of contemporaneous documentation of resident experience that can be queried to generate valid, verifiable case logs.
The extent of AIMS adoption by academic anesthesia departments should encourage accreditation organizations to support uploading of AIMS-based case log files to improve accuracy and to decrease the clerical burden on anesthesia residents.
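The free-text parsing method described above can be sketched with keyword patterns; the category names and keywords below are hypothetical illustrations, not the actual ACGME categories or the mapping used in the study:

```python
import re

# Hypothetical keyword-to-category map. A production system would instead
# map CPT4 procedure codes or validated phrase dictionaries to the real
# ACGME case categories.
CATEGORY_PATTERNS = {
    "cardiac": re.compile(r"\b(cabg|valve|bypass)\b", re.I),
    "obstetric": re.compile(r"\b(cesarean|c-section|labor)\b", re.I),
    "intrathoracic": re.compile(r"\b(lobectomy|thoracotomy)\b", re.I),
}

def categorize(description):
    # Assign a free-text procedure description to zero or more categories;
    # unmatched descriptions fall through to "uncategorized" for manual review.
    hits = [c for c, pat in CATEGORY_PATTERNS.items() if pat.search(description)]
    return hits or ["uncategorized"]
```

The study's 95% parsing accuracy suggests that even simple pattern matching, once tuned, covers most records, with the remainder flagged for manual review.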
NASA Astrophysics Data System (ADS)
Zou, C.; Zhao, J.; Zhang, X.; Peng, C.; Zhang, S.
2017-12-01
The Continental Scientific Drilling Project of Songliao Basin is a drilling project under the framework of the ICDP. It aims at detecting Cretaceous environmental/climate changes and exploring potential resources near or beneath the base of the basin. The main hole, SK-2 East Borehole, has been drilled to penetrate through the Cretaceous formation. A variety of geophysical log data were collected from the borehole, which provide a great opportunity to analyze the thermal properties of the in-situ rock surrounding the borehole. The geothermal gradients were derived directly from temperature logs recorded 41 days after shut-in. The matrix and bulk thermal conductivity of rock were calculated with the geometric-mean model, in which mineral/rock contents and porosity were required as inputs (Fuchs et al., 2014). Accurate mineral contents were available from the elemental capture spectroscopy logs, and porosity data were derived from conventional logs (density, neutron and sonic). The heat production data were calculated by means of the concentrations of uranium, thorium and potassium determined from natural gamma-ray spectroscopy logs. Then, the heat flow was determined by using the values of geothermal gradients and thermal conductivity. The thermal parameters of in-situ rock over the depth interval of 0-4500 m in the borehole were derived from geophysical logs. Statistically, the numerical ranges of the thermal parameters are in good agreement with the values measured in both laboratory and field in this area. The results show that high geothermal gradient and heat flow exist over the whole Cretaceous formation, with anomalously high values in the Qingshankou formation (1372.0-1671.7 m) and the Quantou formation (1671.7-2533.5 m). These results are useful for characterizing the geothermal regime and exploring geothermal resources in the basin.
Acknowledgment: This work was supported by the "China Continental Scientific Drilling Program of Cretaceous Songliao Basin (CCSD-SK)" of China Geological Survey Projects (NO. 12120113017600).
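The geometric-mean mixing step described above can be sketched in a few lines; the mineral conductivities and the 0.6 W/m/K pore-water value below are common textbook defaults, not values from this study:

```python
def geometric_mean_conductivity(lams, fracs, porosity, lam_fluid=0.6):
    # Geometric-mean model (as in Fuchs et al., 2014, cited in the text):
    # matrix conductivity from mineral volume fractions (which must sum
    # to 1), then bulk conductivity with a water-filled pore space.
    lam_matrix = 1.0
    for lam, f in zip(lams, fracs):
        lam_matrix *= lam ** f
    return lam_matrix ** (1.0 - porosity) * lam_fluid ** porosity

def heat_flow(gradient_k_per_m, conductivity_w_per_mk):
    # Fourier's law: q = lambda * dT/dz, in W/m^2.
    return conductivity_w_per_mk * gradient_k_per_m
```

For example, a 50/50 quartz-feldspar rock (assumed conductivities 7.7 and 2.3 W/m/K) at 20% porosity gives a bulk conductivity near 2.85 W/m/K, which multiplied by the log-derived gradient yields the heat flow.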
Schlottmann, Jamie L.; Funkhouser, Ron A.
1991-01-01
Chemical analyses of water from eight test holes and geophysical logs for nine test holes drilled in the Central Oklahoma aquifer are presented. The test holes were drilled to investigate local occurrences of potentially toxic, naturally occurring trace substances in ground water. These trace substances include arsenic, chromium, selenium, residual alpha-particle activities, and uranium. Eight of the nine test holes were drilled near wells known to contain large concentrations of one or more of the naturally occurring trace substances. One test hole was drilled in an area known to have only small concentrations of any of the naturally occurring trace substances. Water samples were collected from one to eight individual sandstone layers within each test hole. A total of 28 water samples, including four duplicate samples, were collected. The temperature, pH, specific conductance, alkalinity, and dissolved-oxygen concentrations were measured at the sample site. Laboratory determinations included major ions, nutrients, dissolved organic carbon, and trace elements (aluminum, arsenic, barium, beryllium, boron, cadmium, chromium, hexavalent chromium, cobalt, copper, iron, lead, lithium, manganese, mercury, molybdenum, nickel, selenium, silver, strontium, vanadium, and zinc). Radionuclide activities and stable-isotope (δ) values also were determined, including: gross-alpha-particle activity, gross-beta-particle activity, radium-226, radium-228, radon-222, uranium-234, uranium-235, uranium-238, total uranium, carbon-13/carbon-12, deuterium/hydrogen-1, oxygen-18/oxygen-16, and sulfur-34/sulfur-32. Additional analyses of arsenic and selenium species are presented for selected samples, as well as analyses of density and iodine for two samples, tritium for three samples, and carbon-14 for one sample. Geophysical logs for most test holes include caliper, neutron, gamma-gamma, natural-gamma, spontaneous-potential, long- and short-normal resistivity, and single-point-resistance logs.
Logs for test-hole NOTS 7 do not include long- and short-normal resistivity, spontaneous-potential, or single-point-resistance logs. Logs for test-hole NOTS 7A include only caliper and natural-gamma logs.
Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J
2016-05-01
Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of the link function chosen. We generalize the Tsiatis GOF statistic originally developed for logistic GLMCCs (TG) so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J(2)) statistics can be applied directly. In a simulation study, TG, HL, and J(2) were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J(2) were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC; in this case, TG had more power than HL or J(2). © 2015 John Wiley & Sons Ltd/London School of Economics.
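The grouped chi-square construction behind the HL statistic can be sketched for the canonical logistic case (this is a generic textbook HL implementation, not the authors' TG generalization; the IRLS fitter and decile grouping are standard choices):

```python
import numpy as np
from scipy import stats

def fit_logistic(X, y, iters=25):
    # Newton-Raphson (IRLS) fit of a logistic GLM with an intercept term.
    Xd = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(Xd.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xd @ beta))
        W = p * (1.0 - p)                         # IRLS weights
        beta += np.linalg.solve((Xd.T * W) @ Xd, Xd.T @ (y - p))
    return beta, 1.0 / (1.0 + np.exp(-Xd @ beta))

def hosmer_lemeshow(y, p, g=10):
    # Group observations into g bins by fitted probability, then compare
    # observed vs expected event counts with a chi-square statistic,
    # referred to a chi-square distribution with g-2 degrees of freedom.
    order = np.argsort(p)
    chi2 = 0.0
    for idx in np.array_split(order, g):
        n, obs, exp = len(idx), y[idx].sum(), p[idx].sum()
        chi2 += (obs - exp) ** 2 / (exp * (1.0 - exp / n))
    return chi2, stats.chi2.sf(chi2, g - 2)
```

Under a noncanonical link, only the fitting step changes (the working weights and linear predictor follow the chosen link); the grouping and chi-square comparison are identical, which is why a common grouping method could be used across all four link functions in the simulation.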
Direct push driven in situ color logging tool (CLT): technique, analysis routines, and application
NASA Astrophysics Data System (ADS)
Werban, U.; Hausmann, J.; Dietrich, P.; Vienken, T.
2014-12-01
Direct push technologies have recently seen broad development, providing several tools for in situ parameterization of unconsolidated sediments. One of these techniques is the measurement of soil colors, a proxy that relates to soil/sediment properties. We introduce the direct push driven color logging tool (CLT) for real-time and depth-resolved investigation of soil colors within the visible spectrum. Until now, no routines have existed for handling highly resolved (mm-scale) soil color data. To develop such a routine, we transform raw data (CIEXYZ) into soil color surrogates of selected color spaces (CIExyY, CIEL*a*b*, CIEL*c*h*, sRGB) and denoise small-scale natural variability by Haar and Daubechies-4 wavelet transformation, yielding interpretable color logs over depth. However, interpreting color log data as a single application remains challenging. Additional information, such as site-specific knowledge of the geological setting, is required to correlate soil color data to specific layer properties. Hence, we provide exemplary results from a joint interpretation of in situ-obtained soil color data and state-of-the-art direct push based profiling tool data and discuss the benefit of the additional data. The developed routine is capable of transferring the colorimetric data into interpretable color surrogates. Soil color data proved to correlate with small-scale lithological/chemical changes (e.g., grain size, oxidative and reductive conditions), especially when combined with additional direct push vertical high-resolution data (e.g., cone penetration testing and soil sampling). Thus, the technique allows enhanced profiling by providing another reproducible high-resolution parameter for analyzing subsurface conditions. This opens potential new areas of application and new outputs for such data in site investigation.
It is our intention to improve the color measurements, both in method of application and in data interpretation, so as to better characterize vadose zone/soil/sediment properties.
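The Haar wavelet denoising step mentioned above can be sketched with a plain multi-level transform and hard thresholding (the level count and threshold below are ad hoc assumptions; the authors' actual parameters are not given):

```python
import numpy as np

def haar_denoise(signal, levels=3, thresh=0.1):
    # Multi-level Haar wavelet shrinkage of a depth-resolved log.
    # len(signal) must be divisible by 2**levels.
    x = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
        d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
        d[np.abs(d) < thresh] = 0.0            # hard-threshold small details
        details.append(d)
        x = a
    for d in reversed(details):                # inverse Haar transform
        y = np.empty(2 * len(x))
        y[0::2] = (x + d) / np.sqrt(2)
        y[1::2] = (x - d) / np.sqrt(2)
        x = y
    return x
```

With the threshold set to zero the transform is exactly invertible, which makes the shrinkage step the only source of smoothing.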
VizieR Online Data Catalog: Double stars with wide separations in the AGK3 (Halbwachs+, 2016)
NASA Astrophysics Data System (ADS)
Halbwachs, J. L.; Mayor, M.; Udry, S.
2016-10-01
A large list of common proper motion stars selected from the third Astronomischen Gesellschaft Katalog (AGK3) was monitored with the CORAVEL (for COrrelation RAdial VELocities) spectrovelocimeter, in order to prepare a sample of physical binaries with very wide separations. In paper I, 66 stars received special attention, since their radial velocities (RV) seemed to be variable. These stars were monitored over several years in order to derive the elements of their spectroscopic orbits. In addition, 10 of them received accurate RV measurements from the SOPHIE spectrograph of the T193 telescope at the Observatory of Haute-Provence. For deriving the orbital elements of double-lined spectroscopic binaries (SB2s), a new method was applied, which assumed that the RVs of blended measurements are linear combinations of the RVs of the components. 13 SB2 orbits were thus calculated. The orbital elements were eventually obtained for 52 spectroscopic binaries (SBs), two of them making a triple system. 40 SBs received their first orbit, and the orbital elements were improved for 10 others. In addition, 11 SBs were discovered with very long periods, for which the orbital parameters were not found. It appeared that HD 153252 has a close companion, which is a candidate brown dwarf with a minimum mass of 50 Jupiter masses. In paper II, 80 wide binaries (WBs) were detected, and 39 optical pairs were identified. Adding CPM stars with separations close enough to be almost certainly physical, a "bias-controlled" sample of 116 wide binaries was obtained, and used to derive the distribution of separations from 100 to 30,000 au. The distribution obtained does not match the log-constant distribution, but is in agreement with the log-normal distribution. The spectroscopic binaries detected among the WB components were used to derive statistical information about the multiple systems. The close binaries in WBs seem to be similar to those detected in other field stars.
As for the WBs, they seem to obey the log-normal distribution of periods. The number of quadruple systems is in agreement with the "no correlation" hypothesis; this indicates that an environment conducive to the formation of WBs doesn't favor the formation of subsystems with periods shorter than 10 years. (9 data files).
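The comparison against a log-constant separation law can be sketched as a one-sample test (this is an illustrative approach, not necessarily the statistical method used in the catalog papers): under a log-constant law, separations are uniform in log a between the survey limits, so mapping them to [0, 1] allows a standard Kolmogorov-Smirnov test.

```python
import numpy as np
from scipy import stats

def separation_law_pvalue(seps, lo=100.0, hi=30000.0):
    # KS test of observed separations (au) against a log-constant null:
    # uniform in log a between the survey limits lo and hi.
    u = (np.log(seps) - np.log(lo)) / (np.log(hi) - np.log(lo))
    return stats.kstest(u, "uniform").pvalue
```

A small p-value rejects the log-constant law, as the catalog's separation distribution does; a fit of a log-normal would be tested the same way with the corresponding CDF.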
Interpretation of well logs in a carbonate aquifer
MacCary, L.M.
1978-01-01
This report describes the log analysis of the Randolph and Sabinal core holes in the Edwards aquifer in Texas, with particular attention to the principles that can be applied generally to any carbonate system. The geologic and hydrologic data were obtained during the drilling of the two holes, from extensive laboratory analysis of the cores, and from numerous geophysical logs run in the two holes. Some logging methods are inherently superior to others for the analysis of limestone and dolomite aquifers. Three such systems are the density, neutron, and acoustic-velocity (sonic) logs. Most of the log analysis described here is based on the interpretation of suites of logs from these three systems. In certain instances, deeply focused resistivity logs can be used to good advantage in carbonate rock studies; this technique is used to compute the water resistivity in the Randolph core hole. The rocks penetrated by the Randolph core hole are typical of those carbonates that have undergone very little solution by recent ground-water circulation. There are few large solutional openings; the water is saline; and the rocks are dark, dolomitic, have pore space that is interparticle or intercrystalline, and contain unoxidized organic material. The total porosity of rocks in the saline zone is higher than that of rocks in the fresh-water aquifer; however, the intrinsic permeability is much less in the saline zone because there are fewer large solutional openings. The Sabinal core hole penetrates a carbonate environment that has experienced much solution by ground water during recent geologic time. The rocks have high secondary porosities controlled by sedimentary structures within the rock; the water is fresh; and the dominant rock composition is limestone. The relative percentages of limestone and dolomite, the average matrix (grain) densities of the rock mixtures, and the porosity of the rock mass can be calculated from density, neutron, and acoustic logs.
With supporting data from resistivity logs, the formation water quality can be estimated, as well as the relative cementation or tortuosity of the rock. Many of these properties calculated from logs can be verified by analysis of the core available from test holes drilled in the saline and fresh water zones.
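The porosity calculation from the density log mentioned above rests on a standard petrophysical relation. A minimal sketch, using the textbook density-porosity formula rather than anything quoted from the report (the matrix and fluid densities below are conventional limestone/fresh-water values, chosen for illustration):

```python
# Density-log porosity: phi = (rho_matrix - rho_bulk) / (rho_matrix - rho_fluid).
# Default densities assume a limestone matrix (2.71 g/cm^3) and fresh water
# (1.0 g/cm^3) -- illustrative values, not parameters taken from the report.
def density_porosity(rho_bulk, rho_matrix=2.71, rho_fluid=1.0):
    return (rho_matrix - rho_bulk) / (rho_matrix - rho_fluid)

# A bulk density of 2.45 g/cm^3 read off the log implies ~15% porosity.
phi = density_porosity(2.45)
```

A dolomite interval would use a higher matrix density (about 2.87 g/cm^3), which is why the limestone/dolomite fractions must be estimated first when the lithology is mixed.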
How accurately can we estimate energetic costs in a marine top predator, the king penguin?
Halsey, Lewis G; Fahlman, Andreas; Handrich, Yves; Schmidt, Alexander; Woakes, Anthony J; Butler, Patrick J
2007-01-01
King penguins (Aptenodytes patagonicus) are among the greatest consumers of marine resources. However, while their influence on the marine ecosystem is likely to be significant, only an accurate knowledge of their energy demands will indicate their true food requirements. Energy consumption has been estimated for many marine species using the heart rate-rate of oxygen consumption (f(H) - V(O2)) technique, and the technique has been applied successfully to answer eco-physiological questions. However, previous studies on the energetics of king penguins, based on developing or applying this technique, have raised a number of issues about the degree of validity of the technique for this species. These include the predictive validity of the present f(H) - V(O2) equations across different seasons and individuals and during different modes of locomotion. In many cases, these issues also apply to other species for which the f(H) - V(O2) technique has been applied. In the present study, the accuracy of three prediction equations for king penguins was investigated based on validity studies and on estimates of V(O2) from published, field f(H) data. The major conclusions from the present study are: (1) in contrast to that for walking, the f(H) - V(O2) relationship for swimming king penguins is not affected by body mass; (2) prediction equation (1), log(V(O2)) = -0.279 + 1.24 log(f(H)) + 0.0237t - 0.0157 log(f(H))t, derived in a previous study, is the most suitable equation presently available for estimating V(O2) in king penguins for all locomotory and nutritional states. A number of possible problems associated with producing an f(H) - V(O2) relationship are discussed in the present study. Finally, a statistical method to include easy-to-measure morphometric characteristics, which may improve the accuracy of f(H) - V(O2) prediction equations, is explained.
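Prediction equation (1) above can be sketched as a small function. Assumptions to flag: the logarithm base (taken as log10, usual in these allometric equations) and the meaning and units of the covariate t are as in the original paper and are treated abstractly here:

```python
import math

# Sketch of prediction equation (1) from the abstract:
#   log(VO2) = -0.279 + 1.24*log(fH) + 0.0237*t - 0.0157*log(fH)*t
# Base-10 logs are assumed; units of fH, VO2 and the covariate t follow
# the original study and are not reproduced here.
def predict_vo2(f_h, t):
    log_vo2 = (-0.279 + 1.24 * math.log10(f_h)
               + 0.0237 * t - 0.0157 * math.log10(f_h) * t)
    return 10 ** log_vo2
```

The interaction term (-0.0157 log(fH) t) means the slope of the fH-VO2 relationship itself drifts with t, which is why a single static calibration can misestimate field metabolic rate.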
NASA Astrophysics Data System (ADS)
Popov, Evgeny; Popov, Yury; Spasennykh, Mikhail; Kozlova, Elena; Chekhonin, Evgeny; Zagranovskaya, Dzhuliya; Belenkaya, Irina; Alekseev, Aleksey
2016-04-01
A practical method of identifying organic-rich intervals within low-permeability dispersive rocks, based on thermal conductivity measurements along the core, is presented. Non-destructive, non-contact thermal core logging was performed with the optical scanning technique on 4685 full-size core samples from 7 wells drilled in four low-permeability zones of the Bazhen formation (B.fm.) in Western Siberia (Russia). The method employs continuous simultaneous measurements of rock thermal conductivity, volumetric heat capacity, thermal anisotropy coefficient and thermal heterogeneity factor along the cores, allowing high vertical resolution (up to 1-2 mm). B.fm. rock matrix thermal conductivity was observed to be essentially stable within the range of 2.5-2.7 W/(m*K). This stable matrix thermal conductivity, along with a high thermal anisotropy coefficient, is characteristic of B.fm. sediments due to the low rock porosity values. It is shown experimentally that the measured thermal parameters relate linearly to organic richness rather than to deviations in the porosity coefficient. Thus, a new technique was developed that transforms the thermal conductivity profiles into continuous profiles of total organic carbon (TOC) values along the core. Comparison of the TOC values estimated from thermal conductivity with pyrolytic TOC measurements on 665 core samples, obtained with the Rock-Eval and HAWK instruments, demonstrated the high efficiency of the new technique for separating organic-rich intervals. The data obtained with the new technique are essential for assessing source-rock (SR) hydrocarbon generation potential, for basin and petroleum system modeling, and for estimating hydrocarbon reserves. The method allows the TOC richness to be accurately assessed using thermal well logs. The research was done with the financial support of the Russian Ministry of Education and Science (unique identification number RFMEFI58114X0008).
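The core of the technique is a linear map from thermal conductivity to TOC, calibrated against pyrolysis samples. A minimal sketch of that calibrate-then-apply step; the coefficients and data points below are synthetic placeholders, not values from the study:

```python
import numpy as np

# Fit a linear relation TOC = a*lambda + b on calibration samples where
# both thermal conductivity (lambda) and pyrolytic TOC are known, then
# apply it to the continuous conductivity profile. All numbers synthetic.
def calibrate_toc(lam_samples, toc_samples):
    a, b = np.polyfit(lam_samples, toc_samples, 1)
    return a, b

lam = np.array([1.0, 1.5, 2.0, 2.5])    # W/(m*K), synthetic calibration points
toc = np.array([12.0, 9.0, 6.0, 3.0])   # wt%; lower conductivity <-> richer in TOC
a, b = calibrate_toc(lam, toc)

# Transform two conductivity readings from the continuous profile into TOC.
toc_profile = a * np.array([1.2, 1.8]) + b
```

Because the optical-scanning profile is continuous at millimeter scale, the resulting TOC curve has far finer vertical resolution than discrete pyrolysis sampling.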
Review of hydraulic fracture mapping using advanced accelerometer-based receiver systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warpinski, N.R.; Uhl, J.E.; Engler, B.P.
Hydraulic fracturing is an important tool for natural gas and oil exploitation, but its optimization has been impeded by an inability to observe how the fracture propagates and what its overall dimensions are. The few experiments in which fractures have been exposed through coring or mineback have shown that hydraulic fractures are complicated multi-stranded structures that may behave much differently than currently predicted by models. It is clear that model validation, fracture optimization, problem identification and solution, and field development have all been encumbered by the absence of any ground truth information on fracture behavior in field applications. The solution to this problem is to develop techniques to image the hydraulic fracture in situ from either the surface, the treatment well, or offset wells. Several diagnostic techniques have been available to assess individual elements of the fracture geometry, but most of these techniques have limitations on their usefulness. For example, tracers and temperature logs can only measure fracture height at the wellbore, well testing and production history matching provide a productive length which may or may not be different from the true fracture length, and tiltmeters can provide accurate information on azimuth and type of fracture (horizontal or vertical), but length and height can only be extracted from a non-unique inversion of the data. However, there is a method, the microseismic technique, which possesses the potential for imaging the entire hydraulic fracture and, more importantly, its growth history. This paper discusses application of advanced technology to the microseismic method in order to provide detailed accurate images of fractures and their growth processes.
NASA Astrophysics Data System (ADS)
Gardner, Robin P.; Xu, Libai
2009-10-01
The Center for Engineering Applications of Radioisotopes (CEAR) has been working for over a decade on the Monte Carlo library least-squares (MCLLS) approach for treating non-linear radiation analyzer problems including: (1) prompt gamma-ray neutron activation analysis (PGNAA) for bulk analysis, (2) energy-dispersive X-ray fluorescence (EDXRF) analyzers, and (3) carbon/oxygen tool analysis in oil well logging. This approach essentially consists of using Monte Carlo simulation to generate the libraries of all the elements to be analyzed plus any other required background libraries. These libraries are then used in the linear library least-squares (LLS) approach with unknown sample spectra to analyze for all elements in the sample. Iterations of this are used until the LLS values agree with the composition used to generate the libraries. The current status of the methods (and topics) necessary to implement the MCLLS approach is reported. This includes: (1) the Monte Carlo codes such as CEARXRF, CEARCPG, and CEARCO for forward generation of the necessary elemental library spectra for the LLS calculation for X-ray fluorescence, neutron capture prompt gamma-ray analyzers, and carbon/oxygen tools; (2) the correction of spectral pulse pile-up (PPU) distortion by Monte Carlo simulation with the code CEARIPPU; (3) generation of detector response functions (DRF) for detectors with linear and non-linear responses for Monte Carlo simulation of pulse-height spectra; and (4) the use of the differential operator (DO) technique to make the necessary iterations for non-linear responses practical. In addition to commonly analyzed single spectra, coincidence spectra or even two-dimensional (2-D) coincidence spectra can also be used in the MCLLS approach and may provide more accurate results.
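The linear library least-squares (LLS) step at the heart of the MCLLS approach can be sketched in a few lines: the unknown spectrum is modeled as a linear combination of elemental library spectra and the weights are found by ordinary least squares. The five-channel "libraries" below are synthetic stand-ins, not CEAR-generated spectra:

```python
import numpy as np

# Two synthetic elemental library spectra (5 channels each), standing in
# for Monte Carlo-generated libraries from codes like CEARXRF/CEARCPG.
lib_fe = np.array([0.0, 1.0, 3.0, 1.0, 0.0])
lib_si = np.array([2.0, 1.0, 0.0, 0.0, 1.0])
A = np.column_stack([lib_fe, lib_si])

# An "unknown" spectrum built from known weights, then recovered by LLS.
spectrum = A @ np.array([0.7, 0.3])
w, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
```

In the full MCLLS loop, the recovered weights would be compared with the composition used to generate the libraries, the libraries regenerated, and the fit repeated until the two agree.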
Robert A. Gitzen; Stephen West; Chris C. Maguire; Tom Manning; Charles B. Halpern
2007-01-01
To sustain native species in managed forests, landowners need silvicultural strategies that retain habitat elements often eliminated during traditional harvests such as clearcut logging. One alternative is green-tree or variable retention. We investigated the response of terrestrial small mammals to experimental harvests that retained large live trees in varying...
Logistics Force Planner Assistant (Log Planner)
1989-09-01
elements. The system is implemented on an MS-DOS-based microcomputer, using the "Knowledge Pro" software tool. ... service support structure. 3. A microcomputer-based knowledge system was developed and successfully demonstrated. Four modules of information are ... combat service support (CSS) units planning process to Army Staff logistics planners. Personnel newly assigned to logistics planning need an
Characterization of airborne particles in an open pit mining region.
Huertas, José I; Huertas, María E; Solís, Dora A
2012-04-15
We characterized airborne particle samples collected from 15 stations in operation since 2007 in one of the world's largest opencast coal mining regions. Using gravimetric, scanning electron microscopy (SEM-EDS), and X-ray photoelectron spectroscopy (XPS) analysis, the samples were characterized in terms of concentration, morphology, particle size distribution (PSD), and elemental composition. All of the total suspended particulate (TSP) samples exhibited a log-normal PSD with a mean of d=5.46 ± 0.32 μm and σ(ln d)=0.61 ± 0.03. Similarly, all particles with an equivalent aerodynamic diameter less than 10 μm (PM(10)) exhibited a log-normal type distribution with a mean of d=3.6 ± 0.38 μm and σ(ln d)=0.55 ± 0.03. XPS analysis indicated that the main elements present in the particles were carbon, oxygen, potassium, and silicon, with average mass concentrations of 41.5%, 34.7%, 11.6%, and 5.7%, respectively. In SEM micrographs the particles appeared smooth-surfaced and irregular in shape, and tended to agglomerate. The particles were typically clay minerals, including limestone, calcite, quartz, and potassium feldspar.
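The log-normal PSD parameters reported above (d = 5.46 μm, σ(ln d) = 0.61 for TSP) can be illustrated by drawing a synthetic sample and recovering the parameters from ln(d). One assumption to flag: the 5.46 μm is treated here as the geometric mean (median) of the distribution, since the abstract does not say which mean it reports:

```python
import numpy as np

# Draw a synthetic log-normal particle-size sample with the TSP
# parameters from the abstract (5.46 um taken as geometric mean,
# sigma(ln d) = 0.61), then recover the parameters from ln(d).
rng = np.random.default_rng(0)
mu_ln, sigma_ln = np.log(5.46), 0.61
d = rng.lognormal(mean=mu_ln, sigma=sigma_ln, size=200_000)

est_mu = np.log(d).mean()      # estimate of ln(geometric mean diameter)
est_sigma = np.log(d).std()    # estimate of sigma(ln d)
```

Fitting in ln(d) space like this is the standard way station data are reduced to the (d, σ(ln d)) pairs quoted in the abstract.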
Virucidal Influence of Ionic Liquids on Phages P100 and MS2
Fister, Susanne; Mester, Patrick; Sommer, Julia; Witte, Anna K.; Kalb, Roland; Wagner, Martin; Rossmanith, Peter
2017-01-01
An increasing number of publications describe the potential of ionic liquids (ILs) as novel antimicrobials, antibacterial coatings and even as active pharmaceutical ingredients. Nevertheless, a major research area, notably their impact on viruses, has so far been neglected. Consequently the aim of this study was to examine the effects of ILs on the infectivity of viruses. A systematic analysis to investigate the effects of defined structural elements of ILs on virus activity was performed using 55 ILs. All structure activity relationships (SARs) were tested on the human norovirus surrogate phage MS2 and phage P100 representing non-enveloped DNA viruses. Results demonstrate that IL SAR conclusions, established for prokaryotes and eukaryotes, are not readily applicable to the examined phages. A virus-type-dependent IL influence was also apparent. Overall, four ILs, covering different structural elements, were found to reduce phage P100 infectivity by ≥4 log10 units, indicating a virucidal effect, whereas the highest reduction for phage MS2 was about 3 log10 units. Results indicate that future applications of ILs as virucidal agents will require development of novel SARs and the obtained results serve as a good starting point for future studies. PMID:28883814
NASA Astrophysics Data System (ADS)
Chatterjee, Tanmoy; Peet, Yulia T.
2017-07-01
A large eddy simulation (LES) methodology coupled with near-wall modeling has been implemented in the current study for high-Re neutral atmospheric boundary layer flows using an exponentially accurate spectral element method in the open-source research code Nek5000. The effect of artificial length scales due to subgrid scale (SGS) and near-wall modeling (NWM) on the scaling laws and structure of the inner- and outer-layer eddies is studied using varying SGS and NWM parameters in the spectral element framework. The study provides an understanding of the various length scales and dynamics of the eddies affected by the LES model, and also of the fundamental physics behind the inner- and outer-layer eddies that are responsible for the correct behavior of the mean statistics in accordance with the definition of equilibrium layers by Townsend. An economical and accurate LES model based on capturing the near-wall coherent eddies has been designed, which is successful in eliminating artificial length scale effects such as the log-layer mismatch or secondary peak generation in the streamwise variance.
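The "log-layer mismatch" named above refers to LES mean profiles departing from the law of the wall, u+ = (1/κ) ln(y+) + B. A minimal diagnostic sketch, using the conventional constants κ = 0.41 and B = 5.2 (standard textbook values, not parameters from this study):

```python
import numpy as np

KAPPA, B = 0.41, 5.2  # conventional log-law constants (assumed, not from the paper)

def u_plus_log_law(y_plus):
    # Law of the wall: u+ = (1/kappa) * ln(y+) + B
    return np.log(y_plus) / KAPPA + B

# A profile obeying the log law has a flat diagnostic
#   Xi(y+) = y+ * du+/dy+ = 1/kappa
# across the log layer; LES with log-layer mismatch shows Xi drifting.
y = np.geomspace(30.0, 300.0, 50)
u = u_plus_log_law(y)
xi = y[:-1] * np.diff(u) / np.diff(y)   # finite-difference estimate of y+ du+/dy+
```

Plotting Xi(y+) for the simulated mean profile is a common way to expose the mismatch that the near-wall model in the study is designed to eliminate.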
Carty, Neal; Wibaux, Anne; Ward, Colleen; Paulson, Daryl S.; Johnson, Peter
2014-01-01
Objectives To evaluate the antimicrobial activity of a new, transparent composite film dressing, whose adhesive contains chlorhexidine gluconate (CHG), against the native microflora present on human skin. Methods CHG-containing adhesive film dressings and non-antimicrobial control film dressings were applied to the skin on the backs of healthy human volunteers without antiseptic preparation. Dressings were removed 1, 4 or 7 days after application. The bacterial populations underneath were measured by quantitative cultures (cylinder-scrub technique) and compared with one another as a function of time. Results The mean baseline microflora recovery was 3.24 log10 cfu/cm2. The mean log reductions from baseline measured from underneath the CHG-containing dressings were 0.87, 0.78 and 1.30 log10 cfu/cm2 on days 1, 4 and 7, respectively, compared with log reductions of 0.67, −0.87 and −1.29 log10 cfu/cm2 from underneath the control film dressings. There was no significant difference between the log reductions of the two treatments on day 1, but on days 4 and 7 the log reduction associated with the CHG adhesive was significantly higher than that associated with the control adhesive. Conclusions The adhesive containing CHG was associated with a sustained antimicrobial effect that was not present in the control. Incorporating the antimicrobial into the adhesive layer confers upon it bactericidal properties in marked contrast to the non-antimicrobial adhesive, which contributed to bacterial proliferation when the wear time was ≥4 days. PMID:24722839
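The "log reduction" figures above follow standard microbiological arithmetic: the difference of base-10 logs of colony counts. A small sketch with made-up cfu counts (the day-7 value of 1.30 log10 is from the abstract; the counts themselves are illustrative):

```python
import math

# Log reduction in log10 cfu/cm^2: positive means killing, negative means
# growth under the dressing (as seen for the control on days 4 and 7).
def log_reduction(baseline_cfu, final_cfu):
    return math.log10(baseline_cfu) - math.log10(final_cfu)

red = log_reduction(10_000, 500)   # illustrative counts: 10^4 -> 5*10^2 cfu/cm^2
fold = 10 ** 1.30                  # fold-drop equivalent to the day-7 CHG reduction
```

So the 1.30 log10 reduction under the CHG dressing corresponds to roughly a 20-fold drop in recoverable bacteria, while the control's -1.29 corresponds to a comparable fold increase.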
NASA Technical Reports Server (NTRS)
Parsons, A.; Bodnarik, J.; Burger, D.; Evans, L.; Floyd, S; Lim, L.; McClanahan, T.; Namkung, M.; Nowicki, S.; Schweitzer, J.;
2011-01-01
The Probing In situ with Neutrons and Gamma rays (PING) instrument is a promising planetary science application of the active neutron-gamma ray technology that has been used successfully in oil field well logging and mineral exploration on Earth for decades. Similar techniques can be very powerful for non-invasive in situ measurements of the subsurface elemental composition on other planets. The objective of our active neutron-gamma ray technology program at NASA Goddard Space Flight Center (NASA/GSFC) is to bring instruments using this technology to the point where they can be flown on a variety of surface lander or rover missions to the Moon, Mars, Venus, asteroids, comets and the satellites of the outer planets. PING combines a 14 MeV deuterium-tritium pulsed neutron generator with a gamma ray spectrometer and two neutron detectors to produce a landed instrument that can determine the elemental composition of a planet down to 30 - 50 cm below the planet's surface. The penetrating nature of 0.5 - 10 MeV gamma rays and 14 MeV neutrons allows such sub-surface composition measurements to be made without the need to drill into or otherwise disturb the planetary surface, thus greatly simplifying the lander design. We are currently testing a PING prototype at a unique outdoor neutron instrumentation test facility at NASA/GSFC that provides two large (1.8 m x 1.8 m x 0.9 m) granite and basalt test formations placed outdoors in an empty field. Since an independent trace elemental analysis has been performed on both the Columbia River basalt and Concord Gray granite materials, these samples present two known standards with which to compare PING's experimentally measured elemental composition results. We will present experimental results from PING measurements of both the granite and basalt test formations and show how and why the optimum PING instrument operating parameters differ for studying the two materials.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swift, T.E.; Marlow, R.E.; Wilhelm, M.H.
1981-11-01
This report describes part of the work done to fulfill a contract awarded to Gruy Federal, Inc., by the Department of Energy (DOE) on February 12, 1979. The work includes pressure-coring and associated logging and testing programs to provide data on in-situ oil saturation, porosity and permeability distribution, and other data needed for resource characterization of fields and reservoirs in which CO/sub 2/ injection might have a high probability of success. This report details the second such project. Core porosities agreed well with computed log porosities. Core water saturations and computed log water saturations agree fairly well from 3692 to 3712 feet, poorly from 3712 to 3820 feet, and in a general way from 4035 to 4107 feet. Computer log analysis techniques incorporating the a, m, and n values obtained from Core Laboratories analysis did not improve the agreement of log versus core derived water saturations. However, both core and log analysis indicated the ninth zone had the highest residual hydrocarbon saturations, and production data confirmed the validity of the oil saturation determinations. Residual oil saturations for the perforated and tested intervals were 259 STB/acre-ft for the interval from 4035 to 4055 feet, and 150 STB/acre-ft for the interval from 3692 to 3718 feet. Nine BOPD was produced from the interval 4035 to 4055 feet and no oil was produced from interval 3692 to 3718 feet, qualitatively confirming the relative oil saturations as calculated. The low oil production in the zone from 4022 to 4055 feet and the lack of production from 3692 to 3718 feet indicated the zones to be at or near residual waterflood conditions as determined by log analysis. This project demonstrates the usefulness of integrating pressure core, log, and production data to realistically evaluate a reservoir for carbon dioxide flood.
NASA Technical Reports Server (NTRS)
Scott, David W.; Underwood, Debrah (Technical Monitor)
2002-01-01
At the Marshall Space Flight Center's (MSFC) Payload Operations Integration Center (POIC) for International Space Station (ISS), each flight controller maintains detailed logs of activities and communications at their console position. These logs are critical for accurately controlling flight in real-time as well as providing a historical record and troubleshooting tool. This paper describes logging methods and electronic formats used at the POIC and provides food for thought on their strengths and limitations, plus proposes some innovative extensions. It also describes an inexpensive PC-based scheme for capturing and/or transcribing audio clips from communications consoles. Flight control activity (e.g. interpreting computer displays, entering data/issuing electronic commands, and communicating with others) can become extremely intense. It's essential to document it well, but the effort to do so may conflict with actual activity. This can be more than just annoying, as what's in the logs (or just as importantly not in them) often feeds back directly into the quality of future operations, whether short-term or long-term. In earlier programs, such as Spacelab, log keeping was done on paper, often using position-specific shorthand, and a later reader was at the mercy of the writer's penmanship. Today, user-friendly software solves the legibility problem and can automate date/time entry, but some content may take longer to finish due to individual typing speed and less use of symbols. File layout can be used to great advantage in making types of information easy to find, and creating searchable master logs for a given position is very easy and a real lifesaver in reconstructing events or researching a given topic. We'll examine log formats from several console positions, and the types of information that are included and (just as importantly) excluded. We'll also look at when a summary or synopsis is effective, and when extensive detail is needed.
Carroll, R.D.; Lacomb, J.W.
1993-01-01
The location of the subsurface top of the chimney formed by the collapse of the cavity resulting from an underground nuclear explosion is examined at five sites at the Nevada Test Site. The chimneys were investigated by drilling, coring, geophysical logging (density, gamma-ray, caliper), and seismic velocity surveys. The identification of the top of the chimney can be complicated by chimney termination in friable volcanic rock of relatively high porosity. The presence of an apical void in three of the five cases is confirmed as the chimney horizon by coincidence with anomalies observed in coring, caliper and gamma-ray logging (two cases), seismic velocity, and drilling. In the two cases where an apical void is not present, several of these techniques yield anomalies at identical horizons; however, the exact depth of chimney penetration is subject to some degree of uncertainty. This is due chiefly to the extent to which core recovery and seismic velocity may be affected by perturbations in the tuff above the chimney due to the explosion and collapse. The data suggest, however, that the depth uncertainty may be only of the order of 10 m if several indicators are available. Of all indicators, core recovery and seismic velocity indicate anomalous horizons in every case. Because radiation products associated with the explosion are contained within the immediate vicinity of the cavity, gamma-ray logs are generally not diagnostic of chimney penetration. In no case is the density log indicative of the presence of the chimney.
Sheppard, S C; Long, J M; Sanipelli, B
2010-12-01
In the effort to predict the risks associated with contaminated soils, considerable reliance is placed on plant/soil concentration ratio (CR) values measured at sites other than the contaminated site. This inevitably results in the need to extrapolate among the many soil and plant types. There are few studies that compare CR among plant types that encompass both field and garden crops. Here, CRs for 40 elements were measured for 25 crops from farm and garden sites chosen so the grain crops were in close proximity to the gardens. Special emphasis was placed on iodine (I) because data for this element are sparse. For many elements, there were consistent trends among CRs for the various crop types, with leafy crops > root crops ≥ fruit crops ≈ seed crops. Exceptions included CR values for As, K, Se and Zn, which were highest in the seed crops. The correlation of CRs from one plant type to another was evident only when there was a wide range in soil concentrations. In comparing CRs between crop types, it became apparent that the relationships differed for the rare earth elements (REE), which also had very low CR values. The CRs for root and leafy crops of REE converged to a minimum value. This was attributed to soil adhesion, despite the samples being washed; the average soil adhesion for root crops was 500 mg soil kg⁻¹ dry plant and for leafy crops was 5 g kg⁻¹. Across elements, the log CR was negatively correlated with log Kd (the soil solid/liquid partition coefficient), as expected. Although this correlation is expected, correlation coefficients suitable for stochastic risk assessment are not frequently reported. The results suggest that r ≈ -0.7 would be appropriate for risk assessment.
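The recommended r ≈ -0.7 between log CR and log Kd can be made concrete with a small simulation: generate (log Kd, log CR) pairs with that target correlation and confirm it with Pearson's r. All values below are synthetic standard-normal variates, not the study's data:

```python
import numpy as np

# Generate pairs with a target Pearson correlation of -0.7 between
# log Kd and log CR (synthetic standard-normal data, for illustration).
rng = np.random.default_rng(42)
n, target_r = 100_000, -0.7
log_kd = rng.standard_normal(n)
log_cr = target_r * log_kd + np.sqrt(1 - target_r**2) * rng.standard_normal(n)

r = np.corrcoef(log_kd, log_cr)[0, 1]   # recovered sample correlation
```

In a stochastic risk assessment, sampling CR and Kd jointly with this correlation (rather than independently) avoids generating physically implausible combinations of high sorption and high plant uptake.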
BD-22deg3467, a DAO-type Star Exciting the Nebula Abell 35
NASA Technical Reports Server (NTRS)
Ziegler, M.; Rauch, T.; Werner, K.; Koppen, J.; Kruk, J. W.
2013-01-01
Spectral analyses of hot, compact stars with non-local thermodynamical equilibrium (NLTE) model-atmosphere techniques allow the precise determination of photospheric parameters such as the effective temperature (T(sub eff)), the surface gravity (log g), and the chemical composition. The derived photospheric metal abundances are crucial constraints for stellar evolutionary theory. Aims. Previous spectral analyses of the exciting star of the nebula A35, BD-22deg3467, were based on He+C+N+O+Si+Fe models only. For our analysis, we use state-of-the-art fully metal-line blanketed NLTE model atmospheres that consider opacities of 23 elements from hydrogen to nickel. We aim to identify all observed lines in the ultraviolet (UV) spectrum of BD-22deg3467 and to determine the abundances of the respective species precisely. Methods. For the analysis of high-resolution and high signal-to-noise ratio (S/N) far-ultraviolet (FUSE) and UV (HST/STIS) observations, we combined stellar-atmosphere models and interstellar line-absorption models to fully reproduce the entire observed UV spectrum. Results. The best agreement with the UV observation of BD-22deg3467 is achieved at T(sub eff) = 80 +/- 10 kK and log g = 7.2 +/- 0.3. While T(sub eff) of previous analyses is verified, log g is significantly lower. We re-analyzed lines of silicon and iron (1/100 and about solar abundances, respectively) and for the first time in this star identified argon, chromium, manganese, cobalt, and nickel and determined abundances of 12, 70, 35, 150, and 5 times solar, respectively. Our results partially agree with predictions of diffusion models for DA-type white dwarfs. A combination of photospheric and interstellar line-absorption models reproduces more than 90% of the observed absorption features. The stellar mass is M approx. 0.48 Solar Mass. Conclusions. 
BD-22deg3467 may not have been massive enough to ascend the asymptotic giant branch and may have evolved directly from the extended horizontal branch to the white dwarf state. This would explain why it is not surrounded by a planetary nebula. However, the star ionizes the ambient interstellar matter, mimicking a planetary nebula.
NASA Astrophysics Data System (ADS)
Ávila-Carrera, R.; Sánchez-Sesma, F. J.; Spurlin, James H.; Valle-Molina, C.; Rodríguez-Castellanos, A.
2014-09-01
An analytic formulation to understand the scattering, diffraction and attenuation of elastic waves in the neighborhood of fluid-filled wells is presented. An important, and not widely exploited, technique to carefully investigate the wave propagation in exploration wells is the logging of sonic waveforms. Fundamental decisions and production planning in petroleum reservoirs are made by interpretation of such recordings. Nowadays, geophysicists and engineers face problems related to the acquisition and interpretation under complex conditions associated with conducting open-hole measurements. A crucial problem that directly affects the response of sonic logs is the eccentricity of the measuring tool with respect to the center of the borehole. Even with the employment of centralizers, this simple variation dramatically changes the physical conditions of wave propagation around the well. Recent works in the numerical field reported advanced studies in modeling and simulation of acoustic wave propagation around wells, including complex heterogeneities and anisotropy. However, no analytical efforts have been made to formally understand the wireline sonic logging measurements acquired with borehole-eccentered tools. In this paper, Graf's addition theorem was used to describe monopole sources in terms of solutions of the wave equation. The formulation was developed from the three-dimensional discrete wave-number method in the frequency domain. The cylindrical Bessel functions of the third kind and order zero were re-derived to obtain a simplified set of equations projected into a two-dimensional plane-space for displacements and stresses. This new and condensed analytic formulation allows the straightforward calculation of all converted modes and their visualization in the time domain via Fourier synthesis.
The main aim was to obtain spectral surfaces of transfer functions and synthetic seismograms that might be useful to understand the wave motion produced by the eccentricity of the source and explain in detail the new arising borehole propagation modes. Finally, time histories and amplitude spectra for relevant examples are presented and the validation of time traces using the spectral element method is reported.
Sorption behavior of microamounts of zinc on titanium oxide from aqueous solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hasany, S.M.; Ghaffar, A.; Chughtai, F.A.
1991-08-01
To correlate soil response toward zinc, it is necessary to study its adsorption in detail on soils or on their constituents. The adsorption of microamounts of zinc on titanium oxide, prepared and characterized in this laboratory, has been studied in detail. Zinc adsorption has been found to be dependent on the pH of the aqueous solution, amount of oxide, and zinc concentration. Maximum adsorption is from pH 10 buffer. EDTA and cyanide ions inhibit adsorption significantly. The adsorption of other elements under optimal conditions has also been measured on this oxide. Sc(III) and Cs(I) show almost negligible adsorption. Zinc adsorption follows the linear form of the Freundlich adsorption isotherm: log C_Ads = log A + (1/n) log C_Bulk, with A = 0.48 mol/g and n = 1. Except at a very low bulk concentration (3 × 10⁻⁵ mol/dm³), the Langmuir adsorption isotherm is also linear for the entire zinc concentration range investigated. The limiting adsorbed concentration is estimated to be 0.18 mol/g.
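Because the Freundlich isotherm is linear in log-log space, A and n follow directly from a straight-line fit of log C_Ads against log C_Bulk. A sketch of that fit; the data points are synthetic, generated from the abstract's reported A = 0.48 mol/g and n = 1 rather than taken from the experiment:

```python
import numpy as np

# Synthetic isotherm data generated with the reported Freundlich
# parameters A = 0.48 mol/g, n = 1 (C_ads = A * C_bulk^(1/n)).
A_true, n_true = 0.48, 1.0
c_bulk = np.array([1e-6, 1e-5, 1e-4, 1e-3])     # mol/dm^3
c_ads = A_true * c_bulk ** (1.0 / n_true)        # mol/g

# Linear fit in log-log space: log(C_ads) = log(A) + (1/n) * log(C_bulk).
slope, intercept = np.polyfit(np.log10(c_bulk), np.log10(c_ads), 1)
n_fit = 1.0 / slope
A_fit = 10 ** intercept
```

With n = 1 the Freundlich form reduces to a simple linear partition, which is consistent with the Langmuir plot also appearing linear over most of the concentration range.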
NASA Astrophysics Data System (ADS)
Gaci, Said; Hachay, Olga; Zaourar, Naima
2017-04-01
One of the key elements in hydrocarbon reservoir characterization is the S-wave velocity (Vs). Since the traditional estimation methods often fail to accurately predict this physical parameter, a new approach that takes into account its non-stationary and non-linear properties is needed. To this end, a prediction model based on complete ensemble empirical mode decomposition (CEEMD) and a multilayer perceptron artificial neural network (MLP ANN) is proposed to compute Vs from P-wave velocity (Vp). Using a fine-to-coarse reconstruction algorithm based on CEEMD, the Vp log data are decomposed into a high-frequency (HF) component, a low-frequency (LF) component and a trend component. Then, different combinations of these components are used as inputs to the MLP ANN algorithm for estimating the Vs log. Applications to well logs taken from different geological settings illustrate that the predicted Vs values using the MLP ANN with combinations of HF, LF and trend as inputs are more accurate than those obtained with the traditional estimation methods. Keywords: S-wave velocity, CEEMD, multilayer perceptron neural networks.
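One common "traditional" baseline of the kind the abstract compares against is a linear Vs-Vp regression such as Castagna's mudrock line (Vs = 0.8621 Vp − 1.1724, velocities in km/s). A minimal stdlib-only sketch with synthetic, purely illustrative data:

```python
def fit_linear(vp, vs):
    """Ordinary least-squares fit vs = a*vp + b."""
    n = len(vp)
    mx, my = sum(vp) / n, sum(vs) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(vp, vs))
         / sum((x - mx) ** 2 for x in vp))
    b = my - a * mx
    return a, b

vp = [2.5, 3.0, 3.5, 4.0, 4.5]            # km/s, synthetic log samples
vs = [0.8621 * v - 1.1724 for v in vp]    # mudrock-line coefficients

a, b = fit_linear(vp, vs)
vs_pred = a * 3.2 + b                      # predicted Vs at Vp = 3.2 km/s
print(a, b, vs_pred)
```

Such a single global regression cannot capture the non-stationary behavior the paper targets, which is the motivation for decomposing Vp into HF, LF and trend components first.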
Viscosities of Fe Ni, Fe Co and Ni Co binary melts
NASA Astrophysics Data System (ADS)
Sato, Yuzuru; Sugisawa, Koji; Aoki, Daisuke; Yamamura, Tsutomu
2005-02-01
Viscosities of three binary molten alloys consisting of the iron-group elements Fe, Ni and Co have been measured using an oscillating cup viscometer over the entire composition range, from liquidus temperatures up to 1600 °C, with high precision and excellent reproducibility. The measured viscosities showed good Arrhenius linearity for all compositions. The viscosities of Fe, Ni and Co as a function of temperature are as follows: log η = −0.6074 + 2493/T for Fe; log η = −0.5695 + 2157/T for Ni; log η = −0.6620 + 2430/T for Co. The isothermal viscosities of Fe-Ni and Fe-Co binary melts increase monotonically with increasing Fe content. On the other hand, in the Ni-Co binary melt, the isothermal viscosity decreases slightly and then increases with increasing Co. The activation energy of the Fe-Co binary melt increased slightly on mixing, and those of the Fe-Ni and Ni-Co melts decreased monotonically with increasing Ni content. The above behaviour is discussed based on the thermodynamic properties of the alloys.
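The three Arrhenius fits are straightforward to evaluate numerically. A small sketch, assuming T is absolute temperature in kelvin (the abstract quotes temperatures in °C and does not state the viscosity units, so the coefficients are used exactly as given):

```python
# Reported fits: log10(eta) = a + b/T for the pure metals
COEFFS = {"Fe": (-0.6074, 2493.0),
          "Ni": (-0.5695, 2157.0),
          "Co": (-0.6620, 2430.0)}

def log10_viscosity(metal, T_kelvin):
    a, b = COEFFS[metal]
    return a + b / T_kelvin

# Evaluate near the upper end of the measured range, 1600 C ~ 1873 K
for metal in COEFFS:
    print(metal, 10 ** log10_viscosity(metal, 1873.0))
```

At a fixed temperature the computed η values fall in the order Fe > Co > Ni, consistent with the relative magnitudes of the fitted coefficients.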
Shell-model method for Gamow-Teller transitions in heavy deformed odd-mass nuclei
NASA Astrophysics Data System (ADS)
Wang, Long-Jun; Sun, Yang; Ghorui, Surja K.
2018-04-01
A shell-model method for calculating Gamow-Teller (GT) transition rates in heavy deformed odd-mass nuclei is presented. The method is developed within the framework of the projected shell model. To meet the computational requirements when many multi-quasiparticle configurations are included in the basis, a numerical advancement based on the Pfaffian formula is introduced. With this new many-body technique, it becomes feasible to perform state-by-state calculations for the GT nuclear matrix elements of β-decay and electron-capture processes, including those at high excitation energies in heavy nuclei, which are usually deformed. The first results, β− decays of the well-deformed A = 153 neutron-rich nuclei, are shown as an example. The known log(ft) data corresponding to the B(GT−) decay rates of the ground state of 153Nd to the low-lying states of 153Pm are well described. It is further shown that the B(GT) distributions can have a strong dependence on the detailed microscopic structure of relevant states of both the parent and daughter nuclei.
Numerical Simulation of Delamination Growth in Composite Materials
NASA Technical Reports Server (NTRS)
Camanho, P. P.; Davila, C. G.; Ambur, D. R.
2001-01-01
The use of decohesion elements for the simulation of delamination in composite materials is reviewed. The test methods available to measure the interfacial fracture toughness used in the formulation of decohesion elements are described initially. After a brief presentation of the virtual crack closure technique, the technique most widely used to simulate delamination growth, the formulation of interfacial decohesion elements is described. Problems related with decohesion element constitutive equations, mixed-mode crack growth, element numerical integration and solution procedures are discussed. Based on these investigations, it is concluded that the use of interfacial decohesion elements is a promising technique that avoids the need for a pre-existing crack and pre-defined crack paths, and that these elements can be used to simulate both delamination onset and growth.
NASA Astrophysics Data System (ADS)
Langner, Andreas; Samejima, Hiromitsu; Ong, Robert C.; Titin, Jupiri; Kitayama, Kanehiro
2012-08-01
Conservation of tropical forests is of outstanding importance for mitigation of climate change effects and preserving biodiversity. In Borneo most of the forests are classified as permanent forest estates and are selectively logged using conventional logging techniques causing high damage to the forest ecosystems. Incorporation of sustainable forest management into climate change mitigation measures such as Reducing Emissions from Deforestation and Forest Degradation (REDD+) can help to avert further forest degradation by synergizing sustainable timber production with the conservation of biodiversity. In order to evaluate the efficiency of such initiatives, monitoring methods for forest degradation and above-ground biomass in tropical forests are urgently needed. In this study we developed an index using Landsat satellite data to describe the crown cover condition of lowland mixed dipterocarp forests. We showed that this index combined with field data can be used to estimate above-ground biomass using a regression model in two permanent forest estates in Sabah, Malaysian Borneo. Tangkulap represented a conventionally logged forest estate while Deramakot has been managed in accordance with sustainable forestry principles. The results revealed that conventional logging techniques used in Tangkulap between 1991 and 2000 decreased the above-ground biomass at an average annual rate of −6.0 t C/ha (−5.2 to −7.0 t C/ha, 95% confidence interval), whereas the biomass in Deramakot increased by 6.1 t C/ha per year (5.3-7.2 t C/ha, 95% confidence interval) between 2000 and 2007 while under sustainable forest management. This indicates that sustainable forest management with reduced-impact logging helps to protect above-ground biomass.
In absolute terms, a conservative amount of 10.5 t C/ha per year, as documented using the methodology developed in this study, can be attributed to the different management systems, which will be of interest when implementing REDD+ that rewards the enhancement of carbon stocks.
NASA Astrophysics Data System (ADS)
Liu, Hong; Nodine, Calvin F.
1996-07-01
This paper presents a generalized image contrast enhancement technique, which equalizes the perceived brightness distribution based on the Heinemann contrast discrimination model. It is based on the mathematically proven existence of a unique solution to a nonlinear equation, and is formulated with easily tunable parameters. The model uses a two-step log-log representation of luminance contrast between targets and surround in a luminous background setting. The algorithm consists of two nonlinear gray scale mapping functions that have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of the gray-level distribution of the given image, and can be uniquely determined once the previous three are set. Tests have been carried out to demonstrate the effectiveness of the algorithm for increasing the overall contrast of radiology images. The traditional histogram equalization can be reinterpreted as an image enhancement technique based on the knowledge of human contrast perception. In fact, it is a special case of the proposed algorithm.
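The abstract notes that classical histogram equalization is a special case of the proposed method. A minimal stdlib-only sketch of plain histogram equalization on an 8-bit grayscale image represented as a flat list of pixel values (the data are illustrative; the full Heinemann-based algorithm is not reproduced here):

```python
def histogram_equalize(pixels, levels=256):
    """Standard histogram equalization via the cumulative distribution function."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # Classic remapping: round((cdf(p) - cdf_min) / (n - cdf_min) * (levels - 1))
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

dark = [10, 10, 12, 12, 14, 14, 16, 200]   # low-contrast image, one bright pixel
print(histogram_equalize(dark))
```

The remapped values spread across the full 0-255 range while preserving the ordering of gray levels, which is the behavior the generalized algorithm refines using a perceptual contrast model.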
Zanbaka, Catherine A; Lok, Benjamin C; Babu, Sabarish V; Ulinski, Amy C; Hodges, Larry F
2005-01-01
We describe a between-subjects experiment that compared four different methods of travel and their effect on cognition and paths taken in an immersive virtual environment (IVE). Participants answered a set of questions based on Crook's condensation of Bloom's taxonomy that assessed their cognition of the IVE with respect to knowledge, understanding and application, and higher mental processes. Participants also drew a sketch map of the IVE and the objects within it. The users' sense of presence was measured using the Steed-Usoh-Slater Presence Questionnaire. The participants' position and head orientation were automatically logged during their exposure to the virtual environment. These logs were later used to create visualizations of the paths taken. Path analysis, such as exploring the overlaid path visualizations and dwell data information, revealed further differences among the travel techniques. Our results suggest that, for applications where problem solving and evaluation of information is important or where opportunity to train is minimal, having a large tracked space so that the participant can walk around the virtual environment provides benefits over common virtual travel techniques.
Modal parameter identification using the log decrement method and band-pass filters
NASA Astrophysics Data System (ADS)
Liao, Yabin; Wells, Valana
2011-10-01
This paper presents a time-domain technique for identifying modal parameters of test specimens based on the log-decrement method. For lightly damped multidegree-of-freedom or continuous systems, the conventional method is usually restricted to identification of fundamental-mode parameters only. Implementation of band-pass filters makes it possible for the proposed technique to extract modal information of higher modes. The method has been applied to a polymethyl methacrylate (PMMA) beam for complex modulus identification in the frequency range 10-1100 Hz. Results compare well with those obtained using the least-squares method, and with those previously published in the literature. The accuracy of the proposed method has been further verified by experiments performed on a QuietSteel specimen with very low damping. The method is simple and fast. It can be used for a quick estimation of the modal parameters, or as a complementary approach for validation purposes.
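The basic single-mode log-decrement estimate underlying the technique can be sketched in a few lines: the decrement is δ = ln(x_k / x_{k+1}) over successive free-decay peaks, and the damping ratio follows as ζ = δ / sqrt(4π² + δ²). This sketch covers only the single-mode case; the paper's band-pass filtering step for higher modes is not reproduced. The peak data are synthetic:

```python
import math

def damping_ratio_from_peaks(peaks):
    """Average log decrement over successive peaks, converted to a damping ratio."""
    deltas = [math.log(a / b) for a, b in zip(peaks, peaks[1:])]
    delta = sum(deltas) / len(deltas)
    return delta / math.sqrt(4 * math.pi ** 2 + delta ** 2)

# Synthetic free decay of a 1-DOF system with zeta = 0.01: successive peaks
# shrink by exp(2*pi*zeta / sqrt(1 - zeta**2)) per cycle.
zeta_true = 0.01
ratio = math.exp(2 * math.pi * zeta_true / math.sqrt(1 - zeta_true ** 2))
peaks = [1.0 / ratio ** k for k in range(5)]

zeta_est = damping_ratio_from_peaks(peaks)
print(zeta_est)
```

For noise-free exponential decay the estimator returns the generating damping ratio exactly, since the conversion formula inverts the per-cycle decay relation.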
Parallel compression of data chunks of a shared data object using a log-structured file system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, John M.; Faibish, Sorin; Grider, Gary
2016-10-25
Techniques are provided for parallel compression of data chunks being written to a shared object. A client executing on a compute node or a burst buffer node in a parallel computing system stores a data chunk generated by the parallel computing system to a shared data object on a storage node by compressing the data chunk and providing the compressed data chunk to the storage node that stores the shared object. The client and storage node may employ log-structured file techniques. The compressed data chunk can be decompressed by the client when the data chunk is read. A storage node stores a data chunk as part of a shared object by receiving a compressed version of the data chunk from a compute node and storing the compressed version of the data chunk to the shared data object on the storage node.
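A toy sketch of the general idea (not the patented system): each chunk is compressed independently, appended log-structured to a byte stream, and located via an (offset, length) index so any chunk can be decompressed on read. All names here are illustrative:

```python
import zlib

def write_chunks(chunks):
    """Compress chunks independently and append them log-structured."""
    log, index, offset = bytearray(), [], 0
    for chunk in chunks:
        comp = zlib.compress(chunk)
        log += comp
        index.append((offset, len(comp)))   # record where each chunk landed
        offset += len(comp)
    return bytes(log), index

def read_chunk(log, index, i):
    """Locate chunk i in the log and decompress it."""
    off, length = index[i]
    return zlib.decompress(log[off:off + length])

chunks = [b"a" * 1000, b"b" * 1000, b"hello world"]
log, index = write_chunks(chunks)
print(read_chunk(log, index, 2))  # → b'hello world'
```

Because each chunk compresses independently, the per-chunk work parallelizes naturally across clients, which is the point of doing the compression on the compute or burst-buffer side.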
Measurement of stiffness of standing trees and felled logs using acoustics: A review.
Legg, Mathew; Bradley, Stuart
2016-02-01
This paper provides a review on the use of acoustics to measure stiffness of standing trees, stems, and logs. An outline is given of the properties of wood and how these are related to stiffness and acoustic velocity throughout the tree. Factors are described that influence the speed of sound in wood, including the different types of acoustic waves which propagate in tree stems and lumber. Acoustic tools and techniques that have been used to measure the stiffness of wood are reviewed. The reasons for a systematic difference between direct and acoustic measurements of stiffness for standing trees, and methods for correction, are discussed. Other techniques, which have been used in addition to acoustics to try to improve stiffness measurements, are also briefly described. Also reviewed are studies which have used acoustic tools to investigate factors that influence the stiffness of trees. These factors include different silvicultural practices, geographic and environmental conditions, and genetics.
de Lima, Camila; Salomão Helou, Elias
2018-01-01
Iterative methods for tomographic image reconstruction have the computational cost of each iteration dominated by the computation of the (back)projection operator, which takes roughly O(N³) floating-point operations (flops) for N × N pixel images. Furthermore, classical iterative algorithms may take too many iterations in order to achieve acceptable images, thereby making the use of these techniques impractical for high-resolution images. Techniques have been developed in the literature to reduce the computational cost of the (back)projection operator to O(N² log N) flops. Also, incremental algorithms have been devised that reduce by an order of magnitude the number of iterations required to achieve acceptable images. The present paper introduces an incremental algorithm with a cost of O(N² log N) flops per iteration and applies it to the reconstruction of very large tomographic images obtained from synchrotron light illuminated data.
USDA-ARS?s Scientific Manuscript database
Two rapid immunomagnetic separation (IMS) protocols were evaluated to recover 1-2 log CFU/g inoculated E. coli O157:H7 from 30 different commercial, finished compost samples. Both protocols detected E. coli O157:H7 in compost samples; PCR techniques required the removal of inhibitors to reduce poss...
Recovery Act Validation of Innovative Exploration Techniques Pilgrim Hot Springs, Alaska
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holdmann, Gwen
2015-04-30
Drilling and temperature logging campaigns between the late 1970s and early 1980s measured temperatures at Pilgrim Hot Springs in excess of 90°C. Between 2010 and 2014 the University of Alaska used a variety of methods including geophysical surveys, remote sensing techniques, heat budget modeling, and additional drilling to better understand the resource and estimate the available geothermal energy.
Application of MIMO Techniques in sky-surface wave hybrid networking sea-state radar system
NASA Astrophysics Data System (ADS)
Zhang, L.; Wu, X.; Yue, X.; Liu, J.; Li, C.
2016-12-01
The sky-surface wave hybrid networking sea-state radar system consists of sky-wave transmission stations at different sites and several surface wave radar stations. The subject comes from the national 863 High-tech Project of China. The hybrid sky-surface wave system and the HF surface wave system work simultaneously, and the HF surface wave radar (HFSWR) can work in multi-static and surface-wave networking modes. Compared with a single-mode radar system, this system has advantages of better detection performance at far ranges in ocean dynamics parameter inversion. We have applied multiple-input multiple-output (MIMO) techniques in this sea-state radar system. Based on multiple-channel and non-causal transmit beam-forming techniques, the MIMO radar architecture can reduce the size of the receiving antennas and simplify antenna installation. Besides, by efficiently utilizing the system's available degrees of freedom, it can provide a feasible approach for mitigating multipath effects and Doppler-spread clutter in over-the-horizon radar. In this radar, a slow-time phase-coded MIMO method is used. The transmitting waveforms are phase-coded in slow time so as to be orthogonal after Doppler processing at the receiver. Thus the MIMO method can be easily implemented without the need to modify the receiver hardware. After the radar system design, the MIMO experiments of this system were completed by Wuhan University during 2015 and 2016. The experiments used the Wuhan multi-channel ionospheric sounding system (WMISS) as the sky-wave transmitting source and three dual-frequency HFSWRs developed by the Oceanography Laboratory of Wuhan University. The transmitter system is located at Chongyang, with a five-element equally spaced linear antenna array, and at Wuhan, with one log-periodic antenna. The RF signals are generated by synchronized but independent digital waveform generators, providing complete flexibility in element phase and amplitude control, and in waveform type and parameters.
The field experimental results show the presented method is effective. The echoes are obvious and distinguishable both in co-located MIMO mode and widely distributed MIMO mode. Key words: sky-surface wave hybrid networking; sea-state radar; MIMO; phase-coded
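The slow-time phase-coding idea can be illustrated numerically: each transmitter applies a per-pulse phase ramp exp(j·2π·k·m/M), so after a DFT across slow time at the receiver the returns from different transmitters land in distinct Doppler bins. A minimal sketch with two transmitters and illustrative code indices:

```python
import cmath

M = 8             # pulses in a coherent processing interval
codes = [0, 2]    # phase-ramp indices k for two transmitters (illustrative)

# Received slow-time samples for a stationary scatterer: sum of both coded returns
rx = [sum(cmath.exp(2j * cmath.pi * k * m / M) for k in codes)
      for m in range(M)]

# Slow-time DFT (Doppler processing) separates the two transmit waveforms
spectrum = [abs(sum(rx[m] * cmath.exp(-2j * cmath.pi * b * m / M)
                    for m in range(M)))
            for b in range(M)]
peaks = sorted(b for b, v in enumerate(spectrum) if v > 1e-6)
print(peaks)  # → [0, 2]
```

Each transmitter's energy concentrates in its own Doppler bin (here bins 0 and 2), which is why the receiver hardware needs no modification: the separation happens entirely in the existing Doppler processing. A real system must also budget Doppler bins for target motion, which this sketch ignores.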
NASA Astrophysics Data System (ADS)
Nyssen, Jan; Gebreslassie, Seifu; Assefa, Romha; Deckers, Jozef; Guyassa, Etefa; Poesen, Jean; Frankl, Amaury
2017-04-01
Many thousands of gabion check dams have been installed to control gully erosion in Ethiopia, but several challenges still remain, such as gabion failure in ephemeral streams with coarse bed load, which abrades the structures at the chute step. As an alternative to gabion check dams in torrents with coarse bed load, boulder-faced log dams were conceived, installed transversally across torrents and tested (n = 30). For this, logs (22-35 cm across) were embedded in the banks of torrents, 0.5-1 m above the bed, and their upstream sides were faced with boulders (0.3-0.7 m across). Similar to gabion check dams, boulder-faced log dams lead to temporary ponding, spreading of peak flow over the entire channel width and sediment deposition. Results of testing under extreme flow conditions (including two storms with return periods of 5.6 and 7 years) show that 18 dams resisted strong floods. Beyond certain flood thresholds, represented by proxies such as Strahler's stream order, catchment area, D95 or channel width, 11 log dams were completely destroyed. Smallholder farmers see much potential in this type of structure to control first-order torrents with coarse bed load, since the technique is cost-effective and can be easily installed.
Eliminating the rugosity effect from compensated density logs by geometrical response matching
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flaum, C.; Holenka, J.M.; Case, C.R.
1991-06-01
A theoretical and experimental effort to understand the effects of borehole rugosity on individual detector responses yielded an improved method of processing compensated density logs. Historically, the spine/ribs technique for obtaining borehole and mudcake compensation of dual-detector, gamma-gamma density logs has been very successful as long as the borehole and other environmental effects vary slowly with depth and the interest is limited to vertical features broader than several feet. With the increased interest in higher vertical resolution, a more detailed analysis of the effect of such quickly varying environmental effects as rugosity was required. A laboratory setup simulating the effect of rugosity on Schlumberger Litho-Density(SM) tools (LDT) was used to study vertical response in the presence of rugosity. The data served as the benchmark for the Monte Carlo models used to generate synthetic density logs in the presence of more complex rugosity patterns. The results provided in this paper show that proper matching of the two detector responses before application of conventional compensation methods can eliminate rugosity effects without degrading the measurement's vertical resolution. The accuracy of the results is as good as that obtained for parallel mudcake or standoff with the conventional method. Application to both field and synthetic logs confirmed the validity of these results.
Fluid-Rock Characterization and Interactions in NMR Well Logging
DOE Office of Scientific and Technical Information (OSTI.GOV)
George J. Hirasaki; Kishore K. Mohanty
2005-09-05
The objective of this report is to characterize the fluid properties and fluid-rock interactions that are needed for formation evaluation by NMR well logging. The advances made in the understanding of NMR fluid properties are summarized in a chapter written for an AAPG book on NMR well logging. This includes live oils, viscous oils, natural gas mixtures, and the relation between relaxation time and diffusivity. Oil-based drilling fluids can have an adverse effect on NMR well logging if they alter the wettability of the formation. The effect of various surfactants on wettability and surface relaxivity is evaluated for silica sand. The relation between the relaxation time and diffusivity distinguishes the response of brine, oil, and gas in an NMR well log. A new NMR pulse sequence in the presence of a field gradient and a new inversion technique enable the T2 and diffusivity distributions to be displayed as a two-dimensional map. The objectives of pore morphology and rock characterization are to identify vug connectivity by using X-ray CT scans, and to improve the NMR permeability correlation. Improved estimation of permeability from NMR response is possible by using estimated tortuosity as a parameter to interpolate between two existing permeability models.
Explorations in statistics: the log transformation.
Curran-Everett, Douglas
2018-06-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This thirteenth installment of Explorations in Statistics explores the log transformation, an established technique that rescales the actual observations from an experiment so that the assumptions of some statistical analysis are better met. A general assumption in statistics is that the variability of some response Y is homogeneous across groups or across some predictor variable X. If the variability-the standard deviation-varies in rough proportion to the mean value of Y, a log transformation can equalize the standard deviations. Moreover, if the actual observations from an experiment conform to a skewed distribution, then a log transformation can make the theoretical distribution of the sample mean more consistent with a normal distribution. This is important: the results of a one-sample t test are meaningful only if the theoretical distribution of the sample mean is roughly normal. If we log-transform our observations, then we want to confirm the transformation was useful. We can do this if we use the Box-Cox method, if we bootstrap the sample mean and the statistic t itself, and if we assess the residual plots from the statistical model of the actual and transformed sample observations.
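The central claim, that a log transformation equalizes standard deviations when variability is roughly proportional to the mean, is easy to verify numerically. A minimal stdlib-only sketch with illustrative data in which the second group is exactly ten times the first:

```python
import math
import statistics

low = [8.0, 9.0, 10.0, 11.0, 12.0]
high = [10 * y for y in low]          # mean and SD both 10x larger

sd_raw = (statistics.stdev(low), statistics.stdev(high))
sd_log = (statistics.stdev([math.log(y) for y in low]),
          statistics.stdev([math.log(y) for y in high]))

print(sd_raw)   # raw SDs differ by a factor of 10
print(sd_log)   # log-scale SDs are identical
```

Multiplying every observation by a constant adds a constant to every log-observation, and adding a constant leaves the standard deviation unchanged, which is exactly why the transform works in this proportional-variability case.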
Log-Based Recovery in Asynchronous Distributed Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Kane, Kenneth Paul
1989-01-01
A log-based mechanism is described for restoring consistent states to replicated data objects after failures. Preserving a causal form of consistency based on the notion of virtual time is focused upon in this report. Causal consistency has been shown to apply to a variety of applications, including distributed simulation, task decomposition, and mail delivery systems. Several mechanisms have been proposed for implementing causally consistent recovery, most notably those of Strom and Yemini, and Johnson and Zwaenepoel. The mechanism proposed here differs from these in two major respects. First, a roll-forward style of recovery is implemented. A functioning process is never required to roll back its state in order to achieve consistency with a recovering process. Second, the mechanism does not require any explicit information about the causal dependencies between updates. Instead, all necessary dependency information is inferred from the orders in which updates are logged by the object servers. This basic recovery technique appears to be applicable to forms of consistency other than causal consistency. In particular, it is shown how the recovery technique can be modified to support an atomic form of consistency (grouping consistency). By combining grouping consistency with causal consistency, it may even be possible to implement serializable consistency within this mechanism.
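A toy illustration of the roll-forward idea: a recovering replica rebuilds its state purely by replaying updates in the order the object server logged them, with no explicit dependency metadata. All names and the key-value state model here are illustrative, not taken from the thesis:

```python
def apply(state, update):
    """Apply one logged update to an immutable snapshot of the state."""
    key, value = update
    new_state = dict(state)
    new_state[key] = value
    return new_state

def recover(server_log):
    """Roll forward from an empty state by replaying the log in order."""
    state = {}
    for update in server_log:
        state = apply(state, update)
    return state

# Order as logged by the object server; later entries supersede earlier ones
server_log = [("x", 1), ("y", 2), ("x", 3)]
print(recover(server_log))  # → {'x': 3, 'y': 2}
```

Note how the log order alone determines the outcome: the replayed state agrees with what any functioning process observed, so no process ever needs to roll back.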
Research on improved edge extraction algorithm of rectangular piece
NASA Astrophysics Data System (ADS)
He, Yi-Bin; Zeng, Ya-Jun; Chen, Han-Xin; Xiao, San-Xia; Wang, Yan-Wei; Huang, Si-Yu
Traditional edge detection operators such as the Prewitt operator, the LoG operator and the Canny operator cannot meet the requirements of modern industrial measurement. This paper proposes an image edge detection algorithm based on an improved morphological gradient. It detects edges using structuring elements, which operate directly on the characteristic information of the image. By combining structuring elements of different shapes and sizes, the desired image edge information can be detected. The experimental results show that the algorithm can extract image edges well in the presence of noise, producing clearer and more detailed edges than previous edge detection algorithms.
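The basic morphological gradient the paper builds on is dilation minus erosion under a structuring element. A minimal stdlib-only sketch on a binary image with a single 3×3 square structuring element (the paper's improvement combines several shapes and sizes, which this sketch omits):

```python
def neighborhood(img, i, j):
    """Pixels under a 3x3 square structuring element centered at (i, j)."""
    h, w = len(img), len(img[0])
    return [img[a][b]
            for a in range(max(0, i - 1), min(h, i + 2))
            for b in range(max(0, j - 1), min(w, j + 2))]

def morph_gradient(img):
    """Morphological gradient: dilation (max) minus erosion (min)."""
    return [[max(neighborhood(img, i, j)) - min(neighborhood(img, i, j))
             for j in range(len(img[0]))]
            for i in range(len(img))]

# 6x6 binary image containing a filled 2x2 square
img = [[1 if 2 <= i <= 3 and 2 <= j <= 3 else 0 for j in range(6)]
       for i in range(6)]
edges = morph_gradient(img)
for row in edges:
    print(row)
```

The gradient is nonzero exactly where the structuring element straddles the object boundary, and zero in uniform regions, which is the property the improved operator exploits for edge extraction.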
The aggregated unfitted finite element method for elliptic problems
NASA Astrophysics Data System (ADS)
Badia, Santiago; Verdugo, Francesc; Martín, Alberto F.
2018-07-01
Unfitted finite element techniques are valuable tools in different applications where the generation of body-fitted meshes is difficult. However, these techniques are prone to severe ill conditioning problems that obstruct the efficient use of iterative Krylov methods and, in consequence, hinders the practical usage of unfitted methods for realistic large scale applications. In this work, we present a technique that addresses such conditioning problems by constructing enhanced finite element spaces based on a cell aggregation technique. The presented method, called aggregated unfitted finite element method, is easy to implement, and can be used, in contrast to previous works, in Galerkin approximations of coercive problems with conforming Lagrangian finite element spaces. The mathematical analysis of the new method states that the condition number of the resulting linear system matrix scales as in standard finite elements for body-fitted meshes, without being affected by small cut cells, and that the method leads to the optimal finite element convergence order. These theoretical results are confirmed with 2D and 3D numerical experiments.
NASA Astrophysics Data System (ADS)
Karacan, C. Özgen; Olea, Ricardo A.
2014-06-01
Prediction of potential methane emission pathways from various sources into active mine workings or sealed gobs from longwall overburden is important for controlling methane and for improving mining safety. The aim of this paper is to infer strata separation intervals and thus gas emission pathways from standard well log data. The proposed technique was applied to well logs acquired through the Mary Lee/Blue Creek coal seam of the Upper Pottsville Formation in the Black Warrior Basin, Alabama, using well logs from a series of boreholes aligned along a nearly linear profile. For this purpose, continuous wavelet transform (CWT) of digitized gamma well logs was performed by using Mexican hat and Morlet, as the mother wavelets, to identify potential discontinuities in the signal. Pointwise Hölder exponents (PHE) of gamma logs were also computed using the generalized quadratic variations (GQV) method to identify the location and strength of singularities of well log signals as a complementary analysis. PHEs and wavelet coefficients were analyzed to find the locations of singularities along the logs. The locations of predicted singularities along the well logs were then used as indicators in single normal equation simulation (SNESIM) to generate equiprobable realizations of potential strata separation intervals. Horizontal and vertical variograms of realizations were then analyzed and compared with those of indicator data and training image (TI) data using the Kruskal-Wallis test. A sum of squared differences was employed to select the most probable realization representing the locations of potential strata separations and methane flow paths. Results indicated that singularities located in well log signals reliably correlated with strata transitions or discontinuities within the strata. Geostatistical simulation of these discontinuities provided information about the location and extents of the continuous channels that may form during mining.
If there is a gas source within their zone of influence, paths may develop and allow methane movement towards sealed or active gobs under pressure differentials. Knowledge gained from this research will better prepare mine operations for potential methane inflows, thus improving mine safety.
A log-linear model approach to estimation of population size using the line-transect sampling method
Anderson, D.R.; Burnham, K.P.; Crain, B.R.
1978-01-01
The technique of estimating wildlife population size and density using the belt or line-transect sampling method has been used in many past projects, such as the estimation of density of waterfowl nestling sites in marshes, and is being used currently in such areas as the assessment of Pacific porpoise stocks in regions of tuna fishing activity. A mathematical framework for line-transect methodology has only emerged in the last 5 yr. In the present article, we extend this mathematical framework to a line-transect estimator based upon a log-linear model approach.
[Investigation of Elekta linac characteristics for VMAT].
Luo, Guangwen; Zhang, Kunyi
2012-01-01
The aim of this study is to investigate the characteristics of the Elekta delivery system for volumetric modulated arc therapy (VMAT). Five VMAT plans were delivered in service mode, and the dose rates and the speeds of the gantry and MLC leaves were analyzed from log files. Results showed that delivery varied among six discrete dose rates, and that gantry and MLC leaf speeds varied dynamically during delivery. The VMAT technique requires the linac to dynamically control more parameters, and these key dynamic variables during VMAT delivery can be checked from log files. Quality assurance procedures should be carried out for VMAT-related parameters.
NASA Astrophysics Data System (ADS)
Rowland, R. L., II; Vander Kaaden, K. E.; McCubbin, F. M.; Danielson, L. R.
2017-12-01
With the data returned from the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) mission, there are now numerous constraints on the physical and chemical properties of Mercury, including its surface composition. The high S and low FeO contents observed from MESSENGER suggest a low oxygen fugacity of the present materials on the planet's surface. Most of our understanding of elemental partitioning behavior comes from observations made on terrestrial rocks, but Mercury's oxygen fugacity is far outside the conditions of those samples, estimated at approximately 3-7 log units below the Iron-Wüstite (IW) oxygen buffer, several orders of magnitude more reducing than other terrestrial bodies we have data from. With limited oxygen available, lithophile elements may instead exhibit chalcophile, halophile, or siderophile behaviors. Furthermore, very few natural samples of rocks that formed under reducing conditions (e.g., enstatite chondrites, achondrites, aubrites) are available in our collections for examination of this change in geochemical affinity. Our goal is to determine the elemental partitioning behavior of typically lithophile elements at lower oxygen fugacity as a function of temperature and pressure. Experiments were conducted at 1 GPa in a 13 mm QUICKpress piston cylinder and at 4 GPa in an 880-ton multi-anvil press, at temperatures up to 1850°C. The compositions of the starting materials for the experiments were designed so that the final run products contained metal, silicate melt, and sulfide melt phases. Oxygen fugacity was controlled in the experiments by adding silicon metal to the samples, in order to utilize the Si-SiO2 buffer, which is 5 log units more reducing than the IW buffer at our temperatures of interest. The target silicate melt composition was diopside (CaMgSi2O6) because measured surface compositions indicate partial melting of a pyroxene-rich mantle.
The results of our experiments will aid in our understanding of the fate of elements during the differentiation and thermal evolution of Mercury and other highly reducing planetary bodies.
Sear, J W
2011-03-01
The present study examines the molecular basis of induction of anaesthesia by i.v. hypnotic agents using comparative molecular field analysis (CoMFA). ED(50) induction doses for 14 i.v. anaesthetics in human subjects (expressed as molar dose per kilogram body weight) were obtained from the literature. Immobilizing potency data for the same 14 agents (expressed as the EC(50) plasma free drug concentrations that abolish movement in response to a noxious stimulus in 50% of patients) were taken from our previous publication. These data were used to form CoMFA models for the two aspects of anaesthetic activity. Molecular alignment was achieved by field-fit minimization techniques. The lead structure for both models was eltanolone. The final CoMFA model for the ED(50) induction dose was based on two latent variables and explained 99.3% of the variance in observed activities. It showed good intrinsic predictability (cross-validated q(2)=0.849). The equivalent model for immobilizing activity was also based on two latent variables, with r(2)=0.988 and q(2)=0.852. Although there was a correlation between -log ED(50) and -log EC(50) (r(2)=0.779), comparison of the pharmacophore maps showed poor correlation in both the electrostatic and steric regions when isocontours were constructed by linking the lattice grid points making the greatest 40% contributions; the relative contributions of electrostatic and steric interactions also differed between the models (induction dose, 2.5:1; immobilizing activity, 1.8:1). Comparison of the two CoMFA activity models thus shows only small elements of commonality, suggesting that different molecular features may be responsible for these two properties of i.v. anaesthetics.
Yu, Zhaoyuan; Yuan, Linwang; Luo, Wen; Feng, Linyao; Lv, Guonian
2015-01-01
Passive infrared (PIR) motion detectors, which can support long-term continuous observation, are widely used for human motion analysis. Extracting all possible trajectories from a PIR sensor network is important, but because the PIR sensor logs neither location nor individual identity, none of the existing methods can generate all possible human motion trajectories that satisfy various spatio-temporal constraints from the sensor activation log data. In this paper, a geometric algebra (GA)-based approach is developed to generate all possible human trajectories from PIR sensor network data. First, the geographical network, the sensor activation response sequences and the human motion are represented as algebraic elements using GA, and the human motion status at each sensor activation is labeled using GA-based trajectory tracking. Then, a matrix multiplication approach is developed to dynamically generate the human trajectories according to the sensor activation log and the spatio-temporal constraints. The method is tested with the MERL motion database. Experiments show that our method can flexibly extract the major statistical pattern of the human motion. Compared with direct statistical analysis and the tracklet graph method, our method can effectively extract all possible trajectories of the human motion, which makes it more accurate. Our method is also likely to provide a new way to filter other passive sensor log data in sensor networks. PMID:26729123
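The matrix-multiplication idea can be sketched in miniature: a vector of feasible positions is propagated through the sensor-adjacency graph and masked at each step by the sensor that actually fired. The 3-sensor corridor graph and activation sequences below are hypothetical illustrations, not the paper's GA formulation or MERL data.

```python
# Toy sketch: count motion trajectories consistent with a PIR activation
# log by repeated vector-matrix products over a sensor-adjacency graph.
# Graph and activation sequences are made-up examples.

def step(state, adj, activated):
    """One log entry: propagate reachable positions through the graph,
    then keep only the sensor that was observed to fire."""
    n = len(adj)
    moved = [sum(state[i] * adj[i][j] for i in range(n)) for j in range(n)]
    return [moved[j] if j == activated else 0 for j in range(n)]

def count_trajectories(adj, activations):
    """Number of trajectories that satisfy the adjacency constraints."""
    n = len(adj)
    state = [1 if j == activations[0] else 0 for j in range(n)]
    for a in activations[1:]:
        state = step(state, adj, a)
    return sum(state)

# Corridor 0 -- 1 -- 2: movement only between neighbouring zones
# (each zone is also adjacent to itself).
adj = [[1, 1, 0],
       [1, 1, 1],
       [0, 1, 1]]

print(count_trajectories(adj, [0, 1, 2]))  # 1: the walk 0 -> 1 -> 2
print(count_trajectories(adj, [0, 2, 1]))  # 0: zone 2 is not reachable from 0
```

The same product, run over all start sensors at once, is an ordinary matrix multiplication; spatio-temporal constraints enter as the per-step masks.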
Scalable Trust of Next-Generation Management (STRONGMAN)
2004-10-01
Remote logins might be policy controlled to allow only strongly encrypted IPSec tunnels to log in remotely, to access selected files, etc.
NASA Astrophysics Data System (ADS)
Sengupta, A.; Kletzing, C.; Howk, R.; Kurth, W. S.
2017-12-01
An important goal of the Van Allen Probes mission is to understand wave-particle interactions that can energize relativistic electrons in the Earth's Van Allen radiation belts. The EMFISIS instrumentation suite provides measurements of the wave electric and magnetic fields of features, such as chorus, that participate in these interactions. Geometric signal processing discovers structural relationships, e.g. connectivity across ridge-like features in chorus elements, to reveal properties such as the dominant angle of an element (its frequency sweep rate) and the integrated power along a given chorus element. These techniques disambiguate wave features against background hiss-like chorus, enabling autonomous discovery of chorus elements across the large volumes of EMFISIS data. At the scale of individual or overlapping chorus elements, topological pattern recognition techniques enable interpretation of chorus microstructure by discovering connectivity and other geometric features within the wave signature of a single chorus element or between overlapping chorus elements. Thus chorus wave features can be quantified and studied at multiple scales of spectral geometry using geometric signal processing techniques. We present recently developed computational techniques that exploit the spectral geometry of chorus elements and whistlers to enable large-scale automated discovery, detection and statistical analysis of these events over EMFISIS data. Specifically, we present case studies across a diverse portfolio of chorus elements and discuss the performance of our algorithms regarding precision of detection as well as interpretation of chorus microstructure. We also provide large-scale statistical analysis of the distribution of dominant sweep rates and other properties of the detected chorus elements.
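Once ridge points of a chorus element have been extracted from a spectrogram, the dominant sweep rate is the slope of a line fit through them. A minimal sketch, with synthetic ridge points standing in for real EMFISIS detections:

```python
# Sketch: estimate a chorus element's frequency sweep rate (df/dt) by a
# least-squares line fit through (time, frequency) ridge points.
# The ridge points below are synthetic illustration data.

def sweep_rate(times, freqs):
    """Slope of the best-fit line freq = a + b*time; b is the sweep rate in Hz/s."""
    n = len(times)
    mean_t = sum(times) / n
    mean_f = sum(freqs) / n
    num = sum((t - mean_t) * (f - mean_f) for t, f in zip(times, freqs))
    den = sum((t - mean_t) ** 2 for t in times)
    return num / den

# A rising tone sweeping 4000 Hz/s, starting at 1 kHz.
times = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
freqs = [1000.0 + 4000.0 * t for t in times]
print(sweep_rate(times, freqs))  # ~4000 Hz/s
```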
Civelekler, Mustafa; Halili, Ismail; Gundogan, Faith C; Sobaci, Gungor
2009-01-01
Purpose: To investigate the value of temporal retinal nerve fiber layer (RNFLtemporal) thickness in the prediction of malingering. Materials and Methods: This prospective, cross-sectional study was conducted on 33 military conscripts with optic disc temporal pallor (ODTP) and 33 age- and sex-matched healthy controls. Initial visual acuity (VAi) and visual acuity after simulation examination techniques (VAaset) were assessed. Subjects whose VAaset was two or more lines higher than their VAi were classified as malingerers. Thickness of the peripapillary RNFL was determined with OCT (Stratus OCT™, Carl Zeiss Meditec, Inc.). The RNFLtemporal thickness of each subject was categorized into one of the 1+ to 4+ groups according to the 50% confidence interval (CI), 25% CI and 5% CI values assessed in the control group. The VAs were converted to LogMAR-VAs for statistical comparisons. Results: A significant difference was found only in the temporal quadrant of RNFL thickness in subjects with ODTP (P=0.002). Mean LogMAR-VA increased significantly after SETs (P<0.001). Sensitivity, specificity, positive and negative predictive values of categorized RNFLtemporal thickness in diagnosing malingering were 84.6%, 75.0%, 68.8% and 88.2%, respectively. The ROC curve showed that an RNFLtemporal thickness of 67.5 μm is a significant cut-off point in determining malingering (P=0.001, area under the curve: 0.862). The correlations between LogMAR-VAs and RNFLtemporal thicknesses were significant; the correlation coefficient for LogMAR-VAi was lower than that for LogMAR-VAaset (r=−0.447, P=0.009 for LogMAR-VAi; r=−0.676, P<0.001 for LogMAR-VAaset). Conclusions: RNFLtemporal thickness assessment may be a valuable objective tool for determining malingering in subjects with ODTP. PMID:19700875
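The diagnostic-performance figures come from applying a thickness cutoff as a binary test. A minimal sketch of that step, using the 67.5 μm cutoff from the abstract but entirely synthetic thickness values and ground-truth labels:

```python
# Sketch: apply an RNFL-thickness cutoff as a malingering test and compute
# sensitivity and specificity.  Cutoff 67.5 um is from the abstract;
# thicknesses and labels below are synthetic illustration data.

def sens_spec(thicknesses, is_malingerer, cutoff=67.5):
    """Test positive = temporal RNFL thickness above the cutoff."""
    tp = sum(v > cutoff and m for v, m in zip(thicknesses, is_malingerer))
    fn = sum(v <= cutoff and m for v, m in zip(thicknesses, is_malingerer))
    tn = sum(v <= cutoff and not m for v, m in zip(thicknesses, is_malingerer))
    fp = sum(v > cutoff and not m for v, m in zip(thicknesses, is_malingerer))
    return tp / (tp + fn), tn / (tn + fp)

thicknesses   = [80.0, 70.0, 62.0, 50.0, 75.0, 66.0]   # um, synthetic
is_malingerer = [True, True, False, False, True, True]

sens, spec = sens_spec(thicknesses, is_malingerer)
print(sens, spec)  # 0.75 1.0
```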
Depth optimal sorting networks resistant to k passive faults
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piotrow, M.
In this paper, we study the problem of constructing a sorting network that is tolerant to faults and whose running time (i.e. depth) is as small as possible. We consider the scenario of worst-case comparator faults and follow the model of passive comparator failure proposed by Yao and Yao, in which a faulty comparator outputs its inputs directly, without comparison. Our main result is the first construction of an N-input, k-fault-tolerant sorting network of asymptotically optimal depth Θ(log N + k). That improves over the recent result of Leighton and Ma, whose network is of depth O(log N + k log log N / log k). Actually, we present a fault-tolerant correction network that can be added after any N-input sorting network to correct its output in the presence of at most k faulty comparators. Since the depth of the network is O(log N + k) and the constants hidden behind the "O" notation are not big, the construction can be of practical use. Developing the techniques necessary to show the main result, we construct a fault-tolerant network for the insertion problem. As a by-product, we get an N-input, O(log N)-depth INSERT-network that is tolerant to random faults, thereby answering a question posed by Ma in his PhD thesis. The results are based on a new notion of constant-delay comparator networks, that is, networks in which each register is used (compared) only during a period of time of constant length. Copies of such networks can be put one after another with only a constant increase in depth per copy.
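The Yao-Yao passive fault model is easy to simulate: a network is a sequence of comparators, and a faulty comparator simply passes its inputs through. The sketch below uses a small odd-even transposition network as an example, not the paper's asymptotically optimal construction:

```python
# Sketch of the passive-fault model: a comparator network is a list of
# (i, j) wire pairs; a faulty comparator outputs its inputs unchanged.
# The example network is odd-even transposition sort on 4 wires.

def run_network(values, comparators, faulty=frozenset()):
    v = list(values)
    for idx, (i, j) in enumerate(comparators):
        if idx in faulty:
            continue                    # passive fault: no comparison made
        if v[i] > v[j]:
            v[i], v[j] = v[j], v[i]     # min on wire i, max on wire j
    return v

# Odd-even transposition network sorting 4 inputs.
net = [(0, 1), (2, 3), (1, 2), (0, 1), (2, 3), (1, 2)]

print(run_network([4, 3, 2, 1], net))              # [1, 2, 3, 4]
print(run_network([4, 3, 2, 1], net, faulty={0}))  # may be left unsorted
```

A correction network in the paper's sense would be appended after the sorting network so that up to k such skipped comparators still leave the output sorted.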
Accuracy and precision of Legionella isolation by US laboratories in the ELITE program pilot study.
Lucas, Claressa E; Taylor, Thomas H; Fields, Barry S
2011-10-01
A pilot study for the Environmental Legionella Isolation Techniques Evaluation (ELITE) Program, a proficiency testing scheme for US laboratories that culture Legionella from environmental samples, was conducted September 1, 2008 through March 31, 2009. Participants (n=20) processed panels consisting of six sample types: pure and mixed positive, pure and mixed negative, and pure and mixed variable. The majority (93%) of all samples (n=286) were correctly characterized, with 88.5% of Legionella-positive samples and 100% of negative samples identified correctly. Variable samples were incorrectly reported as negative in 36.9% of reports. For all samples reported positive (n=128), participants underestimated the cfu/ml by a mean of 1.25 logs, with a standard deviation of 0.78 logs, a standard error of 0.07 logs, and a range of 3.57 logs compared to the CDC re-test value. Centering results around the interlaboratory mean yielded a standard deviation of 0.65 logs, a standard error of 0.06 logs, and a range of 3.22 logs. Sampling protocol, treatment regimen, culture procedure, and laboratory experience did not significantly affect the accuracy or precision of reported concentrations. Qualitative and quantitative results from the ELITE pilot study were similar to reports from a corresponding proficiency testing scheme available in the European Union, indicating these results are probably valid for most environmental laboratories worldwide. The large enumeration error observed suggests that the need for remediation of a water system should not be determined solely by the concentration of Legionella observed in a sample, since that value is likely to underestimate the true level of contamination. Published by Elsevier Ltd.
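Summary statistics of this kind are computed on log10 differences between reported and reference counts. A minimal sketch with made-up counts (not ELITE data):

```python
import math
import statistics

# Sketch: summarise enumeration error as log10 differences between each
# reported count and the reference re-test value.  Counts are synthetic.

def log_errors(reported, reference):
    """log10(reference) - log10(reported) for each paired sample."""
    return [math.log10(ref) - math.log10(rep)
            for rep, ref in zip(reported, reference)]

reported  = [10, 50, 200, 1000]     # cfu/ml reported by a laboratory
reference = [300, 800, 4000, 9000]  # cfu/ml from the reference re-test

errs = log_errors(reported, reference)
print(round(statistics.mean(errs), 2))                          # mean log error
print(round(statistics.stdev(errs), 2))                         # SD of log error
print(round(statistics.stdev(errs) / math.sqrt(len(errs)), 2))  # standard error
```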
Verification of Orthogrid Finite Element Modeling Techniques
NASA Technical Reports Server (NTRS)
Steeve, B. E.
1996-01-01
The stress analysis of orthogrid structures, specifically those with I-beam sections, is regularly performed using finite elements. Various modeling techniques are often used to simplify the modeling process but still adequately capture the actual hardware behavior. The accuracy of such 'short cuts' is sometimes in question. This report compares three modeling techniques to actual test results from a loaded orthogrid panel. The finite element models include a beam model, a shell model, and a mixed beam-and-shell element model. Results show that the shell element model performs best, but that the simpler beam and mixed beam-and-shell element models provide reasonable to conservative results for a stress analysis. When deflection and stiffness are critical, it is important to capture the effect of the orthogrid nodes in the model.
Successful Sampling Strategy Advances Laboratory Studies of NMR Logging in Unconsolidated Aquifers
NASA Astrophysics Data System (ADS)
Behroozmand, Ahmad A.; Knight, Rosemary; Müller-Petke, Mike; Auken, Esben; Barfod, Adrian A. S.; Ferré, Ty P. A.; Vilhelmsen, Troels N.; Johnson, Carole D.; Christiansen, Anders V.
2017-11-01
The nuclear magnetic resonance (NMR) technique has become popular in groundwater studies because it responds directly to the presence and mobility of water in a porous medium. There is a need to conduct laboratory experiments to aid in the development of NMR hydraulic conductivity models, as is typically done in the petroleum industry. However, the challenge has been obtaining high-quality laboratory samples from unconsolidated aquifers. At a study site in Denmark, we employed sonic drilling, which minimizes the disturbance of the surrounding material, and extracted twelve 7.6 cm diameter samples for laboratory measurements. We present a detailed comparison of the acquired laboratory and logging NMR data. The agreement observed between the laboratory and logging data suggests that the methodologies proposed in this study provide good conditions for studying NMR measurements of unconsolidated near-surface aquifers. Finally, we show how laboratory sample size and condition impact the NMR measurements.
Pathogen Reduction in Human Plasma Using an Ultrashort Pulsed Laser
Tsen, Shaw-Wei D.; Kingsley, David H.; Kibler, Karen; Jacobs, Bert; Sizemore, Sara; Vaiana, Sara M.; Anderson, Jeanne; Tsen, Kong-Thon; Achilefu, Samuel
2014-01-01
Pathogen reduction is a viable approach to ensure the continued safety of the blood supply against emerging pathogens. However, the currently licensed pathogen reduction techniques are ineffective against non-enveloped viruses such as hepatitis A virus, and they introduce chemicals whose potential side effects prevent their widespread use. In this report, we demonstrate the inactivation of both enveloped and non-enveloped viruses in human plasma using a novel chemical-free method, a visible ultrashort pulsed laser. We found that laser treatment resulted in 2-log, 1-log, and 3-log reductions in human immunodeficiency virus, hepatitis A virus, and murine cytomegalovirus in human plasma, respectively. Laser-treated plasma showed ≥70% retention for most coagulation factors tested. Furthermore, laser treatment did not alter the structure of a model coagulation factor, fibrinogen. Ultrashort pulsed lasers are a promising new method for chemical-free, broad-spectrum pathogen reduction in human plasma. PMID:25372037
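The "n-log reduction" figures are log10 ratios of infectious titer before and after treatment. A minimal helper (the titers here are hypothetical, chosen only to reproduce the 2-log and 3-log figures):

```python
import math

# Sketch: an "n-log reduction" is the log10 ratio of infectious titer
# before vs after treatment.  Titer values below are illustrative.

def log_reduction(titer_before, titer_after):
    """log10 reduction in infectious titer after treatment."""
    return math.log10(titer_before / titer_after)

print(log_reduction(1e6, 1e4))  # ~2.0: a "2-log" reduction, as reported for HIV
print(log_reduction(1e6, 1e3))  # ~3.0: a "3-log" reduction, as reported for MCMV
```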
Beddows, Patricia A; Mallon, Edward K
2018-02-09
A low-cost data logging platform is presented that provides long-term operation in remote or submerged environments. Three premade "breakout boards" from the open-source Arduino ecosystem are assembled into the core of the data logger. Power optimization techniques are presented which extend the operational life of this module-based design to >1 year on three alkaline AA batteries. Robust underwater housings are constructed for these loggers using PVC fittings. Both the logging platform and the enclosures are easy to build and modify without specialized tools or a significant background in electronics. This combination turns the Cave Pearl data logger into a generalized prototyping system, and this design flexibility is demonstrated with two field studies recording drip rates in a cave and water flow in a flooded cave system. This paper describes a complete DIY solution suitable for a wide range of challenging deployment conditions.
NASA Astrophysics Data System (ADS)
Maurya, S. P.; Singh, K. H.; Singh, N. P.
2018-05-01
In the present study, three recently developed geostatistical methods, single-attribute analysis, multi-attribute analysis and a probabilistic neural network algorithm, have been used to predict porosity in the inter-well region of the Blackfoot field, Alberta, Canada. These techniques make use of seismic attributes generated by model-based inversion and colored inversion. The principal objective of the study is to find the combination of seismic inversion and geostatistical techniques best suited to predicting porosity and identifying prospective zones in a 3D seismic volume. The porosity estimated from these geostatistical approaches is corroborated with the well-log porosity. The results suggest that all three implemented geostatistical methods are efficient and reliable for predicting porosity, but the multi-attribute and probabilistic neural network analyses provide more accurate, higher-resolution porosity sections. A low-impedance (6000-8000 m/s g/cc), high-porosity (> 15%) zone is interpreted from the inverted impedance and porosity sections in the 1060-1075 ms time interval and is characterized as a reservoir. The qualitative and quantitative results demonstrate that, of all the employed geostatistical methods, the probabilistic neural network along with model-based inversion is the most efficient for predicting porosity in the inter-well region.
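At its core, single-attribute analysis is a regression of well-log porosity on an inverted seismic attribute at the well, with the fit then applied across the volume; multi-attribute analysis adds further attribute columns. A minimal sketch with synthetic calibration pairs (not Blackfoot data):

```python
# Sketch of the single-attribute step: regress well-log porosity on an
# inverted-impedance attribute, then apply the fit away from the well.
# Calibration pairs below are synthetic illustration data.

def fit_line(x, y):
    """Ordinary least-squares fit y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Lower impedance tends to correspond to higher porosity.
impedance = [6000.0, 6500.0, 7000.0, 7500.0, 8000.0]  # m/s * g/cc
porosity  = [0.20, 0.18, 0.16, 0.14, 0.12]            # fraction

a, b = fit_line(impedance, porosity)
print(round(a + b * 6800.0, 3))  # predicted porosity at an inter-well trace
```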
Yao, Yongchao; Ju, Xiaodong; Lu, Junqiang; Men, Baiyong
2017-06-10
A logging-while-drilling (LWD) caliper is a tool used for the real-time measurement of a borehole diameter in oil drilling engineering. This study introduces the mechanical structure and working principle of a new LWD caliper based on ultrasonic distance measurement (UDM). The detection range is a major performance index of a UDM system. This index is determined by the blind zone length and remote reflecting interface detection capability of the system. To reduce the blind zone length and detect near the reflecting interface, a full bridge acoustic emission technique based on bootstrap gate driver (BGD) and metal-oxide-semiconductor field effect transistor (MOSFET) is designed by analyzing the working principle and impedance characteristics of a given piezoelectric transducer. To detect the remote reflecting interface and reduce the dynamic range of the received echo signals, the relationships between the echo amplitude and propagation distance of ultrasonic waves are determined. A signal compensation technique based on time-varying amplification theory, which can automatically change the gain according to the echo arrival time is designed. Lastly, the aforementioned techniques and corresponding circuits are experimentally verified. Results show that the blind zone length in the UDM system of the LWD caliper is significantly reduced and the capability to detect the remote reflecting interface is considerably improved.
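The UDM principle is a two-way travel-time measurement, and the compensation stage amplifies later (farther) echoes more. A minimal sketch; the mud sound speed and the linear gain law are illustrative assumptions, not the tool's actual parameters:

```python
# Sketch: borehole standoff from two-way ultrasonic travel time, plus a
# simple time-varying gain that compensates the echo-amplitude falloff
# with distance.  Sound speed and gain law are assumed values.

def standoff(travel_time_s, sound_speed_m_s=1500.0):
    """Transducer-to-wall distance from the two-way echo travel time."""
    return sound_speed_m_s * travel_time_s / 2.0

def time_varying_gain(t_s, g0=1.0, slope_per_s=2e4):
    """Linear gain ramp: echoes arriving later get amplified more."""
    return g0 + slope_per_s * t_s

print(round(standoff(200e-6), 4))           # 0.15 m for a 200 us echo
print(round(time_varying_gain(200e-6), 2))  # gain applied to that echo
```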
Chao, T.T.; Sanzolone, R.F.
1992-01-01
Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor to sample throughput, especially with the recent application of the fast and modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose the sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, elements to be determined, precision and accuracy requirements, sample throughput, technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition along with examples of their application to geochemical analysis. The chemical properties of reagents as to their function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration which have been treated in detail elsewhere are not discussed here; nor are fire-assaying for noble metals and decomposition techniques for X-ray fluorescence or nuclear methods be discussed. ?? 1992.
Drinking water ozone disinfection systems measure ozone residual concentration, C, for regulatory compliance reporting of concentration-times-time (CT), and the resultant log-inactivation of virus, Giardia and Cryptosporidium. The indigotrisulfonate (ITS) colorimetric procedure i...
Factors influencing woodlands of southwestern North Dakota
Michele M. Girard; Harold Goetz; Ardell J. Bjugstad
1987-01-01
Literature pertaining to woodlands of southwestern North Dakota is reviewed. Woodland species composition and distribution, and factors influencing woodland ecosystems such as climate, logging, fire, and grazing are described. Potential management and improvement techniques using vegetation and livestock manipulation have been suggested.
Preparation and testing of drilled shafts with self-consolidating concrete.
DOT National Transportation Integrated Search
2012-06-01
In this study, self-consolidating concrete (SCC) was evaluated in drilled shafts and the : integrity of drilled shafts was determined using cross-hole sonic logging (CSL), a low-strain : nondestructive integrity testing technique. SCC has very high f...
NASA Astrophysics Data System (ADS)
Maglevanny, I. I.; Smolar, V. A.
2016-01-01
We introduce a new technique for interpolating the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so that so-called "data gaps", significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on such data may fail to predict physically reasonable results. Reliable interpolation tools suitable for ELF applications should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log-log scaling transform of the data, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piecewise-smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points, where they are given by the data, but not between two adjacent grid points. The proposed technique is found to give the most accurate results with a short computation time. Thus it is feasible, using this simple method, to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
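The log-log pre-scaling step can be illustrated on its own: transform the samples to (log x, log y), interpolate there, and transform back. Plain piecewise-linear interpolation stands in below for the Steffen spline (which adds monotone cubic smoothness); on power-law data the log-log interpolation is exact either way:

```python
import math

# Sketch of the log-log pre-scaling step: interpolate in (log x, log y)
# space and exponentiate the result.  Linear interpolation is used here
# as a stand-in for the monotonicity-preserving Steffen spline.

def loglog_interp(x, xs, ys):
    lx = [math.log(v) for v in xs]
    ly = [math.log(v) for v in ys]
    t = math.log(x)
    for i in range(len(lx) - 1):
        if lx[i] <= t <= lx[i + 1]:
            w = (t - lx[i]) / (lx[i + 1] - lx[i])
            return math.exp(ly[i] + w * (ly[i + 1] - ly[i]))
    raise ValueError("x outside sampled range")

# Samples of the power law y = x**2: log y is linear in log x, so
# log-log interpolation reproduces the curve exactly between samples.
xs = [1.0, 2.0, 4.0, 8.0]
ys = [x * x for x in xs]
print(round(loglog_interp(3.0, xs, ys), 6))  # 9.0
```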
SU-F-T-233: Evaluation of Treatment Delivery Parameters Using High Resolution ELEKTA Log Files
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kabat, C; Defoor, D; Alexandrian, A
2016-06-15
Purpose: As modern linacs have become more technologically advanced with the implementation of IGRT and IMRT with HDMLCs, more elaborate tracking techniques to monitor component integrity are paramount. ElektaLog files are generated every 40 milliseconds and can be analyzed to track subtle changes, providing another aspect of quality assurance. This allows for constant monitoring of fraction consistency in addition to machine reliability. With this in mind, the aim of the study was to evaluate whether ElektaLog files can be utilized for linac consistency QA. Methods: ElektaLogs were reviewed for 16 IMRT patient plans with >16 fractions. Logs were analyzed by creating fluence maps from recorded values of MLC locations, jaw locations, and dose per unit time. The fluence maps were then used to calculate a 2D gamma index with a 2%/2 mm criterion for each fraction. ElektaLogs were also used to analyze positional errors for MLC leaves and jaws, which were used to compute an overall error for the MLC banks, Y-jaws, and X-jaws by taking the root-mean-square value of the individual recorded errors during treatment. Additionally, beam-on time was calculated using the number of ElektaLog file entries within the file. Results: The average 2D gamma for all 16 patient plans was found to be 98.0±2.0%. Recorded gamma index values showed an acceptable correlation between fractions. Average RMS values for the MLC leaves and jaws resulted in a leaf variation of roughly 0.3±0.08 mm and a jaw variation of about 0.15±0.04 mm, both of which fall within clinical tolerances. Conclusion: Day-to-day evaluation of linac integrity and patient QA with ElektaLog files can provide reliable analysis of system accuracy and performance.
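The per-bank positional error described in Methods is a root-mean-square over logged-versus-planned position differences. A minimal sketch with synthetic positions (not ElektaLog data):

```python
import math

# Sketch: RMS positional error over log entries, as used to summarise
# MLC-bank and jaw deviations.  Positions below are synthetic, in mm.

def rms_error(planned, logged):
    """Root-mean-square of per-entry position differences."""
    diffs = [p - l for p, l in zip(planned, logged)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

planned = [10.0, 12.0, 15.0, 20.0]   # planned leaf positions, mm
logged  = [10.2, 11.9, 15.3, 19.8]   # positions recorded every 40 ms

print(round(rms_error(planned, logged), 3))  # overall error for this leaf
```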
El-Kattan, A F; Asbill, C S; Michniak, B B
2000-04-05
The percutaneous permeation of hydrocortisone (HC) was investigated in hairless mouse skin after application of an alcoholic hydrogel using a diffusion cell technique. The formulations contained one of 12 terpenes, the selection of which was based on an increase in their lipophilicity (log P 1.06-5.36). Flux, cumulative receptor concentrations, skin content, and lag time of HC were measured over 24 h and compared with control gels (containing no terpene). Furthermore, HC skin content and the solubility of HC in the alcoholic hydrogel solvent mixture in the presence of terpene were determined, and correlated to the enhancing activity of terpenes. The in vitro permeation experiments with hairless mouse skin revealed that the terpene enhancers varied in their ability to enhance the flux of HC. Nerolidol which possessed the highest lipophilicity (log P = 5.36+/-0.38) provided the greatest enhancement for HC flux (35.3-fold over control). Fenchone (log P = 2.13+/-0.30) exhibited the lowest enhancement of HC flux (10.1-fold over control). In addition, a linear relationship was established between the log P of terpenes and the cumulative amount of HC in the receptor after 24 h (Q(24)). Nerolidol, provided the highest Q(24) (1733+/-93 microg/cm(2)), whereas verbenone produced the lowest Q(24) (653+/-105 microg/cm(2)). Thymol provided the lowest HC skin content (1151+/-293 microg/g), while cineole produced the highest HC skin content (18999+/-5666 microg/g). No correlation was established between the log P of enhancers and HC skin content. A correlation however, existed between the log P of terpenes and the lag time. As log P increased, a linear decrease in lag time was observed. Cymene yielded the shortest HC lag time, while fenchone produced the longest lag time. Also, the increase in the log P of terpenes resulted in a proportional increase in HC solubility in the formulation solvent mixture.
Bolann, B J; Rahil-Khazen, R; Henriksen, H; Isrenn, R; Ulvik, R J
2007-01-01
Commonly used techniques for trace-element analysis in human biological material are flame atomic absorption spectrometry (FAAS), graphite furnace atomic absorption spectrometry (GFAAS), inductively coupled plasma atomic emission spectrometry (ICP-AES) and inductively coupled plasma mass spectrometry (ICP-MS). Elements that form volatile hydrides, first of all mercury, are analysed by hydride generation techniques. In the absorption techniques the samples are vaporized into free, neutral atoms and illuminated by a light source that emits the atomic spectrum of the element under analysis. The absorbance gives a quantitative measure of the concentration of the element. ICP-AES and ICP-MS are multi-element techniques. In ICP-AES the atoms of the sample are excited by, for example, argon plasma at very high temperatures. The emitted light is directed to a detector, and the optical signals are processed to values for the concentrations of the elements. In ICP-MS a mass spectrometer separates and detects ions produced by the ICP, according to their mass-to-charge ratio. Dilution of biological fluids is commonly needed to reduce the effect of the matrix. Digestion using acids and microwave energy in closed vessels at elevated pressure is often used. Matrix and spectral interferences may cause problems. Precautions should be taken against trace-element contamination during collection, storage and processing of samples. For clinical problems requiring the analysis of only one or a few elements, the use of FAAS may be sufficient, unless the higher sensitivity of GFAAS is required. For screening of multiple elements, however, the ICP techniques are preferable.
NASA Astrophysics Data System (ADS)
Hoyer, D.; Rauch, T.; Werner, K.; Kruk, J. W.
2018-04-01
The metal abundances in the atmospheres of hot white dwarfs (WDs) entering the cooling sequence are determined by the preceding Asymptotic Giant Branch (AGB) evolutionary phase and, subsequently, by the onset of gravitational settling and radiative levitation. In this paper, we investigate three hot He-rich WDs, which are believed to result from a late He-shell flash. During such a flash, the He-rich intershell matter is dredged up and dominates the surface chemistry. Hence, in contrast to the usual H-rich WDs, their spectra allow direct access to s-process element abundances in the intershell that were synthesized during the AGB stage. In order to look for trans-iron group elements (atomic number Z > 29), we performed a non-local thermodynamic equilibrium model atmosphere analysis of new ultraviolet spectra taken with the Cosmic Origins Spectrograph aboard the Hubble Space Telescope. One of our program stars is of PG 1159 spectral type; this star, PG 1707+427, has effective temperature Teff = 85 000 K, and surface gravity logg = 7.5. The two other stars are DO white dwarfs: WD 0111+002 has Teff = 58 000 K and log g = 7.7, and PG 0109+111 has Teff = 70 000 K and log g = 8.0. These stars trace the onset of element diffusion during early WD evolution. While zinc is the only trans-iron element we could detect in the PG 1159 star, both DOs exhibit lines from Zn, Ga, Ge, Se; one additionally exhibits lines from Sr, Sn, Te, and I and the other from As. Generally, the trans-iron elements are very abundant in the DOs, meaning that radiative levitation must be acting. Most extreme is the almost six orders of magnitude oversolar abundance of tellurium in PG 0109+111. In terms of mass fraction, it is the most abundant metal in the atmosphere. The two DOs join the hitherto unique hot DO RE 0503-289, in which 14 trans-iron elements had even been identified. 
Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26666. Based on observations made with the NASA-CNES-CSA Far Ultraviolet Spectroscopic Explorer.
Lithium in Stellar Atmospheres: Observations and Theory
NASA Astrophysics Data System (ADS)
Lyubimkov, L. S.
2016-09-01
Of all the light elements, lithium is the most sensitive indicator of stellar evolution. This review discusses current data on the abundance of lithium in the atmospheres of A-, F-, G-, and K-stars of different types, as well as the consistency of these data with theoretical predictions. The variety of observed Li abundances is illustrated by the following objects in different stages of evolution: (1) Old stars in the galactic halo, which have a lithium abundance logɛ(Li)=2.2 (the "lithium plateau") that appears to be 0.5 dex lower than the primordial abundance predicted by cosmological models. (2) Young stars in the galactic disk, which have been used to estimate the contemporary initial lithium abundance logɛ(Li)=3.2±0.1 for stars on the main sequence. Possible sources of lithium enrichment in the interstellar medium during the evolution of the galaxy are discussed. (3) Evolving FGK dwarfs in the galactic disk, which have lower logɛ(Li) for lower effective temperature Teff and mass M. The "lithium dip" near Teff ~ 6600 K in the distribution of logɛ(Li) with respect to Teff in old clusters is discussed. (4) FGK giants and supergiants, most of which have no lithium at all. This phenomenon is consistent with rotating star model calculations. (5) Lithium-rich cold giants with logɛ(Li) ≥ 2.0, which form a small, enigmatic group. Theoretical models with rotation can explain the existence of these stars only in the case of low initial rotation velocities V0 < 50 km/s. In all other cases it is necessary to assume recent synthesis of lithium (capture of a giant planet is an alternative). (6) Magnetic Ap-stars, where lithium is concentrated in spots located at the magnetic poles. There the lithium abundance reaches logɛ(Li)=6. Discrepancies between observations and theory are noted for almost all the stars discussed in this review.
Criticality Characteristics of Current Oil Price Dynamics
NASA Astrophysics Data System (ADS)
Drożdż, S.; Kwapień, J.; Oświęcimka, P.
2008-10-01
The methodology that recently allowed us to predict with remarkable accuracy the date (July 11, 2008) of the reversal of the oil price uptrend is briefly summarized, and some further aspects of the related oil price dynamics are elaborated. This methodology is based on the concept of discrete scale invariance, whose finance-prediction-oriented variant involves such elements as log-periodic self-similarity and the universal preferred scaling factor λ≈2, and allows for the phenomenon of the "super-bubble". From this perspective, the present (as of August 22, 2008) violent - but still log-periodically decelerating - decrease of oil prices is associated with the decay of such a "super-bubble" that started developing about one year ago on top of the longer-term oil price increasing phase (a normal bubble), whose ultimate termination is estimated to occur around mid-2010.
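The log-periodic acceleration described in this abstract can be illustrated with a short sketch. The critical time, start time, and number of oscillations below are illustrative placeholders, not the authors' fitted values; only the scaling factor λ≈2 comes from the abstract.

```python
def log_periodic_times(t_c, t_0, lam, n_max):
    """Turning points of a log-periodic oscillation under discrete scale
    invariance: each successive turning point closes the remaining gap to
    the critical time t_c by the preferred scaling factor lam, i.e.
    t_c - t_n = (t_c - t_0) * lam**(-n)."""
    return [t_c - (t_c - t_0) * lam ** (-n) for n in range(n_max + 1)]

# With lam = 2 the oscillations accelerate, halving the remaining distance
# to t_c each time (t_c and t_0 here are invented for illustration).
times = log_periodic_times(t_c=2008.53, t_0=2007.0, lam=2.0, n_max=5)
```

Successive inter-turning-point intervals shrink by the factor λ, which is the signature that lets a fitted λ be extrapolated to a trend-reversal date.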
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
This volume contains summaries of FY 1979 government-sponsored environment and safety research related to energy arranged by log number, which groups the projects by reporting agency. The log number is a unique number assigned to each project from a block of numbers set aside for each contributing agency. Information elements included in the summary listings are project title, principal investigators, research organization, project number, contract number, supporting organization, funding level, related energy sources with numbers indicating percentages of effort devoted to each, and R and D categories. A brief description of each project is given, followed by subject index terms that were assigned for computer searching and for generating the printed subject index in the back of this volume.
Madison, Matthew J; Bradshaw, Laine P
2015-06-01
Diagnostic classification models are psychometric models that aim to classify examinees according to their mastery or non-mastery of specified latent characteristics. These models are well-suited for providing diagnostic feedback on educational assessments because of their practical efficiency and increased reliability when compared with other multidimensional measurement models. A priori specifications of which latent characteristics or attributes are measured by each item are a core element of the diagnostic assessment design. This item-attribute alignment, expressed in a Q-matrix, precedes and supports any inference resulting from the application of the diagnostic classification model. This study investigates the effects of Q-matrix design on classification accuracy for the log-linear cognitive diagnosis model. Results indicate that classification accuracy, reliability, and convergence rates improve when the Q-matrix contains isolated information from each measured attribute.
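The role of the Q-matrix can be sketched with a toy, main-effects-only version of a log-linear cognitive diagnosis model. The Q-matrix, intercept, and main-effect values below are invented for illustration and are not the model specification studied in the paper.

```python
import math

# Hypothetical Q-matrix: rows are items, columns are attributes;
# Q[i][a] = 1 means item i measures attribute a (the item-attribute alignment).
Q = [
    [1, 0],  # item 1 measures attribute 1 only
    [0, 1],  # item 2 measures attribute 2 only
    [1, 1],  # item 3 measures both attributes
]

def prob_correct(q_row, alpha, intercept=-1.5, main_effects=(2.0, 2.0)):
    """Main-effects log-linear model: the logit of a correct response is the
    intercept plus the main effect of every attribute that the item measures
    (per the Q-matrix row) and the examinee has mastered (mastery vector alpha)."""
    z = intercept + sum(
        m for q, a, m in zip(q_row, alpha, main_effects) if q and a
    )
    return 1.0 / (1.0 + math.exp(-z))

# Mastery of both attributes raises the success probability on item 3,
# which is how response patterns carry information for classification.
p_master = prob_correct(Q[2], alpha=(1, 1))
p_novice = prob_correct(Q[2], alpha=(0, 0))
```

Because inference conditions on the Q-matrix in this way, a misspecified item-attribute alignment propagates directly into classification error, which is what the study quantifies.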
Nuclear Well Log Properties of Natural Gas Hydrate Reservoirs
NASA Astrophysics Data System (ADS)
Burchwell, A.; Cook, A.
2015-12-01
Characterizing gas hydrate in a reservoir typically involves a full suite of geophysical well logs. The most common method involves using resistivity measurements to quantify the decrease in electrically conductive water when replaced with gas hydrate. Compressional velocity measurements are also used because the gas hydrate significantly strengthens the moduli of the sediment. At many gas hydrate sites, nuclear well logs, which include the photoelectric effect, formation sigma, carbon/oxygen ratio and neutron porosity, are also collected but often not used. In fact, the nuclear response of a gas hydrate reservoir is not known. In this research we will focus on the nuclear log response in gas hydrate reservoirs at the Mallik Field at the Mackenzie Delta, Northwest Territories, Canada, and the Gas Hydrate Joint Industry Project Leg 2 sites in the northern Gulf of Mexico. Nuclear logs may add increased robustness to the investigation into the properties of gas hydrates and some types of logs may offer an opportunity to distinguish between gas hydrate and permafrost. For example, a true formation sigma log measures the thermal neutron capture cross section of a formation and pore constituents; it is especially sensitive to hydrogen and chlorine in the pore space. Chlorine has a high absorption potential, and is used to determine the amount of saline water within pore spaces. Gas hydrate offers a difference in elemental composition compared to water-saturated intervals. Thus, in permafrost areas, the carbon/oxygen ratio may vary between gas hydrate and permafrost, due to the increase of carbon in gas hydrate accumulations. At the Mallik site, we observe a hydrate-bearing sand (1085-1107 m) above a water-bearing sand (1107-1140 m), which was confirmed through core samples and mud gas analysis. 
We observe a decrease in the photoelectric absorption of ~0.5 barns/e-, as well as an increase in the formation sigma readings of ~5 capture units in the water-bearing sand as compared to the hydrate sand interval. This is further correlated with the carbon/oxygen ratio, which shows a 20% decrease in the water sand compared to the hydrate sand above. In future research, we will quantify the effect of gas hydrate on the nuclear logs at the Mallik well and compare it to wells in the Gulf of Mexico.
CRITICAL ILLUMINATION AND FLICKER FREQUENCY IN RELATED FISHES
Crozier, W. J.; Wolf, E.; Zerrahn-Wolf, Gertrud
1937-01-01
Flicker response curves have been obtained at 21.5°C for three genera of fresh-water teleosts: Enneacanthus (sunfish), Xiphophorus (swordtail), Platypoecilius (Platy), by the determination of mean critical intensities for response at fixed flicker frequencies, and for a certain homogeneous group of backcross hybrids of swordtail x Platy (Black Helleri). The curves exhibit marked differences in form and proportions. The same type of analysis is applicable to each, however. A low-intensity rod-governed section has added to it a more extensive cone portion. Each part is accurately described by the equation F = F_max/(1 + e^(-p log(I/I_i))), where F = flicker frequency, I = associated mean critical intensity, and I_i is the intensity at the inflection point of the sigmoid curve relating F to log I. There is no correlation between quantitative features of the rod and cone portions. Threshold intensities, p, I_i, and F_max are separately and independently determined. The hybrid Black Helleri show quantitative agreement with the Xiphophorus parental stock in the values of p for rods and cones, and in the cone F_max; the rod F_max is very similar to that for the Platy stock; the general level of effective intensities is rather like that of the Platy form. This provides, among other things, a new kind of support for the duplicity doctrine. Various races of Platypoecilius maculatus, and P. variatus, give closely agreeing values of I_m at different flicker frequencies; and two species of sunfish also agree. The effect of cross-breeding is thus not a superficial thing. It indicates the possibility of further genetic investigation. The variability of the critical intensity for response to flicker follows the rules previously found to hold for other forms. The variation is the expression of a property of the tested organism.
It is shown that, on the assumption of a frequency distribution of receptor element thresholds as a function of log I, with fluctuation in the excitabilities of the marginally excited elements, it is to be expected that the dispersion of critical flicker frequencies in repeated measurements will pass through a maximum as log I is increased, whereas the dispersion of critical intensities will be proportional to Im; and that the proportionality factor in the case of different organisms bears no relation to the form or position of the respective curves relating mean critical intensity to flicker frequency. These deductions agree with the experimental findings. PMID:19873037
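The sigmoid relating flicker frequency to log intensity can be evaluated directly. The parameter values in this sketch are illustrative, not the paper's fitted constants for any species.

```python
import math

def flicker_frequency(I, F_max, p, I_i):
    """Sigmoid relating critical flicker frequency F to mean critical
    intensity I: F = F_max / (1 + e**(-p * log10(I / I_i)))."""
    return F_max / (1.0 + math.exp(-p * math.log10(I / I_i)))

# At the inflection intensity I = I_i the exponent vanishes and F = F_max/2;
# far above I_i the curve saturates toward F_max (values are illustrative).
F_half = flicker_frequency(10.0, F_max=50.0, p=4.0, I_i=10.0)
F_high = flicker_frequency(1e4, F_max=50.0, p=4.0, I_i=10.0)
```

The three quantities the paper treats as independently determined (p, I_i, F_max) appear here as the slope, midpoint, and asymptote of the curve.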
Water analysis via portable X-ray fluorescence spectrometry
NASA Astrophysics Data System (ADS)
Pearson, Delaina; Chakraborty, Somsubhra; Duda, Bogdan; Li, Bin; Weindorf, David C.; Deb, Shovik; Brevik, Eric; Ray, D. P.
2017-01-01
Rapid, in-situ elemental water analysis would be an invaluable tool in studying polluted and/or salt-impacted waters. Analysis of water salinity has commonly used electrical conductance (EC); however, the identity of the elements responsible for the salinity are not revealed using EC. Several studies have established the viability of using portable X-ray fluorescence (PXRF) spectrometry for elemental data analysis of soil, sediment, and other matrices. However, the accuracy of PXRF is known to be affected while scanning moisture-laden soil samples. This study used PXRF elemental data in water samples to predict water EC. A total of 256 water samples, from 10 different countries were collected and analyzed via PXRF, inductively coupled plasma atomic emission spectroscopy (ICP-AES), and a digital salinity bridge. The PXRF detected some elements more effectively than others, but overall results indicated that PXRF can successfully predict water EC via quantifying Cl in water samples (validation R2 and RMSE of 0.77 and 0.95 log μS cm-1, respectively). The findings of this study elucidated the potential of PXRF for future analysis of pollutant and/or metal contaminated waters.
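Predicting log-scaled EC from a PXRF chlorine signal is, at bottom, an ordinary least-squares fit with an R2 validation statistic. This is a minimal sketch of that machinery with synthetic numbers, not the study's data or its actual regression model.

```python
def fit_simple_regression(x, y):
    """Ordinary least squares y = a + b*x; returns (a, b, R2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

cl = [1.0, 2.0, 3.0, 4.0]        # hypothetical PXRF Cl readings
log_ec = [2.1, 2.9, 4.1, 4.9]    # hypothetical log10 EC (uS/cm)
a, b, r2 = fit_simple_regression(cl, log_ec)
```

Validating on held-out samples, as the study does, guards against the fit merely memorizing the calibration set.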
Magnetic resonance imaging in laboratory petrophysical core analysis
NASA Astrophysics Data System (ADS)
Mitchell, J.; Chandrasekera, T. C.; Holland, D. J.; Gladden, L. F.; Fordham, E. J.
2013-05-01
Magnetic resonance imaging (MRI) is a well-known technique in medical diagnosis and materials science. In the more specialized arena of laboratory-scale petrophysical rock core analysis, the role of MRI has undergone a substantial change in focus over the last three decades. Initially, alongside the continual drive to exploit higher magnetic field strengths in MRI applications for medicine and chemistry, the same trend was followed in core analysis. However, the spatial resolution achievable in heterogeneous porous media is inherently limited due to the magnetic susceptibility contrast between solid and fluid. As a result, imaging resolution at the length-scale of typical pore diameters is not practical and so MRI of core-plugs has often been viewed as an inappropriate use of expensive magnetic resonance facilities. Recently, there has been a paradigm shift in the use of MRI in laboratory-scale core analysis. The focus is now on acquiring data in the laboratory that are directly comparable to data obtained from magnetic resonance well-logging tools (i.e., a common physics of measurement). To maintain consistency with well-logging instrumentation, it is desirable to measure distributions of transverse (T2) relaxation time-the industry-standard metric in well-logging-at the laboratory-scale. These T2 distributions can be spatially resolved over the length of a core-plug. The use of low-field magnets in the laboratory environment is optimal for core analysis not only because the magnetic field strength is closer to that of well-logging tools, but also because the magnetic susceptibility contrast is minimized, allowing the acquisition of quantitative image voxel (or pixel) intensities that are directly scalable to liquid volume. 
Beyond simple determination of macroscopic rock heterogeneity, it is possible to utilize the spatial resolution for monitoring forced displacement of oil by water or chemical agents, determining capillary pressure curves, and estimating wettability. The history of MRI in petrophysics is reviewed and future directions considered, including advanced data processing techniques such as compressed sensing reconstruction and Bayesian inference analysis of under-sampled data. Although this review focuses on rock core analysis, the techniques described are applicable in a wider context to porous media in general, such as cements, soils, ceramics, and catalytic materials.
DiFilippo, Erica L.; Eganhouse, Robert P.
2010-01-01
Solid-phase microextraction (SPME) has shown potential as an in situ passive-sampling technique in aquatic environments. The reliability of this method depends upon accurate determination of the partition coefficient between the fiber coating and water (Kf). For some hydrophobic organic compounds (HOCs), Kf values spanning 4 orders of magnitude have been reported for polydimethylsiloxane (PDMS) and water. However, 24% of the published data examined in this review did not pass the criterion for negligible depletion, resulting in questionable Kf values. The range in reported Kf is reduced to just over 2 orders of magnitude for some polychlorinated biphenyls (PCBs) when these questionable values are removed. Other factors that could account for the range in reported Kf, such as fiber-coating thickness and fiber manufacturer, were evaluated and found to be insignificant. In addition to accurate measurement of Kf, an understanding of the impact of environmental variables, such as temperature and ionic strength, on partitioning is essential for application of laboratory-measured Kf values to field samples. To date, few studies have measured Kf for HOCs at conditions other than at 20 degrees or 25 degrees C in distilled water. The available data indicate measurable variations in Kf at different temperatures and different ionic strengths. Therefore, if the appropriate environmental variables are not taken into account, significant error will be introduced into calculated aqueous concentrations using this passive sampling technique. A multiparameter linear solvation energy relationship (LSER) was developed to estimate log Kf in distilled water at 25 degrees C based on published physicochemical parameters. This method provided a good correlation (R2 = 0.94) between measured and predicted log Kf values for several compound classes. Thus, an LSER approach may offer a reliable means of predicting log Kf for HOCs whose experimental log Kf values are presently unavailable. 
Future research should focus on understanding the impact of environmental variables on Kf. Obtaining the data needed for an LSER approach to estimate Kf for all environmentally relevant HOCs would be beneficial to the application of SPME as a passive-sampling technique.
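The LSER mentioned above is a linear combination of Abraham-type solute descriptors. The coefficients and descriptor values in this sketch are placeholders, not the review's fitted PDMS-water parameters.

```python
def lser_log_kf(descriptors, coeffs):
    """Abraham-type linear solvation energy relationship:
    log Kf = c + e*E + s*S + a*A + b*B + v*V,
    where (E, S, A, B, V) are solute descriptors and the coefficients
    characterize the fiber-coating/water system."""
    E, S, A, B, V = descriptors
    c, e, s, a, b, v = coeffs
    return c + e * E + s * S + a * A + b * B + v * V

# Placeholder system coefficients and solute descriptors (illustrative only).
coeffs = (0.1, 0.5, -1.0, -2.0, -3.0, 3.5)
solute = (1.0, 0.8, 0.0, 0.2, 1.2)
log_kf = lser_log_kf(solute, coeffs)
```

Once the six coefficients are regressed from compounds with measured Kf, the same equation predicts log Kf for compounds that have only tabulated descriptors, which is the use case the review advocates.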
Binary tree eigen solver in finite element analysis
NASA Technical Reports Server (NTRS)
Akl, F. A.; Janetzke, D. C.; Kiraly, L. J.
1993-01-01
This paper presents a transputer-based binary tree eigensolver for the solution of the generalized eigenproblem in linear elastic finite element analysis. The algorithm is based on the method of recursive doubling, whose parallel implementation performs an associative operation over an arbitrary set of N elements in O(log2 N) steps, compared to N-1 steps if implemented sequentially. The hardware used in the implementation of the binary tree consists of 32 transputers. The algorithm is written in OCCAM, a high-level language developed with the transputers to address parallel programming constructs and to provide the communications between processors. The algorithm can be replicated to match the size of the binary tree transputer network. Parallel and sequential finite element analysis programs have been developed to solve for the set of the least-order eigenpairs using the modified subspace method. The speed-up obtained for a typical analysis problem indicates close agreement with the theoretical prediction given by the method of recursive doubling.
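The O(log2 N) step count of recursive doubling can be demonstrated with a small sequential simulation of the tree levels. This is a sketch of the reduction idea only, not the paper's OCCAM transputer implementation.

```python
import operator

def tree_reduce(values, op):
    """Reduce N values with an associative operation op by pairwise
    combination, one tree level per iteration; a parallel machine would
    execute each level in a single step, giving ceil(log2 N) steps total
    versus N-1 sequential applications."""
    vals = list(values)
    levels = 0
    while len(vals) > 1:
        paired = [op(vals[i], vals[i + 1]) for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:          # odd element carries up to the next level
            paired.append(vals[-1])
        vals = paired
        levels += 1
    return vals[0], levels

total, levels = tree_reduce(range(32), operator.add)  # 32 leaves -> 5 levels
```

Associativity is what licenses the regrouping: the pairwise tree produces the same result as a left-to-right fold.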
Using Downhole Probes to Locate and Characterize Buried Transuranic and Mixed Low Level Waste
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steinman, Donald K; Bramblett, Richard L; Hertzog, Russel C
2012-06-25
Borehole logging probes were developed and tested to locate and quantify transuranic elements in subsurface disposal areas and in contaminated sites at USDOE Weapons Complex sites. A new method of measuring very high levels of chlorine in the subsurface was developed using pulsed neutron technology from oilfield applications. The probes were demonstrated at the Hanford site in wells containing plutonium and other contaminants.
Verification testing of the Leopold Ultrabar Mark III Ultrafiltration Systems was conducted from February 3 to March 9, 1999. The performance claim evaluated during field testing of the Leopold Ultrabar Mark III Ultrafiltration system was that the system is capable of a minimum 3 log...
MANCaLog: A Logic for Multi-Attribute Network Cascades
2013-01-01
influence function, whose precise effects will be described later on when we discuss the semantics. As a result, a rule consists of four major parts... (i) an influence function, (ii) neighbor criteria, (iii) target criteria, and (iv) a target. Intuitively, (i) specifies how the neighbors influence the... in terms of these elements. First, we define influence functions and neighbor criteria. Definition 2.6 (Influence Function). An influence function is a
SP_Ace: Stellar Parameters And Chemical abundances Estimator
NASA Astrophysics Data System (ADS)
Boeche, C.; Grebel, E. K.
2018-05-01
SP_Ace (Stellar Parameters And Chemical abundances Estimator) estimates the stellar parameters Teff, log g, [M/H], and elemental abundances. It employs 1D stellar atmosphere models in Local Thermodynamic Equilibrium (LTE). The code is highly automated and suitable for analyzing the spectra of large spectroscopic surveys with low or medium spectral resolution (R = 2000-20 000). A web service for calculating these values with the software is also available.
NASA Technical Reports Server (NTRS)
Moehler, S.; Sweigart, A. V.; Landsman, W. B.; Heber, U.
2000-01-01
Atmospheric parameters (T(sub eff), log g), masses and helium abundances are derived for 42 hot horizontal branch (HB) stars in the globular cluster NGC6752. For 19 stars we derive magnesium and iron abundances as well and find that iron is enriched by a factor of 50 on average with respect to the cluster abundance whereas the magnesium abundances are consistent with the cluster abundance. Radiation pressure may levitate heavy elements like iron to the surface of the star in a diffusive process. Taking into account the enrichment of heavy elements in our spectroscopic analyses we find that high iron abundances can explain part, but not all, of the problem of anomalously low gravities along the blue HB. The blue HB stars cooler than about 15,100 K and the sdB stars (T(sub eff) greater than or = 20,000 K) agree well with canonical theory when analysed with metal-rich ([M/H] = +0.5) model atmospheres, but the stars in between these two groups remain offset towards lower gravities and masses. Deep Mixing in the red giant progenitor phase is discussed as another mechanism that may influence the position of the blue HB stars in the (T(sub eff), log g)-plane but not their masses.
Peculiar Abundances Observed in the Hot Subdwarf OB Star LB 3241
NASA Astrophysics Data System (ADS)
Chayer, Pierre; Dupuis, J.; Dixon, W. V.; Giguere, E.
2010-01-01
We present a spectral synthesis analysis of the hot subdwarf OB star LB 3241. The analysis is based on spectra obtained by the Far Ultraviolet Spectroscopic Explorer (FUSE). With an effective temperature of 41,000 K and a gravity of log g = 5.7, the position of LB 3241 in a Teff-log g diagram suggests that it has evolved from the extreme horizontal branch. Such stars evolve into white dwarfs without ascending the asymptotic giant branch after the helium core exhaustion. Arsenic (Z = 33), selenium (34), and tellurium (52) are observed in the atmosphere of LB 3241, and are a first for a hot subdwarf star. LB 3241 shows peculiar chemical abundances that exhibit trends observed in cooler sdB stars. The content of its atmosphere in light elements is about a factor ten lower than that of the Sun, except for nitrogen which has a solar abundance. The Fe abundance is consistent with a solar abundance, but abundances of elements beyond the iron peak (As, Se, Te, Pb) show enrichments over the solar values by factors ranging from 10 to 300. These observations suggest that competing mechanisms must counterbalance the effects of the downward diffusion. The FUSE observations also suggest that LB 3241 is a radial velocity variable.
NASA Astrophysics Data System (ADS)
Haris, A.; Nafian, M.; Riyanto, A.
2017-07-01
Danish North Sea fields consist of several formations (Ekofisk, Tor, and Cromer Knoll) ranging in age from the Paleocene to the Miocene. In this study, the integration of seismic and well log data is carried out to determine the chalk sand distribution in the Danish North Sea field. The integration is performed using seismic inversion analysis and seismic multi-attribute analysis. The seismic inversion algorithm used to derive acoustic impedance (AI) is a model-based technique. The derived AI is then used as an external attribute for the input of the multi-attribute analysis. The multi-attribute analysis generates linear and non-linear transformations among the well log properties. In the linear case, the transformation is selected by weighted step-wise linear regression (SWR), while the non-linear model is built using probabilistic neural networks (PNN). The porosity estimated by the PNN fits the well log data better than the SWR results. This can be understood since the PNN performs non-linear regression, so the relationship between the attribute data and the predicted log can be better optimized. The distribution of chalk sand has been successfully identified and characterized by porosity values ranging from 23% up to 30%.
Paillet, Frederick L.; Crowder, R.E.
1996-01-01
Quantitative analysis of geophysical logs in ground-water studies often involves at least as broad a range of applications and variation in lithology as is typically encountered in petroleum exploration, making such logs difficult to calibrate and complicating inversion problem formulation. At the same time, data inversion and analysis depend on inversion model formulation and refinement, so that log interpretation cannot be deferred to a geophysical log specialist unless active involvement with interpretation can be maintained by such an expert over the lifetime of the project. We propose a generalized log-interpretation procedure designed to guide hydrogeologists in the interpretation of geophysical logs, and in the integration of log data into ground-water models that may be systematically refined and improved in an iterative way. The procedure is designed to maximize the effective use of three primary contributions from geophysical logs: (1) The continuous depth scale of the measurements along the well bore; (2) The in situ measurement of lithologic properties and the correlation with hydraulic properties of the formations over a finite sample volume; and (3) Multiple independent measurements that can potentially be inverted for multiple physical or hydraulic properties of interest. The approach is formulated in the context of geophysical inversion theory, and is designed to be interfaced with surface geophysical soundings and conventional hydraulic testing. The step-by-step procedures given in our generalized interpretation and inversion technique are based on both qualitative analysis designed to assist formulation of the interpretation model, and quantitative analysis used to assign numerical values to model parameters. The approach bases a decision as to whether quantitative inversion is statistically warranted by formulating an over-determined inversion. 
If no such inversion is consistent with the inversion model, quantitative inversion is judged not possible with the given data set. Additional statistical criteria such as the statistical significance of regressions are used to guide the subsequent calibration of geophysical data in terms of hydraulic variables in those situations where quantitative data inversion is considered appropriate.
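In the linear case, an over-determined inversion of the kind described reduces to least squares. This toy two-parameter example via the normal equations is only a sketch of the idea, under invented synthetic readings, not the authors' interpretation procedure.

```python
def solve_overdetermined(A, y):
    """Least-squares solution of an over-determined linear system A x = y
    via the normal equations (A^T A) x = A^T y, written out for the 2x2
    case of two model parameters fit to many log readings. Plain nested
    lists; a real workflow would use a linear algebra library and check
    conditioning."""
    ata = [[sum(row[i] * row[j] for row in A) for j in range(2)] for i in range(2)]
    aty = [sum(row[i] * yi for row, yi in zip(A, y)) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    x0 = (aty[0] * ata[1][1] - aty[1] * ata[0][1]) / det
    x1 = (ata[0][0] * aty[1] - ata[1][0] * aty[0]) / det
    return [x0, x1]

# Three synthetic log readings constraining two parameters; the data are
# exactly consistent here, so the least-squares solution fits with zero residual.
A = [[1.0, 2.0], [1.0, 0.0], [0.0, 1.0]]
y = [5.0, 1.0, 2.0]
x = solve_overdetermined(A, y)
```

With real, noisy log data the residual is nonzero, and its size is exactly the statistical evidence the procedure uses to decide whether quantitative inversion is warranted.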
Efficient finite element simulation of slot spirals, slot radomes and microwave structures
NASA Technical Reports Server (NTRS)
Gong, J.; Volakis, J. L.
1995-01-01
This progress report contains the following two documents: (1) 'Efficient Finite Element Simulation of Slot Antennas using Prismatic Elements' - A hybrid finite element-boundary integral (FE-BI) simulation technique is discussed to treat narrow slot antennas etched on a planar platform. Specifically, the prismatic elements are used to reduce the redundant sampling rates and ease the mesh generation process. Numerical results for an antenna slot and frequency selective surfaces are presented to demonstrate the validity and capability of the technique; and (2) 'Application and Design Guidelines of the PML Absorber for Finite Element Simulations of Microwave Packages' - The recently introduced perfectly matched layer (PML) uniaxial absorber for frequency domain finite element simulations has several advantages. In this paper we present the application of PML for microwave circuit simulations along with design guidelines to obtain a desired level of absorption. Different feeding techniques are also investigated for improved accuracy.
Objective straylight assessment of the human eye with a novel device
NASA Astrophysics Data System (ADS)
Schramm, Stefan; Schikowski, Patrick; Lerm, Elena; Kaeding, André; Klemm, Matthias; Haueisen, Jens; Baumgarten, Daniel
2016-03-01
Forward-scattered light from the anterior segment of the human eye can be measured by Shack-Hartmann (SH) wavefront aberrometers, but only over a limited visual angle. We propose a novel Point Spread Function (PSF) reconstruction algorithm based on SH measurements with a new measurement device to overcome these limitations. In our optical setup, we use a Digital Mirror Device as a variable field stop, in place of the pinhole that conventionally suppresses scatter and reflections. Images with 21 different stop diameters were captured, and from each image the average subaperture image intensity and the average intensity of the pupil were computed. The 21 intensities represent integral values of the PSF, which is consequently reconstructed by differentiation with respect to the visual angle. A generalized form of the Stiles-Holladay approximation is fitted to the PSF, resulting in a straylight parameter Log(IS). Additionally, the transmission loss of the eye is computed. For a proof of principle, a study on 13 healthy young volunteers was carried out. Scatter filters were positioned in front of the volunteer's eye during C-Quant and scatter measurements to generate straylight emulating scatter in the lens. The straylight parameter is compared to the C-Quant measurement parameter Log(ISC) and the scatter density of the filters SDF with a partial correlation. Log(IS) shows significant correlation with the SDF and Log(ISC). The correlation is more prominent between Log(IS) combined with the transmission loss and the SDF and Log(ISC). Our novel measurement and reconstruction technique allows for objective straylight analysis of visual angles up to 4 degrees.
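The core of the reconstruction step is differentiating the stop-diameter-integrated intensities with respect to visual angle. A minimal finite-difference sketch with synthetic values (real SH data would need smoothing and calibration first):

```python
def reconstruct_psf(angles, integrated_intensity):
    """Recover a radial PSF profile from intensities integrated out to
    increasing field-stop radii by finite-difference differentiation with
    respect to visual angle."""
    return [
        (i1 - i0) / (a1 - a0)
        for (a0, i0), (a1, i1) in zip(
            zip(angles, integrated_intensity),
            zip(angles[1:], integrated_intensity[1:]),
        )
    ]

# Synthetic check: if the cumulative intensity grows linearly with angle,
# the recovered per-angle PSF contribution is constant.
angles = [0.0, 1.0, 2.0, 3.0, 4.0]          # degrees (illustrative)
cumulative = [0.0, 0.25, 0.5, 0.75, 1.0]    # normalized integrated intensity
psf = reconstruct_psf(angles, cumulative)
```

The Stiles-Holladay-type fit is then applied to this differentiated profile rather than to the raw integral measurements.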
Jorritsma, Wiard; Cnossen, Fokie; Dierckx, Rudi A; Oudkerk, Matthijs; van Ooijen, Peter M A
2016-01-01
To perform a post-deployment usability evaluation of a radiology Picture Archiving and Communication System (PACS) client based on pattern mining of user interaction log data, and to assess the usefulness of this approach compared to a field study. All user actions performed on the PACS client were logged for four months. A data mining technique called closed sequential pattern mining was used to automatically extract frequently occurring interaction patterns from the log data. These patterns were used to identify usability issues with the PACS. The results of this evaluation were compared to the results of a field study based usability evaluation of the same PACS client. The interaction patterns revealed four usability issues: (1) the display protocols do not function properly, (2) the line measurement tool stays active until another tool is selected, rather than being deactivated after one use, (3) the PACS's built-in 3D functionality does not allow users to effectively perform certain 3D-related tasks, (4) users underuse the PACS's customization possibilities. All usability issues identified based on the log data were also found in the field study, which identified 48 issues in total. Post-deployment usability evaluation based on pattern mining of user interaction log data provides useful insights into the way users interact with the radiology PACS client. However, it reveals few usability issues compared to a field study and should therefore not be used as the sole method of usability evaluation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
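The mining step can be illustrated with a deliberately simplified stand-in: counting frequent contiguous action subsequences across sessions. Full closed sequential pattern mining additionally allows gaps and prunes non-closed patterns; the log data below are invented, not the study's PACS logs.

```python
from collections import Counter

def frequent_subsequences(sessions, length, min_support):
    """Return contiguous action subsequences of the given length whose
    support (number of sessions containing them) meets min_support.
    A simplified stand-in for closed sequential pattern mining."""
    counts = Counter()
    for session in sessions:
        seen = set()  # count each pattern once per session
        for i in range(len(session) - length + 1):
            seen.add(tuple(session[i:i + length]))
        counts.update(seen)
    return {pat: n for pat, n in counts.items() if n >= min_support}

# Hypothetical interaction logs: a measure tool that stays active shows up
# as a frequent repeated-"measure" pattern across sessions.
logs = [
    ["open", "measure", "measure", "close"],
    ["open", "measure", "measure", "scroll"],
    ["open", "scroll", "close"],
]
patterns = frequent_subsequences(logs, length=2, min_support=2)
```

A recurring pattern like repeated measurement actions is the kind of signal that, in the study, pointed to the tool-deactivation usability issue.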
DEVELOPMENT AND APPLICATION OF BOREHOLE FLOWMETERS FOR ENVIRONMENTAL ASSESSMENT
In order to understand the origin of contaminant plumes and infer their future migration, one requires a knowledge of the hydraulic conductivity (K) distribution. In many aquifers, the borehole flowmeter offers the most direct technique available for developing a log of hydraulic ...
A Reading-Writing Connection in the Content Areas (Secondary Perspectives).
ERIC Educational Resources Information Center
Journal of Reading, 1990
1990-01-01
Discusses instructional activities designed to foster the reading-writing connection in the content area classroom. Describes the use of "possible sentences," learning logs, freewriting, dialogue journals, the RAFT technique (role, audience, format, and topic), and the "opinion-proof" organization strategy. (RS)
NASA Astrophysics Data System (ADS)
Blake, Will; Walsh, Rory; Bidin, Kawi; Annammala, Kogila
2015-04-01
It is widely recognised that commercial logging and conversion of tropical rainforest to oil palm plantation leads to enhanced fluvial sediment flux to the coastal zone but the dynamics of delivery and mechanisms that act to retain sediment and nutrients within rainforest ecosystems, e.g. riparian zone and floodplain storage, are poorly understood and underexploited as a management tool. While accretion of lateral in-channel bench deposits in response to forest clearance has been demonstrated in temperate landscapes, their development and value as sedimentary archives of catchment response to human disturbance remain largely unexplored in tropical rainforest river systems. Working within the Segama River basin, Sabah, Malaysian Borneo, this study aimed to test the hypotheses that (1) lateral bench development in tropical rainforest river systems is enhanced by upstream catchment disturbance and (2) the sedimentary record of these deposits can be used to infer changes in sediment provenance and intensification of sediment flux associated with logging activities. Sediment cores were taken from in-channel bench deposits with upstream catchment contributing areas of 721 km2 and 2800 km2, respectively. Accretion rates were determined using fallout 210Pb and 137Cs, and the timing of peak accumulation was shown to correspond exactly with the known temporal pattern of logging and associated fluvial sediment response over the period 1980 to present, following low pre-logging rates. Major and minor element geochemistry of the deposits was used to assess the degree of weathering that deposited sediment had experienced. This was linked to surface (heavily weathered) and subsurface (less weathered) sediment sources relating to initial disturbance by logging and post-logging landsliding responses, respectively. A shift in the dominant source of deposited material from surface (i.e. topsoil) to subsurface (i.e.
relatively unweathered subsoil close to bedrock) origin was observed to coincide with the increase in accretion rates following logging of steep headwater slopes. Coherence of sedimentary, monitoring and observational evidence demonstrates that in-channel bench deposits offer a previously unexplored sedimentary archive of catchment response to logging in tropical rainforest systems and a tool for evaluating the erosional responses of ungauged basins. In-channel bench development due to catchment disturbance may augment ecosystem services provided by the riparian corridors of larger rivers and process knowledge gained from sedimentary archives can be used to underpin future riparian and catchment forest management strategies.
Binary Detection using Multi-Hypothesis Log-Likelihood, Image Processing
2014-03-27
geosynchronous orbit and other scenarios important to the USAF. 2 1.3 Research objectives The question posed in this thesis is how well, if at all, can a...is important to compare them to another modern technique. The third objective is to compare results from another image detection method, specifically...Although adaptive optics is an important technique in moving closer to diffraction limited imaging, it is not currently a practical solution for all
Lone, Ayesha; Anany, Hany; Hakeem, Mohammed; Aguis, Louise; Avdjian, Anne-Claire; Bouget, Marina; Atashi, Arash; Brovko, Luba; Rochefort, Dominic; Griffiths, Mansel W
2016-01-18
Due to lack of adequate control methods to prevent contamination in fresh produce and growing consumer demand for natural products, the use of bacteriophages has emerged as a promising approach to enhance safety of these foods. This study sought to control Listeria monocytogenes in cantaloupes and RTE meat and Escherichia coli O104:H4 in alfalfa seeds and sprouts under different storage conditions by using specific lytic bacteriophage cocktails applied either free or immobilized. Bacteriophage cocktails were introduced into prototypes of packaging materials using different techniques: i) immobilizing on positively charged modified cellulose membranes, ii) impregnating paper with bacteriophage suspension, and iii) encapsulating in alginate beads followed by application of beads onto the paper. Phage-treated and non-treated samples were stored for various times and at temperatures of 4°C, 12°C or 25°C. In cantaloupe, when free phage cocktail was added, L. monocytogenes counts dropped below the detection limit of the plating technique (<1 log CFU/g) after 5 days of storage at both 4°C and 12°C. However, at 25°C, counts below the detection limit were observed after 3 and 6h and a 2-log CFU/g reduction in cell numbers was seen after 24h. For the immobilized Listeria phage cocktail, around 1-log CFU/g reduction in the Listeria count was observed by the end of the storage period for all tested storage temperatures. For the alfalfa seeds and sprouts, regardless of the type of phage application technique (spraying of free phage suspension, bringing in contact with bacteriophage-based materials (paper coated with encapsulated bacteriophage or impregnated with bacteriophage suspension)), the count of E. coli O104:H4 was below the detection limit (<1 log CFU/g) after 1h in seeds and about a 1-log cycle reduction in E. coli count was observed on the germinated sprouts by day 5. 
In ready-to-eat (RTE) meat, LISTEX™ P100, a commercial phage product, was able to significantly reduce the growth of L. monocytogenes at both storage temperatures, 4°C and 10°C, for 25 days regardless of bacteriophage application format (immobilized or non-immobilized (free)). In conclusion, the developed phage-based materials demonstrated significant antimicrobial effect, when applied to the artificially contaminated foods, and can be used as prototypes for developing bioactive antimicrobial packaging materials capable of enhancing the safety of fresh produce and RTE meat. Copyright © 2015 Elsevier B.V. All rights reserved.
The Solar System Origin Revisited
NASA Astrophysics Data System (ADS)
Johnson, Fred M.
2016-10-01
A novel theory will be presented, based in part on astronomical observations, plasma physics experiments, principles of physics, and forensic techniques. The new theory correctly predicts planetary distances with 1% precision. It accounts for the energy production mechanism inside all of the planets, including our Earth. A log-log mass-luminosity plot of G2-class stars and solar system planets results in a straight line, whose slope implies that fission rather than proton-proton fusion energy production is operating. Furthermore, it is a confirmation that all our planets originated from within our Sun. Other still-born planets continue to appear on the Sun's surface; they are mislabeled as sunspots.
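The slope inference described above amounts to fitting a power law L ∝ M^b by linear least squares in log-log space. A minimal sketch follows, using synthetic data (the abstract's actual stellar and planetary values are not reproduced here):

```python
import math

def loglog_slope(masses, luminosities):
    """Least-squares slope of log10(L) versus log10(M): the exponent b in L ∝ M^b."""
    xs = [math.log10(m) for m in masses]
    ys = [math.log10(l) for l in luminosities]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    return sxy / sxx
```

A straight line on a log-log plot is diagnostic of a single power law; the physical interpretation of any particular slope value is, of course, the claim being made by the abstract, not by this sketch.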
Diamond knife-assisted deep anterior lamellar keratoplasty to manage keratoconus.
Vajpayee, Rasik B; Maharana, Prafulla K; Sharma, Namrata; Agarwal, Tushar; Jhanji, Vishal
2014-02-01
To evaluate the outcomes of a new surgical technique, diamond knife-assisted deep anterior lamellar keratoplasty (DALK), and compare its visual and refractive results with big-bubble DALK in cases of keratoconus. Tertiary eyecare hospital. Comparative case series. The visual and surgical outcomes of diamond knife-assisted DALK were compared with those of successful big-bubble DALK. Diamond knife-assisted DALK was performed in 19 eyes and big-bubble DALK, in 11 eyes. All surgeries were completed successfully. No intraoperative or postoperative complications occurred with diamond knife-assisted DALK. Six months after diamond knife-assisted DALK, the mean corrected distance visual acuity (CDVA) improved significantly from 1.87 ± 0.22 (SD) logMAR to 0.23 ± 0.06 logMAR, the mean keratometry improved from 65.99 ± 8.86 diopters (D) to 45.13 ± 1.16 D, and the mean keratometric cylinder improved from 7.99 ± 3.81 D to 2.87 ± 0.59 D (all P=.005). Postoperatively, the mean refractive astigmatism was 2.55 ± 0.49 D and the mean spherical equivalent was -1.97 ± 0.56 D. The mean logMAR CDVA (P=.06), postoperative keratometry (P=.64), refractive cylinder (P=.63), and endothelial cell loss (P=.11) were comparable between diamond knife-assisted DALK and big-bubble DALK. Diamond knife-assisted DALK was effective and predictable as a surgical technique for the management of keratoconus. This technique has the potential to offer visual and refractive outcomes comparable to those of big-bubble DALK. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2013 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Takahashi, Hisashi; Goto, Taiga; Hirokawa, Koichi; Miyazaki, Osamu
2014-03-01
Statistical iterative reconstruction and post-log data restoration algorithms for CT noise reduction have been widely studied, and these techniques have enabled us to reduce irradiation doses while maintaining image quality. In low-dose scanning, electronic noise becomes significant and results in some non-positive signals in the raw measurements. A non-positive signal must be converted to a positive signal so that it can be log-transformed. Since conventional conversion methods do not consider local variance on the sinogram, they have difficulty controlling the strength of the filtering. Thus, in this work, we propose a method to convert the non-positive signal to a positive signal chiefly by controlling the local variance. The method is implemented in two separate steps. First, an iterative restoration algorithm based on penalized weighted least squares is used to mitigate the effect of electronic noise. The algorithm preserves the local mean and reduces the local variance induced by the electronic noise. Second, the raw measurements smoothed by the iterative algorithm are converted to positive signals according to a function which replaces each non-positive signal with its local mean. In phantom studies, we confirm that the proposed method properly preserves the local mean and reduces the variance induced by the electronic noise. Our technique results in dramatically reduced shading artifacts and can also successfully cooperate with the post-log data filter to reduce streak artifacts.
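The second step, replacing each non-positive measurement with a local mean so the log transform is defined, can be sketched in one dimension. This omits the penalized-weighted-least-squares restoration entirely, and the window size and floor value are illustrative assumptions, not the paper's parameters:

```python
def positivize(measurements, half_window=2, floor=1e-6):
    """Replace non-positive raw measurements with the local mean of their
    positive neighbours so that the subsequent log transform is defined."""
    out = list(measurements)
    n = len(out)
    for i, v in enumerate(measurements):
        if v > 0:
            continue
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        neighbours = [m for m in measurements[lo:hi] if m > 0]
        # fall back to a small positive floor if the whole window is non-positive
        out[i] = sum(neighbours) / len(neighbours) if neighbours else floor
    return out
```

Because the replacement value is a mean of nearby samples, the local mean of the sinogram is approximately preserved while the spike introduced by electronic noise is removed.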
NASA Astrophysics Data System (ADS)
Lo, Hung-Chieh; Chen, Po-Jui; Chou, Po-Yi; Hsu, Shih-Meng
2014-06-01
This paper presents an improved borehole prospecting methodology based on a combination of techniques for the hydrogeological characterization of fractured-rock aquifers. The approach is demonstrated by on-site tests carried out at the Hoshe Experimental Forest site and the Tailuge National Park, Taiwan. Borehole televiewer logs are used to obtain fracture locations and distribution along the boreholes. The heat-pulse flowmeter log is used to measure vertical flow velocity profiles, which can be analyzed to estimate fracture transmissivity and to indicate hydraulic connectivity between fractures. Double-packer hydraulic tests are performed to determine the rock-mass transmissivity. The computer program FLASH is used to analyze the data from the flowmeter logs. The FLASH program is confirmed as a useful tool which quantitatively predicts fracture transmissivity, in comparison with the hydraulic properties obtained from the packer tests. The locations of conductive fractures and their transmissivities are identified, after which the preferential flow paths through the fracture network are precisely delineated from a cross-borehole test. The results provide robust confirmation of the use of combined flowmeter and packer methods in the characterization of fractured-rock aquifers, particularly for the investigation of groundwater resources and contaminant transport dynamics.
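The core of flowmeter-log analysis is apportioning the total transmissivity from a packer or pumping test among individual fractures in proportion to the inflow each contributes to the vertical flow profile. The following is a minimal sketch of that idea, not the FLASH program itself; the profile values are hypothetical:

```python
def fracture_transmissivities(upward_flow, total_transmissivity):
    """Apportion a measured total transmissivity among fractures in proportion
    to the inflow each contributes to the vertical flow profile.
    upward_flow: flow measured just above each fracture, ordered bottom-up."""
    # inflow at each fracture = step change in the upward flow profile
    inflows = [upward_flow[0]] + [b - a for a, b in zip(upward_flow, upward_flow[1:])]
    total_inflow = sum(inflows)
    return [total_transmissivity * q / total_inflow for q in inflows]
```

A fracture across which the profile does not change receives zero transmissivity, which is how non-conductive fractures seen on the televiewer log are screened out.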
NASA Technical Reports Server (NTRS)
Murray, Alex; Eng, Bjorn; Leff, Craig; Schwarz, Arnold
1997-01-01
In the development environment for ASTER level II product generation system, techniques have been incorporated to allow automated information sharing among all system elements, and to enable the use of sound software engineering techniques in the scripting languages.
NASA Astrophysics Data System (ADS)
Deng, Chengxiang; Pan, Heping; Luo, Miao
2017-12-01
The Chinese Continental Scientific Drilling (CCSD) main hole is located in the Sulu ultrahigh-pressure metamorphic (UHPM) belt, providing significant opportunities for studying metamorphic strata structure, kinetic processes and tectonic evolution. Lithology identification is the primary and crucial stage for the above geoscientific research. To ease the burden on the log analyst and improve the efficiency of lithology interpretation, many algorithms have been developed to automate lithology prediction. Traditional statistical techniques, such as discriminant analysis and K-nearest neighbors classifiers, are ill-suited to extracting the nonlinear features of metamorphic rocks from complex geophysical log data. Artificial intelligence algorithms can solve nonlinear problems, but most struggle to tune model parameters to a global rather than a local optimum, and also face challenges in balancing training accuracy against generalization ability. Optimization methods have been applied extensively to the inversion of reservoir parameters of sedimentary formations using well logs. However, it is difficult to obtain accurate solutions from the logging response equations of an optimization method when applied to metamorphic formations, because the nonstationary log signals overlap strongly. As the oxide contents of the various kinds of metamorphic rocks overlap relatively little, this study explores an approach, set in a metamorphic formation model and using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) optimization algorithm, to identify lithology from oxide data. We first incorporate 11 geophysical logs and lab-collected geochemical data from 47 core samples to construct an oxide profile of the CCSD main hole using a backwards stepwise multiple regression method, which eliminates irrelevant input logs step by step for higher statistical significance and accuracy.
Then we establish oxide response equations in accordance with the metamorphic formation model and employ the BFGS algorithm to minimize the objective function. Finally, we identify lithology according to the composition content that accounts for the largest proportion. The results show that the lithology identified by this method is consistent with core descriptions. Moreover, the method demonstrates the benefit of using oxide content as an adhesive connecting logging data with lithology: it makes the metamorphic formation model more understandable and accurate, and avoids the selection of a complex formation model and the building of nonlinear logging response equations.
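The inversion step above solves linear response equations y_j = Σᵢ vᵢ·A[i][j] for non-negative component fractions v that sum to one, then labels the lithology by the dominant fraction. In the sketch below, a dependency-free projected gradient descent stands in for the BFGS routine used in the study, and the 2×2 response matrix is a toy example, not the paper's oxide model:

```python
def invert_composition(A, y, steps=5000, lr=0.01):
    """Least-squares inversion of y_j = sum_i v_i * A[i][j] for component
    fractions v (non-negative, summing to one). Projected gradient descent
    is used here as a simple stand-in for a BFGS quasi-Newton solver."""
    n_comp, n_logs = len(A), len(y)
    v = [1.0 / n_comp] * n_comp
    for _ in range(steps):
        pred = [sum(v[i] * A[i][j] for i in range(n_comp)) for j in range(n_logs)]
        grad = [sum(2.0 * (pred[j] - y[j]) * A[i][j] for j in range(n_logs))
                for i in range(n_comp)]
        v = [max(0.0, v[i] - lr * grad[i]) for i in range(n_comp)]
        total = sum(v) or 1.0
        v = [vi / total for vi in v]      # closure constraint: fractions sum to one
    return v

def dominant_component(v):
    """Lithology label: index of the component with the largest fraction."""
    return max(range(len(v)), key=v.__getitem__)
```

BFGS would reach the same minimum in far fewer iterations by approximating the Hessian; the objective and the final argmax step are the parts that mirror the described workflow.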
Habib, I; Sampers, I; Uyttendaele, M; Berkvens, D; De Zutter, L
2008-02-01
In this work, we present an intra-laboratory study to estimate the repeatability (r), reproducibility (R), and measurement uncertainty (U) associated with three media for Campylobacter enumeration, namely, modified charcoal cefoperazone deoxycholate agar (mCCDA), Karmali agar, and CampyFood ID agar (CFA), a medium by bioMérieux SA. The study was performed at three levels: (1) pure bacterial cultures, using three Campylobacter strains; (2) artificially contaminated samples from three chicken meat matrixes (total n=30), whereby samples were spiked at two contamination levels, ca. 10^3 CFU Campylobacter/g and ca. 10^4 CFU Campylobacter/g; and (3) pilot testing in naturally contaminated chicken meat samples (n=20). Results from the pure culture experiment revealed that enumeration of Campylobacter colonies on Karmali and CFA media was more convenient than on mCCDA using spread and spiral plating techniques. Based on testing of artificially contaminated samples, values of repeatability (r) were comparable between the three media, estimated as 0.15 log10 CFU/g for mCCDA, 0.14 log10 CFU/g for Karmali, and 0.18 log10 CFU/g for CFA. Likewise, the reproducibility performance of the three plating media was comparable. General R values which can be used when testing chicken meat samples are 0.28 log10, 0.32 log10, and 0.25 log10 for plating on mCCDA, Karmali agar, and CFA, respectively. Measurement uncertainties associated with mCCDA, Karmali agar, and CFA using spread plating, combining all meat matrixes, were ±0.24 log10 CFU/g, ±0.28 log10 CFU/g, and ±0.22 log10 CFU/g, respectively. Higher uncertainty was associated with Karmali agar for Campylobacter enumeration in artificially inoculated minced meat (±0.48 log10 CFU/g). The general performance of the CFA medium was comparable with that of mCCDA at the level of artificially contaminated samples.
However, when tested on naturally contaminated samples, non-Campylobacter colonies gave a deep red colour similar to that of typical Campylobacter growth on CFA. Such colonies were not easily distinguishable by the naked eye. In general, the overall reproducibility, repeatability, and measurement uncertainty estimated by our study indicate that there are no major problems with the precision of the International Organization for Standardization (ISO) 10272-2:2006 protocol for Campylobacter enumeration using mCCDA medium.
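Repeatability and reproducibility limits of the kind reported above are conventionally computed ISO 5725-style as 2.8 times the pooled standard deviation of duplicate determinations (2.8 ≈ 1.96·√2). A minimal sketch on log10 counts, with illustrative numbers rather than the study's data:

```python
import math

def precision_limit(duplicate_pairs):
    """ISO 5725-style limit (r or R): 2.8 * pooled standard deviation of
    duplicate log10 counts. Pass within-run duplicate pairs for
    repeatability, between-run pairs for reproducibility."""
    # pooled variance from paired duplicates: mean of (a - b)^2 / 2
    variance = sum((a - b) ** 2 / 2 for a, b in duplicate_pairs) / len(duplicate_pairs)
    return 2.8 * math.sqrt(variance)
```

With this convention, the absolute difference between two determinations made under the stated conditions is expected to exceed the returned limit in only about 5% of cases.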
Predicting Information Flows in Network Traffic.
ERIC Educational Resources Information Center
Hinich, Melvin J.; Molyneux, Robert E.
2003-01-01
Discusses information flow in networks and predicting network traffic and describes a study that uses time series analysis on a day's worth of Internet log data. Examines nonlinearity and traffic invariants, and suggests that prediction of network traffic may not be possible with current techniques. (Author/LRW)
South-East Asia's Trembling Rainforests.
ERIC Educational Resources Information Center
Laird, John
1991-01-01
This discussion focuses on potential solutions to the degradation of rainforests in Southeast Asia caused by indiscriminate logging, inappropriate road-construction techniques, forest fires, and the encroachment upon watersheds by both agricultural concerns and peasant farmers. Vignettes illustrate the impact of this degradation upon the animals,…
Mendenhall, Jeffrey; Meiler, Jens
2016-02-01
Dropout is an Artificial Neural Network (ANN) training technique that has been shown to improve ANN performance across canonical machine learning (ML) datasets. Quantitative Structure Activity Relationship (QSAR) datasets used to relate chemical structure to biological activity in Ligand-Based Computer-Aided Drug Discovery pose unique challenges for ML techniques, such as heavily biased dataset composition, and relatively large number of descriptors relative to the number of actives. To test the hypothesis that dropout also improves QSAR ANNs, we conduct a benchmark on nine large QSAR datasets. Use of dropout improved both enrichment false positive rate and log-scaled area under the receiver-operating characteristic curve (logAUC) by 22-46 % over conventional ANN implementations. Optimal dropout rates are found to be a function of the signal-to-noise ratio of the descriptor set, and relatively independent of the dataset. Dropout ANNs with 2D and 3D autocorrelation descriptors outperform conventional ANNs as well as optimized fingerprint similarity search methods.
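The dropout technique benchmarked above can be sketched as a single layer-level operation. This is the standard "inverted dropout" formulation, not the authors' BCL-based implementation, and the activations are illustrative:

```python
import random

def dropout(activations, rate, training=True):
    """Inverted dropout: zero each unit with probability `rate` during
    training and scale survivors by 1/(1 - rate), so the expected
    activation matches inference, where the layer is left untouched."""
    if not training or rate == 0.0:
        return list(activations)
    keep = 1.0 - rate
    return [a / keep if random.random() < keep else 0.0 for a in activations]
```

Because each training pass samples a different mask, the network cannot rely on any single descriptor-derived feature, which is one intuition for why dropout helps on noisy, redundant QSAR descriptor sets.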
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takahashi, R; Kamima, T; Tachibana, H
2016-06-15
Purpose: To investigate the effect of trajectory files from the linear accelerator for Clarkson-based independent dose verification in IMRT and VMAT plans. Methods: A CT-based independent dose verification software (Simple MU Analysis: SMU, Triangle Products, Japan) with a Clarkson-based algorithm was modified to calculate dose using the trajectory log files. Eclipse with the three techniques of step and shoot (SS), sliding window (SW) and RapidArc (RA) was used as the treatment planning system (TPS). In this study, clinically approved IMRT and VMAT plans for prostate and head and neck (HN) at two institutions were retrospectively analyzed to assess the dose deviation between the DICOM-RT plan (PL) and the trajectory log file (TJ). An additional analysis was performed to evaluate the MLC error detection capability of SMU when the trajectory log files were modified by adding systematic errors (0.2, 0.5, 1.0 mm) and random errors (5, 10, 30 mm) to the actual MLC positions. Results: The dose deviations for prostate and HN at the two sites were 0.0% and 0.0% in SS, 0.1±0.0% and 0.1±0.1% in SW, and 0.6±0.5% and 0.7±0.9% in RA, respectively. The MLC error detection analysis showed that the plans for HN IMRT were the most sensitive: 0.2 mm of systematic error produced a 0.7% dose deviation on average. The MLC random errors did not affect the dose deviation. Conclusion: The use of trajectory log files, which include the actual MLC locations, gantry angles, etc., should be more effective for an independent verification. The tolerance level for the secondary check using the trajectory file may be similar to that of the verification using the DICOM-RT plan file. From the viewpoint of MLC positional error detection resolution, the secondary check could detect MLC position errors corresponding to the treatment sites and techniques. This research is partially supported by the Japan Agency for Medical Research and Development (AMED)
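The error-injection step described above, adding systematic and random offsets to the recorded MLC leaf positions, can be sketched as follows. The function name and the position list are hypothetical; the actual study modified full trajectory log files:

```python
import random

def perturb_mlc_positions(leaf_positions_mm, systematic_mm=0.0, random_mm=0.0):
    """Add a constant offset (systematic error) and a uniform random offset
    (random error) to each recorded MLC leaf position, mimicking the
    modified trajectory logs used to probe error-detection capability."""
    return [p + systematic_mm + random.uniform(-random_mm, random_mm)
            for p in leaf_positions_mm]
```

Recomputing the Clarkson dose from the perturbed positions and comparing against the unperturbed result then quantifies how large an MLC error the secondary check can detect, which is the analysis the abstract reports.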
Allinson, Graeme; Zhang, Pei; Bui, AnhDuyen; Allinson, Mayumi; Rose, Gavin; Marshall, Stephen; Pettigrove, Vincent
2015-07-01
Samples of water and sediments were collected from 24 urban wetlands in Melbourne, Australia, in April 2010, and tested for more than 90 pesticides using a range of gas chromatographic (GC) and liquid chromatographic (LC) techniques, sample 'hormonal' activity using yeast-based recombinant receptor-reporter gene bioassays, and trace metals using spectroscopic techniques. At the time of sampling, there was almost no estrogenic activity in the water column. Twenty-three different pesticide residues were observed in one or more water samples from the 24 wetlands; chemicals observed at more than 40% of sites were simazine (100%), atrazine (79%), and metalaxyl and terbutryn (46%). Using the toxicity unit (TU) concept, less than 15% of the detected pesticides were considered to pose an individual, short-term risk to fish or zooplankton in the ponds and wetlands. However, one pesticide (fenvalerate) may have posed a possible short-term risk to fish (log10TUf > -3), and three pesticides (azoxystrobin, fenamiphos and fenvalerate) may have posed a risk to zooplankton (log10TUzp between -2 and -3); all the photosystem II (PSII) inhibiting herbicides may have posed a risk to primary producers in the ponds and wetlands (log10TUap and/or log10TUalg > -3). The wetland sediments were contaminated with 16 different pesticides; no chemicals were observed at more than one third of sites, but based on frequency of detection and concentrations, bifenthrin (33%, maximum 59 μg/kg) is the priority insecticide of concern for the sediments studied. Five sites returned a TU greater than the possible effect threshold (i.e. log10TU > 1) as a result of bifenthrin contamination of their sediments. Most sediments did not exceed Australian sediment quality guideline levels for trace metals.
However, more than half of the sites had threshold effect concentration quotients (TECQ) values >1 for Cu (58%), Pb (50%), Ni (67%) and Zn (63%), and 75% of sites had mean probable effect concentration quotients (PECQ) >0.2, suggesting that the collected sediments may have been having some impact on sediment-dwelling organisms.
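The toxicity unit screening used above divides a measured environmental concentration by an acute effect concentration for a taxon and compares the log10 of that ratio against a screening threshold. A minimal sketch, with the threshold of -3 taken from the abstract and the concentrations purely illustrative:

```python
import math

def log10_toxic_unit(concentration_ug_l, ec50_ug_l):
    """log10 TU: measured concentration over the acute EC50 for a taxon."""
    return math.log10(concentration_ug_l / ec50_ug_l)

def short_term_risk(concentration_ug_l, ec50_ug_l, threshold=-3.0):
    """Flag a possible short-term risk when log10 TU exceeds the screening threshold."""
    return log10_toxic_unit(concentration_ug_l, ec50_ug_l) > threshold
```

A log10 TU of 0 means the measured concentration equals the EC50; the -3 threshold therefore flags anything within a factor of 1000 of an acutely toxic level.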
Andrić, Filip; Šegan, Sandra; Dramićanin, Aleksandra; Majstorović, Helena; Milojković-Opsenica, Dušanka
2016-08-05
The soil-water partition coefficient normalized to organic carbon content (KOC) is one of the crucial properties influencing the fate of organic compounds in the environment. Chromatographic methods are a well-established alternative to the direct sorption techniques used for KOC determination. The present work proposes reversed-phase thin-layer chromatography (RP-TLC) as a simpler, yet equally accurate, method as the officially recommended HPLC technique. Several TLC systems were studied, including octadecyl-(RP18) and cyano-(CN) modified silica layers in combination with methanol-water and acetonitrile-water mixtures as mobile phases. In total, 50 compounds of different molecular shape and size and various ability to establish specific interactions were selected (phenols, benzodiazepines, triazine herbicides, and polyaromatic hydrocarbons). A calibration set of 29 compounds with known logKOC values determined by sorption experiments was used to build simple univariate calibrations, Principal Component Regression (PCR), and Partial Least Squares (PLS) models between logKOC and TLC retention parameters. The models exhibit good statistical performance, indicating that CN-layers contribute better to logKOC modeling than RP18-silica. The most promising TLC methods, the officially recommended HPLC method, and four in silico estimation approaches were compared by the non-parametric Sum of Ranking Differences (SRD) approach. The best estimations of logKOC values were achieved by simple univariate calibration of TLC retention data involving CN-silica layers and a moderate content of methanol (40-50% v/v). They ranked far better than the officially recommended HPLC method, which ranked in the middle. The worst estimates were obtained from in silico computations based on the octanol-water partition coefficient.
Linear Solvation Energy Relationship study revealed that increased polarity of CN-layers over RP18 in combination with methanol-water mixtures is the key to better modeling of logKOC through significant diminishing of dipolar and proton accepting influence of the mobile phase as well as enhancing molar refractivity in excess of the chromatographic systems. Copyright © 2016 Elsevier B.V. All rights reserved.
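The best-performing model above is a simple univariate calibration: an ordinary least-squares line relating logKOC to a single TLC retention parameter for the calibration compounds, then used to predict unknowns. A minimal sketch with synthetic values (the study's actual retention data are not reproduced):

```python
def fit_univariate_calibration(retention, log_koc):
    """Ordinary least-squares line log KOC = a + b * retention_parameter,
    the simple univariate calibration found to perform best in this study."""
    n = len(retention)
    mean_x = sum(retention) / n
    mean_y = sum(log_koc) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(retention, log_koc))
         / sum((x - mean_x) ** 2 for x in retention))
    a = mean_y - b * mean_x
    return a, b

def predict_log_koc(a, b, retention_value):
    """Predict logKOC for a new compound from its measured retention parameter."""
    return a + b * retention_value
```

The calibration is built once from the 29 compounds with sorption-derived logKOC values; any compound measured in the same TLC system can then be scored from a single retention measurement.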
Zhang, Chenxi; Hu, Zhaochu; Zhang, Wen; Liu, Yongsheng; Zong, Keqing; Li, Ming; Chen, Haihong; Hu, Shenghong
2016-10-18
Sample preparation of whole-rock powders is the major limitation for their accurate and precise elemental analysis by laser ablation inductively-coupled plasma mass spectrometry (ICPMS). In this study, a green, efficient, and simplified fusion technique using a high energy infrared laser was developed for major and trace elemental analysis. Fusion takes only tens of milliseconds for each sample. Compared to the pressed pellet sample preparation, the analytical precision of the developed laser fusion technique is higher by an order of magnitude for most elements in granodiorite GSP-2. Analytical results obtained for five USGS reference materials (ranging from mafic to intermediate to felsic) using the laser fusion technique generally agree with recommended values with discrepancies of less than 10% for most elements. However, high losses (20-70%) of highly volatile elements (Zn and Pb) and the transition metal Cu are observed. The achieved precision is within 5% for major elements and within 15% for most trace elements. Direct laser fusion of rock powders is a green and notably simple method to obtain homogeneous samples, which will significantly accelerate the application of laser ablation ICPMS for whole-rock sample analysis.
Bujard, Alban; Sol, Marine; Carrupt, Pierre-Alain; Martel, Sophie
2014-10-15
The parallel artificial membrane permeability assay (PAMPA) is a high-throughput screening (HTS) method that is widely used to predict in vivo passive permeability through biological barriers, such as the skin, the blood-brain barrier (BBB) and the gastrointestinal tract (GIT). The PAMPA technique has also been used to predict the dissociation constant (Kd) between a compound and human serum albumin (HSA) while disregarding passive permeability. Furthermore, that assay is based on the use of two separate 5-point kinetic experiments, which increases the analysis time. In the present study, we adapted the hexadecane membrane (HDM)-PAMPA assay to both predict passive gastrointestinal absorption via the permeability coefficient logPe and determine the Kd. Two assays were performed: one in the presence and one in the absence of HSA in the acceptor compartment. In the absence of HSA, logPe values were determined after a 4-h incubation time, as originally described, but the dimethylsulfoxide (DMSO) percentage and pH were altered to be compatible with the protein. In parallel, a second PAMPA assay was performed in the presence of HSA during a 16-h incubation period. With HSA added, a variation in the amount of compound crossing the membrane was observed compared to the permeability measured in the absence of HSA. The concentration of compound reaching the acceptor compartment in each case was used to determine both parameters (logPe and logKd) using numerical simulations, which highlights the originality of this method because these calculations require only two endpoint measurements instead of a complete kinetic study. It should be noted that the amount of compound that reaches the acceptor compartment in the presence of HSA is modulated by complex dissociation in the acceptor compartment. Only compounds that are moderately bound to albumin (-3
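The protein-free endpoint measurement can be related to Pe by a standard two-compartment PAMPA expression (without a membrane-retention correction). The sketch below is that textbook relation, not the authors' numerical simulation, and all parameter values in the usage are illustrative:

```python
import math

def effective_permeability(ca_t, c0_donor, v_donor, v_acceptor, area, time_s):
    """Endpoint estimate of Pe (cm/s) from the acceptor concentration at a
    single time point, assuming a two-compartment model with no membrane
    retention: C_A(t) = C_eq * (1 - exp(-Pe * A * (1/V_D + 1/V_A) * t))."""
    c_eq = c0_donor * v_donor / (v_donor + v_acceptor)  # well-mixed equilibrium conc.
    k = area * (1.0 / v_donor + 1.0 / v_acceptor)
    return -math.log(1.0 - ca_t / c_eq) / (k * time_s)
```

Inverting the same kind of forward model at two endpoints (with and without HSA) is what lets the published method extract both logPe and logKd without a full kinetic series.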
Influence of drilling operations on drilling mud gas monitoring during IODP Exp. 338 and 348
NASA Astrophysics Data System (ADS)
Hammerschmidt, Sebastian; Toczko, Sean; Kubo, Yusuke; Wiersberg, Thomas; Fuchida, Shigeshi; Kopf, Achim; Hirose, Takehiro; Saffer, Demian; Tobin, Harold; Expedition 348 Scientists, the
2014-05-01
Scientific ocean drilling has developed some new techniques and technologies for drilling science, dynamic positioning being one of the most famous. However, while industry has developed newer tools and techniques, only some of these have been used in scientific ocean drilling. The introduction of riser drilling, which recirculates the drilling mud and returns solids and gases from the formation to the platform, to the Integrated Ocean Drilling Program (IODP) through the launch of the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) riser-drilling vessel D/V Chikyu has made some of these techniques available to science. IODP Expedition 319 (NanTroSEIZE Stage 2: riser/riserless observatory) was the first such attempt, and among the tools and techniques used was drilling mud gas analysis. While industry regularly conducts drilling mud gas logging for safety concerns and reservoir evaluation, science is more interested in other components (e.g., He, 222Rn) that are beyond the scope of typical mud logging services. Drilling mud gas logging simply examines the gases released into the drilling mud as part of the drilling process: the bit breaks and grinds the formation, releasing any trapped gases. These then circulate within the "closed circuit" mud flow back to the drilling rig, where a degasser extracts the gases and passes them on to a dedicated mud gas logging unit. The unit contains gas chromatographs, mass spectrometers, spectral analyzers, radon gas analyzers, and a methane carbon isotope analyzer. Data are collected and stored in a database, together with several drilling parameters (rate of penetration, mud density, etc.). This initial attempt was further refined during IODP Expeditions 337 (Deep Coalbed Biosphere off Shimokita), 338 (NanTroSEIZE Stage 3: NanTroSEIZE Plate Boundary Deep Riser 2) and finally 348 (NanTroSEIZE Stage 3: NanTroSEIZE Plate Boundary Deep Riser 3).
Although still in its development stage for scientific application, this technique can provide a valuable suite of measurements to complement more traditional IODP shipboard measurements. Here we present unpublished data from IODP Expeditions 338 and 348, penetrating the Nankai accretionary wedge to 3058.5 meters below seafloor. Increasing mud density decreased degasser efficiency, especially for the higher hydrocarbons. Relative variations in total gas with depth were blurred, which was confirmed by comparison with headspace gas concentrations from the cored interval. Theoretically, overpressured zones in the formation can be identified through C2/C3 ratios, but these ratios are strongly affected by changing drilling parameters. Proper mud gas evaluations will need to consider the effects of variable drilling parameters carefully when designing experiments and interpreting the data.
Ariyama, Kaoru; Horita, Hiroshi; Yasui, Akemi
2004-09-22
The composition of concentration ratios of 19 inorganic elements to Mg (hereinafter referred to as the 19-element/Mg composition) was used with chemometric techniques to determine the geographic origin (Japan or China) of Welsh onions (Allium fistulosum L.). Using a composition of element ratios has the advantage of simplified sample preparation, making it possible to determine the geographic origin of a Welsh onion within 2 days. The classical technique based on 20 element concentrations was also used alongside the new, simpler one based on the 19-element/Mg composition in order to validate the new technique. Twenty elements, Na, P, K, Ca, Mg, Mn, Fe, Cu, Zn, Sr, Ba, Co, Ni, Rb, Mo, Cd, Cs, La, Ce, and Tl, in 244 Welsh onion samples were analyzed by flame atomic absorption spectroscopy, inductively coupled plasma atomic emission spectrometry, and inductively coupled plasma mass spectrometry. Linear discriminant analysis (LDA) was applied to the 20-element concentrations and to the 19-element/Mg composition, and soft independent modeling of class analogy (SIMCA) was applied to the 19-element/Mg composition. The results showed that techniques based on the 19-element/Mg composition were effective. LDA based on the 19-element/Mg composition, for classification of samples from Japan and from Shandong, Shanghai, and Fujian in China, correctly classified 97% of the 101 samples used for modeling and correctly predicted 93% of another 119 samples (excluding 24 nonauthentic samples). In ten repetitions of SIMCA based on the 19-element/Mg composition, modeled using the 101 samples, 92% of 220 samples from known production areas (including the modeling samples and excluding the 24 nonauthentic samples) were correctly predicted.
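The ratio-based feature construction described above (divide each element concentration by Mg, then classify) can be sketched in a few lines. The element names, values, and the nearest-centroid classifier below are illustrative stand-ins, not the study's LDA/SIMCA models or data:

```python
# Sketch: build element/Mg ratio features and classify by nearest class centroid.
# All element names and concentrations here are made up for illustration.

def ratio_features(sample, ref="Mg"):
    """Divide every element concentration by the reference (Mg) concentration."""
    return {el: v / sample[ref] for el, v in sample.items() if el != ref}

def centroid(samples):
    """Mean feature vector of a list of feature dicts with identical keys."""
    keys = samples[0].keys()
    return {k: sum(s[k] for s in samples) / len(samples) for k in keys}

def classify(sample, centroids):
    """Assign the class whose centroid is closest in ratio space (Euclidean)."""
    def dist(a, b):
        return sum((a[k] - b[k]) ** 2 for k in a) ** 0.5
    return min(centroids, key=lambda c: dist(sample, centroids[c]))
```

A toy usage: build centroids from a few labeled ratio vectors per origin, then classify a new sample's ratio vector against them.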
Improvement Technology Classification and Composition in Multimodel Environments
2008-03-01
[Table excerpt, partially recoverable] Practice elements: CMMI PAs and PLA; ISO 15504; ISO 12207; COBIT; ITIL; SOX; EFQM; ISO 9001; ISO 61508; ISO 16949; and others. Improvement method elements: change management techniques such as IDEAL and Six Sigma.
NASA Astrophysics Data System (ADS)
Matar, Thiombane; Vivo Benedetto, De; Albanese, Stefano; Martín-Fernández, Josep-Antoni; Lima, Annamaria; Doherty, Angela
2017-04-01
The Sarno River Basin (south-west Italy), nestled between the Somma-Vesuvius volcanic complex and the limestone formations of the Campania-Apennine Chain, is one of the most polluted river basins in Europe due to a high rate of industrialization and intensive agriculture. Water from the Sarno River, which is heavily contaminated by the discharge of human and industrial waste, is partially used for irrigation on the surrounding agricultural fields. We apply compositional data analysis to 319 samples collected during two field campaigns along the river course, and throughout the basin, to determine the level and potential origin (anthropogenic and/or geogenic) of the potentially toxic elements (PTEs). The concentrations of 53 elements were determined by ICP-MS and subsequently log-transformed. Using a clr-biplot and principal factor analysis, the variability and the correlations between a subset of extracted variables (26 elements) were identified. Using both normalized raw data and clr-transformed coordinates, factor association interpolated maps were generated to better visualize the distribution and potential sources of the PTEs in the Sarno Basin. The underlying geological substrata appear to be associated with raised levels of Na, K, P, Rb, Ba, V, Co, B, Zr, and Li, due to the presence of pyroclastic rocks from Mt. Somma-Vesuvius. Similarly, elevated Pb, Zn, Cd, and Hg concentrations are most likely related to both geological and anthropogenic sources: the underlying volcanic rocks and contamination from fossil fuel combustion associated with urban centers. Interpolated factor score maps and the clr-biplot indicate a clear correlation between Ni and Cr in samples taken along the Sarno River, and between Ca and Mg near the Solofra district. After considering nearby anthropogenic sources, Ni and Cr were attributed to the Solofra tannery industry, while Ca and Mg correlate with the underlying limestone-rich soils of the area.
This study shows the applicability of compositional data analysis transformations, which reveal relationships and dependencies between elements that can be lost when univariate and classical multivariate analyses are applied to untransformed data. Keywords: Sarno Basin, PTEs, compositional data analysis, centred log-ratio transformation (clr), biplot, factor analysis, ArcGIS
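The centred log-ratio (clr) transformation named in the keywords has a compact standard definition: the log of each part divided by the geometric mean of all parts. A minimal sketch, assuming a simple list of strictly positive compositional parts:

```python
import math

def clr(composition):
    """Centred log-ratio transform: log of each part over the geometric mean
    of all parts. The transformed values always sum to zero."""
    logs = [math.log(x) for x in composition]
    gmean_log = sum(logs) / len(logs)  # log of the geometric mean
    return [l - gmean_log for l in logs]
```

Because the clr coordinates sum to zero, they remove the unit-sum constraint that distorts correlations computed on raw compositional data.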
Senathirajah, Yalini; Kaufman, David; Bakken, Suzanne
2016-01-01
Challenges in the design of electronic health records (EHRs) include designing usable systems that must meet the complex, rapidly changing, and high-stakes information needs of clinicians. The ability to move and assemble elements together on the same page has significant human-computer interaction (HCI) and efficiency advantages, and can mitigate the problems of negotiating multiple fixed screens and the associated cognitive burdens. We compare MedWISE, a novel EHR that supports user-composable displays, with a conventional EHR in terms of the number of repeat views of data elements during patient case appraisal. The study used mixed methods to examine clinical data viewing in four patient cases, comparing an experimental user-composable EHR with a conventional EHR for case appraisal. Eleven clinicians used the user-composable EHR in a case appraisal task in a laboratory setting; this was compared with log file analysis of the same patient cases in the conventional EHR. We investigated the number of repeat views of the same clinical information during a session in these two contexts and compared them using Fisher's exact test. There was a significant difference (p < .0001) in the proportion of cases with repeat data element viewing between the user-composable EHR (14.6 percent) and the conventional EHR (72.6 percent). Users of conventional EHRs repeatedly viewed the same information elements in the same session, as revealed by log files. Our findings are consistent with the hypothesis that conventional systems require the user to view many screens and remember information between screens, causing the user to forget information and to have to access it a second time. Other mechanisms (such as reduction in navigation over a population of users due to interface sharing, and information selection) may also contribute to increased efficiency in the experimental system.
Systems that take a composable approach, letting the user gather any desired information elements together on the same screen, may confer cognitive support benefits that increase productive use by reducing information fragmentation. By reducing cognitive overload, such systems can also enhance the user experience.
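The Fisher's exact test used above to compare repeat-view proportions has a standard 2x2 form. A minimal pure-Python sketch of the two-sided test (an illustration of the statistic, not the authors' analysis code):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].
    Sums the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed table."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p(x):  # probability that the top-left cell equals x
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    # small tolerance so floating-point ties with p_obs are included
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs + 1e-12)
```

On Fisher's classic tea-tasting margins (4/4 in an 8-trial table), the table [[3,1],[1,3]] gives the well-known two-sided p of 34/70.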
Department of Defense Access to Intellectual Property for Weapon Systems Sustainment
2017-05-01
… and acquiring technical data rights … The cost-benefit analysis of including a priced contract option for the future delivery of technical data … entail in terms of costs and benefits, while one of the activities to be finalized is the contract-specific technical data elements. … Virginia 22311-1882, May 2017. Approved for public release; distribution is unlimited. IDA Paper P-8266, Log: H 17-000030.
Proceedings of the symposium on the ecology and management of dead wood in western forests
William F. Laudenslayer; Patrick J. Shea; Bradley E. Valentine; C. Phillip Weatherspoon; Thomas E. Lisle
2002-01-01
Dead trees, both snags (standing dead trees) and logs (downed dead trees), are critical elements of healthy and productive forests. The "Symposium on the Ecology and Management of Dead Wood in Western Forests" was convened to bring together forest researchers and managers to share the current state of knowledge relative to the values and interactions of dead wood to...
Control of Structure in Turbulent Flows: Bifurcating and Blooming Jets.
1987-10-10
injected through computational boundaries, (2) to satisfy no-slip boundary conditions, or (3) during grid refinement when one element may be split... use of fast Poisson solvers on a mesh of M grid points, the operation count for this step can approach O(M log M). Additional required steps are (1)... consider three-dimensional perturbations to the Stuart vortices. The linear stability calculations of Pierrehumbert & Widnall [10] are available for
Blaya, Josefa; Lloret, Eva; Santísima-Trinidad, Ana B; Ros, Margarita; Pascual, Jose A
2016-04-01
Currently, real-time polymerase chain reaction (qPCR) is the technique most often used to quantify pathogen presence. Digital PCR (dPCR) is a new technique with the potential to have a substantial impact on plant pathology research owing to its reproducibility, sensitivity and low susceptibility to inhibitors. In this study, we evaluated the feasibility of using dPCR and qPCR to quantify Phytophthora nicotianae in several background matrices, including host tissues (stems and roots) and soil samples. In spite of the low dynamic range of dPCR (3 logs compared with 7 logs for qPCR), this technique proved to have very high precision at very low copy numbers. The dPCR was able to detect the pathogen accurately in all types of samples over a broad concentration range. Moreover, dPCR seems to be less susceptible to inhibitors than qPCR in plant samples. Linear regression analysis showed a high correlation between the results obtained with the two techniques in soil, stem and root samples, with R(2) = 0.873, 0.999 and 0.995 respectively. These results suggest that dPCR is a promising alternative for quantifying soil-borne pathogens in environmental samples, even in early stages of the disease. © 2015 Society of Chemical Industry.
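The agreement check between the two techniques reduces to a least-squares line and its coefficient of determination; a minimal sketch (the paired measurements in the test are hypothetical, not the study's data):

```python
def r_squared(x, y):
    """Coefficient of determination R^2 for the least-squares line y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx            # slope
    b = my - a * mx          # intercept
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot
```

An R² near 1, as reported for the stem and root samples, indicates the two quantification techniques move together almost perfectly.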
Effects of In and Ni Addition on Microstructure of Sn-58Bi Solder Joint
NASA Astrophysics Data System (ADS)
Mokhtari, Omid; Nishikawa, Hiroshi
2014-11-01
In this study, the effects of adding 0.5 wt.% and 1 wt.% In and Ni to Sn-58Bi solder on the intermetallic compound (IMC) layers at the interface and on the microstructure of the solder alloys were investigated during reflow and thermal aging by scanning electron microscopy and electron probe micro-analysis. The results showed that the addition of minor elements was not effective in suppressing IMC growth during reflow; however, the addition of 0.5 wt.% In and Ni was effective in suppressing IMC layer growth during thermal aging. The thickening kinetics of the total IMC layer was analyzed by plotting the mean thickness versus the aging time on log-log coordinates, and the results revealed a transition from a grain-boundary diffusion control to a volume diffusion control mechanism. The results also showed that the minor addition of In can significantly suppress the coarsening of the Bi phase.
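The log-log analysis of thickening kinetics amounts to estimating the growth exponent n in thickness = k·tⁿ as the slope of log(thickness) versus log(time). A sketch under the assumption of a simple power-law fit (the data in the test are synthetic, not the paper's measurements):

```python
import math

def loglog_slope(times, thickness):
    """Least-squares slope of log(thickness) vs log(time), i.e. the growth
    exponent n in thickness = k * t**n."""
    xs = [math.log(t) for t in times]
    ys = [math.log(h) for h in thickness]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

A change in this slope between aging regimes is what signals a transition between diffusion-controlled growth mechanisms.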
Karaton, Muhammet
2014-01-01
A beam-column element based on the Euler-Bernoulli beam theory is investigated for nonlinear dynamic analysis of reinforced concrete (RC) structural elements. The stiffness matrix of this element is obtained using the rigidity method. A solution technique that includes a nonlinear dynamic substructure procedure is developed for dynamic analyses of RC frames. A predictor-corrector form of the Bossak-α method is applied as the dynamic integration scheme. Experimental data for an RC column element are compared with numerical results obtained from the proposed solution technique to verify the numerical solutions. Furthermore, nonlinear cyclic analysis results for a reinforced concrete portal frame are compared between the proposed solution technique and a fibre-element formulation based on the flexibility method. In addition, seismic damage analyses of an 8-story RC frame structure with a soft story are performed for cases of lumped and distributed mass and load. Damage regions, propagation, and intensities according to both approaches are examined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Madhavi Z.; Glasgow, David C.; Tschaplinski, Timothy J.
The black cottonwood poplar (Populus trichocarpa) leaf ionome (inorganic trace elements and mineral nutrients) is an important aspect for determining the physiological and developmental processes contributing to biomass production. A number of techniques are used to measure the ionome, yet characterizing the leaf spatial heterogeneity remains a challenge, especially in solid samples. Laser-induced breakdown spectroscopy (LIBS) has been used to determine the elemental composition of leaves and is able to raster across solid matrixes at 10 μm resolution. Here, we evaluate the use of LIBS for solid-sample leaf elemental characterization in relation to neutron activation. Neutron activation analysis is a laboratory-based technique used by the National Institute of Standards and Technology (NIST) to certify trace elements in candidate reference materials, including plant leaf matrices. An introduction to the techniques used in this research is presented in this manuscript. Neutron activation analysis (NAA) data have been correlated with the LIBS spectra to achieve quantification of the elements or ions present within poplar leaves. The regression coefficients of calibration and validation using multivariate analysis (MVA) methodology for six out of seven elements vary between 0.810 and 0.998. LIBS and NAA data are presented for elements such as calcium, magnesium, manganese, aluminum, copper, and potassium. Chlorine was also detected but did not show good correlation between the LIBS and NAA techniques. This research shows that LIBS can be used as a fast, high-spatial-resolution technique to quantify elements as part of large-scale field phenotyping projects.
NASA Astrophysics Data System (ADS)
Martin, Madhavi Z.; Glasgow, David C.; Tschaplinski, Timothy J.; Tuskan, Gerald A.; Gunter, Lee E.; Engle, Nancy L.; Wymore, Ann M.; Weston, David J.
2017-12-01
Analysis of three tests of the unconfined aquifer in southern Nassau County, Long Island, New York
Lindner, J.B.; Reilly, T.E.
1982-01-01
Drawdown and recovery data from three 2-day aquifer tests of the unconfined (water-table) aquifer in southern Nassau County, N.Y., during the fall of 1979 were analyzed. Several simple analytical solutions, a type-curve-matching procedure, and a Galerkin finite-element radial-flow model were used to determine hydraulic conductivity, the ratio of horizontal to vertical hydraulic conductivity, and specific yield. Results of the curve-matching procedure covered a broad range of values that could be narrowed through consideration of data from other sources such as published reports, drillers' logs, or values determined by analytical solutions. Analysis by the radial-flow model was preferred because it allows for vertical variability in aquifer properties and solves the system for all observation points simultaneously, whereas the other techniques treat the aquifer as homogeneous and must treat each observation well separately. All methods produced fairly consistent results. The ranges of aquifer values at the three sites were: horizontal hydraulic conductivity, 140 to 380 feet per day; transmissivity, 11,200 to 17,100 feet squared per day; ratio of horizontal to vertical hydraulic conductivity, 2.4:1 to 7:1; and specific yield, 0.13 to 0.23. (USGS)
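The abstract does not name which "simple analytical solutions" were used; the Theis solution is the textbook choice for transient drawdown, so the sketch below implements that standard formula as an assumption, not the study's actual method:

```python
import math

def well_function(u, terms=60):
    """Theis well function W(u) = -Ei(-u), via its convergent series:
    W(u) = -gamma - ln(u) + sum_{k>=1} (-1)**(k+1) * u**k / (k * k!)."""
    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    total = -gamma - math.log(u)
    fact = 1.0
    for k in range(1, terms + 1):
        fact *= k
        total += (-1) ** (k + 1) * u ** k / (k * fact)
    return total

def theis_drawdown(Q, T, S, r, t):
    """Drawdown at radius r and time t for pumping rate Q, transmissivity T,
    and storage coefficient S (all in consistent units)."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)
```

As expected physically, drawdown computed this way decreases with distance from the pumped well.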
Perspectives for on-line analysis of bauxite by neutron irradiation
NASA Astrophysics Data System (ADS)
Beurton, Gabriel; Ledru, Bertrand; Letourneur, Philippe
1995-03-01
The interest in bauxite as a major source of alumina results in a strong demand for on-line instrumentation suitable for sorting, blending, and processing operations at the bauxite mine and for monitoring instrumentation in the Bayer process. The results of laboratory experiments based on neutron interactions with bauxite are described. The technique was chosen in order to overcome the problem of spatial heterogeneity in bulk mineral analysis. The evaluated elements contributed to approximately 99.5% of the sample weight. In addition, the measurements provide valuable information on physical parameters such as density, hygrometry, and material flow. Using a pulsed generator, the analysis system offers potential for on-line measurements (borehole logging or conveyor belt). An overall description of the experimental set-up is given. The experimental data include measurements of natural radioactivity, delayed radioactivity induced by activation, and prompt gamma rays following neutron reaction. In situ applications of neutron interactions provide continuous analysis and produce results which are more statistically significant. The key factors contributing to advances in industrial applications are the development of high count rate gamma spectroscopy and computational tools to design measurement systems and interpret their results.
Collisional-radiative switching - A powerful technique for converging non-LTE calculations
NASA Technical Reports Server (NTRS)
Hummer, D. G.; Voels, S. A.
1988-01-01
A very simple technique has been developed to converge statistical equilibrium and model atmospheric calculations in extreme non-LTE conditions when the usual iterative methods fail to converge from an LTE starting model. The proposed technique is based on a smooth transition from a collision-dominated LTE situation to the desired non-LTE conditions in which radiation dominates, at least in the most important transitions. The proposed approach was used to successfully compute stellar models with He abundances of 0.20, 0.30, and 0.50; Teff = 30,000 K, and log g = 2.9.
Fuls, Janice L.; Rodgers, Nancy D.; Fischler, George E.; Howard, Jeanne M.; Patel, Monica; Weidner, Patrick L.; Duran, Melani H.
2008-01-01
Antimicrobial hand soaps provide a greater bacterial reduction than nonantimicrobial soaps. However, the link between greater bacterial reduction and a reduction of disease has not been definitively demonstrated. Confounding factors, such as compliance, soap volume, and wash time, may all influence the outcomes of studies. The aim of this work was to examine the effects of wash time and soap volume on the relative activities and the subsequent transfer of bacteria to inanimate objects for antimicrobial and nonantimicrobial soaps. Increasing the wash time from 15 to 30 seconds increased reduction of Shigella flexneri from 2.90 to 3.33 log10 counts (P = 0.086) for the antimicrobial soap, while nonantimicrobial soap achieved reductions of 1.72 and 1.67 log10 counts (P > 0.6). Increasing soap volume increased bacterial reductions for both the antimicrobial and the nonantimicrobial soaps. When the soap volume was normalized based on weight (∼3 g), nonantimicrobial soap reduced Serratia marcescens by 1.08 log10 counts, compared to the 3.83-log10 reduction caused by the antimicrobial soap (P < 0.001). The transfer of Escherichia coli to plastic balls following a 15-second hand wash with antimicrobial soap resulted in a bacterial recovery of 2.49 log10 counts, compared to the 4.22-log10 (P < 0.001) bacterial recovery on balls handled by hands washed with nonantimicrobial soap. This indicates that nonantimicrobial soap was less active and that the effectiveness of antimicrobial soaps can be improved with longer wash time and greater soap volume. The transfer of bacteria to objects was significantly reduced due to greater reduction in bacteria following the use of antimicrobial soap. PMID:18441107
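The log10 reductions reported above follow directly from before/after plate counts; a minimal sketch (the counts in the test are illustrative, not the study's data):

```python
import math

def log10_reduction(before_cfu, after_cfu):
    """Log reduction = log10(before) - log10(after)."""
    return math.log10(before_cfu) - math.log10(after_cfu)

def percent_reduction(log_red):
    """Convert a log10 reduction to the percentage of organisms removed."""
    return (1.0 - 10.0 ** (-log_red)) * 100.0
```

For example, a 3-log reduction corresponds to removing 99.9% of the starting organisms.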
Yoo, Sungyul; Ghafoor, Kashif; Kim, Jeong Un; Kim, Sanghun; Jung, Bora; Lee, Dong-Un; Park, Jiyong
2015-06-01
Nonpasteurized orange juice is manufactured by squeezing juice from fruit without peel removal. Fruit surfaces may carry pathogenic microorganisms that can contaminate squeezed juice. Titanium dioxide-UVC photocatalysis (TUVP), a nonthermal technique capable of microbial inactivation via generation of hydroxyl radicals, was used to decontaminate orange surfaces. Levels of spot-inoculated Escherichia coli O157:H7 (initial level of 7.0 log CFU/cm(2)) on oranges (12 cm(2)) were reduced by 4.3 log CFU/ml when treated with TUVP (17.2 mW/cm(2)). Reductions of 1.5, 3.9, and 3.6 log CFU/ml were achieved using tap water, chlorine (200 ppm), and UVC alone (23.7 mW/cm(2)), respectively. E. coli O157:H7 in juice from TUVP (17.2 mW/cm(2))-treated oranges was reduced by 1.7 log CFU/ml. After orange juice was treated with high hydrostatic pressure (HHP) at 400 MPa for 1 min without any prior fruit surface disinfection, the level of E. coli O157:H7 was reduced by 2.4 log CFU/ml. However, the E. coli O157:H7 level in juice was reduced by 4.7 log CFU/ml (to lower than the detection limit) when TUVP treatment of oranges was followed by HHP treatment of juice, indicating a synergistic inactivation effect. The inactivation kinetics of E. coli O157:H7 on orange surfaces followed a biphasic model. HHP treatment did not affect the pH, °Brix, or color of juice. However, the ascorbic acid concentration and pectinmethylesterase activity were reduced by 35.1 and 34.7%, respectively.
Levin, Iris I.; Zonana, David M.; Burt, John M.; Safran, Rebecca J.
2015-01-01
Proximity logging is a new tool for understanding social behavior as it allows for accurate quantification of social networks. We report results from field calibration and deployment tests of miniaturized proximity tags (Encounternet), digital transceivers that log encounters between tagged individuals. We examined radio signal behavior in relation to tag attachment (tag, tag on bird, tag on saline-filled balloon) to understand how radio signal strength is affected by the tag mounting technique used for calibration tests. We investigated inter-tag and inter-receiver station variability, and in each calibration test we accounted for the effects of antennae orientation. Additionally, we used data from a live deployment on breeding barn swallows (Hirundo rustica erythrogaster) to analyze the quality of the logs, including reciprocal agreement in dyadic logs. We evaluated the impact (in terms of mass changes) of tag attachment on the birds. We were able to statistically distinguish between RSSI values associated with different close-proximity (<5 m) tag-tag distances regardless of antennae orientation. Inter-tag variability was low, but we did find significant inter-receiver station variability. Reciprocal agreement of dyadic logs was high, and social networks were constructed from proximity tag logs based on two different RSSI thresholds. There was no evidence of significant mass loss in the time birds were wearing tags. We conclude that proximity loggers are accurate and effective for quantifying social behavior. However, because RSSI and distance cannot be perfectly resolved, data from proximity loggers are most appropriate for comparing networks based on specific RSSI thresholds. The Encounternet system is flexible and customizable, and tags are now light enough for use on small animals (<50 g). PMID:26348329
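Building a network from dyadic proximity logs at a chosen RSSI threshold can be sketched as follows; the log format (tag pairs with an RSSI value in dBm) is a simplifying assumption, not the Encounternet data schema:

```python
def edges_from_logs(logs, rssi_threshold):
    """Build an undirected edge set from dyadic proximity logs.
    Each log entry is (tag_a, tag_b, rssi); a pair becomes an edge only if
    at least one encounter between them meets the RSSI threshold."""
    edges = set()
    for a, b, rssi in logs:
        if rssi >= rssi_threshold:
            edges.add(frozenset((a, b)))  # frozenset makes A-B == B-A
    return edges
```

Sweeping the threshold produces the family of networks the authors recommend comparing, since each threshold corresponds roughly to a maximum encounter distance.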
Evaluation of a transfinite element numerical solution method for nonlinear heat transfer problems
NASA Technical Reports Server (NTRS)
Cerro, J. A.; Scotti, S. J.
1991-01-01
Laplace transform techniques have been widely used to solve linear, transient field problems. A transform-based algorithm enables calculation of the response at selected times of interest without the need for stepping in time as required by conventional time integration schemes. The elimination of time stepping can substantially reduce computer time when transform techniques are implemented in a numerical finite element program. The coupling of transform techniques with spatial discretization techniques such as the finite element method has resulted in what are known as transfinite element methods. Recently attempts have been made to extend the transfinite element method to solve nonlinear, transient field problems. This paper examines the theoretical basis and numerical implementation of one such algorithm, applied to nonlinear heat transfer problems. The problem is linearized and solved by requiring a numerical iteration at selected times of interest. While shown to be acceptable for weakly nonlinear problems, this algorithm is ineffective as a general nonlinear solution method.
Image processing system for the measurement of timber truck loads
NASA Astrophysics Data System (ADS)
Carvalho, Fernando D.; Correia, Bento A. B.; Davies, Roger; Rodrigues, Fernando C.; Freitas, Jose C. A.
1993-01-01
The paper industry uses wood as its raw material. To know the solid volume of wood stocked in the plant, every truck load of sawn tree trunks entering the plant is measured to determine its volume. Weighing the tree trunks is problematic because of their high capacity for absorbing water, so image processing techniques were used instead to evaluate the volume of a truck load of logs. The system is based on a PC equipped with an image processing board using data flow processors. Three cameras allow image acquisition of the sides and rear of the truck. The lateral images contain information about the sectional area of the logs, and the rear image contains information about the length of the logs. The machine vision system and the implemented algorithms are described. The results being obtained with the industrial prototype now installed in a paper mill are also presented.
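The measurement principle combines the lateral cross-sectional area with the log length from the rear image. A toy sketch of that area-times-length estimate; the pixel scale, packing fraction, and function name are assumptions for illustration, not the authors' algorithm:

```python
def load_volume(side_area_pixels, px_area_m2, log_length_m, packing_fraction=1.0):
    """Rough volume estimate for a truck load of logs:
    visible cross-sectional area from a side camera (pixel count times the
    area of one pixel in square metres) multiplied by the log length taken
    from the rear camera. packing_fraction discounts air gaps between logs;
    its value here is a guess, not a calibrated constant."""
    return side_area_pixels * px_area_m2 * log_length_m * packing_fraction
```

In practice the pixel-to-metre scale and the packing fraction would both need camera calibration against loads of known volume.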
Pang, Susan; Cowen, Simon
2017-12-13
We describe a novel generic method to derive the unknown endogenous concentration of analyte within complex biological matrices (e.g. serum or plasma), based upon the relationship between the immunoassay signal response of a biological test sample spiked with known analyte concentrations and the log-transformed estimated total concentration. If the estimated total analyte concentration is correct, a portion of the sigmoid on a log-log plot is very close to linear, allowing the unknown endogenous concentration to be estimated using a numerical method. This approach obviates conventional relative quantification using an internal standard curve and the need for calibrant diluent, and takes into account the individual matrix interference on the immunoassay by spiking the test sample itself. The technique is based on standard additions for chemical analytes. Unknown endogenous analyte concentrations within even 2-fold diluted human plasma may be determined reliably using as few as four reaction wells.
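The numerical step described above — finding the endogenous level that makes the log-log response most nearly linear — can be sketched as a simple grid search. This is a minimal illustration with synthetic data; the helper name, the candidate grid, and the power-law signal model are assumptions, not the authors' implementation:

```python
import numpy as np

def estimate_endogenous(added, signal, candidates):
    """Standard-additions sketch: choose the endogenous concentration c0
    for which log(signal) vs log(c0 + added) is most nearly linear."""
    best_c0, best_sse = None, np.inf
    for c0 in candidates:
        x = np.log(c0 + added)       # log of the estimated total concentration
        y = np.log(signal)
        # residual sum of squares of a straight-line fit: small = nearly linear
        _, residuals, *_ = np.polyfit(x, y, 1, full=True)
        if residuals[0] < best_sse:
            best_c0, best_sse = c0, residuals[0]
    return best_c0

# synthetic example: true endogenous level 5 units, signal rising as total**0.8
added = np.array([0.0, 2.0, 5.0, 10.0])   # spiked concentrations (four wells)
signal = (5.0 + added) ** 0.8
c0_hat = estimate_endogenous(added, signal, np.linspace(0.1, 20.0, 400))
```

With only four wells, as in the abstract, the recovered `c0_hat` lands close to the simulated endogenous value of 5 for this noiseless example.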
Gavino, V C; Milo, G E; Cornwell, D G
1982-03-01
Image analysis was used for the automated measurement of colony frequency (f) and colony diameter (d) in cultures of smooth muscle cells. Initial studies with the inverted microscope showed that the number of cells (N) in a colony varied directly with d: log N = 1.98 log d - 3.469. Image analysis generated the complement of a cumulative distribution for f as a function of d. The number of cells in each segment of the distribution function was calculated by multiplying f and the average N for the segment. These data were displayed as a cumulative distribution function. The total number of colonies (fT) and the total number of cells (NT) were used to calculate the average colony size (NA). Population doublings (PD) were then expressed as log2 NA. Image analysis confirmed previous studies in which colonies were sized and counted with an inverted microscope. Thus, image analysis is a rapid and automated technique for the measurement of clonal growth.
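The calibration and bookkeeping in this abstract reduce to a few lines. A minimal sketch (function names are illustrative; diameter units follow whatever the original calibration used):

```python
import math

def cells_per_colony(d):
    """The study's empirical calibration: log10 N = 1.98 * log10(d) - 3.469."""
    return 10 ** (1.98 * math.log10(d) - 3.469)

def population_doublings(colony_diameters):
    """Average colony size NA = NT / fT, then PD = log2(NA)."""
    counts = [cells_per_colony(d) for d in colony_diameters]
    n_total = sum(counts)                 # NT, total number of cells
    f_total = len(colony_diameters)       # fT, total number of colonies
    return math.log2(n_total / f_total)   # PD
```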
IMPLEMENTING A NOVEL CYCLIC CO2 FLOOD IN PALEOZOIC REEFS
DOE Office of Scientific and Technical Information (OSTI.GOV)
James R. Wood; W. Quinlan; A. Wylie
2003-07-01
Recycled CO2 will be used in this demonstration project to produce bypassed oil from the Silurian Charlton 6 pinnacle reef (Otsego County) in the Michigan Basin. Contract negotiations by our industry partner to gain access to this CO2, which would otherwise be vented to the atmosphere, are near completion. A new method of subsurface characterization, log curve amplitude slicing, is being used to map facies distributions and reservoir properties in two reefs, the Belle River Mills and Chester 18 Fields. The Belle River Mills and Chester 18 fields are being used as type fields because they have excellent log-curve and core data coverage. Amplitude slicing of the normalized gamma ray curves is showing trends that may indicate significant heterogeneity and compartmentalization in these reservoirs. Digital and hard copy data continue to be compiled for the Niagaran reefs in the Michigan Basin. Technology transfer took place through technical presentations on the log curve amplitude slicing technique and a booth at the Midwest PTTC meeting.
Taslimitehrani, Vahid; Dong, Guozhu; Pereira, Naveen L; Panahiazar, Maryam; Pathak, Jyotishman
2016-04-01
Computerized survival prediction in healthcare, by identifying the risk of disease mortality, helps healthcare providers to effectively manage their patients by providing appropriate treatment options. In this study, we propose to apply a classification algorithm, Contrast Pattern Aided Logistic Regression (CPXR(Log)) with a probabilistic loss function, to develop and validate prognostic risk models to predict 1-, 2-, and 5-year survival in heart failure (HF) using data from electronic health records (EHRs) at Mayo Clinic. CPXR(Log) constructs a pattern-aided logistic regression model defined by several patterns and corresponding local logistic regression models. One of the models generated by CPXR(Log) achieved an AUC and accuracy of 0.94 and 0.91, respectively, and significantly outperformed prognostic models reported in prior studies. Data extracted from EHRs allowed incorporation of patient co-morbidities into our models, which helped improve the performance of the CPXR(Log) models (15.9% AUC improvement), although it did not improve the accuracy of the models built by other classifiers. We also propose a probabilistic loss function to distinguish large-error from small-error instances. The new loss function used in the algorithm outperforms the functions used in previous studies, with a 1% improvement in AUC. This study revealed that building prediction models from EHR data can be very challenging with existing classification methods because of the high dimensionality and complexity of EHR data. The risk models developed by CPXR(Log) also reveal that HF is a highly heterogeneous disease, i.e., different subgroups of HF patients require different types of considerations in their diagnosis and treatment.
Our risk models provided two valuable insights for application of predictive modeling techniques in biomedicine: Logistic risk models often make systematic prediction errors, and it is prudent to use subgroup based prediction models such as those given by CPXR(Log) when investigating heterogeneous diseases. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Dahing, Lahasen@Normanshah; Yahya, Redzuan; Yahya, Roslan; Hassan, Hearie
2014-09-01
In this study, the principle of prompt gamma neutron activation analysis was used as a technique to determine the elements in a sample. The system consists of a collimated isotopic neutron source, Cf-252, with an HPGe detector and a multichannel analyzer (MCA). Concrete blocks of 10×10×10 cm³ and 15×15×15 cm³ were analysed as samples. When neutrons enter and interact with elements in the concrete, neutron capture reactions occur and produce characteristic prompt gamma rays of those elements. The preliminary results of this study demonstrate that the major elements in the concrete, such as Si, Mg, Ca, Al, Fe and H, as well as other elements such as Cl, can be determined by analysing the respective gamma ray lines. The results obtained were compared with NAA and XRF techniques for reference and validation. The potential and capability of neutron-induced prompt gamma rays as a tool for qualitative multi-elemental analysis to identify the elements present in a concrete sample are discussed.
The Plant Ionome Revisited by the Nutrient Balance Concept
Parent, Serge-Étienne; Parent, Léon Etienne; Egozcue, Juan José; Rozane, Danilo-Eduardo; Hernandes, Amanda; Lapointe, Line; Hébert-Gentile, Valérie; Naess, Kristine; Marchand, Sébastien; Lafond, Jean; Mattos, Dirceu; Barlow, Philip; Natale, William
2013-01-01
Tissue analysis is commonly used in ecology and agronomy to portray plant nutrient signatures. Nutrient concentration data, or ionomes, belong to the compositional data class, i.e., multivariate data that are proportions of some whole, hence carrying important numerical properties. Statistics computed across raw or ordinary log-transformed nutrient data are intrinsically biased, hence possibly leading to wrong inferences. Our objective was to present a sound and robust approach based on a novel nutrient balance concept to classify plant ionomes. We analyzed leaf N, P, K, Ca, and Mg of two wild and six domesticated fruit species from Canada, Brazil, and New Zealand sampled during reproductive stages. Nutrient concentrations were (1) analyzed without transformation, (2) ordinary log-transformed as commonly but incorrectly applied in practice, (3) additive log-ratio (alr) transformed as surrogate to stoichiometric rules, and (4) converted to isometric log-ratios (ilr) arranged as sound nutrient balance variables. Raw concentration and ordinary log transformation both led to biased multivariate analysis due to redundancy between interacting nutrients. The alr- and ilr-transformed data provided unbiased discriminant analyses of plant ionomes, where wild and domesticated species formed distinct groups and the ionomes of species and cultivars were differentiated without numerical bias. The ilr nutrient balance concept is preferable to alr, because the ilr technique projects the most important interactions between nutrients into a convenient Euclidean space. This novel numerical approach allows rectifying historical biases and supervising phenotypic plasticity in plant nutrition studies. PMID:23526060
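The alr and ilr transformations compared in the abstract above are short to write down. A sketch of both, using a standard pivot-coordinate ilr basis (the example concentrations are illustrative only, not the study's data):

```python
import numpy as np

def closure(x):
    """Rescale a composition so its parts sum to 1."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def alr(x):
    """Additive log-ratio: each part against the last part."""
    x = closure(x)
    return np.log(x[:-1] / x[-1])

def ilr(x):
    """Isometric log-ratio in one standard orthonormal (balance) basis."""
    x = closure(x)
    D = len(x)
    out = np.empty(D - 1)
    for j in range(1, D):
        gm = np.exp(np.log(x[:j]).mean())           # geometric mean of first j parts
        out[j - 1] = np.sqrt(j / (j + 1.0)) * np.log(gm / x[j])
    return out

# a hypothetical leaf N-P-K-Ca-Mg ionome (percent dry mass) as ilr balances
balances = ilr([2.1, 0.3, 1.5, 0.9, 0.25])
```

Because both transforms are scale-invariant, statistics computed on the balances are free of the closure bias the abstract describes; the ilr coordinates additionally live in an unconstrained Euclidean space.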
An orientation soil survey at the Pebble Cu-Au-Mo porphyry deposit, Alaska
Smith, Steven M.; Eppinger, Robert G.; Fey, David L.; Kelley, Karen D.; Giles, S.A.
2009-01-01
Soil samples were collected in 2007 and 2008 along three traverses across the giant Pebble Cu-Au-Mo porphyry deposit. Within each soil pit, four subsamples were collected following recommended protocols for each of ten commonly-used and proprietary leach/digestion techniques. The significance of geochemical patterns generated by these techniques was classified by visual inspection of plots showing individual element concentrations by each analytical method along the 2007 traverse. A simple element-versus-method matrix, populated with values based on this significance classification, provides a means of ranking the utility of methods and elements at this deposit. The interpretation of a complex multi-element dataset derived from multiple analytical techniques is challenging. An example of vanadium results from a single leach technique is used to illustrate several possible interpretations of the data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ravelo Arias, S. I.; Ramírez Muñoz, D.; Cardoso, S.
2015-06-15
This work presents a measurement technique to obtain the correct value of all four elements in a resistive Wheatstone bridge without the need to separate the physical connections between them. Two electronic solutions are presented, one based on a source-and-measure unit and one using discrete electronic components. The proposed technique makes it possible to determine the mismatch, or tolerance, between the bridge resistive elements and then to pass or reject the bridge in terms of its related common-mode rejection. Experimental results were taken on various Wheatstone resistive bridges (discrete and magnetoresistive integrated bridges), validating the proposed measurement technique, especially when the bridge is micro-fabricated and there is no physical way to separate one resistive element from the others.
Using elements of hypnosis prior to or during pediatric dental treatment.
Peretz, Benjamin; Bercovich, Roly; Blumer, Sigalit
2013-01-01
Most dental practitioners are familiar with pediatric patients expressing dental fear or anxiety. Occasionally, the dentist may encounter a situation where all behavioral techniques fail, while, for some reason, premedication or general anesthesia are contraindicated or rejected by the patient or his/her parents and a different approach is required. Hypnosis may solve the problem in some cases. The purpose of this study was to review the literature about techniques that use elements of hypnosis and hypnotic techniques prior to or during pediatric dental treatment. There is a limited amount of literature regarding the use of hypnosis and hypnotic elements in pediatric dentistry. Induction techniques, reframing, distraction, imagery suggestions, and hypnosis are identified, although mostly anecdotally, while there are very few structured controlled studies. Nevertheless, the advantages of using hypnotic elements and hypnosis in pediatric dentistry are evident.
NASA Astrophysics Data System (ADS)
Soloveichik, Yury G.; Persova, Marina G.; Domnikov, Petr A.; Koshkina, Yulia I.; Vagin, Denis V.
2018-03-01
We propose an approach to solving multisource induction logging problems in multidimensional media. According to the type of induction logging tools, the measurements are performed in the frequency range of 10 kHz to 14 MHz, transmitter-receiver offsets vary in the range of 0.5-8 m or more, and the trajectory length is up to 1 km. For calculating the total field, the primary-secondary field approach is used. The secondary field is calculated with the use of the finite-element method (FEM), irregular non-conforming meshes with local refinements and a direct solver. The approach to constructing basis functions with the continuous tangential components (from Hcurl(Ω)) on the non-conforming meshes from the standard shape vector functions is developed. On the basis of this method, the algorithm of generating global matrices and a vector of the finite-element equation system is proposed. We also propose the method of grouping the logging tool positions, which makes it possible to significantly increase the computational effectiveness. This is achieved due to the compromise between the possibility of using the 1-D background medium, which is very similar to the investigated multidimensional medium for a small group, and the decrease in the number of the finite-element matrix factorizations with the increasing number of tool positions in one group. For calculating the primary field, we propose the method based on the use of FEM. This method is highly effective when the 1-D field is required to be calculated at a great number of points. The use of this method significantly increases the effectiveness of the primary-secondary field approach. The proposed approach makes it possible to perform modelling both in the 2.5-D case (i.e. without taking into account a borehole and/or invasion zone effect) and the 3-D case (i.e. for models with a borehole and invasion zone). 
The accuracy of numerical results obtained with the use of the proposed approach is compared with the one obtained by other codes for 1-D and 3-D anisotropic models. The results of this comparison lend support to the validity of our code. We also present the numerical results proving greater effectiveness of the finite-element approach proposed for calculating the 1-D field in comparison with the known codes implementing the semi-analytical methods for the case in which the field is calculated at a large number of points. Additionally, we present the numerical results which confirm the accuracy advantages of the automatic choice of a background medium for calculating the 1-D field as well as the results of 2.5-D modelling for a geoelectrical model with anisotropic layers, a fault and long tool-movement trajectory with the varying dip angle.
Performance of a completely automated system for monitoring CMV DNA in plasma.
Mengelle, C; Sandres-Sauné, K; Mansuy, J-M; Haslé, C; Boineau, J; Izopet, J
2016-06-01
Completely automated systems for monitoring CMV DNA in plasma samples are now available. The aim was to evaluate the analytical and clinical performance of the VERIS™/MDx System CMV Assay(®). Analytical performance was assessed using quantified quality controls. Clinical performance was assessed by comparison with the COBAS(®) Ampliprep™/COBAS(®) Taqman CMV test using 169 plasma samples that had tested positive with the in-house technique in whole blood. The specificity of the VERIS™/MDx System CMV Assay(®) was 99% [95% CI: 97.7-100]. Intra-assay reproducibilities were 0.03, 0.04, 0.05 and 0.04 log10 IU/ml (means 2.78, 3.70, 4.64 and 5.60 log10 IU/ml) for expected values of 2.70, 3.70, 4.70 and 5.70 log10 IU/ml. Inter-assay reproducibilities were 0.12 and 0.08 log10 IU/ml (means 6.30 and 2.85 log10 IU/ml) for expected values of 6.28 and 2.80 log10 IU/ml. The lower limit of detection was 14.6 IU/ml, and the assay was linear from 2.34 to 5.58 log10 IU/ml. The results for the positive samples were concordant (r=0.71, p<0.0001; slope of Deming regression 0.79 [95% CI: 0.56-1.57] and y-intercept 0.79 [95% CI: 0.63-0.95]). The VERIS™/MDx System CMV Assay(®) detected 18 more positive samples than the COBAS(®) Ampliprep™/COBAS(®) Taqman CMV test, and the mean virus load was higher (by 0.41 log10 IU/ml). Patient monitoring of 68 samples collected from 17 immunosuppressed patients showed similar trends between the two assays. As a secondary question, virus loads detected by the VERIS™/MDx System CMV Assay(®) were compared with those of the in-house procedure on whole blood. The results were similar between the two assays (-0.09 log10 IU/ml), as were the patient monitoring trends. The performance of the VERIS™/MDx System CMV Assay(®) facilitates its routine use in monitoring CMV DNA loads in plasma samples. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Williams, J. H.; Johnson, C. D.; Paillet, F. L.
2004-05-01
In the past, flow logging was largely restricted to the application of spinner flowmeters to determine flow-zone contributions in large-diameter production wells screened in highly transmissive aquifers. Development and refinement of tool-measurement technology, field methods, and analysis techniques has greatly extended and enhanced flow logging to include the hydraulic characterization of boreholes and aquifer flow zones at contaminated bedrock sites. State-of-the-art in flow logging will be reviewed, and its application to bedrock-contamination investigations will be presented. In open bedrock boreholes, vertical flows are measured with high-resolution flowmeters equipped with flexible rubber-disk diverters fitted to the nominal borehole diameters to concentrate flow through the measurement throat of the tools. Heat-pulse flowmeters measure flows in the range of 0.05 to 5 liters per minute, and electromagnetic flowmeters measure flows in the range of 0.3 to 30 liters per minute. Under ambient and low-rate stressed (either extraction or injection) conditions, stationary flowmeter measurements are collected in competent sections of the borehole between fracture zones identified on borehole-wall images. Continuous flow, fluid-resistivity, and temperature logs are collected under both sets of conditions while trolling with a combination electromagnetic flowmeter and fluid tool. Electromagnetic flowmeters are used with underfit diverters to measure flow rates greater than 30 liters per minute and suppress effects of diameter variations while trolling. A series of corrections are applied to the flow-log data to account for the zero-flow response, bypass, trolling, and borehole-diameter biases and effects. The flow logs are quantitatively analyzed by matching simulated flows computed with a numerical model to measured flows by varying the hydraulic properties (transmissivity and hydraulic head) of the flow zones. 
Several case studies will be presented that demonstrate the integration of flow logging in site-characterization activities to: (1) define the hydrogeologic framework; (2) evaluate cross-connection effects and determine flow-zone contributions to water-quality samples from open boreholes; and (3) design discrete-zone hydraulic tests and monitoring-well completions.
Wilderness experience in Rocky Mountain National Park 2002: Report to RMNP
Schuster, Elke; Johnson, S. Shea; Taylor, Jonathan G.
2004-01-01
The social science technique of Visitor Employed Photography [VEP] was used to obtain information from visitors about wilderness experiences. Visitors were selected at random from Park-designated wilderness trails, in proportion to their use, and asked to participate in the survey. Respondents were given single-use, 10-exposure cameras and photo-log diaries to record experiences. A total of 293 cameras were distributed, with a response rate of 87%. Following the development of the photos, a copy of the photos, two pertinent pages from the photo-log, and a follow-up survey were mailed to respondents. Fifty-six percent of the follow-up surveys were returned. Findings from the two surveys were analyzed and compared.
Forest Road Identification and Extraction Through Advanced LoG Matching Techniques
NASA Astrophysics Data System (ADS)
Zhang, W.; Hu, B.; Quist, L.
2017-10-01
A novel algorithm for forest road identification and extraction was developed. The algorithm utilized a Laplacian of Gaussian (LoG) filter and slope calculation on high-resolution multispectral imagery and LiDAR data, respectively, to extract both primary and secondary road segments in the forest area. The proposed method used road shape features to extract the road segments, which were further processed as objects with orientation preserved. The road network was generated after post-processing with tensor voting. The proposed method was tested on the Hearst forest, located in central Ontario, Canada. Based on visual examination against manually digitized roads, the majority of roads in the test area were identified and extracted by the process.
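The LoG step this abstract relies on is available directly in SciPy. A toy sketch of how a ridge-like feature (a road in imagery) stands out in the filter response; the synthetic image, sigma, and threshold are illustrative choices, not the authors' parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# synthetic "imagery": a bright 3-pixel-wide linear feature on a dark background
img = np.zeros((64, 64))
img[30:33, :] = 1.0

# Laplacian-of-Gaussian response; sigma roughly matched to the feature width
response = gaussian_laplace(img, sigma=1.5)

# a bright ridge produces a strongly negative LoG response along its centreline
road_mask = response < 0.5 * response.min()
```

In the paper's setting the thresholded response would then be grouped into oriented objects and linked by tensor voting; here the mask alone already isolates the centreline of the synthetic road.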
Multiphase gas in quasar absorption-line systems
NASA Technical Reports Server (NTRS)
Giroux, Mark L.; Sutherland, Ralph S.; Shull, J. Michael
1994-01-01
In the standard model for H I Lyman-limit (LL) quasar absorption-line systems, the absorbing matter is galactic disk and halo gas, heated and photoionized by the metagalactic radiation field produced by active galaxies. In recent Hubble Space Telescope (HST) observations (Reimers et al. 1992; Vogel & Reimers 1993; Reimers & Vogel 1993) of LL systems along the line of sight to the quasar HS 1700+6416, surprisingly high He I/H I ratios and a wide distribution of column densities of C, N, and O ions are deduced from extreme ultraviolet absorption lines. We show that these observations are incompatible with photoionization equilibrium by a single metagalactic ionizing background. We argue that these quasar absorption systems possess a multiphase interstellar medium similar to that of our Galaxy, in which extended hot, collisionally ionized gas is responsible for some or all of the high ionization stages of heavy elements. From the He/H ratios we obtain -4.0 ≤ log U ≤ -3.0, while the CNO ions are consistent with hot gas in collisional ionization equilibrium at log T = 5.3 and (O/H) = -1.6. The supernova rate necessary to produce these heavy elements and maintain the hot-gas energy budget of approximately 10^41.5 ergs/s is approximately 10^-2/yr, similar to that which maintains the 'three-phase' interstellar medium in our own Galaxy. As a consequence of the change in interpretation from photoionized gas to a multiphase medium, the derived heavy-element abundances (e.g., O/C) of these systems are open to question owing to substantial ionization corrections for unseen C V in the hot phase. The metal-line ratios may also lead to erroneous diagnostics of the shape of the metagalactic ionizing spectrum and the ionization parameter of the absorbers.
Vermeulen, Roel; Coble, Joseph B.; Yereb, Daniel; Lubin, Jay H.; Blair, Aaron; Portengen, Lützen; Stewart, Patricia A.; Attfield, Michael; Silverman, Debra T.
2010-01-01
Diesel exhaust (DE) has been implicated as a potential lung carcinogen. However, the exact components of DE that might be involved have not been clearly identified. In the past, nitrogen oxides (NOx) and carbon oxides (COx) were measured most frequently to estimate DE, but since the 1990s, the most commonly accepted surrogate for DE has been elemental carbon (EC). We developed quantitative estimates of historical exposure levels of respirable elemental carbon (REC) for an epidemiologic study of mortality, particularly lung cancer, among diesel-exposed miners by back-extrapolating 1998–2001 REC exposure levels using historical measurements of carbon monoxide (CO). The choice of CO was based on the availability of historical measurement data. Here, we evaluated the relationship of REC with CO and other current and historical components of DE from side-by-side area measurements taken in underground operations of seven non-metal mining facilities. The Pearson correlation coefficient of the natural log-transformed (Ln)REC measurements with the Ln(CO) measurements was 0.4. The correlation of REC with the other gaseous, organic carbon (OC), and particulate measurements ranged from 0.3 to 0.8. Factor analyses indicated that the gaseous components, including CO, together with REC, loaded most strongly on a presumed ‘Diesel exhaust’ factor, while the OC and particulate agents loaded predominantly on other factors. In addition, the relationship between Ln(REC) and Ln(CO) was approximately linear over a wide range of REC concentrations. The fact that CO correlated with REC, loaded on the same factor, and increased linearly in log–log space supported the use of CO in estimating historical exposure levels to DE. PMID:20876234
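The consistency checks described above — correlation and linearity in log-log space — amount to a regression on ln-transformed pairs. A sketch with made-up numbers (the CO/REC values below are purely illustrative, not the study's measurements):

```python
import numpy as np

def log_log_relation(x, y):
    """Pearson r and least-squares slope of ln(y) against ln(x)."""
    lx, ly = np.log(x), np.log(y)
    r = np.corrcoef(lx, ly)[0, 1]
    slope, intercept = np.polyfit(lx, ly, 1)
    return r, slope, intercept

# hypothetical side-by-side area measurements: CO (ppm) and REC (ug/m^3)
co = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
rec = np.array([20.0, 35.0, 80.0, 150.0, 310.0])
r, slope, _ = log_log_relation(co, rec)
```

An r well above zero and an approximately linear ln-ln relationship over a wide concentration range are the two properties the study used to justify CO as a surrogate for back-extrapolating historical REC exposure.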
NASA Astrophysics Data System (ADS)
Walsh, Rory; Higton, Sam; Marshall, Jake; Bidin, Kawi; Blake, William; Nainar, Anand
2015-04-01
This paper reports some methodological issues and early results of a project investigating the erosional impacts of land use changes (multiple selective logging and progressive, partial conversion to oil palm) over the last 25-40 years in the 600km2 Brantian river catchment in Sabah, Borneo. A combined sediment fingerprinting and radioisotope dating approach is being applied to sediment cores taken in stream hierarchical fashion across the intermediate catchment scale. Changes in sediment sources and sedimentation rates over time can be captured by changes in the relative importance of geochemical elements with depth in downstream sediment cores, which in turn can be linked to parallel changes in upstream cores by the application of unmixing models and statistical techniques. Radioisotope analysis of the sediment cores allows these changes to be dated and sedimentation rates to be estimated. Work in the neighbouring Segama catchment had successfully demonstrated the potential of such an approach in a rainforest environment (Walsh et al. 2011). The paper first describes steps taken to address methodological issues. The approach relies on taking continuous sediment cores which have aggraded progressively over time and remain relatively undisturbed and uncontaminated. This issue has been tackled (1) through careful core sampling site selection with a focus on lateral bench sites and (2) deployment of techniques such as repeat-measurement erosion bridge transects to assess the contemporary nature of sedimentation to validate (or reject) candidate sites. The issue of sediment storage and uncertainties over lag times has been minimised by focussing on sets of above- and below-confluence sites in the intermediate zone of the catchment, thus minimising sediment transit times between upstream contributing and downstream destination core sites. 
This focus on the intermediate zone was also driven by difficulties in finding suitable core sites in the mountainous headwaters area, due to the prevalence of steep, incised channels without even narrow floodplains. Preliminary results are reported from (1) a field visit to investigate potential sampling sites in July 2014 and (2) initial analysis of a sediment core at a promising lateral bench site. Marked down-profile geochemistry changes in the core indicate a history of phases of high deposition and lateral growth of the channel caused by mobilisation of sediment linked to logging and clearance upstream. Recent channel bed degradation suggests the system has been adjusting to a decline in sediment supply with forest recovery since logging in 2005, but a renewed sedimentation phase, heralded by >10 cm of deposition at the site in a flood in July 2014, appears to have started, linked to partial forest clearance for oil palm. These preliminary results support the ability of a combined fingerprinting and dating approach to reflect the spatial history of land-use change in a catchment undergoing disturbance. Walsh R.P.D., Bidin K., Blake W.H., Chappell N.A., Clarke M.A., Douglas I., Ghazali R., Sayer A.M., Suhaimi J., Tych W. & Annammala K.V. (2011) Long-term responses of rainforest erosional systems at different spatial scales to selective logging and climatic change. Philosophical Transactions of the Royal Society B, 366, 3340-3353.
Compact light-emitting-diode sun photometer for atmospheric optical depth measurements.
Acharya, Y B; Jayaraman, A; Ramachandran, S; Subbaraya, B H
1995-03-01
A new compact light-emitting diode (LED) sun photometer, in which a LED is used as a spectrally selective photodetector as well as a nonlinear feedback element in the operational amplifier, has been developed. The output voltage that is proportional to the logarithm of the incident solar intensity permits the direct measurement of atmospheric optical depths in selected spectral bands. Measurements made over Ahmedabad, India, show good agreement, within a few percent, of optical depths derived with a LED as a photodetector in a linear mode and with a LED as both a photodetector and a feedback element in an operational amplifier in log mode. The optical depths are also found to compare well with those obtained simultaneously with a conventional filter photometer.
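The log-mode output described above lends itself to a Langley-plot reduction: with Beer-Lambert attenuation I = I0·exp(-τm) and a detector voltage proportional to ln I, the voltage is linear in airmass m with slope -kτ. A sketch under stated assumptions (the gain k and the synthetic clear-sky run are illustrative, not the instrument's calibration):

```python
import numpy as np

def optical_depth_langley(airmass, v_log, k):
    """Fit V = k*ln(I0) - k*tau*m and return tau (Langley method)."""
    slope, _ = np.polyfit(airmass, v_log, 1)
    return -slope / k

# synthetic clear-morning run: tau = 0.3, log-mode gain k = 1 V per ln-unit
m = np.linspace(1.0, 5.0, 9)     # airmass values as the sun rises
v = 2.0 - 0.3 * m                # k*ln(I0) = 2.0 V, slope = -k*tau
tau = optical_depth_langley(m, v, k=1.0)
```

The same fit also returns ln(I0) via the intercept, which is how a Langley calibration recovers the top-of-atmosphere signal without an absolute radiometric standard.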
(F)UV Spectroscopy of K648: Abundance Determination of Trace Elements
NASA Astrophysics Data System (ADS)
Mohamad-Yob, S. J.; Ziegler, M.; Rauch, T.; Werner, K.
2010-11-01
We present preliminary results of an ongoing spectral analysis of K 648, the central star of the planetary nebula Ps 1, based on high-resolution FUV spectra. K 648, in M 15, is one of only four known PNe in globular clusters. The formation of this post-AGB object in a globular cluster is still unclear. Our aim is to determine Teff, log g, and the abundances of trace elements, in order to improve our understanding of the post-AGB evolution of extremely metal-poor stars, especially PN formation in globular clusters. We analyzed FUSE, HST/STIS, and HST/FOS observations. A grid of stellar model atmospheres was calculated using the Tübingen NLTE Model Atmosphere Package (TMAP).
NASA Astrophysics Data System (ADS)
Galeev, A. I.; Berdnikova, V. M.; Ivanova, D. V.; Kudryavtsev, D. O.; Shimanskaya, N. N.; Shimansky, V. V.; Balashova, M. O.
2017-06-01
The results of a study of a sample of δ Scuti-type stars, obtained from observations with the BTA and RTT-150, are presented. Based on photometric data, we measured and analyzed the fundamental parameters of all the studied stars. For eight stars (two of them for the first time), the fundamental atmospheric parameters (Teff, log g, [Fe/H]) and the chemical composition for 29 elements were derived in the LTE approximation using spectroscopic observations. The chemical composition analysis demonstrates both solar abundances of chemical elements and the anomalies of chemical composition typical of Am stars in the studied sample of δ Scuti-type stars.
Spectroscopic investigation of stars on the lower main sequence
NASA Astrophysics Data System (ADS)
Mishenina, T. V.; Soubiran, C.; Bienaymé, O.; Korotin, S. A.; Belik, S. I.; Usenko, I. A.; Kovtyukh, V. V.
2008-10-01
Aims: The aim of this paper is to provide fundamental parameters and abundances with a high accuracy for a large sample of cool main sequence stars. This study is part of a wider project, in which the metallicity distribution of the local thin disc is investigated from a complete sample of G and K dwarfs within 25 pc. Methods: The stars were observed at high resolution and a high signal-to-noise ratio with the ELODIE echelle spectrograph. The V sin i values were obtained with a calibration of the cross-correlation function. Effective temperatures were estimated by the line depth ratio method. Surface gravities (log g) were determined by two methods: parallaxes and the ionization balance of iron. The Mg and Na abundances were derived using a non-LTE approximation. Abundances of other elements were obtained by measuring equivalent widths. Results: Rotational velocities, atmospheric parameters (T_eff, log g, [Fe/H], V_t), and Li, O, Na, Mg, Al, Si, Ca, Sc, Ti, V, Cr, Co, Ni, and Zn abundances are provided for 131 stars. Among them, more than 30 are active stars, including a fraction of BY Dra and RS CVn type stars for which spectral peculiarities were investigated. We find the mean abundances of the majority of elements in active and nonactive stars to be similar, except for Li, and possibly for Zn and Co. Lithium is reliably detected in 54% of active stars but only in 20% of nonactive stars. No correlation is found between Li abundances and rotational velocities. A possible anticorrelation of log A(Li) with the index of chromospheric activity GrandS is observed. Conclusions: Active and nonactive cool dwarfs show similar dependencies of most elemental ratios vs. [Fe/H]. This allows us to use such abundance ratios to study the chemical and dynamical evolution of the Galaxy. Among active stars, no clear correlation has been found between different indicators of activity for our sample stars.
Based on spectra collected with the ELODIE spectrograph at the 1.93-m telescope of the Observatoire de Haute Provence (France). Tables A.1-A.3 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/489/923
L-O-S-T: Logging Optimization Selection Technique
Jerry L. Koger; Dennis B. Webster
1984-01-01
L-O-S-T is a FORTRAN computer program developed to systematically quantify, analyze, and improve user-selected harvesting methods. Harvesting times and costs are computed for road construction, landing construction, system moves between landings, skidding, and trucking. A linear programming formulation utilizing the relationships among marginal analysis, isoquants, and...
Setting analyst: A practical harvest planning technique
Olivier R.M. Halleux; W. Dale Greene
2001-01-01
Setting Analyst is an ArcView extension that facilitates practical harvest planning for ground-based systems. By modeling the travel patterns of ground-based machines, it compares different harvesting settings based on projected average skidding distance, logging costs, and site disturbance levels. Setting Analyst uses information commonly available to consulting...
ADULT COHO SALMON AND STEELHEAD USE OF BOULDER WEIRS IN SOUTHWEST OREGON STREAMS
The placement of log and boulder structures in streams is a common and often effective technique for improving juvenile salmonid rearing habitat and increasing fish densities. Less frequently examined has been the use of these structures by adult salmonids. In 2004, spawner densi...
Stabilization techniques for reactive aggregate in soil-cement base course.
DOT National Transportation Integrated Search
2003-01-01
Anhydrite (CaSO4) beds occur as a cap rock on a salt dome in Winn Parish in north Louisiana. Locally known as Winn Rock, it has been quarried for gravel for road building. It has been used as a surface course for local parish and logging roads. Stabi...
Developing attractants and trapping techniques for the emerald ash borer
Therese M. Poland; Peter de Groot; Gary Grant; Linda MacDonald; Deborah G. McCullough
2003-01-01
Shortly after the 2002 discovery of emerald ash borer (EAB), Agrilus planipennis Fairmaire (Coleoptera: Buprestidae), in southeastern Michigan and Windsor, Ontario, quarantines regulating the movement of ash logs, firewood, and nursery stock were established to reduce the risk of human-assisted spread of this exotic forest insect pest. Accurate...
Applying the Ce-in-zircon oxygen geobarometer to diverse silicic magmatic systems
NASA Astrophysics Data System (ADS)
Claiborne, L. L.; Miller, C. F.
2012-12-01
Zircon provides information on the age, temperature, and composition of the magma from which it grew. In systems such as Mount St. Helens, where zircon is not coeval with the rest of the crystal cargo, it provides the only accessible record of the extended history of the magmatic system, including cycles of intrusion, crystallization and rejuvenation beneath an active volcano (Claiborne et al., 2010). The rare earth elements, which are present in measurable quantities in zircon, provide information about the composition of the magma from which zircon grew. Unique among the generally trivalent rare earth elements, cerium can exist as either trivalent or tetravalent, depending on the oxidation state of the magma. The tetravalent ion is highly compatible in zircon, in the site that usually hosts tetravalent zirconium, so the amount of cerium in zircon (relative to what would be expected for purely trivalent Ce) depends on the oxidation state of the magma from which it grew. Trail et al. (2011) proposed a calibration based on experimental data that uses the Ce anomaly in zircon as a direct proxy for magma oxidation state (oxygen fugacity), describing the relationship between Ce in zircon and magma oxygen fugacity as ln(Ce/Ce*)D = (0.1156 ± 0.0050)·ln(fO2) + (13860 ± 708)/T - (6.125 ± 0.484). For systems like Mount St. Helens, where the major minerals (including the Fe-Ti oxides traditionally relied upon for records of magma oxidation state) record only events in the hundreds to thousands of years leading to eruption, this presents a novel approach to understanding more extended histories of magma oxidation over the tens and hundreds of thousands of years of magmatism at a volcanic center. This calibration also promises to help better constrain conditions of crystallization in intrusive portions of volcanic systems, as well as in plutonic bodies.
We apply this new oxygen geobarometer to natural volcanic and plutonic zircons from a variety of tectonic settings and compare the results to existing indicators of oxidation state for each system, where available. Zircons included in this study are from Mount St. Helens (ΔNNO +1.5 log units; Smith, 1984), the Peach Spring Tuff and Spirit Mountain Batholith (sphene-bearing, silicic, Miocene-aged rocks from the Colorado River Extensional Corridor), Alid Volcano in Eritrea, and rhyolites and granites from Iceland. Median log fO2 values for these systems, calculated from the cerium anomaly in zircons following Trail et al. (2011) using temperatures from Ti-in-zircon thermometry (Ferry and Watson, 2007), are as follows: Alid -12 bars (ΔNNO +3 log units) at 750 degrees C; Iceland -11 bars (ΔNNO +3 log units) at 800 degrees C; Mount St. Helens -8.6 bars (ΔNNO +6 log units) at 750 degrees C; Peach Spring Tuff -3.4 (ΔNNO +10 log units) at 830 degrees C. While ubiquitous sphene in the Spirit Mountain granites suggests relatively high fO2, calculations based on the cerium anomaly in zircon suggest a median log fO2 of >0 at 770 degrees C, which is certainly erroneous. While median values for our natural zircons are, for the most part, above the fugacities expected for each system from other indicators, and extreme values for each system are almost certainly erroneous, many are within the range expected for terrestrial magmas, and they vary relative to one another as might be expected given the magma types and tectonic settings.
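The Trail et al. (2011) calibration quoted above can be inverted to recover oxygen fugacity from a measured Ce anomaly and a Ti-in-zircon temperature. The sketch below is a minimal illustration using only the central calibration values (the published uncertainties are ignored); the function names are ours:

```python
import math

def ce_anomaly(log10_fo2, T_kelvin):
    """Forward model of the Trail et al. (2011) calibration:
    ln(Ce/Ce*)_D = 0.1156*ln(fO2) + 13860/T - 6.125
    (central values only)."""
    ln_ratio = 0.1156 * log10_fo2 * math.log(10.0) + 13860.0 / T_kelvin - 6.125
    return math.exp(ln_ratio)

def log10_fO2(anomaly, T_kelvin):
    """Invert the calibration: estimate magma log10(fO2) from the
    measured zircon Ce anomaly (Ce/Ce*)_D and temperature in kelvin."""
    ln_fo2 = (math.log(anomaly) - 13860.0 / T_kelvin + 6.125) / 0.1156
    return ln_fo2 / math.log(10.0)
```

Note the strong temperature dependence through the 13860/T term: the same Ce anomaly implies a very different fO2 at different temperatures, which is why an independent thermometer such as Ti-in-zircon is needed.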
NASA Technical Reports Server (NTRS)
Tag, I. A.; Lumsdaine, E.
1978-01-01
The general non-linear three-dimensional equation for the acoustic potential is derived by using a perturbation technique. The linearized axisymmetric equation is then solved by using a finite element algorithm based on the Galerkin formulation for a harmonic time dependence. The solution is carried out in complex number notation for the acoustic velocity potential. Linear, isoparametric, quadrilateral elements with non-uniform distribution across the duct section are implemented. The resultant global matrix is stored in banded form and solved by using a modified Gauss elimination technique. Sound pressure levels and acoustic velocities are calculated by post-processing the element solutions. Different duct geometries are analyzed and compared with experimental results.
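The banded storage and modified Gauss elimination mentioned above exploit the narrow bandwidth of a finite element global matrix. As a minimal real-valued illustration (the paper's system is complex-valued and of larger bandwidth), the Thomas algorithm below is Gauss elimination specialized to the simplest banded case, a tridiagonal matrix:

```python
def solve_tridiagonal(lower, diag, upper, rhs):
    """Thomas algorithm: Gauss elimination restricted to a tridiagonal
    (bandwidth-1) system A x = rhs, where `lower`, `diag`, `upper` are
    the sub-, main, and super-diagonals. No pivoting; assumes the
    system is well conditioned (e.g. diagonally dominant)."""
    n = len(diag)
    d = list(diag)
    b = list(rhs)
    # Forward elimination: zero out the sub-diagonal.
    for i in range(1, n):
        m = lower[i - 1] / d[i - 1]
        d[i] -= m * upper[i - 1]
        b[i] -= m * b[i - 1]
    # Back substitution.
    x = [0.0] * n
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - upper[i] * x[i + 1]) / d[i]
    return x
```

Because elimination touches only the nonzero band, the cost is O(n) rather than the O(n^3) of dense elimination; the same idea generalizes to any fixed bandwidth.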
NASA Astrophysics Data System (ADS)
Avitabile, Peter; O'Callahan, John
2009-01-01
Generally, response analysis of systems containing discrete nonlinear connection elements, such as typical mounting connections, requires the physical finite element system matrices to be used in a direct integration algorithm to compute the nonlinear response. Due to the large size of these physical matrices, forced nonlinear response analysis requires significant computational resources. Usually, the individual components of the system are analyzed and tested as separate components, and their individual behavior may be essentially linear compared to that of the total assembled system. However, joining these linear subsystems with highly nonlinear connection elements causes the entire system to become nonlinear. It would be advantageous if these linear modal subsystems could be utilized in the forced nonlinear response analysis, since much effort has usually been expended in fine tuning and adjusting the analytical models to reflect the tested subsystem configuration. Several more efficient techniques have been developed to address this class of problem. Three of these techniques, the equivalent reduced model technique (ERMT), the modal modification response technique (MMRT), and the component element method (CEM), are presented in this paper and compared to traditional methods.
Trace elemental analysis of Indian natural moonstone gems by PIXE and XRD techniques.
Venkateswara Rao, R; Venkateswarulu, P; Kasipathi, C; Sivajyothi, S
2013-12-01
A selected number of Indian Eastern Ghats natural moonstone gems were studied with a powerful nuclear analytical and non-destructive technique, Proton Induced X-ray Emission (PIXE). Thirteen elements, including V, Co, Ni, Zn, Ga, Ba and Pb, were identified in these moonstones and may be useful in interpreting the various geochemical conditions and the probable cause of their incorporation into the moonstone gemstone matrix. Furthermore, preliminary XRD studies of different moonstone patterns were performed. The PIXE technique is a powerful method for quickly determining the elemental concentration of a substance. A 3 MeV proton beam was employed to excite the samples. The chemical constituents of moonstones from parts of the Eastern Ghats geological formations of Andhra Pradesh, India were determined, and gemological studies were performed on those gems. The crystal structure and lattice parameters of the moonstones were estimated using X-ray diffraction studies, trace and minor elements were determined using the PIXE technique, and major compositional elements were confirmed by XRD. In the present work, the usefulness and versatility of the PIXE technique for research in geo-scientific methodology is established. © 2013 Elsevier Ltd. All rights reserved.
Application of digital image processing techniques to astronomical imagery, 1979
NASA Technical Reports Server (NTRS)
Lorre, J. J.
1979-01-01
Several areas of applications of image processing to astronomy were identified and discussed. These areas include: (1) deconvolution for atmospheric seeing compensation; a comparison between maximum entropy and conventional Wiener algorithms; (2) polarization in galaxies from photographic plates; (3) time changes in M87 and methods of displaying these changes; (4) comparing emission line images in planetary nebulae; and (5) log intensity, hue saturation intensity, and principal component color enhancements of M82. Examples are presented of these techniques applied to a variety of objects.
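Enhancement (5), a log intensity stretch, compresses the bright core of an object while lifting faint outer structure, which is why it suits targets with a large dynamic range such as M82. A minimal sketch (pure Python, hypothetical helper name):

```python
import math

def log_enhance(image, scale=255.0):
    """Logarithmic intensity stretch of a 2-D list of non-negative
    intensities. The brightest input pixel maps to `scale`; faint
    pixels are lifted relative to a linear display. log1p keeps
    zero-valued pixels at zero."""
    max_val = max(max(row) for row in image)
    c = scale / math.log1p(max_val) if max_val > 0 else 0.0
    return [[c * math.log1p(v) for v in row] for row in image]
```

The stretch is monotonic, so relative ordering of pixel brightness is preserved; only the displayed contrast changes.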
Vernon, J.H.; Paillet, F.L.; Pedler, W.H.; Griswold, W.J.
1993-01-01
Wellbore geophysical techniques were used to characterize fractures and flow in a bedrock aquifer at a site near Blackwater Brook in Dover, New Hampshire. The primary focus of this study was the development of a model to assist in evaluating the area surrounding a planned water supply well where contaminants introduced at the land surface might be induced to flow towards a pumping well. Well logs and geophysical surveys used in this study included lithologic logs based on examination of cuttings obtained during drilling; conventional caliper and natural gamma logs; video camera and acoustic televiewer surveys; high-resolution vertical flow measurements under ambient conditions and during pumping; and borehole fluid conductivity logs obtained after the borehole fluid was replaced with deionized water. These surveys were used for several applications: 1) to define a conceptual model of aquifer structure to be used in groundwater exploration; 2) to estimate optimum locations for test and observation wells; and 3) to delineate a wellhead protection area (WHPA) for a planned water supply well. Integration of borehole data with surface geophysical and geological mapping data indicated that the study site lies along a northeast-trending, intensely fractured contact zone between surface exposures of quartz monzonite and metasedimentary rocks. Four of five bedrock boreholes at the site were estimated to produce more than 150 gallons per minute (gpm) (568 L/min) of water during drilling. Aquifer testing and other investigations indicated that water flowed to the test well along fractures parallel to the northeast-trending contact zone and along other northeast- and north-northwest-trending fractures. Statistical plots of fracture strikes showed frequency maxima in the same northeast and north-northwest directions, although additional maxima occurred in other directions.
Flowmeter surveys and borehole fluid conductivity logging after fluid replacement were used to identify water-producing zones in the boreholes; fractures associated with inflow into boreholes showed a dominant northeast orientation. Borehole fluid conductivity logging after fluid replacement also gave profiles of such water-quality parameters as fluid electrical conductivity (FEC), pH, temperature, and oxidation-reduction potential, strengthening the interpretation of cross-connection of boreholes by certain fracture zones. The results of this study showed that the application of these borehole geophysical techniques at the Blackwater Brook site led to an improved understanding of such parameters as fracture location, attitude, flow direction and velocity, and water quality, all of which are important in the determination of a WHPA.
Finite Elements Analysis of a Composite Semi-Span Test Article With and Without Discrete Damage
NASA Technical Reports Server (NTRS)
Lovejoy, Andrew E.; Jegley, Dawn C. (Technical Monitor)
2000-01-01
AS&M Inc. performed finite element analysis, with and without discrete damage, of a composite semi-span test article that represents the Boeing 220-passenger transport aircraft composite semi-span test article. A NASTRAN bulk data file and drawings of the test mount fixtures and semi-span components were utilized to generate the baseline finite element model. In this model, the stringer blades are represented by shell elements, and the stringer flanges are combined with the skin. Numerous modeling modifications and discrete source damage scenarios were applied to the test article model throughout the course of the study. This report details the analysis method and results obtained from the composite semi-span study. Analyses were carried out for three load cases: Braked Roll, 1.0G Down-Bending, and 2.5G Up-Bending. These analyses included linear and nonlinear static response, as well as linear and nonlinear buckling response. Results are presented in the form of stress and strain plots, factors of safety for failed elements, buckling loads and modes, deflection prediction tables and plots, and strain gage prediction tables and plots. The collected results are presented within this report for comparison to test results.
Liu, Changgeng; Djuth, Frank T.; Zhou, Qifa; Shung, K. Kirk
2014-01-01
Several micromachining techniques for the fabrication of high-frequency piezoelectric composite ultrasonic array transducers are described in this paper. A variety of different techniques are used in patterning the active piezoelectric material, attaching backing material to the transducer, and assembling an electronic interconnection board for transmission and reception from the array. To establish the feasibility of the process flow, a hybrid test ultrasound array transducer consisting of a 2-D array having an 8 × 8 element pattern and a 5-element annular array was designed, fabricated, and assessed. The arrays are designed for a center frequency of ~60 MHz. The 2-D array elements are 105 × 105 μm in size with 5-μm kerfs between elements. The annular array surrounds the square 2-D array and provides the option of transmitting from the annular array and receiving with the 2-D array. Each annular array element has an area of 0.71 mm2 with a 16-μm kerf between elements. The active piezoelectric material is (1 − x) Pb(Mg1/3Nb2/3)O3−xPbTiO3 (PMN-PT)/epoxy 1–3 composite with a PMN-PT pillar lateral dimension of 8 μm and an average gap width of ~4 μm, which was produced by deep reactive ion etching (DRIE) dry etching techniques. A novel electric interconnection strategy for high-density, small-size array elements was proposed. After assembly, the array transducer was tested and characterized. The capacitance, pulse–echo responses, and crosstalk were measured for each array element. The desired center frequency of ~60 MHz was achieved and the −6-dB bandwidth of the received signal was ~50%. At the center frequency, the crosstalk between adjacent 2-D array elements was about −33 dB. The techniques described herein can be used to build larger arrays containing smaller elements. PMID:24297027
Examining the Possibility of Carbon as a Light Element in the Core of Mercury
NASA Technical Reports Server (NTRS)
Vander Kaaden, Kathleen; McCubbin, Francis M.; Turner, Amber; Ross, D. Kent
2017-01-01
Results from the MErcury Surface, Space ENvironment, GEochemistry and Ranging (MESSENGER) spacecraft have shown elevated abundances of C on the surface of Mercury. Peplowski et al. used GRS data from MESSENGER to show an average northern hemisphere abundance of C on the planet of 0 to 4.1 wt% C at the three-sigma detection limit. Confirmation of C on the planet prompts many questions regarding the role of C during the differentiation and evolution of Mercury. The elevated abundances of both S and C on Mercury's surface, coupled with the low abundances of iron, suggest that the oxygen fugacity of the planet is several log10 units below the Iron-Wustite buffer. These observations spark questions about the bulk composition of Mercury's core. This experimental study seeks to understand the impact of C as a light element on potential mercurian core compositions. In order to address this question, experiments were conducted at 1 GPa and a variety of temperatures (700-1500 °C) on metal compositions ranging from Si5Fe95 to Si22Fe78, possibly representative of the mercurian core. All starting metals were completely enclosed in a graphite capsule to ensure C saturation at a given set of run conditions. All elements, including C, were analyzed using electron probe microanalysis. Precautions were taken to ensure accurate measurements of C with this technique, including using the LDE2 crystal, using the cold finger on the microprobe to minimize contamination and increase the vacuum, and using an instrument with no oil-based pumps. Based on the superliquidus experimental results in the present study, as Fe-rich cores become more Si-rich, the C content of that core composition will decrease. Furthermore, although C concentration at graphite saturation (CCGS) varies from a liquid to a solid, temperature does not seem to play a substantial role in CCGS, at least at 1 GPa.
Finite element modeling of truss structures with frequency-dependent material damping
NASA Technical Reports Server (NTRS)
Lesieutre, George A.
1991-01-01
A physically motivated modelling technique for structural dynamic analysis that accommodates frequency dependent material damping was developed. Key features of the technique are the introduction of augmenting thermodynamic fields (AFT) to interact with the usual mechanical displacement field, and the treatment of the resulting coupled governing equations using finite element analysis methods. The AFT method is fully compatible with current structural finite element analysis techniques. The method is demonstrated in the dynamic analysis of a 10-bay planar truss structure, a structure representative of those contemplated for use in future space systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kriz, Jaroslav; Dybal, Jiri; Vanura, Petr
2011-01-01
Using 1H, 13C, and 133Cs NMR spectra, it is shown that calix[4]arene-bis(t-octylbenzo-18-crown-6) (L) forms complexes with one (L·Cs+) and two (L·2Cs+) Cs+ ions offered by cesium bis(1,2-dicarbollide) cobaltate (CsDCC) in nitrobenzene-d5. The ions interact with all six oxygen atoms in the crown-ether ring and with the electrons of the calixarene aromatic moieties. According to the extraction technique, the stability constant of the first complex is log Knb(L·Cs+) = 8.8 ± 0.1. According to the 133Cs NMR spectra, the value of the equilibrium constant of the second complexation step is log Knb(2)(L·2Cs+) = 6.3 ± 0.2, i.e., its stabilization constant is log Knb(L·2Cs+) = 15.1 ± 0.3. Self-diffusion measurements by 1H pulsed-field gradient (PFG) NMR combined with density functional theory (DFT) calculations suggest that one DCC ion is tightly associated with L·Cs+, decreasing its positive charge and consequently stabilizing the second complex, L·2Cs+. Using a saturation-transfer 133Cs NMR technique, the correlation times of chemical exchange between L·Cs+ and L·2Cs+ and between L·2Cs+ and free Cs+ ions were determined as 33.6 and 29.2 ms, respectively.
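The quoted constants are mutually consistent: equilibrium constants of stepwise complexation multiply, so their decimal logarithms add, giving an overall stabilization constant of 8.8 + 6.3 = 15.1 for the two-cation complex, as reported. A one-line check:

```python
def overall_log_beta(stepwise_log_K):
    """Cumulative stability constant of a stepwise complexation series:
    log beta_n = sum of log K_1 .. K_n, because the stepwise
    equilibrium constants multiply and logarithms turn products
    into sums."""
    return sum(stepwise_log_K)

# Reported values: log K1 = 8.8 (L + Cs+ -> L·Cs+)
# and log K2 = 6.3 (L·Cs+ + Cs+ -> L·2Cs+),
# giving log beta2 = 15.1, matching the abstract.
```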
Process mining techniques: an application to time management
NASA Astrophysics Data System (ADS)
Khowaja, Ali Raza
2018-04-01
In any environment, people must ensure that all of their work is completed on time and to the required quality. To realize the full potential of process mining, one needs to understand these processes in detail. Personal information and communication have always been prominent issues on the internet, but in everyday life, information and communication tools capture daily schedules, location analysis, environmental analysis and, more generally, social media activity. These systems make data available for analysis through event logs, supporting both data analysis and process analysis that combines environmental and location information. Process mining can exploit all these real-life processes with the help of the event logs already available in such datasets, whether through user-censored or user-labeled data. These processes can be used to redesign a user's workflow and to understand the processes in more detail. One way to increase the quality of the processes we go through in our daily lives is to examine each process closely and, after analyzing it, make changes to obtain better results. In this work, we applied process mining techniques to a dataset of seven subjects collected in Korea. The paper comments on the efficiency of the processes recorded in the event logs as they relate to time management.
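A toy version of the event-log analysis described here (hypothetical helper and data; real process mining toolkits add process discovery, conformance checking, and enhancement on top of this) tallies time spent per activity, taking each event's duration as the gap to the next event in the same case:

```python
from collections import defaultdict
from datetime import datetime

def activity_durations(events):
    """Given an event log of (case_id, activity, timestamp) records,
    return total seconds spent per activity, where an activity lasts
    until the next event of the same case. The final event of each
    case has no successor, so it contributes no duration."""
    by_case = defaultdict(list)
    for case, activity, ts in events:
        by_case[case].append((ts, activity))
    totals = defaultdict(float)
    for seq in by_case.values():
        seq.sort()  # order events within the case chronologically
        for (t0, act), (t1, _) in zip(seq, seq[1:]):
            totals[act] += (t1 - t0).total_seconds()
    return dict(totals)
```

Aggregates like these are the starting point for the time-management questions the paper asks, e.g. which daily activities consume disproportionate time.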
Atmospheric stellar parameters from cross-correlation functions
NASA Astrophysics Data System (ADS)
Malavolta, L.; Lovis, C.; Pepe, F.; Sneden, C.; Udry, S.
2017-08-01
The increasing number of spectra gathered by spectroscopic sky surveys and transiting exoplanet follow-up has pushed the community to develop automated tools for the determination of atmospheric stellar parameters. Here we present a novel approach that allows the measurement of temperature (Teff), metallicity ([Fe/H]) and gravity (log g) within a few seconds and in a completely automated fashion. Rather than performing comparisons with spectral libraries, our technique is based on the determination of several cross-correlation functions (CCFs) obtained by including spectral features with different sensitivities to the photospheric parameters. We use literature stellar parameters of high signal-to-noise (SNR), high-resolution HARPS spectra of FGK main-sequence stars to calibrate Teff, [Fe/H] and log g as a function of CCF parameters. Our technique is validated using low-SNR spectra obtained with the same instrument. For FGK stars we achieve a precision of σ(Teff) = 50 K, σ(log g) = 0.09 dex and σ([Fe/H]) = 0.035 dex at SNR = 50, while the precision for observations with SNR ≳ 100 and the overall accuracy are constrained by the literature values used to calibrate the CCFs. Our approach can easily be extended to other instruments with similar spectral range and resolution, or to other spectral ranges and stars other than FGK dwarfs, if a large sample of reference stars is available for the calibration. Additionally, we provide the mathematical formulation to convert synthetic equivalent widths to CCF parameters as an alternative to direct calibration. We have made our tool publicly available.
Lee, Myung W.; Collett, Timothy S.
2005-01-01
Physical properties of gas-hydrate-bearing sediments depend on the pore-scale interaction between gas hydrate and porous media as well as the amount of gas hydrate present. Well log measurements such as proton nuclear magnetic resonance (NMR) relaxation and electromagnetic propagation tool (EPT) techniques depend primarily on the bulk volume of gas hydrate in the pore space irrespective of the pore-scale interaction. However, elastic velocities or permeability depend on how gas hydrate is distributed in the pore space as well as the amount of gas hydrate. Gas-hydrate saturations estimated from NMR and EPT measurements are free of adjustable parameters; thus, the estimations are unbiased estimates of gas hydrate if the measurement is accurate. However, the amount of gas hydrate estimated from elastic velocities or electrical resistivities depends on many adjustable parameters and models related to the interaction of gas hydrate and porous media, so these estimates are model dependent and biased. NMR, EPT, elastic-wave velocity, electrical resistivity, and permeability measurements acquired in the Mallik 5L-38 well in the Mackenzie Delta, Canada, show that all of the well log evaluation techniques considered provide comparable gas-hydrate saturations in clean (low shale content) sandstone intervals with high gas-hydrate saturations. However, in shaly intervals, estimates from log measurement depending on the pore-scale interaction between gas hydrate and host sediments are higher than those estimates from measurements depending on the bulk volume of gas hydrate.
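One widely used simplification of the NMR approach described above (a hedged sketch; the actual Mallik 5L-38 evaluation compares several techniques, and the helper below is ours) treats gas hydrate as contributing no NMR relaxation signal, so the deficit between total porosity, e.g. from a density log, and NMR-derived porosity is attributed to hydrate:

```python
def hydrate_saturation_nmr(phi_total, phi_nmr):
    """Gas-hydrate saturation from the NMR porosity deficit.
    phi_total: total porosity (fraction), e.g. from a density log.
    phi_nmr:   NMR-derived porosity, sensing only liquid-filled pores.
    Assumes hydrate is invisible to the NMR measurement, an
    illustrative simplification."""
    if not 0.0 < phi_total <= 1.0:
        raise ValueError("phi_total must be a fraction in (0, 1]")
    return (phi_total - phi_nmr) / phi_total
```

Because no rock-physics model parameters enter this ratio, the estimate is parameter-free in the sense the abstract describes, unlike velocity- or resistivity-based saturation models.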
NASA Technical Reports Server (NTRS)
Saleeb, A. F.; Chang, T. Y. P.; Wilt, T.; Iskovitz, I.
1989-01-01
The research work performed during the past year on finite element implementation and computational techniques pertaining to high temperature composites is outlined. In the present research, two main issues are addressed: efficient geometric modeling of composite structures and expedient numerical integration techniques dealing with constitutive rate equations. In the first issue, mixed finite elements for modeling laminated plates and shells were examined in terms of numerical accuracy, locking property and computational efficiency. Element applications include (currently available) linearly elastic analysis and future extension to material nonlinearity for damage predictions and large deformations. On the material level, various integration methods to integrate nonlinear constitutive rate equations for finite element implementation were studied. These include explicit, implicit and automatic subincrementing schemes. In all cases, examples are included to illustrate the numerical characteristics of various methods that were considered.
Highly Reducing Partitioning Experiments Relevant to the Planet Mercury
NASA Technical Reports Server (NTRS)
Rowland, Rick, II; Vander Kaaden, Kathleen E.; McCubbin, Francis M.; Danielson, Lisa R.
2017-01-01
With the data returned from the MErcury Surface Space ENvironment GEochemistry and Ranging (MESSENGER) mission, there are now numerous constraints on the physical and chemical properties of Mercury, including its surface composition. The high S and low FeO contents observed by MESSENGER on the planet's surface suggest a low oxygen fugacity for the planetary materials. Estimates of the oxygen fugacity for mercurian magmas are approximately 3-7 log units below the Iron-Wüstite (Fe-FeO) oxygen buffer, several orders of magnitude more reducing than for other terrestrial bodies for which we have data, such as the Earth, Moon, or Mars. Most of our understanding of elemental partitioning behavior comes from observations made on terrestrial rocks, but Mercury's oxygen fugacity is far outside the conditions of those samples. With limited oxygen available, lithophile elements may instead exhibit chalcophile, halophile, or siderophile behavior. Furthermore, very few natural samples of rocks that formed under reducing conditions are available in our collections (e.g., enstatite chondrites, achondrites, aubrites). With this limited amount of material, we must perform experiments to determine the partitioning behavior of typically lithophile elements as a function of decreasing oxygen fugacity. Experiments are being conducted at 4 GPa in an 880-ton multi-anvil press, at temperatures up to 1850 °C. The compositions of the starting materials were selected so that the final run products contain metal, silicate melt, and sulfide melt phases. Oxygen fugacity is controlled in the experiments by adding silicon metal to the samples, using the Si-SiO2 oxygen buffer, which is approximately 5 log units more reducing than the Fe-FeO oxygen buffer at our temperatures of interest. The target silicate melt composition is diopside (CaMgSi2O6), because measured surface compositions indicate partial melting of a pyroxene-rich mantle.
Elements detected on Mercury's surface by MESSENGER (K, Na, Fe, Ti, Cl, Al, Cr, Mn, U, Th) and other geochemically relevant elements (P, F, H, N, C, Co, Ni, Mo, Ce, Nd, Sm, Eu, Gd, Dy, Yb) are added to the starting composition at trace abundances (approximately 500 ppm) so that they are close enough to infinite dilution to follow Henry's law of trace elements, and their partitioning behavior can be measured between the metal, silicate, and sulfide phases. The results of these experiments will allow us to assess the thermal and magmatic evolution of the planet Mercury from a geochemical standpoint.
2017-10-01
[Report documentation fragment] Title (partial): "...Intensive Care Unit Decreases Hospital Stay, Improves Mental Health, and Physical Performance." A multicenter study of the effect of in-patient exercise training on length of hospitalization, mental health, and physical performance in burned patients. Author: Oscar E. Suman, PhD. Proposal Log Number 13214039; Award Number W81XWH-14.
Method of Individual Forecasting of Technical State of Logging Machines
NASA Astrophysics Data System (ADS)
Kozlov, V. G.; Gulevsky, V. A.; Skrypnikov, A. V.; Logoyda, V. S.; Menzhulova, A. S.
2018-03-01
Developing a model that evaluates the possibility of failure requires knowledge of the regularities with which the technical condition parameters of machines in service change. To study these regularities, it became necessary to develop stochastic models that take into account the physical essence of the processes of destruction of the machines' structural elements, the technology of their production, their degradation, the stochastic properties of the technical state parameters, and the conditions and modes of operation.
Antisense RNA that Affects Rhodopseudomonas palustris Quorum-Sensing Signal Receptor Expression
2012-01-01
antisense molecules were produced, we performed a Northern blot analysis with RNA harvested from wild-type and rpaR-mutant R. palustris cells by using... aeruginosa, cells were grown to late-log phase, harvested by centrifugation, suspended in SDS/PAGE buffer, and lysed by boiling and sonication. Cell... a selectable DNA fragment. Gene 29:303-313. 17. Egland KA, Greenberg EP (1999) Quorum sensing in Vibrio fischeri: Elements of the luxI promoter. Mol
NASA Astrophysics Data System (ADS)
Sokołowska, B.; Skąpska, S.; Fonberg-Broczek, M.; Niezgoda, J.; Chotkiewicz, M.; Dekowska, A.; Rzoska, S. J.
2013-03-01
Alicyclobacillus acidoterrestris, a thermoacidophilic and spore-forming bacterium, survives the typical pasteurization process and can cause the spoilage of juices, producing compounds associated with a disinfectant-like odour (guaiacol, 2,6-dibromophenol, 2,6-dichlorophenol). Therefore, the use of other, more effective techniques such as high hydrostatic pressure (HHP) is considered for preserving juices. The aim of this study was to search for factors affecting the resistance of A. acidoterrestris spores to HHP. A baroprotective effect of increased solute concentration in apple juice on A. acidoterrestris spores during high pressure processing was observed. During the 45 min pressurization (200 MPa, 50°C) of the spores in concentrated apple juice (71.1°Bx), no significant changes were observed in their number. However, in the juices with a soluble solids content of 35.7, 23.6 and 11.2°Bx, the reduction in spores was 1.3-2.4 log, 2.6-3.3 log and 2.8-4.0 log, respectively. No clear effect of the age of the spores on survival under high-pressure conditions was found. Spores surviving pressurization and subjected to subsequent HHP treatment showed increased resistance to pressure, by as much as 2.0 log.
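The spore reductions reported here use the standard decimal (log10) reduction, the number of factor-of-ten decreases from the initial count to the surviving count:

```python
import math

def log_reduction(n0, n):
    """Decimal log reduction of a microbial population:
    log10(n0 / n), where n0 is the initial count and n the
    surviving count (same units, e.g. CFU/mL)."""
    if n0 <= 0 or n <= 0:
        raise ValueError("counts must be positive")
    return math.log10(n0 / n)

# A 4.0-log reduction corresponds to 99.99 % inactivation.
```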
Chouteau, Philippe
2009-06-01
Two ground-dwelling coua species, Coquerel's Coua Coua coquereli and Giant Coua Coua gigas, live in sympatry in the dry forest of Madagascar. These birds are typically insectivorous and mainly feed at ground level. The two species differ in size but have the same morphology, suggesting they have the same physical attributes for foraging and prey capture. To test whether the two species have the same foraging behaviour, and to determine how habitat disturbance due to logging could affect their foraging behaviour, I compared and analysed the foraging strategies of both species in two different dry forest habitats: unlogged and logged. The two species differed in their foraging behaviour between the two habitats, mainly in their ability to climb into the vegetation and in the techniques they used. Coquerel's Coua used gleaning and probing more often in the unlogged forest, while Giant Coua used lunging more often in this habitat. The Giant Coua also used leaves as a substrate more often in the logged forest. Some modifications in diet were also recorded. These results suggest that anthropogenic disturbance of the forest does influence the foraging behaviour of the terrestrial coua species living in the dry forest of Madagascar.
Pilger, Daniel; von Sonnleithner, Christoph; Bertelmann, Eckart; Joussen, Antonia M; Torun, Necip
2016-10-01
To explore the feasibility of femtosecond laser-assisted descemetorhexis (DR) to facilitate Descemet membrane endothelial keratoplasty (DMEK) surgery. Six pseudophakic patients suffering from Fuchs' endothelial dystrophy underwent femtosecond laser-assisted DMEK surgery. DR was performed using the LenSx femtosecond laser, followed by manual removal of the Descemet membrane. Optical coherence tomography images were used to measure DR parameters. Patients were followed up for 1 month to examine best corrected visual acuity, endothelial cell loss, flap detachment, and the structure of the anterior chamber of the eye. The diameter of the DR approximated the intended diameter closely [mean error of 34 μm (0.45%) and 54 μm (0.67%) in the x- and y-diameter, respectively] and did not require manual correction. The median visual acuity improved from 0.4 logMAR (range 0.6-0.4 logMAR) preoperatively to 0.2 logMAR (range 0-0.4 logMAR) postoperatively. The median endothelial cell loss was 22% (range 7%-34%). No clinically significant flap detachments were noted. All patients had clear corneas after surgery, and no side effects or damage to structures of the anterior chamber were noted. Femtosecond laser-assisted DR is a safe and precise method for facilitating DMEK surgery.
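The logMAR scale used above is the negative base-10 logarithm of the decimal visual acuity, so lower logMAR values mean better vision. A minimal sketch of the conversion, with hypothetical acuity values for illustration:

```python
import math

def logmar_from_decimal(va: float) -> float:
    """logMAR = -log10(decimal visual acuity); 1.0 decimal acuity = 0.0 logMAR."""
    return -math.log10(va) + 0.0  # + 0.0 normalizes -0.0 to 0.0

# Hypothetical decimal acuities: 0.4 decimal (Snellen 20/50) is about 0.4 logMAR.
print(round(logmar_from_decimal(0.4), 2))  # 0.4
```

On this scale the reported improvement from 0.4 to 0.2 logMAR corresponds to roughly a 1.6-fold gain in decimal acuity.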
Sludge bio-drying: Effective to reduce both antibiotic resistance genes and mobile genetic elements.
Zhang, Junya; Sui, Qianwen; Tong, Juan; Buhe, Chulu; Wang, Rui; Chen, Meixue; Wei, Yuansong
2016-12-01
Sewage sludge is considered one of the major contributors to the increased environmental burden of ARGs. Sludge bio-drying has been increasingly adopted due to its faster sludge reduction compared with composting. The fate of ARGs during full-scale sludge bio-drying was investigated to determine whether it could effectively reduce ARGs, and the contributions of the bacterial community, horizontal gene transfer (HGT) through mobile genetic elements (MGEs), and co-selection from heavy metals to ARGs profiles were discussed in detail. Two piles with different aeration strategies (Pile I, the improved, and Pile II, the control) were operated to elucidate the effects of aeration strategy on ARGs profiles. Results showed that sludge bio-drying could effectively reduce most of the targeted ARGs (0.4-3.1 logs) and MGEs (0.8-3.3 logs) under the improved aeration strategy, which also enhanced both the sludge bio-drying performance and ARGs reduction. The enrichment of ARGs including ermF, tetX and sulII could be well explained by the evolution of bioavailable heavy metals, not HGT through MGEs, and their potential host bacteria mainly existed in Bacteroidetes. Although changes in the bacterial community contributed the most to ARGs profiles, HGT through MGEs deserves more attention, especially in the thermophilic stage of sludge bio-drying.
Shao, Shuai; Hu, Bifeng; Fu, Zhiyi; Wang, Jiayu; Lou, Ge; Zhou, Yue; Jin, Bin; Li, Yan; Shi, Zhou
2018-06-12
Trace element pollution has attracted a lot of attention worldwide. However, it is difficult to identify and apportion the sources of multiple element pollutants over large areas because of the considerable spatial complexity and variability in the distribution of trace elements in soil. In this study, we collected a total of 2051 topsoil (0-20 cm) samples, and analyzed the general pollution status of soils from the Yangtze River Delta, Southeast China. We applied principal component analysis (PCA), a finite mixture distribution model (FMDM), and geostatistical tools to identify and quantitatively apportion the sources of seven trace elements (chromium (Cr), cadmium (Cd), mercury (Hg), copper (Cu), zinc (Zn), nickel (Ni), and arsenic (As)) in soil. The PCA results indicated that the trace elements in soil in the study area were mainly from natural, mixed, and industrial sources. The FMDM likewise fitted three sub log-normal distributions. The results from the two models were quite similar: Cr, As, and Ni were mainly from natural sources caused by parent material weathering; Cd, Cu, and Zn were mainly from mixed sources, with a considerable portion from anthropogenic activities such as traffic pollutants, domestic garbage, and agricultural inputs; and Hg was mainly from industrial wastes and pollutants.
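The PCA step described above reduces a samples-by-elements concentration table to a few orthogonal components whose loadings suggest common sources. A minimal sketch on synthetic data (the two latent "sources", the loading matrix, and all numbers are invented for illustration, not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a soil-geochemistry table: 200 samples x 4 elements,
# where two latent "sources" drive correlated element concentrations.
sources = rng.normal(size=(200, 2))
loadings = np.array([[1.0, 0.1, 0.9, 0.0],
                     [0.0, 1.0, 0.1, 0.8]])
X = sources @ loadings + 0.05 * rng.normal(size=(200, 4))

# PCA via SVD of the mean-centered data matrix; singular values come
# back in decreasing order, so the variance shares are already sorted.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)

print(np.round(explained, 2))  # first two components dominate
```

With two latent sources and small noise, the first two components capture nearly all of the variance; in real soil data, the rows of `Vt` (element loadings) are what get interpreted as natural, mixed, or industrial signatures.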
Techniques for forced response involving discrete nonlinearities. I - Theory. II - Applications
NASA Astrophysics Data System (ADS)
Avitabile, Peter; Callahan, John O.
Several new techniques developed for the forced response analysis of systems containing discrete nonlinear connection elements are presented and compared to the traditional methods. In particular, the techniques examined are the Equivalent Reduced Model Technique (ERMT), Modal Modification Response Technique (MMRT), and Component Element Method (CEM). The general theory of the techniques is presented, and applications are discussed with particular reference to the beam nonlinear system model using ERMT, MMRT, and CEM; frame nonlinear response using the three techniques; and comparison of the results obtained by using the ERMT, MMRT, and CEM models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatt, A.
The 60th anniversary of the discovery of neutron activation analysis (NAA) by Hevesy and Levi is being celebrated in 1996. With the availability of nuclear reactors capable of producing fluxes on the order of 10¹² to 10¹⁴ n/cm²s, the development of high-resolution and high-efficiency conventional and anticoincidence gamma-ray detectors, multichannel pulse-height analyzers, and personal-computer-based software, NAA has become an extremely valuable analytical technique, especially for the simultaneous determination of multielement concentrations. This technique can be used in a number of ways, depending on the nature of the matrix, the major elements in the sample, and the elements of interest. In most cases, several elements can be determined without any chemical pretreatment of the sample; the technique is then called instrumental NAA (INAA). In other cases, an element can be concentrated from an interfering matrix prior to irradiation; the technique is then termed preconcentration NAA (PNAA). In the opposite instance, the irradiation is followed by a chemical separation of the desired element; the technique is then called radiochemical NAA (RNAA). All three forms of NAA can provide elemental concentrations of high accuracy and precision with excellent sensitivity. The number of research reactors in developing countries has increased steadily from 17 in 1955 through 71 in 1975 to 89 in 1995. Low-flux reactors such as SLOWPOKE and the Chinese MNSR are primarily used for NAA.
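The sensitivity of NAA at the quoted flux levels follows from the standard activation equation, A = N·σ·φ·(1 − e^(−λt)), where N is the number of target atoms, σ the capture cross-section, φ the neutron flux, and λ the decay constant of the product nuclide. A minimal sketch with hypothetical, order-of-magnitude numbers (not taken from the abstract):

```python
import math

def induced_activity(n_atoms, sigma_cm2, flux, half_life_s, t_irr_s):
    """Induced activity (decays/s) at end of irradiation:
    A = N * sigma * phi * (1 - exp(-lambda * t_irr))."""
    lam = math.log(2) / half_life_s
    return n_atoms * sigma_cm2 * flux * (1.0 - math.exp(-lam * t_irr_s))

# Hypothetical values: 1e18 target atoms, a 1-barn cross-section
# (1e-24 cm^2), a 1e13 n/cm^2/s flux, a 10-min half-life, and an
# irradiation of one half-life (saturation factor = 0.5).
a = induced_activity(1e18, 1e-24, 1e13, 600.0, 600.0)
print(f"{a:.3e} Bq")  # 5.000e+06 Bq
```

The (1 − e^(−λt)) saturation factor is why short-lived products are irradiated briefly and counted quickly, while long-lived products repay longer irradiations at the higher end of the quoted flux range.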