Sample records for method produces consistent

  1. An algebraic method for constructing stable and consistent autoregressive filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, University Park, PA 16802; Hong, Hoon, E-mail: hong@ncsu.edu

    2015-02-15

    In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams–Bashforth methods of order two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set, as opposed to many standard, regression-based parameterization methods; it takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of a stable and consistent AR model and simultaneously produces the parameters for the AR models. In our numerical examples with two chaotic time series with different characteristics of decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictive skill and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces an improved short-time prediction relative to the linear regression-based AR models in forecasting a data set that characterizes the variability of the Madden–Julian Oscillation, a dominant tropical atmospheric wave pattern.
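
    The two notions in this abstract can be made concrete with a short sketch. The check below uses the textbook mapping of second-order Adams–Bashforth onto an AR(2) recurrence for the linear test equation du/dt = lam*u; it illustrates the stability/consistency conditions, not the paper's algebraic construction:

```python
import numpy as np

def ar2_is_stable(a1, a2):
    """AR(2) model u_n = a1*u_{n-1} + a2*u_{n-2} + noise is stable
    iff both roots of z^2 - a1*z - a2 = 0 lie inside the unit circle."""
    roots = np.roots([1.0, -a1, -a2])
    return bool(np.all(np.abs(roots) < 1.0))

def ab2_consistent_coefficients(lam, dt):
    """Coefficients obtained by applying order-two Adams-Bashforth to the
    linear test equation du/dt = lam*u:
    u_{n+1} = (1 + 1.5*lam*dt)*u_n - 0.5*lam*dt*u_{n-1}."""
    return 1.0 + 1.5 * lam * dt, -0.5 * lam * dt

a1, a2 = ab2_consistent_coefficients(-1.0, 0.1)  # (0.85, 0.05)
print(ar2_is_stable(a1, a2))  # True: this step size admits a stable, consistent AR(2)
```

    For a decaying mode (lam < 0), sweeping dt in such a check recovers the kind of admissible time-step interval the abstract refers to.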

  2. Electrodeposition of biaxially textured layers on a substrate

    DOEpatents

    Bhattacharya, Raghu N; Phok, Sovannary; Spagnol, Priscila; Chaudhuri, Tapas

    2013-11-19

    Methods of producing one or more biaxially textured layers on a substrate, and articles produced by the methods, are disclosed. An exemplary method may comprise electrodepositing on the substrate a precursor material selected from the group consisting of rare earths, transition metals, actinides, lanthanides, and oxides thereof. An exemplary article (150) may comprise a biaxially textured base material (130) and at least one biaxially textured layer (110) selected from the group consisting of rare earths, transition metals, actinides, lanthanides, and oxides thereof. The at least one biaxially textured layer (110) is formed by electrodeposition on the biaxially textured base material (130).

  3. Standardization, evaluation and early-phase method validation of an analytical scheme for batch-consistency N-glycosylation analysis of recombinant produced glycoproteins.

    PubMed

    Zietze, Stefan; Müller, Rainer H; Brecht, René

    2008-03-01

    In order to set up a batch-to-batch-consistency analytical scheme for N-glycosylation analysis, several sample preparation steps, including enzyme digestions and fluorophore labelling, and two HPLC methods were established. The whole method scheme was standardized, evaluated and validated according to the requirements for analytical testing in early clinical drug development, using a recombinantly produced reference glycoprotein (RGP). Standardization of the methods was achieved through clearly defined standard operating procedures. During evaluation of the methods, the major interest was the determination of oligosaccharide losses within the analytical scheme. Validation of the methods was performed with respect to specificity, linearity, repeatability, LOD and LOQ. Because reference N-glycan standards were not available, a statistical approach was chosen to derive accuracy from the linearity data. After finishing the validation procedure, defined limits for method variability could be calculated, and differences observed in consistency analysis could be separated into significant and incidental ones.
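
    The LOD/LOQ validation mentioned here is commonly done with the ICH-style calibration-curve estimates below; this is a generic sketch of that convention, not the authors' exact procedure:

```python
def lod_loq(sigma, slope):
    """ICH-style detection and quantitation limits from calibration data:
    LOD = 3.3*sigma/S and LOQ = 10*sigma/S, where sigma is the standard
    deviation of the response and S the slope of the calibration curve."""
    return 3.3 * sigma / slope, 10.0 * sigma / slope

print(lod_loq(0.5, 2.0))  # approximately (0.825, 2.5)
```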

  4. Consistency of Rasch Model Parameter Estimation: A Simulation Study.

    ERIC Educational Resources Information Center

    van den Wollenberg, Arnold L.; And Others

    1988-01-01

    The unconditional (simultaneous) maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum chi-square estimation procedure produces unbiased…

  5. Monte Carlo simulation of the radiant field produced by a multiple-lamp quartz heating system

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.

    1991-01-01

    A method is developed for predicting the radiant heat flux distribution produced by a reflected bank of tungsten-filament tubular-quartz radiant heaters. The method is correlated with experimental results from two cases, one consisting of a single lamp and a flat reflector and the other consisting of a single lamp and a parabolic reflector. The simulation methodology, computer implementation, and experimental procedures are discussed. Analytical refinements necessary for comparison with experiment are discussed and applied to a multilamp, common reflector heating system.

  6. A comparison of five standard methods for evaluating image intensity uniformity in partially parallel imaging MRI

    PubMed Central

    Goerner, Frank L.; Duong, Timothy; Stafford, R. Jason; Clarke, Geoffrey D.

    2013-01-01

    Purpose: To investigate the utility of five different standard measurement methods for determining image uniformity for partially parallel imaging (PPI) acquisitions in terms of consistency across a variety of pulse sequences and reconstruction strategies. Methods: Images were produced with a phantom using a 12-channel head matrix coil in a 3T MRI system (TIM TRIO, Siemens Medical Solutions, Erlangen, Germany). Images produced using echo-planar, fast spin echo, gradient echo, and balanced steady state free precession pulse sequences were evaluated. Two different PPI reconstruction methods were investigated: the generalized autocalibrating partially parallel acquisition algorithm (GRAPPA) and modified sensitivity encoding (mSENSE), with acceleration factors (R) of 2, 3, and 4. Additionally, images were acquired with conventional, two-dimensional Fourier imaging methods (R = 1). Five measurement methods of uniformity, recommended by the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA), were considered. The methods investigated were (1) an ACR method and (2) a NEMA method for calculating the peak deviation nonuniformity, (3) a modification of a NEMA method used to produce a gray scale uniformity map, (4) determining the normalized absolute average deviation uniformity, and (5) a NEMA method that focused on 17 areas of the image to measure uniformity. Changes in uniformity as a function of reconstruction method at the same R-value were also investigated. Two-way analysis of variance (ANOVA) was used to determine whether R-value or reconstruction method had a greater influence on signal intensity uniformity measurements for partially parallel MRI. Results: Two of the methods studied had consistently negative slopes when signal intensity uniformity was plotted against R-value. 
The results obtained comparing mSENSE against GRAPPA found no consistent difference between the two reconstructions with regard to signal intensity uniformity. The two-way ANOVA results suggest that R-value and pulse sequence type have the largest influence on uniformity, while the PPI reconstruction method has relatively little effect. Conclusions: Two of the methods of measuring signal intensity uniformity, described by the NEMA MRI standards, consistently indicated a decrease in uniformity with an increase in R-value. The other methods investigated did not demonstrate consistent results for evaluating signal uniformity in MR images obtained by partially parallel methods. However, because the spatial distribution of noise affects uniformity, it is recommended that additional uniformity quality metrics be investigated for partially parallel MR images. PMID:23927345
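
    The peak-deviation measure referenced in this abstract reduces to a one-line formula over the maximum and minimum ROI signals. The sketch below uses the commonly cited NEMA/ACR form of percent integral uniformity, not a formula quoted from this paper:

```python
def percent_integral_uniformity(s_max, s_min):
    """Peak-deviation (percent integral) uniformity as commonly defined
    in the NEMA/ACR MRI standards:
    PIU = 100 * (1 - (Smax - Smin) / (Smax + Smin))."""
    return 100.0 * (1.0 - (s_max - s_min) / (s_max + s_min))

print(percent_integral_uniformity(105.0, 95.0))  # ~95.0 (a fairly uniform image)
```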

  7. Improved Methods to Produce Tissue-Engineered Skin Substitutes Suitable for the Permanent Closure of Full-Thickness Skin Injuries

    PubMed Central

    Larouche, Danielle; Cantin-Warren, Laurence; Desgagné, Maxime; Guignard, Rina; Martel, Israël; Ayoub, Akram; Lavoie, Amélie; Gauvin, Robert; Auger, François A.; Moulin, Véronique J.; Germain, Lucie

    2016-01-01

    There is a clinical need for skin substitutes to replace full-thickness skin loss. Our group has developed a bilayered skin substitute produced from the patient's own fibroblasts and keratinocytes, referred to as the Self-Assembled Skin Substitute (SASS). After cell isolation and expansion, the current time required to produce SASS is 45 days. We aimed to optimize the manufacturing process to standardize the production of SASS and to reduce production time. The new approach consisted of seeding keratinocytes on a fibroblast-derived tissue sheet before its detachment from the culture plate. Four days following keratinocyte seeding, the resulting tissue was stacked on two fibroblast-derived tissue sheets and cultured at the air–liquid interface for 10 days. The resulting total production time was 31 days. An alternative method adapted to more contractile fibroblasts was also developed; it consisted of adding a peripheral frame before seeding fibroblasts in the culture plate. SASSs produced by both new methods shared similar histology, contractile behavior in vitro, and in vivo evolution after grafting onto mice when compared with SASSs produced by the 45-day standard method. In conclusion, the new approach for the production of high-quality human skin substitutes should allow earlier autologous grafting for the treatment of severely burned patients. PMID:27872793

  8. Controlling microbial contamination during hydrolysis of AFEX-pretreated corn stover and switchgrass: Effects on hydrolysate composition, microbial response and fermentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Serate, Jose; Xie, Dan; Pohlmann, Edward

    Microbial conversion of lignocellulosic feedstocks into biofuels remains an attractive means to produce sustainable energy. It is essential to produce lignocellulosic hydrolysates in a consistent manner in order to study microbial performance in different feedstock hydrolysates. Because of the potential to introduce microbial contamination from the untreated biomass or at various points during the process, it can be difficult to control sterility during hydrolysate production. In this study, we compared hydrolysates produced from AFEX-pretreated corn stover and switchgrass using two different methods to control contamination: either by autoclaving the pretreated feedstocks prior to enzymatic hydrolysis, or by introducing antibiotics during the hydrolysis of non-autoclaved feedstocks. We then performed extensive chemical analysis, chemical genomics, and comparative fermentations to evaluate any differences between these two different methods used for producing corn stover and switchgrass hydrolysates. Autoclaving the pretreated feedstocks could eliminate the contamination for a variety of feedstocks, whereas the antibiotic gentamicin was unable to control contamination consistently during hydrolysis. Compared to the addition of gentamicin, autoclaving of biomass before hydrolysis had a minimal effect on mineral concentrations, and showed no significant effect on the two major sugars (glucose and xylose) found in these hydrolysates. However, autoclaving elevated the concentration of some furanic and phenolic compounds. Chemical genomics analyses using Saccharomyces cerevisiae strains indicated a high correlation between the AFEX-pretreated hydrolysates produced using these two methods within the same feedstock, indicating minimal differences between the autoclaving and antibiotic methods. Comparative fermentations with S. cerevisiae and Zymomonas mobilis also showed that autoclaving the AFEX-pretreated feedstocks had no significant effects on microbial performance in these hydrolysates. In conclusion, our results showed that autoclaving the pretreated feedstocks offered advantages over the addition of antibiotics for hydrolysate production. The autoclaving method produced a more consistent quality of hydrolysate.

  9. Controlling microbial contamination during hydrolysis of AFEX-pretreated corn stover and switchgrass: Effects on hydrolysate composition, microbial response and fermentation

    DOE PAGES

    Serate, Jose; Xie, Dan; Pohlmann, Edward; ...

    2015-11-14

    Microbial conversion of lignocellulosic feedstocks into biofuels remains an attractive means to produce sustainable energy. It is essential to produce lignocellulosic hydrolysates in a consistent manner in order to study microbial performance in different feedstock hydrolysates. Because of the potential to introduce microbial contamination from the untreated biomass or at various points during the process, it can be difficult to control sterility during hydrolysate production. In this study, we compared hydrolysates produced from AFEX-pretreated corn stover and switchgrass using two different methods to control contamination: either by autoclaving the pretreated feedstocks prior to enzymatic hydrolysis, or by introducing antibiotics during the hydrolysis of non-autoclaved feedstocks. We then performed extensive chemical analysis, chemical genomics, and comparative fermentations to evaluate any differences between these two different methods used for producing corn stover and switchgrass hydrolysates. Autoclaving the pretreated feedstocks could eliminate the contamination for a variety of feedstocks, whereas the antibiotic gentamicin was unable to control contamination consistently during hydrolysis. Compared to the addition of gentamicin, autoclaving of biomass before hydrolysis had a minimal effect on mineral concentrations, and showed no significant effect on the two major sugars (glucose and xylose) found in these hydrolysates. However, autoclaving elevated the concentration of some furanic and phenolic compounds. Chemical genomics analyses using Saccharomyces cerevisiae strains indicated a high correlation between the AFEX-pretreated hydrolysates produced using these two methods within the same feedstock, indicating minimal differences between the autoclaving and antibiotic methods. Comparative fermentations with S. cerevisiae and Zymomonas mobilis also showed that autoclaving the AFEX-pretreated feedstocks had no significant effects on microbial performance in these hydrolysates. In conclusion, our results showed that autoclaving the pretreated feedstocks offered advantages over the addition of antibiotics for hydrolysate production. The autoclaving method produced a more consistent quality of hydrolysate.

  10. A new method for testing the scale-factor performance of fiber optical gyroscope

    NASA Astrophysics Data System (ADS)

    Zhao, Zhengxin; Yu, Haicheng; Li, Jing; Li, Chao; Shi, Haiyang; Zhang, Bingxin

    2015-10-01

    The fiber-optic gyroscope (FOG) is a solid-state optical gyroscope with good environmental adaptability, which has been widely used in national defense, aviation, aerospace, and other civilian areas. In some applications, the FOG experiences environmental conditions such as vacuum, radiation, and vibration, and scale-factor performance is an important accuracy indicator. However, the scale-factor performance of a FOG under these conditions is difficult to test using conventional methods, as a turntable cannot operate in such environments. Based on the observation that the physical effect produced in a FOG by a sawtooth voltage signal under static conditions is consistent with the effect produced by a turntable in uniform rotation, a new method for testing the scale-factor performance of a FOG without a turntable is proposed in this paper. In this method, the test system consists of an external operational amplifier circuit and a FOG in which the modulation signal and the Y-waveguide are disconnected. The external circuit superimposes the externally generated sawtooth voltage signal on the FOG's modulation signal and applies the combined signal to the Y-waveguide. The test system can produce different equivalent angular velocities by changing the period of the sawtooth signal. The system model of a FOG with a superimposed external sawtooth is analyzed, leading to the conclusion that the equivalent input angular velocity produced by the sawtooth voltage signal has the same effect as an input angular velocity produced by a turntable. 
The relationship between the equivalent angular velocity and parameters such as the sawtooth period is presented, and a correction method for the equivalent angular velocity is derived by analyzing the influence of each parameter error. A comparative experiment between the proposed method and turntable calibration was conducted, and the scale-factor test results for the same FOG were consistent across the two methods. With the proposed method, the input angular velocity is the equivalent effect produced by a sawtooth voltage signal, so no turntable is needed to produce mechanical rotation; the method can therefore be used to test FOG performance under ambient conditions in which a turntable cannot operate.
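
    The ramp-for-rotation equivalence this abstract relies on can be sketched with the standard serrodyne relation for a 2*pi sawtooth phase ramp; the formula and parameter values below are illustrative assumptions, not quantities taken from the paper:

```python
def equivalent_rotation_rate(n, wavelength, diameter, period):
    """Equivalent rotation rate (rad/s) of a 2*pi sawtooth phase ramp in a
    fiber gyro: equating the ramp-induced phase difference (ramp slope
    2*pi/T times the loop transit time n*L/c) with the Sagnac phase
    (2*pi*L*D/(lambda*c))*Omega gives Omega = n*lambda/(D*T), where n is
    the fiber refractive index, D the coil diameter, and T the period."""
    return n * wavelength / (diameter * period)

# Illustrative numbers: n = 1.45, lambda = 1550 nm, D = 10 cm, T = 1 ms
print(equivalent_rotation_rate(1.45, 1.55e-6, 0.1, 1e-3))
```

    Shortening the sawtooth period raises the equivalent angular velocity, which is how the test system in the abstract sweeps the scale factor without mechanical rotation.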

  11. 14 CFR 29.605 - Fabrication methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Fabrication methods. 29.605 Section 29.605... STANDARDS: TRANSPORT CATEGORY ROTORCRAFT Design and Construction General § 29.605 Fabrication methods. (a) The methods of fabrication used must produce consistently sound structures. If a fabrication process...

  12. 14 CFR 29.605 - Fabrication methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Fabrication methods. 29.605 Section 29.605... STANDARDS: TRANSPORT CATEGORY ROTORCRAFT Design and Construction General § 29.605 Fabrication methods. (a) The methods of fabrication used must produce consistently sound structures. If a fabrication process...

  13. 14 CFR 29.605 - Fabrication methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Fabrication methods. 29.605 Section 29.605... STANDARDS: TRANSPORT CATEGORY ROTORCRAFT Design and Construction General § 29.605 Fabrication methods. (a) The methods of fabrication used must produce consistently sound structures. If a fabrication process...

  14. 14 CFR 29.605 - Fabrication methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Fabrication methods. 29.605 Section 29.605... STANDARDS: TRANSPORT CATEGORY ROTORCRAFT Design and Construction General § 29.605 Fabrication methods. (a) The methods of fabrication used must produce consistently sound structures. If a fabrication process...

  15. 14 CFR 29.605 - Fabrication methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Fabrication methods. 29.605 Section 29.605... STANDARDS: TRANSPORT CATEGORY ROTORCRAFT Design and Construction General § 29.605 Fabrication methods. (a) The methods of fabrication used must produce consistently sound structures. If a fabrication process...

  16. Does Methodological Guidance Produce Consistency? A Review of Methodological Consistency in Breast Cancer Utility Value Measurement in NICE Single Technology Appraisals.

    PubMed

    Rose, Micah; Rice, Stephen; Craig, Dawn

    2018-06-01

    Since 2004, National Institute for Health and Care Excellence (NICE) methodological guidance for technology appraisals has emphasised a strong preference for using the validated EuroQol 5-Dimensions (EQ-5D) quality-of-life instrument, measuring patient health status from patients or carers, and using the general public's preference-based valuation of different health states when assessing health benefits in economic evaluations. The aim of this study was to review all NICE single technology appraisals (STAs) for breast cancer treatments to explore consistency in the use of utility scores in light of NICE methodological guidance. A review of all published breast cancer STAs was undertaken using all publicly available STA documents for each included assessment. Utility scores were assessed for consistency with NICE-preferred methods and original data sources. Furthermore, academic assessment group work undertaken during the STA process was examined to evaluate the emphasis of NICE-preferred quality-of-life measurement methods. Twelve breast cancer STAs were identified, and many STAs used evidence that did not follow NICE's preferred utility score measurement methods. Recent STA submissions show companies using EQ-5D and mapping. Academic assessment groups rarely emphasized NICE-preferred methods, and queries about preferred methods were rare. While there appears to be a trend in recent STA submissions towards following NICE methodological guidance, historically STA guidance in breast cancer has generally not used NICE's preferred methods. Future STAs in breast cancer and reviews of older guidance should ensure that utility measurement methods are consistent with the NICE reference case to help produce consistent, equitable decision making.

  17. Method for producing and regenerating a synthetic CO.sub.2 acceptor

    DOEpatents

    Lancet, Michael S [Pittsburgh, PA; Curran, George P [Pittsburgh, PA; Gorin, Everett [San Rafael, CA

    1982-01-01

    A method for producing a synthetic CO.sub.2 acceptor by feeding a mixture of finely divided silica and at least one finely divided calcium compound selected from the group consisting of calcium oxide and calcium carbonate to a fluidized bed; operating the fluidized bed at suitable conditions to produce pellets of synthetic CO.sub.2 acceptor and recovering the pellets of synthetic CO.sub.2 acceptor from the fluidized bed. Optionally, spent synthetic CO.sub.2 acceptor can be charged to the fluidized bed to produce regenerated pellets of synthetic CO.sub.2 acceptor.

  18. Method for producing and regenerating a synthetic CO[sub 2] acceptor

    DOEpatents

    Lancet, M. S.; Curran, G. P.; Gorin, E.

    1982-05-18

    A method is described for producing a synthetic CO[sub 2] acceptor by feeding a mixture of finely divided silica and at least one finely divided calcium compound selected from the group consisting of calcium oxide and calcium carbonate to a fluidized bed; operating the fluidized bed at suitable conditions to produce pellets of synthetic CO[sub 2] acceptor and recovering the pellets of synthetic CO[sub 2] acceptor from the fluidized bed. Optionally, spent synthetic CO[sub 2] acceptor can be charged to the fluidized bed to produce regenerated pellets of synthetic CO[sub 2] acceptor. 1 fig.

  19. 14 CFR 27.605 - Fabrication methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Fabrication methods. 27.605 Section 27.605... STANDARDS: NORMAL CATEGORY ROTORCRAFT Design and Construction General § 27.605 Fabrication methods. (a) The methods of fabrication used must produce consistently sound structures. If a fabrication process (such as...

  20. 14 CFR 25.605 - Fabrication methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Fabrication methods. 25.605 Section 25.605... STANDARDS: TRANSPORT CATEGORY AIRPLANES Design and Construction General § 25.605 Fabrication methods. (a) The methods of fabrication used must produce a consistently sound structure. If a fabrication process...

  21. 14 CFR 27.605 - Fabrication methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Fabrication methods. 27.605 Section 27.605... STANDARDS: NORMAL CATEGORY ROTORCRAFT Design and Construction General § 27.605 Fabrication methods. (a) The methods of fabrication used must produce consistently sound structures. If a fabrication process (such as...

  22. 14 CFR 27.605 - Fabrication methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Fabrication methods. 27.605 Section 27.605... STANDARDS: NORMAL CATEGORY ROTORCRAFT Design and Construction General § 27.605 Fabrication methods. (a) The methods of fabrication used must produce consistently sound structures. If a fabrication process (such as...

  23. 14 CFR 25.605 - Fabrication methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Fabrication methods. 25.605 Section 25.605... STANDARDS: TRANSPORT CATEGORY AIRPLANES Design and Construction General § 25.605 Fabrication methods. (a) The methods of fabrication used must produce a consistently sound structure. If a fabrication process...

  24. 14 CFR 25.605 - Fabrication methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Fabrication methods. 25.605 Section 25.605... STANDARDS: TRANSPORT CATEGORY AIRPLANES Design and Construction General § 25.605 Fabrication methods. (a) The methods of fabrication used must produce a consistently sound structure. If a fabrication process...

  25. 14 CFR 27.605 - Fabrication methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Fabrication methods. 27.605 Section 27.605... STANDARDS: NORMAL CATEGORY ROTORCRAFT Design and Construction General § 27.605 Fabrication methods. (a) The methods of fabrication used must produce consistently sound structures. If a fabrication process (such as...

  26. 14 CFR 25.605 - Fabrication methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Fabrication methods. 25.605 Section 25.605... STANDARDS: TRANSPORT CATEGORY AIRPLANES Design and Construction General § 25.605 Fabrication methods. (a) The methods of fabrication used must produce a consistently sound structure. If a fabrication process...

  27. 14 CFR 27.605 - Fabrication methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Fabrication methods. 27.605 Section 27.605... STANDARDS: NORMAL CATEGORY ROTORCRAFT Design and Construction General § 27.605 Fabrication methods. (a) The methods of fabrication used must produce consistently sound structures. If a fabrication process (such as...

  28. 14 CFR 25.605 - Fabrication methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Fabrication methods. 25.605 Section 25.605... STANDARDS: TRANSPORT CATEGORY AIRPLANES Design and Construction General § 25.605 Fabrication methods. (a) The methods of fabrication used must produce a consistently sound structure. If a fabrication process...

  29. Development of cost-effective VARTM technology for repair and hardening design method and specifications for ALDOT contractor : phase 3.

    DOT National Transportation Integrated Search

    2013-04-01

    Resin infusion, a method of fabricating fiber reinforced polymer (FRP), has been shown to produce a stronger FRP of more consistent quality than other methods. It is a preferred method of fabrication in industries like automotive, aerospace, and ...

  30. 14 CFR 23.605 - Fabrication methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Fabrication methods. 23.605 Section 23.605... Fabrication methods. (a) The methods of fabrication used must produce consistently sound structures. If a... fabrication method must be substantiated by a test program. [Doc. No. 4080, 29 FR 17955, Dec. 18, 1964; 30 FR...

  31. 14 CFR 23.605 - Fabrication methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Fabrication methods. 23.605 Section 23.605... Fabrication methods. (a) The methods of fabrication used must produce consistently sound structures. If a... fabrication method must be substantiated by a test program. [Doc. No. 4080, 29 FR 17955, Dec. 18, 1964; 30 FR...

  32. 14 CFR 23.605 - Fabrication methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Fabrication methods. 23.605 Section 23.605... Fabrication methods. (a) The methods of fabrication used must produce consistently sound structures. If a... fabrication method must be substantiated by a test program. [Doc. No. 4080, 29 FR 17955, Dec. 18, 1964; 30 FR...

  33. 14 CFR 23.605 - Fabrication methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Fabrication methods. 23.605 Section 23.605... Fabrication methods. (a) The methods of fabrication used must produce consistently sound structures. If a... fabrication method must be substantiated by a test program. [Doc. No. 4080, 29 FR 17955, Dec. 18, 1964; 30 FR...

  34. 14 CFR 23.605 - Fabrication methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Fabrication methods. 23.605 Section 23.605... Fabrication methods. (a) The methods of fabrication used must produce consistently sound structures. If a... fabrication method must be substantiated by a test program. [Doc. No. 4080, 29 FR 17955, Dec. 18, 1964; 30 FR...

  35. Iron oxide and iron carbide particles produced by the polyol method

    NASA Astrophysics Data System (ADS)

    Yamada, Y.; Shimizu, R.; Kobayashi, Y.

    2016-12-01

    Iron oxide (γ-Fe2O3) and iron carbide (Fe3C) particles were produced by the polyol method. Ferrocene, which was employed as an iron source, was decomposed in a mixture of 1,2-hexadecanediol, oleylamine, and 1-octadecene. Particles were characterized using Mössbauer spectroscopy, X-ray diffraction, and transmission electron microscopy. It was found that oleylamine acted as a capping reagent, leading to uniformly sized (12-16 nm) particles consisting of γ-Fe2O3. On the other hand, 1-octadecene acted as a non-coordinating solvent and a carbon source, which led to particles consisting of Fe3C and α-Fe with various sizes.

  36. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duchaineau, M.; Wolinsky, M.; Sigeti, D.E.

    Real-time terrain rendering for interactive visualization remains a demanding task. We present a novel algorithm with several advantages over previous methods: our method is unusually stingy with polygons yet achieves real-time performance and is scalable to arbitrary regions and resolutions. The method provides a continuous terrain mesh of specified triangle count having provably minimum error in restricted but reasonably general classes of permissible meshes and error metrics. Our method provides an elegant solution to guaranteeing certain elusive types of consistency in scenes produced by multiple scene generators which share a common finest-resolution database but which otherwise operate entirely independently. This consistency is achieved by exploiting the freedom of choice of error metric allowed by the algorithm to provide, for example, multiple exact lines-of-sight in real time. Our methods rely on an off-line pre-processing phase to construct a multi-scale data structure consisting of triangular terrain approximations enhanced ("thickened") with world-space error information. In real time, this error data is efficiently transformed into screen space, where it is used to guide a greedy top-down triangle subdivision algorithm which produces the desired minimal-error continuous terrain mesh. Our algorithm has been implemented and operates at real-time rates.
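
    The greedy top-down subdivision described here can be caricatured in a few lines: keep a priority queue keyed on screen-space error and always split the worst triangle until a triangle budget is met. The error model below (each split yields two children with half the parent's error) is an assumption for illustration, not the paper's thickened world-space error bounds:

```python
import heapq
import itertools

def refine(root_errors, budget):
    """Greedy top-down subdivision: repeatedly split the triangle with the
    largest error until the mesh holds `budget` triangles.  Toy error
    model: each split replaces a triangle by two children, each carrying
    half of its error.  Returns the sorted per-triangle errors."""
    ids = itertools.count()
    # Max-heap via negated errors; ids break ties deterministically.
    heap = [(-err, next(ids)) for err in root_errors]
    heapq.heapify(heap)
    while len(heap) < budget:
        neg_err, _ = heapq.heappop(heap)
        child = -neg_err / 2.0
        heapq.heappush(heap, (-child, next(ids)))
        heapq.heappush(heap, (-child, next(ids)))
    return sorted(-e for e, _ in heap)

print(refine([8.0], 4))  # [2.0, 2.0, 2.0, 2.0]
```

    Because the worst triangle is always split first, the maximum error decreases monotonically as the triangle budget grows, which is the intuition behind the minimal-error guarantee claimed in the abstract.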

  17. New sulphiding method for steel and cast iron parts

    NASA Astrophysics Data System (ADS)

    Tarelnyk, V.; Martsynkovskyy, V.; Gaponova, O.; Konoplianchenko, Ie; Dovzyk, M.; Tarelnyk, N.; Gorovoy, S.

    2017-08-01

    A new method is proposed for sulphiding the surfaces of steel and cast iron parts by electroerosion alloying (EEA) with a special electrode. The method is characterized in that, while manufacturing the electrode, at least one recess is formed on its surface in any known manner (punching, threading, pulling, etc.) and filled with sulfur as a consistent material; EEA is then performed with the obtained electrode without waiting for the consistent material to dry.

  18. Context-specific metabolic networks are consistent with experiments.

    PubMed

    Becker, Scott A; Palsson, Bernhard O

    2008-05-16

    Reconstructions of cellular metabolism are publicly available for a variety of different microorganisms and some mammalian genomes. To date, these reconstructions are "genome-scale" and strive to include all reactions implied by the genome annotation, as well as those with direct experimental evidence. Clearly, many of the reactions in a genome-scale reconstruction will not be active under particular conditions or in a particular cell type. Methods to tailor these comprehensive genome-scale reconstructions into context-specific networks will aid predictive in silico modeling for a particular situation. We present a method called Gene Inactivity Moderated by Metabolism and Expression (GIMME) to achieve this goal. The GIMME algorithm uses quantitative gene expression data and one or more presupposed metabolic objectives to produce the context-specific reconstruction that is most consistent with the available data. Furthermore, the algorithm provides a quantitative inconsistency score indicating how consistent a set of gene expression data is with a particular metabolic objective. We show that this algorithm produces results consistent with biological experiments and intuition for adaptive evolution of bacteria, rational design of metabolic engineering strains, and human skeletal muscle cells. This work represents progress towards producing constraint-based models of metabolism that are specific to the conditions where the expression profiling data is available.
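
    The quantitative inconsistency score described above can be illustrated with a toy calculation: reactions whose expression falls below the threshold are penalized in proportion to the shortfall and the flux they must carry. This is only the scoring step; the actual GIMME algorithm finds the required fluxes by solving an optimization problem against the metabolic objective. Reaction names, expression levels, the threshold, and fluxes below are all invented.

```python
# Hypothetical expression data and required fluxes for three reactions.
THRESHOLD = 12.0
expression = {"rxn_A": 25.0, "rxn_B": 8.0, "rxn_C": 3.0}
required_flux = {"rxn_A": 1.0, "rxn_B": 0.5, "rxn_C": 2.0}

def inconsistency_score(expr, flux, threshold):
    # Sum over below-threshold reactions: (shortfall) x (flux carried).
    score = 0.0
    for rxn, level in expr.items():
        if level < threshold:
            score += (threshold - level) * flux.get(rxn, 0.0)
    return score

print(inconsistency_score(expression, required_flux, THRESHOLD))
```

    A score of zero would mean the expression data fully supports the presupposed objective; larger scores indicate greater inconsistency.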

  19. Nonlinear scalar forcing based on a reaction analogy

    NASA Astrophysics Data System (ADS)

    Daniel, Don; Livescu, Daniel

    2017-11-01

    We present a novel reaction analogy (RA) based forcing method for generating stationary passive scalar fields in incompressible turbulence. The new method can produce more general scalar PDFs (e.g. double-delta) than current methods, while ensuring that scalar fields remain bounded, unlike existent forcing methodologies that can potentially violate naturally existing bounds. Such features are useful for generating initial fields in non-premixed combustion or for studying non-Gaussian scalar turbulence. The RA method mathematically models hypothetical chemical reactions that convert reactants in a mixed state back into their pure unmixed components. Various types of chemical reactions are formulated and the corresponding mathematical expressions derived. For large values of the scalar dissipation rate, the method produces statistically steady double-delta scalar PDFs. Gaussian scalar statistics are recovered for small values of the scalar dissipation rate. In contrast, classical forcing methods consistently produce unimodal Gaussian scalar fields. The ability of the new method to produce fully developed scalar fields is discussed using 256³, 512³, and 1024³ periodic box simulations.
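
    The bounded, bimodal behavior can be caricatured in zero dimensions (this is a sketch, not the paper's Navier-Stokes forcing): a reaction-like term r·s·(1 − s²) drives a mixed scalar back toward its pure states s = ±1 while the bounds |s| ≤ 1 are preserved. The rate constant, time step, and initial distribution are invented.

```python
import random

random.seed(0)

def evolve(s, r=5.0, dt=0.01, steps=2000):
    # Forward-Euler integration of ds/dt = r*s*(1 - s^2): the mixed state
    # s = 0 is unstable, the pure states s = +/-1 are stable attractors.
    for _ in range(steps):
        s += dt * r * s * (1.0 - s * s)
        s = max(-1.0, min(1.0, s))  # enforce the physical bounds
    return s

# Start from a well-mixed ensemble and let the "reaction" demix it.
samples = [evolve(random.uniform(-0.5, 0.5)) for _ in range(200)]
near_plus = sum(1 for s in samples if s > 0.9)
near_minus = sum(1 for s in samples if s < -0.9)
print(near_plus + near_minus, len(samples))
```

    With a strong reaction term the ensemble piles up near ±1, i.e. a double-delta PDF; weakening it (small r, analogous to small scalar dissipation rate in the paper) leaves the samples spread between the bounds.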

  20. Methods of refining natural oils and methods of producing fuel compositions

    DOEpatents

    Firth, Bruce E; Kirk, Sharon E; Gavaskar, Vasudeo S

    2015-11-04

    A method of refining a natural oil includes: (a) providing a feedstock that includes a natural oil; (b) reacting the feedstock in the presence of a metathesis catalyst to form a metathesized product that includes olefins and esters; (c) passivating residual metathesis catalyst with an agent selected from the group consisting of phosphorous acid, phosphinic acid, and a combination thereof; (d) separating the olefins in the metathesized product from the esters in the metathesized product; and (e) transesterifying the esters in the presence of an alcohol to form a transesterified product and/or hydrogenating the olefins to form a fully or partially saturated hydrogenated product. Methods for suppressing isomerization of olefin metathesis products produced in a metathesis reaction, and methods of producing fuel compositions are described.

  1. SPITZER SECONDARY ECLIPSE DEPTHS WITH MULTIPLE INTRAPIXEL SENSITIVITY CORRECTION METHODS: OBSERVATIONS OF WASP-13b, WASP-15b, WASP-16b, WASP-62b, AND HAT-P-22b

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kilpatrick, Brian M.; Tucker, Gregory S.; Lewis, Nikole K.

    2017-01-01

    We measure the 4.5 μm thermal emission of five transiting hot Jupiters, WASP-13b, WASP-15b, WASP-16b, WASP-62b, and HAT-P-22b using channel 2 of the Infrared Array Camera (IRAC) on the Spitzer Space Telescope. Significant intrapixel sensitivity variations in Spitzer IRAC data require careful correction in order to achieve precision on the order of several hundred parts per million (ppm) for the measurement of exoplanet secondary eclipses. We determine eclipse depths by first correcting the raw data using three independent data reduction methods. The Pixel Gain Map (PMAP), Nearest Neighbors (NNBR), and Pixel Level Decorrelation (PLD) each correct for the intrapixel sensitivity effect in Spitzer photometric time-series observations. The results from each methodology are compared against each other to establish if they reach a statistically equivalent result in every case and to evaluate their ability to minimize uncertainty in the measurement. We find that all three methods produce reliable results. For every planet examined here NNBR and PLD produce results that are in statistical agreement. However, the PMAP method appears to produce results in slight disagreement in cases where the stellar centroid is not kept consistently on the most well characterized area of the detector. We evaluate the ability of each method to reduce the scatter in the residuals as well as in the correlated noise in the corrected data. The NNBR and PLD methods consistently minimize both white and red noise levels and should be considered reliable and consistent. The planets in this study span equilibrium temperatures from 1100 to 2000 K and have brightness temperatures that require either high albedo or efficient recirculation. However, it is possible that other processes such as clouds or disequilibrium chemistry may also be responsible for producing these brightness temperatures.

  2. Spitzer Secondary Eclipse Depths with Multiple Intrapixel Sensitivity Correction Methods: Observations of WASP-13b, WASP-15b, WASP-16b, WASP-62b, and HAT-P-22b

    NASA Astrophysics Data System (ADS)

    Kilpatrick, Brian M.; Lewis, Nikole K.; Kataria, Tiffany; Deming, Drake; Ingalls, James G.; Krick, Jessica E.; Tucker, Gregory S.

    2017-01-01

    We measure the 4.5 μm thermal emission of five transiting hot Jupiters, WASP-13b, WASP-15b, WASP-16b, WASP-62b, and HAT-P-22b using channel 2 of the Infrared Array Camera (IRAC) on the Spitzer Space Telescope. Significant intrapixel sensitivity variations in Spitzer IRAC data require careful correction in order to achieve precision on the order of several hundred parts per million (ppm) for the measurement of exoplanet secondary eclipses. We determine eclipse depths by first correcting the raw data using three independent data reduction methods. The Pixel Gain Map (PMAP), Nearest Neighbors (NNBR), and Pixel Level Decorrelation (PLD) each correct for the intrapixel sensitivity effect in Spitzer photometric time-series observations. The results from each methodology are compared against each other to establish if they reach a statistically equivalent result in every case and to evaluate their ability to minimize uncertainty in the measurement. We find that all three methods produce reliable results. For every planet examined here NNBR and PLD produce results that are in statistical agreement. However, the PMAP method appears to produce results in slight disagreement in cases where the stellar centroid is not kept consistently on the most well characterized area of the detector. We evaluate the ability of each method to reduce the scatter in the residuals as well as in the correlated noise in the corrected data. The NNBR and PLD methods consistently minimize both white and red noise levels and should be considered reliable and consistent. The planets in this study span equilibrium temperatures from 1100 to 2000 K and have brightness temperatures that require either high albedo or efficient recirculation. However, it is possible that other processes such as clouds or disequilibrium chemistry may also be responsible for producing these brightness temperatures.

  3. Low-Temperature Catalytic Process To Produce Hydrocarbons From Sugars

    DOEpatents

    Cortright, Randy D.; Dumesic, James A.

    2005-11-15

    Disclosed is a method of producing hydrogen from oxygenated hydrocarbon reactants, such as methanol, glycerol, sugars (e.g. glucose and xylose), or sugar alcohols (e.g. sorbitol). The method takes place in the condensed liquid phase. The method includes the steps of reacting water and a water-soluble oxygenated hydrocarbon in the presence of a metal-containing catalyst. The catalyst contains a metal selected from the group consisting of Group VIIIB transition metals, alloys thereof, and mixtures thereof. The disclosed method can be run at lower temperatures than those used in the conventional steam reforming of alkanes.

  4. Random sampling and validation of covariance matrices of resonance parameters

    NASA Astrophysics Data System (ADS)

    Plevnik, Lucijan; Zerovnik, Gašper

    2017-09-01

    Analytically exact methods for random sampling of arbitrary correlated parameters are presented. Emphasis is given on one hand on the possible inconsistencies in the covariance data, concentrating on the positive semi-definiteness and consistent sampling of correlated inherently positive parameters, and on the other hand on optimization of the implementation of the methods itself. The methods have been applied in the program ENDSAM, written in the Fortran language, which, from a nuclear data library file for a chosen isotope in ENDF-6 format, produces an arbitrary number of new ENDF-6 files containing values of random samples of resonance parameters (in accordance with corresponding covariance matrices) in place of the original values. The source code for the program ENDSAM is available from the OECD/NEA Data Bank. The program works in the following steps: reads resonance parameters and their covariance data from the nuclear data library, checks whether the covariance data is consistent, and produces random samples of resonance parameters. The code has been validated with both realistic and artificial data to show that the produced samples are statistically consistent. Additionally, the code was used to validate covariance data in existing nuclear data libraries. A list of inconsistencies, observed in covariance data of resonance parameters in ENDF-VII.1, JEFF-3.2 and JENDL-4.0 is presented. For now, the work has been limited to resonance parameters; however, the methods presented are general and can in principle be extended to sampling and validation of any nuclear data.
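
    The two core steps, checking covariance consistency and drawing correlated samples, can be sketched in a few lines (this is not the ENDSAM code; the mean vector and covariance matrix are invented). A Cholesky factorization doubles as the consistency check: it fails exactly when the matrix is not positive definite.

```python
import math
import random

def cholesky(a):
    # Lower-triangular Cholesky factor of a symmetric matrix; raises
    # ValueError when the matrix is not positive definite, which serves
    # as the covariance-consistency check.
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = a[i][i] - s
                if d <= 0.0:
                    raise ValueError("covariance matrix is not positive definite")
                L[i][j] = math.sqrt(d)
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

def sample(mean, cov, rng):
    # Correlated Gaussian draw: x = mean + L z with z standard normal.
    L = cholesky(cov)
    z = [rng.gauss(0.0, 1.0) for _ in mean]
    return [m + sum(L[i][k] * z[k] for k in range(len(z)))
            for i, m in enumerate(mean)]

rng = random.Random(42)
mean = [1.0, 2.0]                   # illustrative resonance parameters
cov = [[0.04, 0.03], [0.03, 0.09]]  # sigmas 0.2 and 0.3, correlation 0.5
draws = [sample(mean, cov, rng) for _ in range(5000)]
m0 = sum(d[0] for d in draws) / len(draws)
print(round(m0, 1))
```

    A statistically consistent sampler should reproduce the input means and covariances in the limit of many draws, which is essentially the validation the abstract describes.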

  5. Slow-rotation dynamic SPECT with a temporal second derivative constraint.

    PubMed

    Humphries, T; Celler, A; Trummer, M

    2011-08-01

    Dynamic tracer behavior in the human body arises as a result of continuous physiological processes. Hence, the change in tracer concentration within a region of interest (ROI) should follow a smooth curve. The authors propose a modification to an existing slow-rotation dynamic SPECT reconstruction algorithm (dSPECT) with the goal of improving the smoothness of time activity curves (TACs) and other properties of the reconstructed image. The new method, denoted d2EM, imposes a constraint on the second derivative (concavity) of the TAC in every voxel of the reconstructed image, allowing it to change sign at most once. Further constraints are enforced to prevent other nonphysical behaviors from arising. The new method is compared with dSPECT using digital phantom simulations and experimental dynamic 99mTc-DTPA renal SPECT data, to assess any improvement in image quality. In both phantom simulations and healthy volunteer experiments, the d2EM method provides smoother TACs than dSPECT, with more consistent shapes in regions with dynamic behavior. Magnitudes of TACs within an ROI still vary noticeably in both dSPECT and d2EM images, but also in images produced using an OSEM approach that reconstructs each time frame individually, based on much more complete projection data. TACs produced by averaging over a region are similar using either method, even for small ROIs. Results for experimental renal data show expected behavior in images produced by both methods, with d2EM providing somewhat smoother mean TACs and more consistent TAC shapes. The d2EM method is successful in improving the smoothness of time activity curves obtained from the reconstruction, as well as improving consistency of TAC shapes within ROIs.
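
    On a discrete TAC the second-derivative constraint reduces to a check on second differences: their sign may change at most once. A minimal sketch (the sample curves are invented, and the real d2EM enforces this inside the reconstruction rather than as a post-hoc test):

```python
def second_differences(tac):
    # Discrete second derivative: tac[i+1] - 2*tac[i] + tac[i-1].
    return [tac[i + 1] - 2.0 * tac[i] + tac[i - 1]
            for i in range(1, len(tac) - 1)]

def satisfies_concavity_constraint(tac):
    # True when the (nonzero) second differences change sign at most once.
    signs = [d for d in second_differences(tac) if d != 0.0]
    changes = sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0.0)
    return changes <= 1

uptake_washout = [0.0, 1.0, 3.0, 6.0, 8.0, 9.0, 9.2]  # convex rise, concave plateau
wiggly = [0.0, 2.0, 2.0, 4.0, 4.0, 6.0, 6.0]          # oscillating curvature
print(satisfies_concavity_constraint(uptake_washout),
      satisfies_concavity_constraint(wiggly))
```

    The first curve is physiologically plausible (one concavity change, as in uptake followed by washout); the staircase-like second curve flips curvature repeatedly and would be rejected by the constraint.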

  6. A Learning-Based Wrapper Method to Correct Systematic Errors in Automatic Image Segmentation: Consistently Improved Performance in Hippocampus, Cortex and Brain Segmentation

    PubMed Central

    Wang, Hongzhi; Das, Sandhitsu R.; Suh, Jung Wook; Altinay, Murat; Pluta, John; Craige, Caryne; Avants, Brian; Yushkevich, Paul A.

    2011-01-01

    We propose a simple but generally applicable approach to improving the accuracy of automatic image segmentation algorithms relative to manual segmentations. The approach is based on the hypothesis that a large fraction of the errors produced by automatic segmentation are systematic, i.e., occur consistently from subject to subject, and serves as a wrapper method around a given host segmentation method. The wrapper method attempts to learn the intensity, spatial and contextual patterns associated with systematic segmentation errors produced by the host method on training data for which manual segmentations are available. The method then attempts to correct such errors in segmentations produced by the host method on new images. One practical use of the proposed wrapper method is to adapt existing segmentation tools, without explicit modification, to imaging data and segmentation protocols that are different from those on which the tools were trained and tuned. An open-source implementation of the proposed wrapper method is provided, and can be applied to a wide range of image segmentation problems. The wrapper method is evaluated with four host brain MRI segmentation methods: hippocampus segmentation using FreeSurfer (Fischl et al., 2002); hippocampus segmentation using multi-atlas label fusion (Artaechevarria et al., 2009); brain extraction using BET (Smith, 2002); and brain tissue segmentation using FAST (Zhang et al., 2001). The wrapper method generates 72%, 14%, 29% and 21% fewer erroneously segmented voxels than the respective host segmentation methods. In the hippocampus segmentation experiment with multi-atlas label fusion as the host method, the average Dice overlap between reference segmentations and segmentations produced by the wrapper method is 0.908 for normal controls and 0.893 for patients with mild cognitive impairment. 
Average Dice overlaps of 0.964, 0.905 and 0.951 are obtained for brain extraction, white matter segmentation and gray matter segmentation, respectively. PMID:21237273
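
    The wrapper idea, learning which host-method outputs are systematically wrong in a given context and overriding them, can be sketched as a lookup-based corrector trained by majority vote. This is a deliberately simplified stand-in for the paper's learning method, which uses intensity, spatial, and contextual features; the labels and "context features" below are invented.

```python
from collections import Counter, defaultdict

def train_corrector(examples):
    # examples: list of ((host_label, context_feature), true_label).
    # For each (host label, context) pair, remember the majority true label.
    votes = defaultdict(Counter)
    for key, truth in examples:
        votes[key][truth] += 1
    return {key: c.most_common(1)[0][0] for key, c in votes.items()}

def correct(corrector, host_label, context):
    # Unseen (label, context) pairs fall back to the host method's output.
    return corrector.get((host_label, context), host_label)

training_pairs = [
    (("hippocampus", "dark_rim"), "background"),  # systematic over-segmentation
    (("hippocampus", "dark_rim"), "background"),
    (("hippocampus", "bright"), "hippocampus"),
    (("background", "bright"), "background"),
]
model = train_corrector(training_pairs)
print(correct(model, "hippocampus", "dark_rim"))
print(correct(model, "hippocampus", "unseen_context"))
```

    The fallback behavior mirrors the wrapper philosophy: where no systematic error was learned, the host segmentation passes through unchanged.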

  7. Method and apparatus for probing relative volume fractions

    DOEpatents

    Jandrasits, Walter G.; Kikta, Thomas J.

    1998-01-01

    A relative volume fraction probe particularly for use in a multiphase fluid system includes two parallel conductive paths defining therebetween a sample zone within the system. A generating unit generates time varying electrical signals which are inserted into one of the two parallel conductive paths. A time domain reflectometer receives the time varying electrical signals returned by the second of the two parallel conductive paths and, responsive thereto, outputs a curve of impedance versus distance. An analysis unit then calculates the area under the curve, subtracts the calculated area from an area produced when the sample zone consists entirely of material of a first fluid phase, and divides this calculated difference by the difference between an area produced when the sample zone consists entirely of material of the first fluid phase and an area produced when the sample zone consists entirely of material of a second fluid phase. The result is the volume fraction.
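
    The volume-fraction arithmetic in the abstract is a simple linear interpolation between two calibration measurements. A sketch with hypothetical calibration areas (the numbers are invented; in the instrument these come from the time domain reflectometer's impedance-versus-distance curve):

```python
def volume_fraction(area, area_phase1, area_phase2):
    # (A_phase1 - A) / (A_phase1 - A_phase2): 0 when the sample zone is
    # entirely phase 1, 1 when it is entirely phase 2.
    return (area_phase1 - area) / (area_phase1 - area_phase2)

A_PHASE1 = 120.0  # area measured with the zone entirely phase 1 (calibration)
A_PHASE2 = 40.0   # area measured with the zone entirely phase 2 (calibration)

print(volume_fraction(120.0, A_PHASE1, A_PHASE2))
print(volume_fraction(80.0, A_PHASE1, A_PHASE2))
```

    An area halfway between the two calibration values thus reads as a 50/50 mixture of the phases.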

  8. Method and apparatus for probing relative volume fractions

    DOEpatents

    Jandrasits, W.G.; Kikta, T.J.

    1998-03-17

    A relative volume fraction probe particularly for use in a multiphase fluid system includes two parallel conductive paths defining therebetween a sample zone within the system. A generating unit generates time varying electrical signals which are inserted into one of the two parallel conductive paths. A time domain reflectometer receives the time varying electrical signals returned by the second of the two parallel conductive paths and, responsive thereto, outputs a curve of impedance versus distance. An analysis unit then calculates the area under the curve, subtracts the calculated area from an area produced when the sample zone consists entirely of material of a first fluid phase, and divides this calculated difference by the difference between an area produced when the sample zone consists entirely of material of the first fluid phase and an area produced when the sample zone consists entirely of material of a second fluid phase. The result is the volume fraction. 9 figs.

  9. Method of producing ⁶⁷Cu

    DOEpatents

    O'Brien, Jr., Harold A.; Barnes, John W.; Taylor, Wayne A.; Thomas, Kenneth E.; Bentley, Glenn E.

    1984-01-01

    A method of producing carrier-free ⁶⁷Cu by proton spallation combined with subsequent chemical separation and purification is disclosed. A target consisting essentially of pressed zinc oxide is irradiated with a high energy, high current proton beam to produce a variety of spallogenic nuclides, including ⁶⁷Cu and other copper isotopes. The irradiated target is dissolved in a concentrated acid solution to which a palladium salt is added. In accordance with the preferred method, the spallogenic copper is twice coprecipitated with palladium, once with metallic zinc as the precipitating agent and once with hydrogen sulfide as the precipitating agent. The palladium/copper precipitate is then dissolved in an acid solution and the copper is separated from the palladium by liquid chromatography on an anion exchange resin.

  10. Method for producing ⁶⁷Cu

    DOEpatents

    O'Brien, H.A. Jr.; Barnes, J.W.; Taylor, W.A.; Thomas, K.E.; Bentley, G.E.

    A method of producing carrier-free ⁶⁷Cu by proton spallation combined with subsequent chemical separation and purification is disclosed. A target consisting essentially of pressed zinc oxide is irradiated with a high energy, high current proton beam to produce a variety of spallogenic nuclides, including ⁶⁷Cu and other copper isotopes. The irradiated target is dissolved in a concentrated acid solution to which a palladium salt is added. In accordance with the preferred method, the spallogenic copper is twice coprecipitated with palladium, once with metallic zinc as the precipitating agent and once with hydrogen sulfide as the precipitating agent. The palladium/copper precipitate is then dissolved in an acid solution and the copper is separated from the palladium by liquid chromatography on an anion exchange resin.

  11. Synthetic CO₂ acceptor

    DOEpatents

    Lancet, Michael S.; Curran, George P.

    1981-08-18

    A synthetic CO₂ acceptor consisting essentially of at least one compound selected from the group consisting of calcium oxide and calcium carbonate supported in a refractory carrier matrix, the carrier having the general formula Ca₅(SiO₄)₂CO₃. A method for producing the synthetic CO₂ acceptor is also disclosed.

  12. Europium-activated phosphors containing oxides of rare-earth and group-IIIB metals and method of making the same

    DOEpatents

    Comanzo, Holly Ann; Setlur, Anant Achyut; Srivastava, Alok Mani

    2006-04-04

    Europium-activated phosphors comprise oxides of at least a rare-earth metal selected from the group consisting of gadolinium, yttrium, lanthanum, and combinations thereof and at least a Group-IIIB metal selected from the group consisting of aluminum, gallium, indium, and combinations thereof. A method for making such phosphors comprises adding at least a halide of at least one of the selected Group-IIIB metals in a starting mixture. The method further comprises firing the starting mixture in an oxygen-containing atmosphere. The phosphors produced by such a method exhibit improved absorption in the UV wavelength range and improved quantum efficiency.

  13. Europium-activated phosphors containing oxides of rare-earth and group-IIIB metals and method of making the same

    DOEpatents

    Comanzo, Holly Ann; Setlur, Anant Achyut; Srivastava, Alok Mani; Manivannan, Venkatesan

    2004-07-13

    Europium-activated phosphors comprise oxides of at least a rare-earth metal selected from the group consisting of gadolinium, yttrium, lanthanum, and combinations thereof and at least a Group-IIIB metal selected from the group consisting of aluminum, gallium, indium, and combinations thereof. A method for making such phosphors comprises adding at least a halide of at least one of the selected Group-IIIB metals in a starting mixture. The method further comprises firing the starting mixture in an oxygen-containing atmosphere. The phosphors produced by such a method exhibit improved absorption in the UV wavelength range and improved quantum efficiency.

  14. Multiphase flows of N immiscible incompressible fluids: A reduction-consistent and thermodynamically-consistent formulation and associated algorithm

    NASA Astrophysics Data System (ADS)

    Dong, S.

    2018-05-01

    We present a reduction-consistent and thermodynamically consistent formulation and an associated numerical algorithm for simulating the dynamics of an isothermal mixture consisting of N (N ⩾ 2) immiscible incompressible fluids with different physical properties (densities, viscosities, and pair-wise surface tensions). By reduction consistency we refer to the property that if only a set of M (1 ⩽ M ⩽ N - 1) fluids are present in the system then the N-phase governing equations and boundary conditions will exactly reduce to those for the corresponding M-phase system. By thermodynamic consistency we refer to the property that the formulation honors the thermodynamic principles. Our N-phase formulation is developed based on a more general method that allows for the systematic construction of reduction-consistent formulations, and the method suggests the existence of many possible forms of reduction-consistent and thermodynamically consistent N-phase formulations. Extensive numerical experiments have been presented for flow problems involving multiple fluid components and large density ratios and large viscosity ratios, and the simulation results are compared with the physical theories or the available physical solutions. The comparisons demonstrate that our method produces physically accurate results for this class of problems.

  15. Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife

    ERIC Educational Resources Information Center

    Jennrich, Robert I.

    2008-01-01

    The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…
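
    For the simplest possible statistic, the sample mean, the infinitesimal jackknife reduces to a closed-form expression: the empirical influence of observation i is (xᵢ − x̄), and the variance estimate is the sum of squared influences divided by n². This toy sketch only illustrates the influence-function idea; the covariance structure analysis in the article requires model-specific influence computations. The data values are invented.

```python
import math

def ij_standard_error_of_mean(x):
    # Infinitesimal jackknife SE of the mean:
    # sqrt( sum_i (x_i - xbar)^2 ) / n, i.e. variance = sum(u_i^2) / n^2.
    n = len(x)
    xbar = sum(x) / n
    influences = [xi - xbar for xi in x]
    return math.sqrt(sum(u * u for u in influences)) / n

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(round(ij_standard_error_of_mean(data), 4))
```

    Note the 1/n² scaling: the IJ estimate differs from the textbook s/√n only by the n vs. n − 1 denominator, consistent with the claim that essentially no assumptions are needed for consistency.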

  16. Evaluation of isolation methods for bacterial RNA quantitation in Dickeya dadantii

    USDA-ARS?s Scientific Manuscript database

    Dickeya dadantii is a difficult source for RNA of a sufficient quality for real-time qRT-PCR analysis of gene expression. Three RNA isolation methods were evaluated for their ability to produce high-quality RNA from this bacterium. Bacterial lysis with Trizol using standard protocols consistently ga...

  17. Inter-printer color calibration using constrained printer gamut

    NASA Astrophysics Data System (ADS)

    Zeng, Huanzhao; Humet, Jacint

    2005-01-01

    Due to the drop size variation of the print heads in inkjet printers, consistent color reproduction becomes a challenge for high-quality color printing. To improve the color consistency, we developed a method and system to characterize a pair of printers using a colorimeter or a color scanner. Different from prior known approaches that simply try to match colors of one printer to the other without considering the gamut differences, we first constructed an overlapped gamut in which colors can be produced by both printers, and then characterized both printers using a pair of 3-D or 4-D lookup tables (LUT) to produce same colors limited to the overlapped gamut. Each LUT converts nominal device color values into engine-dependent device color values limited to the overlapped gamut. Compared to traditional approaches, the color calibration accuracy is significantly improved. This method can be simply extended to calibrate more than two engines. In a color imaging system that includes a scanner and more than one print engine, this method improves the color consistency very effectively without increasing hardware costs. A few examples for applying this method are: 1) one-pass bi-directional inkjet printing; 2) a printer with two or more sets of pens for printing; and 3) a system embedded with a pair of printers (the number of printers could be easily incremented).
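
    The overlapped-gamut idea can be caricatured with a drastic simplification: model each printer's gamut as a per-channel output range, take the shared gamut as the intersection, and have each printer's mapping constrain requested colors to it. Real gamuts are irregular 3-D volumes in a colorimetric space and the paper's LUTs are 3-D or 4-D, so the ranges, channel names, and clamping rule below are purely illustrative.

```python
# Hypothetical per-channel reproducible ranges for two print engines.
printer_a_range = {"c": (0.0, 0.95), "m": (0.0, 0.90), "y": (0.0, 1.00)}
printer_b_range = {"c": (0.0, 0.85), "m": (0.0, 1.00), "y": (0.0, 0.92)}

def overlapped_gamut(a, b):
    # Intersection of the two ranges, channel by channel: colors here
    # are producible by BOTH engines.
    return {ch: (max(a[ch][0], b[ch][0]), min(a[ch][1], b[ch][1]))
            for ch in a}

def clamp_to_gamut(color, gamut):
    # Stand-in for the LUT: constrain each channel into the shared gamut.
    return {ch: min(max(v, gamut[ch][0]), gamut[ch][1])
            for ch, v in color.items()}

shared = overlapped_gamut(printer_a_range, printer_b_range)
print(clamp_to_gamut({"c": 0.99, "m": 0.5, "y": 0.97}, shared))
```

    Because both engines render into the same shared gamut, a color either engine produces is one the other can match, which is the consistency property the method is after.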

  18. Method of oil shale pollutant sorption/NOx reburning multi-pollutant control

    DOEpatents

    Boardman, Richard D [Idaho Falls, ID; Carrington, Robert A [Idaho Falls, ID

    2008-06-10

    A method of decreasing pollutants produced in a combustion process. The method comprises combusting coal in a combustion chamber to produce at least one pollutant selected from the group consisting of a nitrogen-containing pollutant, sulfuric acid, sulfur trioxide, carbonyl sulfide, carbon disulfide, chlorine, hydroiodic acid, iodine, hydrofluoric acid, fluorine, hydrobromic acid, bromine, phosphoric acid, phosphorus pentoxide, elemental mercury, and mercuric chloride. Oil shale particles are introduced into the combustion chamber and are combusted to produce sorbent particulates and a reductant. The at least one pollutant is contacted with at least one of the sorbent particulates and the reductant to decrease an amount of the at least one pollutant in the combustion chamber. The reductant may chemically reduce the at least one pollutant to a benign species. The sorbent particulates may adsorb or absorb the at least one pollutant. A combustion chamber that produces decreased pollutants in a combustion process is also disclosed.

  19. The Effect of Image Quality, Repeated Study, and Assessment Method on Anatomy Learning

    ERIC Educational Resources Information Center

    Fenesi, Barbara; Mackinnon, Chelsea; Cheng, Lucia; Kim, Joseph A.; Wainman, Bruce C.

    2017-01-01

    The use of two-dimensional (2D) images is consistently used to prepare anatomy students for handling real specimen. This study examined whether the quality of 2D images is a critical component in anatomy learning. The visual clarity and consistency of 2D anatomical images was systematically manipulated to produce low-quality and high-quality…

  20. Faecal indicator bacteria enumeration in beach sand: a comparison study of extraction methods in medium to coarse sands.

    PubMed

    Boehm, A B; Griffith, J; McGee, C; Edge, T A; Solo-Gabriele, H M; Whitman, R; Cao, Y; Getrich, M; Jay, J A; Ferguson, D; Goodwin, K D; Lee, C M; Madison, M; Weisberg, S B

    2009-11-01

    The absence of standardized methods for quantifying faecal indicator bacteria (FIB) in sand hinders comparison of results across studies. The purpose of the study was to compare methods for extraction of faecal bacteria from sands and recommend a standardized extraction technique. Twenty-two methods of extracting enterococci and Escherichia coli from sand were evaluated, including multiple permutations of hand shaking, mechanical shaking, blending, sonication, number of rinses, settling time, eluant-to-sand ratio, eluant composition, prefiltration and type of decantation. Tests were performed on sands from California, Florida and Lake Michigan. Most extraction parameters did not significantly affect bacterial enumeration. ANOVA revealed significant effects of eluant composition and blending, with both sodium metaphosphate buffer and blending producing reduced counts. The simplest extraction method that produced the highest FIB recoveries consisted of 2 min of hand shaking in phosphate-buffered saline or deionized water, a 30-s settling time, one rinse step and a 10:1 eluant volume to sand weight ratio. This result was consistent across the sand compositions tested in this study but could vary for other sand types. Method standardization will improve the understanding of how sands affect surface water quality.

  1. Nuclear reactor target assemblies, nuclear reactor configurations, and methods for producing isotopes, modifying materials within target material, and/or characterizing material within a target material

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toth, James J.; Wall, Donald; Wittman, Richard S.

    Target assemblies are provided that can include a uranium-comprising annulus. The assemblies can include target material consisting essentially of non-uranium material within the volume of the annulus. Reactors are disclosed that can include one or more discrete zones configured to receive target material. At least one uranium-comprising annulus can be within one or more of the zones. Methods for producing isotopes within target material are also disclosed, with the methods including providing neutrons to target material within a uranium-comprising annulus. Methods for modifying materials within target material are disclosed as well as are methods for characterizing material within a target material.

  2. Method and apparatus for probing relative volume fractions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jandrasits, W.G.; Kikta, T.J.

    1996-12-31

    A relative volume fraction probe particularly for use in a multiphase fluid system includes two parallel conductive paths defining therebetween a sample zone within the system. A generating unit generates time varying electrical signals which are inserted into one of the two parallel conductive paths. A time domain reflectometer receives the time varying electrical signals returned by the second of the two parallel conductive paths and, responsive thereto, outputs a curve of impedance versus distance. An analysis unit then calculates the area under the curve, subtracts the calculated area from an area produced when the sample zone consists entirely of material of a first fluid phase, and divides this calculated difference by the difference between an area produced when the sample zone consists entirely of material of the first fluid phase and an area produced when the sample zone consists entirely of material of a second fluid phase. The result is the volume fraction.

  3. Validation of a Sulfuric Acid Digestion Method for Inductively Coupled Plasma Mass Spectrometry Quantification of TiO2 Nanoparticles.

    PubMed

    Watkins, Preston S; Castellon, Benjamin T; Tseng, Chiyen; Wright, Moncie V; Matson, Cole W; Cobb, George P

    2018-04-13

    A consistent analytical method incorporating sulfuric acid (H2SO4) digestion and ICP-MS quantification has been developed for TiO2 quantification in biotic and abiotic environmentally relevant matrices. Sample digestion in H2SO4 at 110°C provided consistent results without using hydrofluoric acid or microwave digestion. Analysis of seven replicate samples for four matrices on each of 3 days produced Ti recoveries of 97% ± 2.5%, 91% ± 4.0%, 94% ± 1.8%, and 73% ± 2.6% (mean ± standard deviation) from water, fish tissue, periphyton, and sediment, respectively. The method demonstrated consistent performance in analysis of water collected over a 1-month period.

  4. Organic solid state optical switches and method for producing organic solid state optical switches

    DOEpatents

    Wasielewski, M.R.; Gaines, G.L.; Niemczyk, M.P.; Johnson, D.G.; Gosztola, D.J.; O`Neil, M.P.

    1993-01-01

    This invention consists of a light-intensity-dependent molecular switch comprising a compound which shuttles an electron or a plurality of electrons from a plurality of electron donors to an electron acceptor upon being stimulated with light of predetermined wavelengths, and a method for making said compound.

  5. [Development of antibody medicines by bio-venture: lesson from license negotiations with mega pharmacies].

    PubMed

    Takada, Kenzo

    2013-01-01

    The current method of antibody production is mainly the hybridoma method, in which mice are immunized with an excess amount of antigen for a short period to promote activation and proliferation of B-lymphocytes producing the antibodies of interest. Because of the excess antigen, those producing low-affinity antibodies are activated. In contrast, human blood B-lymphocytes are activated through natural immune reactions, such as the reaction to infection. B-lymphocytes are stimulated repeatedly with a small amount of antigen, and thus only those producing high-affinity antibodies are activated. Consequently, the lymphocytes producing the high-affinity antibodies are accumulated in human blood. Therefore, human lymphocytes are an excellent source of high-affinity antibodies. Evec, Inc. has established a unique method to produce high-affinity antibodies from human lymphocytes using Epstein-Barr virus (EBV), which induces the proliferation of B-lymphocytes. The method first induces the proliferation of B-lymphocytes from human blood using EBV, and then isolates those producing the antibodies of interest. The key features of the Evec technique are: 1) development of a lymphocyte library consisting of 150 donors' lymphocytes from which donors suited to develop the antibodies of interest can be selected in 4 days; and 2) development of a sorting method and cell microarray method for selecting lymphocyte clones producing the target antibodies. Licensing agreements have been concluded with European and Japanese pharmaceutical companies for two types of antibody. This paper describes Evec's antibody technology and experience in license negotiations with Mega Pharmacies.

  6. Comparison of RNA Isolation Methods From Insect Larvae

    PubMed Central

    Ridgeway, J. A.; Timm, A. E.

    2014-01-01

    Isolating RNA from insects is becoming increasingly important in molecular entomology. Four methods, three commercial kits (the RNeasy Mini Kit (Qiagen), the SV Total RNA Isolation System (Promega), and TRIzol reagent (Invitrogen)) and a cetyl trimethylammonium bromide (CTAB)-based method, were compared regarding their ability to isolate RNA from whole-body larvae of Thaumatotibia leucotreta (Meyrick), Thanatophilus micans (F.), Plutella xylostella (L.), and Tenebrio molitor (L.). A difference was observed among the four methods in RNA quality but not quantity. The RNA quality and quantity obtained were not dependent on the insect species. The CTAB-based method produced low-quality RNA and TRIzol reagent produced partially degraded RNA, whereas the RNeasy Mini Kit and SV Total RNA Isolation System produced RNA of consistently high quality. However, after reverse transcription to cDNA, RNA produced using all four extraction methods could be used to successfully amplify a 708 bp fragment of the cytochrome oxidase I gene. Of the four methods, the SV Total RNA Isolation System showed the least DNA contamination and the highest RNA integrity number and is thus recommended for stringent applications where high-quality RNA is required. This is the first comparison of RNA isolation methods among different insect species and the first comparison of RNA isolation methods in insects in the last 20 years. PMID:25527580

  7. Faecal indicator bacteria enumeration in beach sand: A comparison study of extraction methods in medium to coarse sands

    USGS Publications Warehouse

    Boehm, A.B.; Griffith, J.; McGee, C.; Edge, T.A.; Solo-Gabriele, H. M.; Whitman, R.; Cao, Y.; Getrich, M.; Jay, J.A.; Ferguson, D.; Goodwin, K.D.; Lee, C.M.; Madison, M.; Weisberg, S.B.

    2009-01-01

    Aims: The absence of standardized methods for quantifying faecal indicator bacteria (FIB) in sand hinders comparison of results across studies. The purpose of the study was to compare methods for extraction of faecal bacteria from sands and recommend a standardized extraction technique. Methods and Results: Twenty-two methods of extracting enterococci and Escherichia coli from sand were evaluated, including multiple permutations of hand shaking, mechanical shaking, blending, sonication, number of rinses, settling time, eluant-to-sand ratio, eluant composition, prefiltration and type of decantation. Tests were performed on sands from California, Florida and Lake Michigan. Most extraction parameters did not significantly affect bacterial enumeration. ANOVA revealed significant effects of eluant composition and blending, with both sodium metaphosphate buffer and blending producing reduced counts. Conclusions: The simplest extraction method that produced the highest FIB recoveries consisted of 2 min of hand shaking in phosphate-buffered saline or deionized water, a 30-s settling time, one rinse step and a 10:1 eluant volume to sand weight ratio. This result was consistent across the sand compositions tested in this study but could vary for other sand types. Significance and Impact of the Study: Method standardization will improve the understanding of how sands affect surface water quality. © 2009 The Society for Applied Microbiology.

  8. Advanced composite elevator for Boeing 727 aircraft, volume 2

    NASA Technical Reports Server (NTRS)

    Chovil, D. V.; Grant, W. D.; Jamison, E. S.; Syder, H.; Desper, O. E.; Harvey, S. T.; Mccarty, J. E.

    1980-01-01

    Preliminary design activity consisted of developing and analyzing alternate design concepts and selecting the optimum elevator configuration. This included trade studies in which durability, inspectability, producibility, repairability, and customer acceptance were evaluated. Preliminary development efforts consisted of evaluating and selecting material, identifying ancillary structural development test requirements, and defining full scale ground and flight test requirements necessary to obtain Federal Aviation Administration (FAA) certification. After selection of the optimum elevator configuration, detail design was begun and included basic configuration design improvements resulting from manufacturing verification hardware, the ancillary test program, weight analysis, and structural analysis. Detail and assembly tools were designed and fabricated to support a full-scope production program, rather than a limited run. The producibility development programs were used to verify tooling approaches, fabrication processes, and inspection methods for the production mode. Quality parts were readily fabricated and assembled with a minimum rejection rate, using prior inspection methods.

  9. Multi-view 3D echocardiography compounding based on feature consistency

    NASA Astrophysics Data System (ADS)

    Yao, Cheng; Simpson, John M.; Schaeffter, Tobias; Penney, Graeme P.

    2011-09-01

    Echocardiography (echo) is a widely available method to obtain images of the heart; however, echo can suffer due to the presence of artefacts, high noise and a restricted field of view. One method to overcome these limitations is to use multiple images, using the 'best' parts from each image to produce a higher quality 'compounded' image. This paper describes our compounding algorithm which specifically aims to reduce the effect of echo artefacts as well as improving the signal-to-noise ratio, contrast and extending the field of view. Our method weights image information based on a local feature coherence/consistency between all the overlapping images. Validation has been carried out using phantom, volunteer and patient datasets consisting of up to ten multi-view 3D images. Multiple sets of phantom images were acquired, some directly from the phantom surface, and others by imaging through hard and soft tissue mimicking material to degrade the image quality. Our compounding method is compared to the original, uncompounded echocardiography images, and to two basic statistical compounding methods (mean and maximum). Results show that our method is able to take a set of ten images, degraded by soft and hard tissue artefacts, and produce a compounded image of equivalent quality to images acquired directly from the phantom. Our method on phantom, volunteer and patient data achieves almost the same signal-to-noise improvement as the mean method, while simultaneously almost achieving the same contrast improvement as the maximum method. We show a statistically significant improvement in image quality by using an increased number of images (ten compared to five), and visual inspection studies by three clinicians showed very strong preference for our compounded volumes in terms of overall high image quality, large field of view, high endocardial border definition and low cavity noise.
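
    The weighting idea described above can be sketched in a few lines. The consistency measure here is a crude stand-in (inverse deviation from the cross-image mean, so outlier artefact values get low weight), not the authors' feature-coherence metric; the function name and toy stack are invented for illustration.

```python
import numpy as np

def compound(images: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Weighted compounding of co-registered images (stack shape: N x H x W).

    Each voxel's weight is a simple consistency proxy: the inverse distance of
    that image's value from the cross-image mean, so values that disagree with
    the other overlapping images (e.g., artefacts) contribute less.
    """
    mean = images.mean(axis=0, keepdims=True)
    weights = 1.0 / (np.abs(images - mean) + eps)
    return (weights * images).sum(axis=0) / weights.sum(axis=0)

stack = np.stack([np.full((4, 4), 10.0),
                  np.full((4, 4), 10.0),
                  np.full((4, 4), 50.0)])  # third image: simulated artefact
result = compound(stack)  # pulled toward the two consistent images
```

    Compared with a plain mean, the consistency weighting downweights the outlier image, which is the qualitative behaviour the compounding method exploits.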

  10. SLEPR: A Sample-Level Enrichment-Based Pathway Ranking Method — Seeking Biological Themes through Pathway-Level Consistency

    PubMed Central

    Yi, Ming; Stephens, Robert M.

    2008-01-01

    Analysis of microarray and other high throughput data often involves identification of genes consistently up or down-regulated across samples as the first step in extraction of biological meaning. This gene-level paradigm can be limited as a result of valid sample fluctuations and biological complexities. In this report, we describe a novel method, SLEPR, which eliminates this limitation by relying on pathway-level consistencies. Our method first selects the sample-level differentiated genes from each individual sample, capturing genes missed by other analysis methods, ascertains the enrichment levels of associated pathways from each of those lists, and then ranks annotated pathways based on the consistency of enrichment levels of individual samples from both sample classes. As a proof of concept, we have used this method to analyze three public microarray datasets with a direct comparison with the GSEA method, one of the most popular pathway-level analysis methods in the field. We found that our method was able to reproduce the earlier observations with significant improvements in depth of coverage for validated or expected biological themes, but also produced additional insights that make biological sense. This new method extends existing analyses approaches and facilitates integration of different types of HTP data. PMID:18818771
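
    The pathway-level ranking idea admits a minimal sketch. Assume a pathways-by-samples matrix of per-sample enrichment levels (e.g., the fraction of a pathway's genes among each sample's differentiated genes); the consistency score below (mean across samples, penalized by the standard deviation) is an illustrative stand-in for the published SLEPR statistic, not a reimplementation of it, and the pathway names are invented.

```python
import numpy as np

def rank_pathways(enrich: np.ndarray, pathway_names):
    """Rank pathways by cross-sample consistency of enrichment.

    `enrich` is a pathways x samples matrix of per-sample enrichment levels.
    The score mean / (std + 1) favours pathways uniformly enriched across
    samples over those enriched erratically in only a few samples.
    """
    score = enrich.mean(axis=1) / (enrich.std(axis=1) + 1.0)
    order = np.argsort(score)[::-1]  # highest (most consistent) first
    return [(pathway_names[i], float(score[i])) for i in order]

enrich = np.array([[0.8, 0.7, 0.9],   # high and consistent
                   [0.9, 0.1, 0.9],   # high but erratic
                   [0.1, 0.2, 0.1]])  # consistently low
ranking = rank_pathways(enrich, ["glycolysis", "apoptosis", "ribosome"])
```

    The consistently enriched pathway outranks the erratically enriched one even though their peak enrichment levels are similar, which mirrors the consistency principle the method is built on.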

  11. Copper-silver-titanium filler metal for direct brazing of structural ceramics

    DOEpatents

    Moorhead, Arthur J.

    1987-01-01

    A method of joining ceramics and metals to themselves and to one another is described using a brazing filler metal consisting essentially of 35 to 50 atomic percent copper, 15 to 50 atomic percent silver and 10 to 45 atomic percent titanium. This method produces strong joints that can withstand high service temperatures and oxidizing environments.

  12. National Elevation Dataset

    USGS Publications Warehouse

    2002-01-01

    The National Elevation Dataset (NED) is a new raster product assembled by the U.S. Geological Survey. NED is designed to provide national elevation data in a seamless form with a consistent datum, elevation unit, and projection. Data corrections were made in the NED assembly process to minimize artifacts, perform edge matching, and fill sliver areas of missing data. NED has a resolution of one arc-second (approximately 30 meters) for the conterminous United States, Hawaii, Puerto Rico and the island territories, and a resolution of two arc-seconds for Alaska. NED data sources have a variety of elevation units, horizontal datums, and map projections. In the NED assembly process the elevation values are converted to decimal meters as a consistent unit of measure, NAD83 is consistently used as the horizontal datum, and all the data are recast in a geographic projection. Older DEMs produced by methods that are now obsolete have been filtered during the NED assembly process to minimize artifacts that are commonly found in data produced by these methods. Artifact removal greatly improves the quality of the slope, shaded-relief, and synthetic drainage information that can be derived from the elevation data. Figure 2 illustrates the results of this artifact removal filtering. NED processing also includes steps to adjust values where adjacent DEMs do not match well, and to fill sliver areas of missing data between DEMs. These processing steps ensure that NED has no void areas and that artificial discontinuities have been minimized. The artifact removal filtering process does not eliminate all of the artifacts; in areas where the only available DEM was produced by older methods, "striping" may still occur.

  13. A new method for eliciting three speaking styles in the laboratory

    PubMed Central

    Harnsberger, James D.; Wright, Richard; Pisoni, David B.

    2009-01-01

    In this study, a method was developed to elicit three different speaking styles, reduced, citation, and hyperarticulated, using controlled sentence materials in a laboratory setting. In the first set of experiments, the reduced style was elicited by having twelve talkers read a sentence while carrying out a distractor task that involved recalling from short-term memory an individually-calibrated number of digits. The citation style corresponded to read speech in the laboratory. The hyperarticulated style was elicited by prompting talkers (twice) to reread the sentences more carefully. The results of perceptual tests with naïve listeners and an acoustic analysis showed that six of the twelve talkers produced a reduced style of speech for the test sentences in the distractor task relative to the same sentences in the citation style condition. In addition, all talkers consistently produced sentences in the citation and hyperarticulated styles. In the second set of experiments, the reduced style was elicited by increasing the number of digits in the distractor task by one (a heavier cognitive load). The procedures for eliciting citation and hyperarticulated sentences remained unchanged. Ten talkers were recorded in the second experiment. The results showed that six out of ten talkers differentiated all three styles as predicted (70% of all sentences recorded). In addition, all talkers consistently produced sentences in the citation and hyperarticulated styles. Overall, the results demonstrate that it is possible to elicit controlled sentence stimulus materials varying in speaking style in a laboratory setting, although the method requires further refinement to elicit these styles more consistently from individual participants. PMID:19562041

  14. A new method for eliciting three speaking styles in the laboratory.

    PubMed

    Harnsberger, James D; Wright, Richard; Pisoni, David B

    2008-04-01

    In this study, a method was developed to elicit three different speaking styles, reduced, citation, and hyperarticulated, using controlled sentence materials in a laboratory setting. In the first set of experiments, the reduced style was elicited by having twelve talkers read a sentence while carrying out a distractor task that involved recalling from short-term memory an individually-calibrated number of digits. The citation style corresponded to read speech in the laboratory. The hyperarticulated style was elicited by prompting talkers (twice) to reread the sentences more carefully. The results of perceptual tests with naïve listeners and an acoustic analysis showed that six of the twelve talkers produced a reduced style of speech for the test sentences in the distractor task relative to the same sentences in the citation style condition. In addition, all talkers consistently produced sentences in the citation and hyperarticulated styles. In the second set of experiments, the reduced style was elicited by increasing the number of digits in the distractor task by one (a heavier cognitive load). The procedures for eliciting citation and hyperarticulated sentences remained unchanged. Ten talkers were recorded in the second experiment. The results showed that six out of ten talkers differentiated all three styles as predicted (70% of all sentences recorded). In addition, all talkers consistently produced sentences in the citation and hyperarticulated styles. Overall, the results demonstrate that it is possible to elicit controlled sentence stimulus materials varying in speaking style in a laboratory setting, although the method requires further refinement to elicit these styles more consistently from individual participants.

  15. Logic circuits composed of flexible carbon nanotube thin-film transistor and ultra-thin polymer gate dielectric

    PubMed Central

    Lee, Dongil; Yoon, Jinsu; Lee, Juhee; Lee, Byung-Hyun; Seol, Myeong-Lok; Bae, Hagyoul; Jeon, Seung-Bae; Seong, Hyejeong; Im, Sung Gap; Choi, Sung-Jin; Choi, Yang-Kyu

    2016-01-01

    Printed electronics has become increasingly prominent in the field of electronic engineering because printing is highly efficient at producing flexible, low-cost and large-scale thin-film transistors (TFTs). However, TFTs are typically constructed with rigid insulating layers consisting of oxides and nitrides that are brittle and require high processing temperatures, which can cause a number of problems when used in printed flexible TFTs. In this study, we address these issues and demonstrate a method of producing inkjet-printed TFTs that include an ultra-thin polymeric dielectric layer produced by initiated chemical vapor deposition (iCVD) at room temperature and highly purified 99.9% semiconducting carbon nanotubes. Our integrated approach enables the production of flexible logic circuits consisting of CNT-TFTs on a polyethersulfone (PES) substrate that have a high mobility (up to 9.76 cm2 V−1 sec−1), a low operating voltage (less than 4 V), a high current on/off ratio (3 × 104), and a total device yield of 90%. Thus, it should be emphasized that this study delineates a guideline for the feasibility of producing flexible CNT-TFT logic circuits with high performance based on a low-cost and simple fabrication process. PMID:27184121

  16. Logic circuits composed of flexible carbon nanotube thin-film transistor and ultra-thin polymer gate dielectric

    NASA Astrophysics Data System (ADS)

    Lee, Dongil; Yoon, Jinsu; Lee, Juhee; Lee, Byung-Hyun; Seol, Myeong-Lok; Bae, Hagyoul; Jeon, Seung-Bae; Seong, Hyejeong; Im, Sung Gap; Choi, Sung-Jin; Choi, Yang-Kyu

    2016-05-01

    Printed electronics has become increasingly prominent in the field of electronic engineering because printing is highly efficient at producing flexible, low-cost and large-scale thin-film transistors (TFTs). However, TFTs are typically constructed with rigid insulating layers consisting of oxides and nitrides that are brittle and require high processing temperatures, which can cause a number of problems when used in printed flexible TFTs. In this study, we address these issues and demonstrate a method of producing inkjet-printed TFTs that include an ultra-thin polymeric dielectric layer produced by initiated chemical vapor deposition (iCVD) at room temperature and highly purified 99.9% semiconducting carbon nanotubes. Our integrated approach enables the production of flexible logic circuits consisting of CNT-TFTs on a polyethersulfone (PES) substrate that have a high mobility (up to 9.76 cm2 V-1 sec-1), a low operating voltage (less than 4 V), a high current on/off ratio (3 × 104), and a total device yield of 90%. Thus, it should be emphasized that this study delineates a guideline for the feasibility of producing flexible CNT-TFT logic circuits with high performance based on a low-cost and simple fabrication process.

  17. The development of a method of producing etch resistant wax patterns on solar cells

    NASA Technical Reports Server (NTRS)

    Pastirik, E.

    1980-01-01

    A potentially attractive technique for wax masking of solar cells prior to etching processes was studied. This technique made use of a reuseable wax composition which was applied to the solar cell in patterned form by means of a letterpress printing method. After standard wet etching was performed, wax removal by means of hot water was investigated. Application of the letterpress wax printing process to silicon was met with a number of difficulties. The most serious shortcoming of the process was its inability to produce consistently well-defined printed patterns on the hard silicon cell surface.

  18. Process for gasification using a synthetic CO2 acceptor

    DOEpatents

    Lancet, Michael S.; Curran, George P.

    1980-01-01

    A gasification process is disclosed using a synthetic CO2 acceptor consisting essentially of at least one compound selected from the group consisting of calcium oxide and calcium carbonate supported in a refractory carrier matrix, the carrier having the general formula Ca5(SiO4)2CO3. A method for producing the synthetic CO2 acceptor is also disclosed.

  19. An Innovative Method for Obtaining Consistent Images and Quantification of Histochemically Stained Specimens

    PubMed Central

    Sedgewick, Gerald J.; Ericson, Marna

    2015-01-01

    Obtaining digital images of color brightfield microscopy is an important aspect of biomedical research and the clinical practice of diagnostic pathology. Although the field of digital pathology has had tremendous advances in whole-slide imaging systems, little effort has been directed toward standardizing color brightfield digital imaging to maintain image-to-image consistency and tonal linearity. Using a single camera and microscope to obtain digital images of three stains, we show that microscope and camera systems inherently produce image-to-image variation. Moreover, we demonstrate that post-processing with a widely used raster graphics editor software program does not completely correct for session-to-session inconsistency. We introduce a reliable method for creating consistent images with a hardware/software solution (ChromaCal™; Datacolor Inc., NJ) along with its features for creating color standardization, preserving linear tonal levels, providing automated white balancing and setting automated brightness to consistent levels. The resulting image consistency using this method will also streamline mean density and morphometry measurements, as images are easily segmented and single thresholds can be used. We suggest that this is a superior method for color brightfield imaging, which can be used for quantification and can be readily incorporated into workflows. PMID:25575568

  20. Community analysis of hydrogen-producing extreme thermophilic anaerobic microflora enriched from cow manure with five substrates.

    PubMed

    Yokoyama, Hiroshi; Moriya, Naoko; Ohmori, Hideyuki; Waki, Miyoko; Ogino, Akifumi; Tanaka, Yasuo

    2007-11-01

    The present study analyzed the community structures of anaerobic microflora producing hydrogen under extreme thermophilic conditions by two culture-independent methods: denaturing gradient gel electrophoresis (DGGE) and clone library analyses. Extreme thermophilic microflora (ETM) was enriched from cow manure by repeated batch cultures at 75 degrees C, using a substrate of xylose, glucose, lactose, cellobiose, or soluble starch, and produced hydrogen at yields of 0.56, 2.65, 2.17, 2.68, and 1.73 mol/mol-monosaccharide degraded, respectively. The results from the DGGE and clone library analyses were consistent and demonstrated that the community structures of ETM enriched with the four hexose-based substrates (glucose, lactose, cellobiose, and soluble starch) consisted of a single species, closely related to a hydrogen-producing extreme thermophile, Caldoanaerobacter subterraneus, with diversity at subspecies levels. The ETM enriched with xylose was more diverse than those enriched with the other substrates, and contained the bacterium related to C. subterraneus and an unclassified bacterium, distantly related to a xylan-degrading and hydrogen-producing extreme thermophile, Caloramator fervidus.

  1. Identification of hemolysin BL-producing Bacillus cereus isolates by a discontinuous hemolytic pattern in blood agar.

    PubMed Central

    Beecher, D J; Wong, A C

    1994-01-01

    Bacillus cereus causes distinct exotoxin-mediated diarrheal and emetic food poisoning syndromes and a variety of nongastrointestinal infections. Evidence is accumulating that hemolysin BL is a major B. cereus virulence factor. We describe two methods for detection of hemolysin BL in crude samples and on primary culture media. In the first method, the highly unusual discontinuous hemolysis pattern that is characteristic of pure hemolysin BL was produced in sheep and calf blood agar around wells filled with crude culture supernatant from hemolysin BL-producing strains. In the second method, the pattern was formed surrounding colonies of hemolysin BL-producing strains grown on media consisting of nutrient agar, 0.15 M NaCl, 2% calf serum, and sheep or calf blood. Hemolysin BL production was detected with these methods in 41 of 62 (66%) previously identified B. cereus isolates and in 46 of 136 (34%) presumptive B. cereus isolates from soil. All nine isolates tested that were associated with diarrhea or nongastrointestinal illness were positive for hemolysin BL. The methods presented here are specific, simple, inexpensive, and applicable to the screening of large numbers of samples or isolates. Images PMID:8017944

  2. Methods for Linking Item Parameters.

    DTIC Science & Technology

    1981-08-01

    within and across data sets; all proportion-correct distributions were quite platykurtic. Biserial item-total correlations had relatively consistent...would produce a distribution of a parameters which had a larger mean and standard deviation, was more positively skewed, and was somewhat more platykurtic

  3. ENCAPSULATED AEROSOLS

    DTIC Science & Technology

    materials determine the range of applicability of each method. A useful microencapsulation method, based on coagulation by inertial force, was developed...The generation apparatus, consisting of two aerosol generators in series, was utilized to produce many kinds of microcapsules. A fluid energy mill...was found useful for the production of some microcapsules. The permeability of microcapsule films and the effect of exposure time and humidity were

  4. Methods matter: considering locomotory mode and respirometry technique when estimating metabolic rates of fishes

    PubMed Central

    Rummer, Jodie L.; Binning, Sandra A.; Roche, Dominique G.; Johansen, Jacob L.

    2016-01-01

    Respirometry is frequently used to estimate metabolic rates and examine organismal responses to environmental change. Although a range of methodologies exists, it remains unclear whether differences in chamber design and exercise (type and duration) produce comparable results within individuals and whether the most appropriate method differs across taxa. We used a repeated-measures design to compare estimates of maximal and standard metabolic rates (MMR and SMR) in four coral reef fish species using the following three methods: (i) prolonged swimming in a traditional swimming respirometer; (ii) short-duration exhaustive chase with air exposure followed by resting respirometry; and (iii) short-duration exhaustive swimming in a circular chamber. We chose species that are steady/prolonged swimmers, using either a body–caudal fin or a median–paired fin swimming mode during routine swimming. Individual MMR estimates differed significantly depending on the method used. Swimming respirometry consistently provided the best (i.e. highest) estimate of MMR in all four species irrespective of swimming mode. Both short-duration protocols (exhaustive chase and swimming in a circular chamber) produced similar MMR estimates, which were up to 38% lower than those obtained during prolonged swimming. Furthermore, underestimates were not consistent across swimming modes or species, indicating that a general correction factor cannot be used. However, SMR estimates (upon recovery from both of the exhausting swimming methods) were consistent across both short-duration methods. Given the increasing use of metabolic data to assess organismal responses to environmental stressors, we recommend carefully considering respirometry protocols before experimentation. Specifically, results should not readily be compared across methods; discrepancies could result in misinterpretation of MMR and aerobic scope. PMID:27382471

  5. A comprehensive evaluation of popular proteomics software workflows for label-free proteome quantification and imputation.

    PubMed

    Välikangas, Tommi; Suomi, Tomi; Elo, Laura L

    2017-05-31

    Label-free mass spectrometry (MS) has developed into an important tool applied in various fields of biological and life sciences. Several software packages exist to process the raw MS data into quantified protein abundances, including open source and commercial solutions. Each includes a set of unique algorithms for different tasks of the MS data processing workflow. While many of these algorithms have been compared separately, a thorough and systematic evaluation of their overall performance is missing. Moreover, systematic information is lacking about the number of missing values produced by the different proteomics software packages and the capabilities of different data imputation methods to account for them. In this study, we evaluated the performance of five popular quantitative label-free proteomics software workflows using four different spike-in data sets. Our extensive testing included the number of proteins quantified and the number of missing values produced by each workflow, the accuracy of detecting differential expression and logarithmic fold change, and the effect of different imputation and filtering methods on the differential expression results. We found that the Progenesis software performed consistently well in the differential expression analysis and produced few missing values. The missing values produced by the other software packages decreased their performance, but this difference could be mitigated using proper data filtering or imputation methods. Among the imputation methods, we found that local least squares (lls) regression imputation consistently increased the performance of the software in the differential expression analysis, and a combination of both data filtering and local least squares imputation increased performance the most in the tested data sets. © The Author 2017. Published by Oxford University Press.
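
    Local least squares imputation admits a compact sketch: each row with missing values is regressed on its most correlated complete rows, and the fit predicts the missing entries. This simplified version (the k = 2 choice and the toy matrix are invented for illustration) is not the implementation benchmarked in the study.

```python
import numpy as np

def lls_impute(X: np.ndarray, k: int = 2) -> np.ndarray:
    """Local least squares imputation (simplified sketch).

    For each row (protein) with missing values, the k complete rows most
    correlated with it over its observed columns serve as regressors in a
    least-squares fit, which then predicts the missing entries.
    """
    X = X.astype(float).copy()
    complete = X[~np.isnan(X).any(axis=1)]  # rows with no missing values
    for i in range(X.shape[0]):
        miss = np.isnan(X[i])
        if not miss.any():
            continue
        obs = ~miss
        # absolute correlation of each complete row with the target row,
        # computed only over the target's observed columns
        corr = np.array([abs(np.corrcoef(row[obs], X[i, obs])[0, 1])
                         for row in complete])
        nbrs = complete[np.argsort(corr)[-k:]]
        # least-squares weights w such that X[i, obs] ~= w @ nbrs[:, obs]
        w, *_ = np.linalg.lstsq(nbrs[:, obs].T, X[i, obs], rcond=None)
        X[i, miss] = w @ nbrs[:, miss]
    return X

X = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.0, 6.0, 8.0],
              [1.5, 3.0, 4.5, np.nan]])  # third row is 1.5x the first
filled = lls_impute(X)
```

    On this toy matrix the incomplete row is an exact multiple of its neighbours, so the regression recovers the missing entry exactly; real proteomics data would only approximate this.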

  6. How reliable are methods to assess xylem vulnerability to cavitation? The issue of 'open vessel' artifact in oaks.

    PubMed

    Martin-StPaul, N K; Longepierre, D; Huc, R; Delzon, S; Burlett, R; Joffre, R; Rambal, S; Cochard, H

    2014-08-01

    Three methods are in widespread use to build vulnerability curves (VCs) to cavitation. The bench drying (BD) method is considered as a reference because embolism and xylem pressure are measured on large branches dehydrating in the air, in conditions similar to what happens in nature. Two other methods of embolism induction have been increasingly used. While the Cavitron (CA) uses centrifugal force to induce embolism, in the air injection (AI) method embolism is induced by forcing pressurized air to enter a stem segment. Recent studies have suggested that the AI and CA methods are inappropriate in long-vesselled species because they produce a very high-threshold xylem pressure for embolism (e.g., P50) compared with what is expected from (i) their ecophysiology in the field (native embolism, water potential and stomatal response to xylem pressure) and (ii) the P50 obtained with the BD method. However, other authors have argued that the CA and AI methods may be valid because they produce VCs similar to the BD method. In order to clarify this issue, we assessed VCs with the three above-mentioned methods on the long-vesselled Quercus ilex L. We showed that the BD VC yielded threshold xylem pressure for embolism consistent with in situ measurements of native embolism, minimal water potential and stomatal conductance. We therefore concluded that the BD method provides a reliable estimate of the VC for this species. The CA method produced a very high P50 (i.e., less negative) compared with the BD method, which is consistent with an artifact related to the vessel length. The VCs obtained with the AI method were highly variable, producing P50 ranging from -2 to -8.2 MPa. This wide variability was more related to differences in base diameter among samples than to differences in the length of samples. We concluded that this method is probably subject to an artifact linked to the distribution of vessel lengths within the sample. Overall, our results indicate that the CA and the AI should be used with extreme caution on long-vesselled species. Our results also highlight that several criteria may be helpful to assess the validity of a VC. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  7. Fatigue loading history reconstruction based on the rain-flow technique

    NASA Technical Reports Server (NTRS)

    Khosrovaneh, A. K.; Dowling, N. E.

    1989-01-01

    Methods are considered for reducing a non-random fatigue loading history to a concise description and then for reconstructing a time history similar to the original. In particular, three methods of reconstruction based on a rain-flow cycle counting matrix are presented. A rain-flow matrix consists of the numbers of cycles at various peak and valley combinations. Two methods are based on a two dimensional rain-flow matrix, and the third on a three dimensional rain-flow matrix. Histories reconstructed by any of these methods produce a rain-flow matrix identical to that of the original history, and as a result the resulting time history is expected to produce a fatigue life similar to that for the original. The procedures described allow lengthy loading histories to be stored in compact form.
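    The three-point counting rule at the heart of a rain-flow matrix can be sketched as follows. This is a simplified illustration (closed-cycle extraction with the unpaired residue left over), not the full ASTM procedure or the paper's reconstruction algorithms; a rain-flow matrix would then be built by binning the counted cycles by their peak and valley values.

```python
def rainflow(turning_points):
    """Simplified three-point rainflow count: whenever the most recent range
    equals or exceeds the previous one, the previous range closes one full
    cycle. Returns the closed-cycle ranges and the unpaired residue."""
    stack, cycles = [], []
    for p in turning_points:
        stack.append(p)
        while len(stack) >= 3:
            x = abs(stack[-1] - stack[-2])   # most recent range
            y = abs(stack[-2] - stack[-3])   # previous range
            if x < y:
                break                        # previous cycle not yet closed
            cycles.append(y)                 # range y counts as one full cycle
            del stack[-3:-1]                 # drop the two points that formed it
    return cycles, stack                     # residue: treated as half-cycles
```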

  8. Sand consolidation method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, B.M.

    1965-10-05

    This is a new and improved sand consolidation method wherein an in-situ curing of a resinous fluid is undertaken. This method does not require that the resinous fluids be catalyzed at the surface of the well or well bore as is the case in previous methods. This method consists of, first, pumping an acid-curable consolidating fluid into the unconsolidated sand or earth formation and, secondly, pumping an oil overflush solution containing a halogenated organic or other organic acid or delayed acid-producing chemical. A small quantity of diesel oil spacer may be used between the plastic and the catalyst solution. The overflush functions to remove permeability, and its acid or acid producing component promotes subsequent hardening of the remaining film of consolidating fluid. Trichloroacetic acid and benzotrichloride are satisfactory to add to the overflush solution for curing the resins. (17 claims)

  9. Method for producing nuclear fuel

    DOEpatents

    Haas, Paul A.

    1983-01-01

    Nuclear fuel is made by contacting an aqueous solution containing an actinide salt with an aqueous solution containing ammonium hydroxide, ammonium oxalate, or oxalic acid in an amount that will react with a fraction of the actinide salt to form a precipitate consisting of the hydroxide or oxalate of the actinide. A slurry consisting of the precipitate and solution containing the unreacted actinide salt is formed into drops which are gelled, calcined, and pressed to form pellets.

  10. Study samples are too small to produce sufficiently precise reliability coefficients.

    PubMed

    Charter, Richard A

    2003-04-01

    In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.
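    The precision argument can be illustrated with Fisher's z approximation for a correlation-type coefficient. The sample sizes below echo those reported in the survey (median interjudge N of 36 versus the overall mean N of 260), but the coefficient value of .80 and the use of this particular interval are illustrative assumptions, not figures from the paper.

```python
import math

def fisher_ci(r, n, z_crit=1.96):
    """Approximate 95% CI for a correlation-type reliability coefficient
    via Fisher's z transform: z = atanh(r), SE = 1/sqrt(n - 3)."""
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

small = fisher_ci(0.80, 36)    # small-sample study: wide interval
large = fisher_ci(0.80, 260)   # larger-sample study: much tighter interval
```

For r = .80, the interval at N = 36 spans roughly .64 to .89, versus about .75 to .84 at N = 260, which is the imprecision the author warns about.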

  11. Method for removing hydrocarbon contaminants from solid materials

    DOEpatents

    Bala, Gregory A.; Thomas, Charles P.

    1995-01-01

    A system for removing hydrocarbons from solid materials. Contaminated solids are combined with a solvent (preferably terpene based) to produce a mixture. The mixture is washed with water to generate a purified solid product (which is removed from the system) and a drainage product. The drainage product is separated into a first fraction (consisting mostly of contaminated solvent) and a second fraction (containing solids and water). The first fraction is separated into a third fraction (consisting mostly of contaminated solvent) and a fourth fraction (containing residual solids and water). The fourth fraction is combined with the second fraction to produce a sludge which is separated into a fifth fraction (containing water which is ultimately reused) and a sixth fraction (containing solids). The third fraction is then separated into a seventh fraction (consisting of recovered solvent which is ultimately reused) and an eighth fraction (containing hydrocarbon waste).

  12. Method for removing hydrocarbon contaminants from solid materials

    DOEpatents

    Bala, G.A.; Thomas, C.P.

    1995-10-03

    A system is described for removing hydrocarbons from solid materials. Contaminated solids are combined with a solvent (preferably terpene based) to produce a mixture. The mixture is washed with water to generate a purified solid product (which is removed from the system) and a drainage product. The drainage product is separated into a first fraction (consisting mostly of contaminated solvent) and a second fraction (containing solids and water). The first fraction is separated into a third fraction (consisting mostly of contaminated solvent) and a fourth fraction (containing residual solids and water). The fourth fraction is combined with the second fraction to produce a sludge which is separated into a fifth fraction (containing water which is ultimately reused) and a sixth fraction (containing solids). The third fraction is then separated into a seventh fraction (consisting of recovered solvent which is ultimately reused) and an eighth fraction (containing hydrocarbon waste). 4 figs.

  13. Psychological Flexibility, ACT, and Organizational Behavior

    ERIC Educational Resources Information Center

    Bond, Frank W.; Hayes, Steven C.; Barnes-Holmes, Dermot

    2006-01-01

    This paper offers organizational behavior management (OBM) a behavior analytically consistent way to expand its analysis of, and methods for changing, organizational behavior. It shows how Relational Frame Theory (RFT) suggests that common, problematic, psychological processes emerge from language itself, and they produce psychological…

  14. Application of the Taguchi Method for Optimizing the Process Parameters of Producing Lightweight Aggregates by Incorporating Tile Grinding Sludge with Reservoir Sediments

    PubMed Central

    Chen, How-Ji; Chang, Sheng-Nan; Tang, Chao-Wei

    2017-01-01

    This study aimed to apply the Taguchi optimization technique to determine the process conditions for producing synthetic lightweight aggregate (LWA) by incorporating tile grinding sludge powder with reservoir sediments. An orthogonal array L16(4⁵) was adopted, which consisted of five controllable four-level factors (i.e., sludge content, preheat temperature, preheat time, sintering temperature, and sintering time). Moreover, the analysis of variance method was used to explore the effects of the experimental factors on the particle density, water absorption, bloating ratio, and loss on ignition of the produced LWA. Overall, the produced aggregates had particle densities ranging from 0.43 to 2.1 g/cm³ and water absorption ranging from 0.6% to 13.4%. These values are comparable to the requirements for ordinary and high-performance LWAs. The results indicated that it is considerably feasible to produce high-performance LWA by incorporating tile grinding sludge with reservoir sediments. PMID:29125576

  15. Application of the Taguchi Method for Optimizing the Process Parameters of Producing Lightweight Aggregates by Incorporating Tile Grinding Sludge with Reservoir Sediments.

    PubMed

    Chen, How-Ji; Chang, Sheng-Nan; Tang, Chao-Wei

    2017-11-10

    This study aimed to apply the Taguchi optimization technique to determine the process conditions for producing synthetic lightweight aggregate (LWA) by incorporating tile grinding sludge powder with reservoir sediments. An orthogonal array L16(4⁵) was adopted, which consisted of five controllable four-level factors (i.e., sludge content, preheat temperature, preheat time, sintering temperature, and sintering time). Moreover, the analysis of variance method was used to explore the effects of the experimental factors on the particle density, water absorption, bloating ratio, and loss on ignition of the produced LWA. Overall, the produced aggregates had particle densities ranging from 0.43 to 2.1 g/cm³ and water absorption ranging from 0.6% to 13.4%. These values are comparable to the requirements for ordinary and high-performance LWAs. The results indicated that it is considerably feasible to produce high-performance LWA by incorporating tile grinding sludge with reservoir sediments.
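    The main-effects step of a Taguchi analysis (mean response at each factor level across the orthogonal-array runs) can be sketched as follows. For brevity this uses a toy L4(2³) array rather than the paper's L16(4⁵), and the response values are invented, not measurements from the study.

```python
import numpy as np

# Toy L4(2^3) orthogonal array: 4 runs, 3 two-level factors.
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])
response = np.array([10., 12., 20., 22.])   # e.g. bloating ratio per run (made up)

def main_effects(array, y):
    """Mean response at each level of each factor; the factor with the
    largest spread between level means dominates the response."""
    return [{lvl: y[array[:, f] == lvl].mean() for lvl in np.unique(array[:, f])}
            for f in range(array.shape[1])]

effects = main_effects(L4, response)
```

With these numbers, factor 0 dominates (level means 11.0 vs 21.0) while factor 2 has no effect (16.0 at both levels); a full Taguchi study would follow this with ANOVA, as the abstract describes.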

  16. A thesis on the Development of an Automated SWIFT Edge Detection Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trujillo, Christopher J.

    Throughout the world, scientists and engineers, such as those at Los Alamos National Laboratory, perform research and testing unique to applications aimed at advancing technology and understanding the nature of materials. With this testing comes a need for advanced methods of data acquisition and, most importantly, a means of analyzing and extracting the necessary information from the acquired data. In this thesis, I aim to produce an automated method implementing advanced image processing techniques and tools to analyze SWIFT image datasets for Detonator Technology at Los Alamos National Laboratory. Such an effective method for edge detection and point extraction can prove to be advantageous in analyzing such unique datasets and provide for consistency in producing results.

  17. Pre-treating water with non-thermal plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Young I.; Fridman, Alexander; Rabinovich, Alexander

    The present invention consists of a method of pre-treatment of adulterated water for distillation, including adulterated water produced during hydraulic fracturing ("fracking") of shale rock during natural gas drilling. In particular, the invention is directed to a method of treating adulterated water, said adulterated water having an initial level of bicarbonate ion in a range of about 250 ppm to about 5000 ppm and an initial level of calcium ion in a range of about 500 ppm to about 50,000 ppm, said method comprising contacting the adulterated water with a non-thermal arc discharge plasma to produce plasma treated water having a level of bicarbonate ion of less than about 100 ppm. Optionally, the plasma treated water may be further distilled.

  18. Li₂O-Al₂O₃-SiO₂ glass ceramic-aluminum containing austenitic stainless steel composite body and a method of producing the same

    DOEpatents

    Cassidy, Roger T.

    1990-05-01

    The present invention relates to a hermetically sealed Li₂O-Al₂O₃-SiO₂ glass ceramic-aluminum containing stainless steel composite body and a method of producing the body. The composite body includes an oxide interfacial region between the glass ceramic and metal, wherein the interfacial region consists essentially of an Al₂O₃ layer. The interfacial Al₂O₃ region includes constituents of both the metal and glass ceramic.

  19. Full-frame video stabilization with motion inpainting.

    PubMed

    Matsushita, Yasuyuki; Ofek, Eyal; Ge, Weina; Tang, Xiaoou; Shum, Heung-Yeung

    2006-07-01

    Video stabilization is an important video enhancement technology which aims at removing annoying shaky motion from videos. We propose a practical and robust approach of video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods end up producing smaller-size stabilized videos, our completion method can produce full-frame videos by naturally filling in missing image parts by locally aligning image data of neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels of neighboring frames to increase the sharpness of the frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer which can naturally keep the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments over a wide variety of videos.
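    The motion-smoothing stage common to stabilizers of this kind can be sketched as below: low-pass filter the estimated camera trajectory, and apply the per-frame difference as the stabilizing correction. This is a generic sketch of that one stage only (the paper's contribution is the completion and deblurring that follow it), and the Gaussian kernel parameters are arbitrary choices.

```python
import numpy as np

def smooth_path(path, sigma=3.0, radius=9):
    """Smooth a 1D per-frame camera trajectory with a normalized Gaussian
    kernel; the per-frame stabilizing correction is (smoothed - raw)."""
    k = np.exp(-0.5 * (np.arange(-radius, radius + 1) / sigma) ** 2)
    k /= k.sum()                              # unit-gain low-pass kernel
    padded = np.pad(path, radius, mode='edge')
    smoothed = np.convolve(padded, k, mode='valid')
    return smoothed, smoothed - path          # trajectory and correction
```

In a full stabilizer, the same smoothing is applied to each component of the inter-frame transform (translation, rotation), and the correction warps each frame.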

  20. Self-consistent modeling of laminar electrohydrodynamic plumes from ultra-sharp needles in cyclohexane

    NASA Astrophysics Data System (ADS)

    Becerra, Marley; Frid, Henrik; Vázquez, Pedro A.

    2017-12-01

    This paper presents a self-consistent model of electrohydrodynamic (EHD) laminar plumes produced by electron injection from ultra-sharp needle tips in cyclohexane. Since the density of electrons injected into the liquid is well described by the Fowler-Nordheim field emission theory, the injection law is not assumed. Furthermore, the generation of electrons in cyclohexane and their conversion into negative ions is included in the analysis. Detailed steady-state characteristics of EHD plumes under weak injection and space-charge limited injection are studied. It is found that the plume characteristics far from both electrodes and under weak injection can be accurately described with an asymptotic simplified solution proposed by Vazquez et al. ["Dynamics of electrohydrodynamic laminar plumes: Scaling analysis and integral model," Phys. Fluids 12, 2809 (2000)] when the correct longitudinal electric field distribution and liquid velocity radial profile are used as input. However, this asymptotic solution deviates from the self-consistently calculated plume parameters under space-charge limited injection since it neglects the radial variations of the electric field produced by a high-density charged core. In addition, no significant differences in the model estimates of the plume are found when the simulations are obtained either with the finite element method or with a diffusion-free particle method. It is shown that the model also enables the calculation of the current-voltage characteristic of EHD laminar plumes produced by electron field emission, with good agreement with measured values reported in the literature.

  1. Copper-silver-titanium-tin filler metal for direct brazing of structural ceramics

    DOEpatents

    Moorhead, Arthur J.

    1988-04-05

    A method of joining ceramics and metals to themselves and to one another at about 800.degree. C. is described using a brazing filler metal consisting essentially of 35 to 50 at. % copper, 40 to 50 at. % silver, 1 to 15 at. % titanium, and 2 to 8 at. % tin. This method produces strong joints that can withstand high service temperatures and oxidizing environments.

  2. Application of biocontrol agents in forest nurseries

    USDA-ARS?s Scientific Manuscript database

    Bare-root conifer seedling culture consists of growing seedlings (sown or transplanted) in soil, and is the predominant method for supplying America’s need for healthy regeneration stock to produce and sustain forests, wildlife food sources, fiber, wood products, paper, bio-pharmaceuticals and now p...

  3. Microwave-assisted synthesis of cyclodextrin polyurethanes

    USDA-ARS?s Scientific Manuscript database

    Cyclodextrin (CD) has often been incorporated into polyurethanes in order to facilitate its use in encapsulation or removal of organic species for various applications. In this work a microwave-assisted method has been developed to produce polyurethanes consisting of alpha-, ß-, and gamma-CD and thr...

  4. Secondary power-producing cell. [electrodes contain same two elements in different proportions

    DOEpatents

    Fischer, A.K.

    1971-10-26

    This cell consists of an anode and a cathode containing the same two elements in different proportions and an electrolyte which contains ions of the element which is to be transported through it. The electrodes consist of chromium, iron, lithium, sodium, cadmium, copper, or zinc and phosphorus, selenium, tellurium, sulfur, arsenic, or nitrogen. A method to heat the cathode in the regeneration cycle to transfer the electronegative component to the anode is provided. (RWR)

  5. Attributional Retraining and Elaborative Learning: Improving Academic Development through Writing-Based Interventions

    ERIC Educational Resources Information Center

    Hall, Nathan C.; Perry, Raymond P.; Goetz, Thomas; Ruthig, Joelle C.; Stupnisky, Robert H.; Newall, Nancy E.

    2007-01-01

    Attributional retraining (AR) is a motivational intervention that consistently produces improved performance by encouraging controllable failure attributions. Research suggests that cognitively engaging AR methods are ideal for high-elaborating students, whereas affect-oriented techniques are better for low-elaborating students. College students'…

  6. Characterization of small RNA populations in non-transgenic and aflatoxin-reducing-transformed peanut

    USDA-ARS?s Scientific Manuscript database

    Aflatoxins are powerful carcinogenic secondary metabolites produced mainly by Aspergillus flavus and A. parasiticus. These mycotoxins accumulate in crops and pose a serious risk to food safety and human health. No consistently effective method exists to control aflatoxins in crops. RNA interferen...

  7. A modular modulation method for achieving increases in metabolite production.

    PubMed

    Acerenza, Luis; Monzon, Pablo; Ortega, Fernando

    2015-01-01

    Increasing the production of overproducing strains represents a great challenge. Here, we develop a modular modulation method to determine the key steps for genetic manipulation to increase metabolite production. The method consists of three steps: (i) modularization of the metabolic network into two modules connected by linking metabolites, (ii) change in the activity of the modules using auxiliary rates producing or consuming the linking metabolites in appropriate proportions and (iii) determination of the key modules and steps to increase production. The mathematical formulation of the method in matrix form shows that it may be applied to metabolic networks of any structure and size, with reactions showing any kind of rate laws. The results are valid for any type of conservation relationships in the metabolite concentrations or interactions between modules. The activity of the module may, in principle, be changed by any large factor. The method may be applied recursively or combined with other methods devised to perform fine searches in smaller regions. In practice, it is implemented by integrating to the producer strain heterologous reactions or synthetic pathways producing or consuming the linking metabolites. The new procedure may contribute to develop metabolic engineering into a more systematic practice. © 2015 American Institute of Chemical Engineers.

  8. Effects of Synthesis Method on Electrical Properties of Graphene

    NASA Astrophysics Data System (ADS)

    Fuad, M. F. I. Ahmad; Jarni, H. H.; Shariffudin, W. N.; Othman, N. H.; Rahim, A. N. Che Abdul

    2018-05-01

    The aim of this study is to achieve the highest reduction capability and complete removal of oxygen from graphene oxide (GO) using different chemical methods. A modified Hummer's method was used to produce GO, and hydrazine hydrate was utilized in the reduction of GO to graphene. Two chemical methods were used to synthesize graphene: 1) Sina's method and 2) Sasha's method. Both GO and graphene were then characterized using X-Ray Powder Diffraction (XRD) and Fourier Transform Infrared Spectrometry (FT-IR). The XRD patterns showed that the values for graphene and GO are within their reliable ranges, and FT-IR identified the differing functional groups of GO and graphene. Graphene was verified to have undergone the reduction process because no oxygen-containing functional groups were detected. Electrochemical impedance spectrometry (EIS) was then conducted to test the electrical conductivity of two batches (each weighing 1.6 g) of graphene synthesized using the two methods. Sasha's method produced graphene with a lower conductivity than Sina's method, with values of 6.2E+02 S/m and 8.1E+02 S/m, respectively. These values show that both methods produced good graphene; however, the graphene produced by Sina's method has better electrical properties.

  9. Fracture toughness of advanced ceramics at room temperature

    NASA Technical Reports Server (NTRS)

    Quinn, George D.; Salem, Jonathan; Bar-On, Isa; Cho, Kyu; Foley, Michael; Fang, HO

    1992-01-01

    Results of round-robin fracture toughness tests on advanced ceramics are reported. A gas-pressure silicon nitride and a zirconia-toughened alumina were tested using three test methods: indentation fracture, indentation strength, and single-edge precracked beam. The latter two methods have produced consistent results. The interpretation of fracture toughness test results for the zirconia alumina composite is shown to be complicated by R-curve and environmentally assisted crack growth phenomena.
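    One member of the indentation-fracture family of methods evaluated in round robins of this kind is the Anstis relation, sketched below. This is an illustration of that commonly used formula, not necessarily the exact expression adopted in this round robin, and the silicon nitride-like input values are made up for the example.

```python
import math

def kc_indentation(E, H, P, c, xi=0.016):
    """Indentation-fracture toughness estimate via the Anstis relation
    Kc = xi * sqrt(E/H) * P / c**1.5, with calibration constant xi ~ 0.016.
    E (Young's modulus) and H (hardness) in Pa, load P in N, radial crack
    length c in m; returns Kc in Pa*sqrt(m)."""
    return xi * math.sqrt(E / H) * P / c ** 1.5

# Illustrative silicon-nitride-like inputs: 10 kgf Vickers load, 125 um cracks.
kc = kc_indentation(E=300e9, H=15e9, P=98.1, c=125e-6)
# about 5 MPa*sqrt(m) for these illustrative inputs
```

The round robin's finding that indentation fracture disagreed with the indentation-strength and precracked-beam results reflects the sensitivity of this formula to the crack-length measurement and to the empirical constant xi.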

  10. A method of hidden Markov model optimization for use with geophysical data sets

    NASA Technical Reports Server (NTRS)

    Granat, R. A.

    2003-01-01

    Geophysics research has been faced with a growing need for automated techniques with which to process large quantities of data. A successful tool must meet a number of requirements: it should be consistent, require minimal parameter tuning, and produce scientifically meaningful results in reasonable time. We introduce a hidden Markov model (HMM)-based method for analysis of geophysical data sets that attempts to address these issues.
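    The core decoding step of any discrete HMM analysis, recovering the most likely hidden-state sequence, can be sketched with a log-space Viterbi pass. This is a generic textbook sketch, not the paper's optimization method, and the two-state parameters in the usage example are invented.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for a discrete HMM.
    obs: observation indices; pi: initial probs (S,);
    A: transition matrix (S, S); B: emission matrix (S, V)."""
    S, T = len(pi), len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])     # log-prob of best path so far
    back = np.zeros((T, S), dtype=int)           # backpointers
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)       # scores[i, j]: i -> j
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):                # follow backpointers
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

For geophysical time series, the observations would first be discretized (or the emissions made Gaussian), and the parameters fit by an optimization such as the one the abstract describes.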

  11. Decision-Tree Program

    NASA Technical Reports Server (NTRS)

    Buntine, Wray

    1994-01-01

    IND computer program introduces Bayesian and Markov/maximum-likelihood (MML) methods and more-sophisticated methods of searching in growing trees. Produces more-accurate class-probability estimates important in applications like diagnosis. Provides range of features and styles with convenience for casual user, fine-tuning for advanced user or for those interested in research. Consists of four basic kinds of routines: data-manipulation, tree-generation, tree-testing, and tree-display. Written in C language.

  12. Method for forming a thermocouple

    DOEpatents

    Metz, Hugh J.

    1979-01-01

    A method is provided for producing a fast response, insulated junction thermocouple having a uniform diameter outer sheath in the region of the measuring junction. One step is added to the usual thermocouple fabrication process that consists in expanding the thermocouple sheath following the insulation removal step. This makes it possible to swage the sheath back to the original diameter and compact the insulation to the desired high density in the final fabrication step.

  13. Nucleation of Crystals From Solution in Microgravity (USML-1 Glovebox (GBX) Investigation)

    NASA Technical Reports Server (NTRS)

    Kroes, Roger L.; Reiss, Donald A.; Lehoczky, Sandor L.

    1994-01-01

    A new method for initiating nucleation from solutions in microgravity which avoids nucleation on container walls and other surfaces is described. This method consists of injecting a small quantity of highly concentrated, heated solution into the interior of a slightly supersaturated, cooler host growth solution. It was tested successfully on USML-1, producing a large number of LAP crystals whose longest dimension averaged 1 mm.

  14. Locating arbitrarily time-dependent sound sources in three dimensional space in real time.

    PubMed

    Wu, Sean F; Zhu, Na

    2010-08-01

    This paper presents a method for locating arbitrarily time-dependent acoustic sources in a free field in real time by using only four microphones. This method is capable of handling a wide variety of acoustic signals, including broadband, narrowband, impulsive, and continuous sound over the entire audible frequency range, produced by multiple sources in three-dimensional (3D) space. Locations of acoustic sources are indicated by Cartesian coordinates. The underlying principle of this method is a hybrid approach that consists of modeling of acoustic radiation from a point source in a free field, triangulation, and de-noising to enhance the signal-to-noise ratio (SNR). Numerical simulations are conducted to study the impacts of SNR, microphone spacing, source distance, and frequency on the spatial resolution and accuracy of source localization. Based on these results, a simple device is fabricated that consists of four microphones mounted on three mutually orthogonal axes at an optimal distance, a four-channel signal conditioner, and a camera. Experiments are conducted in different environments to assess its effectiveness in locating sources that produce arbitrarily time-dependent acoustic signals, regardless of whether a sound source is stationary or moving in space, even when it moves behind the measurement microphones. Practical limitations of this method are discussed.
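    The triangulation step for a four-microphone device of this geometry can be sketched as a Gauss-Newton solve on time differences of arrival (TDOA). This is a sketch of the triangulation stage only, under the assumption of one microphone at the origin and three on orthogonal axes; the spacing, source position, and solver settings are illustrative, and the paper's full method adds the radiation modeling and de-noising stages.

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s (assumed)

# One reference microphone at the origin, three on mutually orthogonal axes.
mics = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)

def locate(taus, guess, iters=100):
    """Gauss-Newton solve for the source position x from TDOAs
    taus[i] = (|x - mic_i| - |x - mic_0|) / C, for i = 1..3."""
    x = np.array(guess, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(mics - x, axis=1)
        r = (d[1:] - d[0]) - C * np.asarray(taus)           # residuals
        # Jacobian rows: unit(x - mic_i) - unit(x - mic_0)
        J = (x - mics[1:]) / d[1:, None] - (x - mics[0]) / d[0]
        x -= np.linalg.lstsq(J, r, rcond=None)[0]           # Newton step
    return x
```

With noisy measured TDOAs, the same iteration is run on de-noised delay estimates, which is where the SNR enhancement in the abstract matters.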

  15. Results of the Round Robin on opening-load measurement conducted by ASTM Task Group E24.04.04 on Crack Closure Measurement and Analysis

    NASA Technical Reports Server (NTRS)

    Phillips, Edward P.

    1989-01-01

    An experimental Round Robin on the measurement of the opening load in fatigue crack growth tests was conducted by ASTM Task Group E24.04.04 on Crack Closure Measurement and Analysis. The Round Robin evaluated the current level of consistency of opening-load measurements among laboratories and sought to identify causes for observed inconsistency. Eleven laboratories participated in the testing of compact and middle-crack specimens. Opening-load measurements were made for crack growth at two stress-intensity factor levels, three crack lengths, and following an overload. All opening-load measurements were based on the analysis of specimen compliance data. When all of the results reported (from all participants, all measurement methods, and all data analysis methods) for a given test condition were pooled, the range of opening loads was very large--typically spanning the lower half of the fatigue loading cycle. Part of the large scatter in the reported opening-load results was ascribed to consistent differences in results produced by the various methods used to measure specimen compliance and to evaluate the opening load from the compliance data. Another significant portion of the scatter was ascribed to lab-to-lab differences in producing the compliance data when using nominally the same method of measurement.

  16. Method for forming gold-containing catalyst with porous structure

    DOEpatents

    Biener, Juergen; Hamza, Alex V; Baeumer, Marcus; Schulz, Christian; Jurgens, Birte; Biener, Monika M.

    2014-07-22

    A method for forming a gold-containing catalyst with porous structure according to one embodiment of the present invention includes producing a starting alloy by melting together of gold and at least one less noble metal that is selected from the group consisting of silver, copper, rhodium, palladium, and platinum; and a dealloying step comprising at least partial removal of the less noble metal by dissolving the at least one less noble metal out of the starting alloy. Additional methods and products thereof are also presented.

  17. Methods to Collect, Compile, and Analyze Observed Short-lived Fission Product Gamma Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finn, Erin C.; Metz, Lori A.; Payne, Rosara F.

    2011-09-29

    A unique set of fission product gamma spectra was collected at short times (4 minutes to 1 week) on various fissionable materials. Gamma spectra were collected from the neutron-induced fission of uranium, neptunium, and plutonium isotopes at thermal, epithermal, fission spectrum, and 14-MeV neutron energies. This report describes the experimental methods used to produce and collect the gamma data, defines the experimental parameters for each method, and demonstrates the consistency of the measurements.

  18. Creation of an anti-imaging system using binary optics.

    PubMed

    Wang, Haifeng; Lin, Jian; Zhang, Dawei; Wang, Yang; Gu, Min; Urbach, H P; Gan, Fuxi; Zhuang, Songlin

    2016-09-13

    We present a concealing method in which an anti-point spread function (APSF) is generated using binary optics, which produces a large-scale dark area in the focal region that can hide any object located within it. This result is achieved by generating two identical PSFs of opposite signs, one consisting of positive electromagnetic waves from the zero-phase region of the binary optical element and the other consisting of negative electromagnetic waves from the pi-phase region of the binary optical element.
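    The cancellation mechanism can be illustrated numerically: a pupil whose two regions carry phases 0 and pi radiates fields of opposite sign, so the far-field (Fourier-plane) amplitude vanishes on axis. The 1D half-and-half geometry below is an illustrative toy, not the binary optical element designed in the paper.

```python
import numpy as np

# 1D binary-phase pupil: first half zero phase, second half pi phase.
N = 1024
pupil = np.ones(N, dtype=complex)
pupil[N // 2:] *= np.exp(1j * np.pi)   # pi-phase region: field of opposite sign

# Far-field amplitude ~ Fourier transform of the pupil; the DC term is the
# on-axis value, where the positive and negative contributions cancel.
far_field = np.fft.fft(pupil)
on_axis = abs(far_field[0])            # ~0: dark focal region on axis
```

An unmodulated pupil of the same size would instead give an on-axis amplitude of N, the bright focus the APSF is designed to suppress.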

  19. Creation of an anti-imaging system using binary optics

    PubMed Central

    Wang, Haifeng; Lin, Jian; Zhang, Dawei; Wang, Yang; Gu, Min; Urbach, H. P.; Gan, Fuxi; Zhuang, Songlin

    2016-01-01

    We present a concealing method in which an anti-point spread function (APSF) is generated using binary optics, which produces a large-scale dark area in the focal region that can hide any object located within it. This result is achieved by generating two identical PSFs of opposite signs, one consisting of positive electromagnetic waves from the zero-phase region of the binary optical element and the other consisting of negative electromagnetic waves from the pi-phase region of the binary optical element. PMID:27620068

  20. Statewide Implementation of Evidence-Based Programs

    ERIC Educational Resources Information Center

    Fixsen, Dean; Blase, Karen; Metz, Allison; van Dyke, Melissa

    2013-01-01

    Evidence-based programs will be useful to the extent they produce benefits to individuals on a socially significant scale. It appears the combination of effective programs and effective implementation methods is required to assure consistent uses of programs and reliable benefits to children and families. To date, focus has been placed primarily…

  1. Effects of Phonetic Context on Relative Fundamental Frequency

    ERIC Educational Resources Information Center

    Lien, Yu-An S.; Gattuccio, Caitlin I.; Stepp, Cara E.

    2014-01-01

    Purpose: The effect of phonetic context on relative fundamental frequency (RFF) was examined, in order to develop stimuli sets with minimal within-speaker variability that can be implemented in future clinical protocols. Method: Sixteen speakers with healthy voices produced RFF stimuli. Uniform utterances consisted of 3 repetitions of the same…

  2. FINDING THE CENTER: AN ANALYSIS OF THE TILTED RING MODEL FITS TO THE INNER AND OUTER PARTS OF SIX DWARF GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boisvert, John H.; Rhee, George

    2016-07-01

    We present a study of the H i emission of six dwarf galaxies. Profiles of dark matter halos of galaxies such as these have been the subject of much debate. In this paper we investigate the accuracy with which the dynamical center (the center of rotation) of each galaxy can be determined. We have used the tilted ring model. We find that the tilted ring method produces plausible centers that are consistent with other published works that used rings at radii larger than 1 kpc. At a radius of 1 kpc the method often converges on centers that do not make sense, producing, for example, radial velocities for the galaxies that are inconsistent with the data. The only way to get the method to work in the centers of galaxies is to use prior information about the redshifts to rule out implausible centers. This suggests that the tilted ring method may not be producing reliable rotational velocities in the central kiloparsecs of dwarf galaxies.

  3. spa Typing and Multilocus Sequence Typing Show Comparable Performance in a Macroepidemiologic Study of Staphylococcus aureus in the United States

    PubMed Central

    O'Hara, F. Patrick; Suaya, Jose A.; Ray, G. Thomas; Baxter, Roger; Brown, Megan L.; Mera, Robertino M.; Close, Nicole M.; Thomas, Elizabeth

    2016-01-01

    A number of molecular typing methods have been developed for characterization of Staphylococcus aureus isolates. The utility of these systems depends on the nature of the investigation for which they are used. We compared two commonly used methods of molecular typing, multilocus sequence typing (MLST) (and its clustering algorithm, Based Upon Related Sequence Type [BURST]) with the staphylococcal protein A (spa) typing (and its clustering algorithm, Based Upon Repeat Pattern [BURP]), to assess the utility of these methods for macroepidemiology and evolutionary studies of S. aureus in the United States. We typed a total of 366 clinical isolates of S. aureus by these methods and evaluated indices of diversity and concordance values. Our results show that, when combined with the BURP clustering algorithm to delineate clonal lineages, spa typing produces results that are highly comparable with those produced by MLST/BURST. Therefore, spa typing is appropriate for use in macroepidemiology and evolutionary studies and, given its lower implementation cost, this method appears to be more efficient. The findings are robust and are consistent across different settings, patient ages, and specimen sources. Our results also support a model in which the methicillin-resistant S. aureus (MRSA) population in the United States comprises two major lineages (USA300 and USA100), which each consist of closely related variants. PMID:26669861

  4. spa Typing and Multilocus Sequence Typing Show Comparable Performance in a Macroepidemiologic Study of Staphylococcus aureus in the United States.

    PubMed

    O'Hara, F Patrick; Suaya, Jose A; Ray, G Thomas; Baxter, Roger; Brown, Megan L; Mera, Robertino M; Close, Nicole M; Thomas, Elizabeth; Amrine-Madsen, Heather

    2016-01-01

    A number of molecular typing methods have been developed for characterization of Staphylococcus aureus isolates. The utility of these systems depends on the nature of the investigation for which they are used. We compared two commonly used methods of molecular typing, multilocus sequence typing (MLST) (and its clustering algorithm, Based Upon Related Sequence Type [BURST]) with the staphylococcal protein A (spa) typing (and its clustering algorithm, Based Upon Repeat Pattern [BURP]), to assess the utility of these methods for macroepidemiology and evolutionary studies of S. aureus in the United States. We typed a total of 366 clinical isolates of S. aureus by these methods and evaluated indices of diversity and concordance values. Our results show that, when combined with the BURP clustering algorithm to delineate clonal lineages, spa typing produces results that are highly comparable with those produced by MLST/BURST. Therefore, spa typing is appropriate for use in macroepidemiology and evolutionary studies and, given its lower implementation cost, this method appears to be more efficient. The findings are robust and are consistent across different settings, patient ages, and specimen sources. Our results also support a model in which the methicillin-resistant S. aureus (MRSA) population in the United States comprises two major lineages (USA300 and USA100), which each consist of closely related variants.

  5. Measurement of Charged Pions from Neutrino-produced Nuclear Resonance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simon, Clifford N.

    2014-01-01

    A method for identifying stopped pions in a high-resolution scintillator bar detector is presented. I apply my technique to measure the axial mass M_A^Δ for production of the Δ(1232) resonance by neutrinos, with the result M_A^Δ = 1.16 ± 0.20 GeV (68% CL) (limited by statistics). The result is produced from the measured spectrum of reconstructed momentum transfer Q^2. I proceed by varying the value of M_A^Δ in a Rein-Sehgal-based Monte Carlo to produce the best agreement, using shape only (not normalization). The consistency of this result with recent reanalyses of previous bubble-chamber experiments is discussed.
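
    A shape-only template fit of the kind described above can be sketched as follows; the spectral shape, binning, and event counts here are hypothetical stand-ins for illustration, not the Rein-Sehgal model:

```python
import numpy as np

def shape_only_chi2(data_counts, template_counts):
    """Chi-square between a measured spectrum and a template after
    rescaling the template to the data (shape-only comparison)."""
    expected = template_counts * (data_counts.sum() / template_counts.sum())
    return float(np.sum((data_counts - expected) ** 2 / expected))

# Hypothetical Q^2 spectral shape whose falloff steepens as the
# axial-mass parameter m_a decreases (a toy model, not Rein-Sehgal).
q2 = np.linspace(0.05, 2.0, 40)  # GeV^2 bin centers
def toy_template(m_a):
    return q2 * np.exp(-1.5 * q2 / m_a)

rng = np.random.default_rng(0)
true_ma = 1.16
data = rng.poisson(500 * toy_template(true_ma) / toy_template(true_ma).sum())

# Scan candidate m_a values and keep the best shape-only fit.
candidates = np.arange(0.8, 1.6, 0.02)
chi2s = [shape_only_chi2(data, toy_template(m)) for m in candidates]
best = candidates[int(np.argmin(chi2s))]
print(f"best-fit m_a ~ {best:.2f} GeV")
```

    Because only the shape is compared, the template is rescaled to the total event count before the chi-square is computed, so the overall normalization drops out of the fit.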

  6. Novel Bioreactor Platform for Scalable Cardiomyogenic Differentiation from Pluripotent Stem Cell-Derived Embryoid Bodies.

    PubMed

    Rungarunlert, Sasitorn; Ferreira, Joao N; Dinnyes, Andras

    2016-01-01

    Generation of cardiomyocytes from pluripotent stem cells (PSCs) is a common and valuable approach to produce large amounts of cells for various applications, including assays and models for drug development, cell-based therapies, and tissue engineering. All these applications would benefit from a reliable bioreactor-based methodology to consistently generate homogeneous PSC-derived embryoid bodies (EBs) at a large scale, which can further undergo cardiomyogenic differentiation. The goal of this chapter is to describe a scalable method to consistently generate large amounts of homogeneous and synchronized EBs from PSCs. This method utilizes a slow-turning lateral vessel bioreactor to direct EB formation and the subsequent cardiomyogenic lineage differentiation.

  7. Determining transport coefficients for a microscopic simulation of a hadron gas

    NASA Astrophysics Data System (ADS)

    Pratt, Scott; Baez, Alexander; Kim, Jane

    2017-02-01

    Quark-gluon plasmas produced in relativistic heavy-ion collisions quickly expand and cool, entering a phase consisting of multiple interacting hadronic resonances just below the QCD deconfinement temperature, T ≈ 155 MeV. Numerical microscopic simulations have emerged as the principal method for modeling the behavior of the hadronic stage of heavy-ion collisions, but the transport properties that characterize these simulations are not well understood. Methods are presented here for extracting the shear viscosity and two transport parameters that emerge in Israel-Stewart hydrodynamics. The analysis is based on studying how the stress-energy tensor responds to velocity gradients. Results are consistent with Kubo relations if viscous relaxation times are twice the collision time.

  8. Quantitative real-time PCR method with internal amplification control to quantify cyclopiazonic acid producing molds in foods.

    PubMed

    Rodríguez, Alicia; Werning, María L; Rodríguez, Mar; Bermúdez, Elena; Córdoba, Juan J

    2012-12-01

    A quantitative TaqMan real-time PCR (qPCR) method that includes an internal amplification control (IAC) to quantify cyclopiazonic acid (CPA)-producing molds in foods has been developed. A specific primer pair (dmaTF/dmaTR) and a TaqMan probe (dmaTp) were designed on the basis of dmaT gene which encodes the enzyme dimethylallyl tryptophan synthase involved in the biosynthesis of CPA. The IAC consisted of a 105 bp chimeric DNA fragment containing a region of the hly gene of Listeria monocytogenes. Thirty-two mold reference strains representing CPA producers and non-producers of different mold species were used in this study. All strains were tested for CPA production by high-performance liquid chromatography-mass spectrometry (HPLC-MS). The functionality of the designed qPCR method was demonstrated by the high linear relationship of the standard curves relating to the dmaT gene copy numbers and the Ct values obtained from the different CPA producers tested. The ability of the qPCR protocol to quantify CPA-producing molds was evaluated in different artificially inoculated foods. A good linear correlation was obtained over the range 1-4 log cfu/g in the different food matrices. The detection limit in all inoculated foods ranged from 1 to 2 log cfu/g. This qPCR protocol including an IAC showed good efficiency to quantify CPA-producing molds in naturally contaminated foods avoiding false negative results. This method could be used to monitor the CPA producers in the HACCP programs to prevent the risk of CPA formation throughout the food chain. Copyright © 2012 Elsevier Ltd. All rights reserved.
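
    The standard-curve quantification underlying such a qPCR method can be sketched as follows; the Ct values and copy numbers below are hypothetical illustrations, not data from this study:

```python
import numpy as np

# Hypothetical standard-curve data: Ct values measured on serial
# dilutions of known dmaT gene copy numbers (illustrative only).
log10_copies = np.array([3.0, 4.0, 5.0, 6.0, 7.0])
ct = np.array([30.1, 26.8, 23.4, 20.0, 16.7])

# Linear standard curve: Ct = slope * log10(copies) + intercept
slope, intercept = np.polyfit(log10_copies, ct, 1)

# Amplification efficiency from the slope (100% corresponds to -3.32)
efficiency = 10 ** (-1.0 / slope) - 1.0

def quantify(ct_unknown):
    """Estimate log10 copy number of an unknown sample from its Ct."""
    return (ct_unknown - intercept) / slope

print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")
print(f"unknown at Ct 25 -> {quantify(25.0):.2f} log10 copies")
```

    A slope near -3.32 corresponds to roughly 100% amplification efficiency, a common acceptance check for qPCR assays; the internal amplification control in the paper additionally guards against false negatives from PCR inhibition.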

  9. Robust rotational-velocity-Verlet integration methods.

    PubMed

    Rozmanov, Dmitri; Kusalik, Peter G

    2010-05-01

    Two rotational integration algorithms for rigid-body dynamics are proposed in velocity-Verlet formulation. The first method uses quaternion dynamics and was derived from the original rotational leap-frog method by Svanberg [Mol. Phys. 92, 1085 (1997)]; it produces time-consistent positions and momenta. The second method is also formulated in terms of quaternions, but it is not quaternion specific and can be easily adapted for any other orientational representation. Both methods are tested extensively and compared to existing rotational integrators. The proposed integrators demonstrated performance at least at the level of previously reported rotational algorithms. The choice of simulation parameters is also discussed.

  10. Robust rotational-velocity-Verlet integration methods

    NASA Astrophysics Data System (ADS)

    Rozmanov, Dmitri; Kusalik, Peter G.

    2010-05-01

    Two rotational integration algorithms for rigid-body dynamics are proposed in velocity-Verlet formulation. The first method uses quaternion dynamics and was derived from the original rotational leap-frog method by Svanberg [Mol. Phys. 92, 1085 (1997)]; it produces time-consistent positions and momenta. The second method is also formulated in terms of quaternions, but it is not quaternion specific and can be easily adapted for any other orientational representation. Both methods are tested extensively and compared to existing rotational integrators. The proposed integrators demonstrated performance at least at the level of previously reported rotational algorithms. The choice of simulation parameters is also discussed.
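
    As a minimal illustration of quaternion-based orientation propagation (a simple explicit Euler step with renormalization, not the velocity-Verlet scheme of the paper), consider a rigid body spinning at a fixed angular velocity:

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def step(q, omega, dt):
    """One orientation step from q_dot = 0.5 * q * (0, omega),
    renormalized to stay on the unit quaternion sphere."""
    q_dot = 0.5 * quat_mul(q, np.concatenate(([0.0], omega)))
    q_new = q + dt * q_dot
    return q_new / np.linalg.norm(q_new)

# Spin at 1 rad/s about z: after t = pi the orientation quaternion
# should be close to (cos(pi/2), 0, 0, sin(pi/2)) = (0, 0, 0, 1).
q = np.array([1.0, 0.0, 0.0, 0.0])
omega = np.array([0.0, 0.0, 1.0])
dt = 1e-4
for _ in range(int(np.pi / dt)):
    q = step(q, omega, dt)
print(np.round(q, 3))  # [0. 0. 0. 1.]
```

    Renormalization keeps the quaternion on the unit sphere, but a first-order scheme like this still accumulates phase error; higher-order, momentum-consistent schemes such as those in the paper exist precisely to improve on it.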

  11. Method of using deuterium-cluster foils for an intense pulsed neutron source

    DOEpatents

    Miley, George H.; Yang, Xiaoling

    2013-09-03

    A method is provided for producing neutrons, comprising: providing a converter foil comprising deuterium clusters; focusing a laser on the foil with power and energy sufficient to cause deuteron ions to separate from the foil; and striking a surface of a target with the deuteron ions from the converter foil with energy sufficient to cause neutron production by a reaction selected from the group consisting of D-D fusion, D-T fusion, D-metal nuclear spallation, and p-metal. A further method is provided for assembling a plurality of target assemblies for a target injector to be used in the previously mentioned manner. A further method is provided for producing neutrons, comprising: splitting a laser beam into a first beam and a second beam; striking a first surface of a target with the first beam, and an opposite second surface of the target with the second beam with energy sufficient to cause neutron production.

  12. Refining historical limits method to improve disease cluster detection, New York City, New York, USA.

    PubMed

    Levin-Rector, Alison; Wilson, Elisha L; Fine, Annie D; Greene, Sharon K

    2015-02-01

    Since the early 2000s, the Bureau of Communicable Disease of the New York City Department of Health and Mental Hygiene has analyzed reportable infectious disease data weekly by using the historical limits method to detect unusual clusters that could represent outbreaks. This method typically produced too many signals for each to be investigated with available resources while possibly failing to signal during true disease outbreaks. We made method refinements that improved the consistency of case inclusion criteria and accounted for data lags and trends and aberrations in historical data. During a 12-week period in 2013, we prospectively assessed these refinements using actual surveillance data. The refined method yielded 74 signals, a 45% decrease from what the original method would have produced. Fewer and less biased signals included a true citywide increase in legionellosis and a localized campylobacteriosis cluster subsequently linked to live-poultry markets. Future evaluations using simulated data could complement this descriptive assessment.

  13. Consistent model driven architecture

    NASA Astrophysics Data System (ADS)

    Niepostyn, Stanisław J.

    2015-09-01

    The goal of MDA is to produce software systems from abstract models in a way where human interaction is restricted to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in a natural language, so the consistency of these diagrams must be verified in order to identify errors in requirements at an early stage of the development process. The verification of consistency is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of consistency of a series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g., a Business Context Diagram). Therefore, our method can be used to check the practicability (feasibility) of software architecture models.

  14. Method to produce catalytically active nanocomposite coatings

    DOEpatents

    Erdemir, Ali; Eryilmaz, Osman Levent; Urgen, Mustafa; Kazmanli, Kursat

    2016-02-09

    A nanocomposite coating and method of making and using the coating. The nanocomposite coating is disposed on a base material, such as a metal or ceramic; and the nanocomposite consists essentially of a matrix of an alloy selected from the group of Cu, Ni, Pd, Pt and Re which are catalytically active for cracking of carbon bonds in oils and greases and a grain structure selected from the group of borides, carbides and nitrides.

  15. Effects of Dopant Metal Variation and Material Synthesis Method on the Material Properties of Mixed Metal Ferrites in Yttria Stabilized Zirconia for Solar Thermochemical Fuel Production

    DOE PAGES

    Leonard, Jeffrey; Reyes, Nichole; Allen, Kyle M.; ...

    2015-01-01

    Mixed metal ferrites have shown much promise in two-step solar-thermochemical fuel production. Previous work has typically focused on evaluating a particular metal ferrite produced by a particular synthesis process, which makes comparisons between studies performed by independent researchers difficult. A comparative study was undertaken to explore the effects different synthesis methods have on the performance of a particular material during redox cycling using thermogravimetry. This study revealed that materials made via wet chemistry methods and extended periods of high-temperature calcination yield better redox performance. Differences in redox performance between materials made via wet chemistry methods were minimal, and these demonstrated much better performance than those synthesized via the solid-state method. Subsequently, various metal ferrite samples (NiFe2O4, MgFe2O4, CoFe2O4, and MnFe2O4) in yttria-stabilized zirconia (8YSZ) were synthesized via coprecipitation and tested to determine the most promising metal ferrite combination. It was determined that 10 wt.% CoFe2O4 in 8YSZ produced the highest and most consistent yields of O2 and CO. By testing the effects of synthesis methods and dopants in a consistent fashion, those aspects of ferrite preparation which are most significant can be revealed. More importantly, these insights can guide future efforts in developing the next generation of thermochemical fuel production materials.

  16. Comparison of heat-testing methodology.

    PubMed

    Bierma, Mark M; McClanahan, Scott; Baisden, Michael K; Bowles, Walter R

    2012-08-01

    Patients with irreversible pulpitis occasionally present with a chief complaint of sensitivity to heat. To appropriately diagnose the offending tooth, a variety of techniques have been developed to reproduce this chief complaint. Such techniques cause temperature increases that are potentially damaging to the pulp. Newer electronic instruments control the temperature of a heat-testing tip that is placed directly against a tooth. The aim of this study was to determine which method produced the most consistent and safe temperature increase within the pulp. This consistency facilitates the clinician's ability to differentiate between a normal pulp and irreversible pulpitis. Four operators applied the following methods to each of 4 extracted maxillary premolars (for a total of 16 trials per method): heated gutta-percha, heated ball burnisher, hot water, and a System B unit or Elements unit with a heat-testing tip. Each test was performed for 60 seconds, and the temperatures were recorded via a thermocouple in the pulp chamber. Analysis of the data was performed by using the intraclass correlation coefficient. The least consistent warming was found with hot water. The heat-testing tip also demonstrated greater consistency between operators compared with the other methods. Hot water and the heated ball burnisher caused temperature increases high enough to damage pulp tissue. The Elements unit with a heat-testing tip provides the most consistent warming of the dental pulp. Copyright © 2012 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
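
    The intraclass correlation coefficient used in the analysis can be illustrated with a one-way random-effects variant, ICC(1,1); the study may have used a different ICC form, and the temperatures below are hypothetical:

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an (n_targets, k_raters)
    matrix of measurements (a simpler variant than the two-way ICCs
    often reported; shown only to illustrate the idea)."""
    n, k = ratings.shape
    grand = ratings.mean()
    target_means = ratings.mean(axis=1)
    ss_between = k * ((target_means - grand) ** 2).sum()
    ss_within = ((ratings - target_means[:, None]) ** 2).sum()
    ms_between = ss_between / (n - 1)
    ms_within = ss_within / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical peak pulp-chamber temperatures (deg C), 4 teeth x 4 operators.
temps = np.array([
    [41.2, 41.5, 41.0, 41.3],
    [44.8, 45.1, 44.6, 45.0],
    [39.9, 40.2, 39.8, 40.1],
    [47.5, 47.9, 47.3, 47.6],
])
print(f"ICC(1,1) = {icc_oneway(temps):.3f}")
```

    Values close to 1 indicate that operators produce nearly interchangeable measurements on the same tooth, which is the sense in which the heat-testing tip was found most consistent.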

  17. On the statistical equivalence of restrained-ensemble simulations with the maximum entropy method

    PubMed Central

    Roux, Benoît; Weare, Jonathan

    2013-01-01

    An issue of general interest in computer simulations is to incorporate information from experiments into a structural model. An important caveat in pursuing this goal is to avoid corrupting the resulting model with spurious and arbitrary biases. While the problem of biasing thermodynamic ensembles can be formulated rigorously using the maximum entropy method introduced by Jaynes, the approach can be cumbersome in practical applications with the need to determine multiple unknown coefficients iteratively. A popular alternative strategy to incorporate the information from experiments is to rely on restrained-ensemble molecular dynamics simulations. However, the fundamental validity of this computational strategy remains in question. Here, it is demonstrated that the statistical distribution produced by restrained-ensemble simulations is formally consistent with the maximum entropy method of Jaynes. This clarifies the underlying conditions under which restrained-ensemble simulations will yield results that are consistent with the maximum entropy method. PMID:23464140
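
    For context, the maximum entropy method of Jaynes biases a prior ensemble p0(x) minimally while matching experimental averages; the standard form of the resulting distribution (textbook form, not a derivation specific to this paper) is:

```latex
% Maximum-entropy-biased distribution (standard Jaynes form)
p^{*}(x) = \frac{1}{Z(\boldsymbol{\lambda})}\, p_0(x)\,
           \exp\!\Big(-\sum_i \lambda_i f_i(x)\Big),
\qquad
Z(\boldsymbol{\lambda}) = \int p_0(x)\,
           \exp\!\Big(-\sum_i \lambda_i f_i(x)\Big)\, dx
```

    Each multiplier λ_i is adjusted so that the ensemble average of the observable f_i matches its experimental value; determining these multipliers iteratively is the cumbersome step that the restrained-ensemble strategy, shown in the paper to be statistically equivalent, sidesteps.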

  18. Three-step interferometric method with blind phase shifts by use of interframe correlation between interferograms

    NASA Astrophysics Data System (ADS)

    Muravsky, Leonid I.; Kmet', Arkady B.; Stasyshyn, Ihor V.; Voronyak, Taras I.; Bobitski, Yaroslav V.

    2018-06-01

    A new three-step interferometric method with blind phase shifts for retrieving phase maps (PMs) of smooth and low-roughness engineering surfaces is proposed. The two unknown phase shifts are evaluated using the interframe correlation between interferograms. The method consists of two stages. The first stage records three interferograms of a test object and processes them, including calculation of the unknown phase shifts and retrieval of a coarse PM. The second stage first separates the high-frequency and low-frequency PMs and then produces a fine PM consisting of areal surface roughness and waviness PMs. Extraction of the areal surface roughness and waviness PMs is performed with a linear low-pass filter. Computer simulation and experiments carried out to retrieve a gauge block surface area and its areal surface roughness and waviness have confirmed the reliability of the proposed three-step method.
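
    The retrieval step resembles textbook three-step phase-shifting interferometry; the sketch below assumes known shifts of 0, 2π/3, and 4π/3, whereas the paper's method first estimates two unknown (blind) shifts from interframe correlation:

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Wrapped phase from three interferograms with known shifts of
    0, 2*pi/3 and 4*pi/3 (textbook case; the paper instead estimates
    two unknown shifts from interframe correlation first)."""
    return np.arctan2(np.sqrt(3.0) * (i3 - i2), 2.0 * i1 - i2 - i3)

# Synthetic fringes over a linear phase ramp.
x = np.linspace(0.0, 1.0, 256)
phi_true = 4.0 * np.pi * x
shifts = (0.0, 2.0 * np.pi / 3.0, 4.0 * np.pi / 3.0)
frames = [1.0 + 0.8 * np.cos(phi_true + s) for s in shifts]

phi = three_step_phase(*frames)
wrapped_true = np.angle(np.exp(1j * phi_true))
print(np.allclose(phi, wrapped_true, atol=1e-8))  # True
```

    The recovered phase is wrapped to (-π, π]; unwrapping and the subsequent low-pass separation into roughness and waviness maps are further steps of the full pipeline.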

  19. Smelting Magnesium Metal using a Microwave Pidgeon Method

    PubMed Central

    Wada, Yuji; Fujii, Satoshi; Suzuki, Eiichi; Maitani, Masato M.; Tsubaki, Shuntaro; Chonan, Satoshi; Fukui, Miho; Inazu, Naomi

    2017-01-01

    Magnesium (Mg) is a lightweight metal with applications in transportation and sustainable battery technologies, but its current production through ore reduction using the conventional Pidgeon process emits large amounts of CO2 and particulate matter (PM2.5). In this work, a novel Pidgeon process driven by microwaves has been developed to produce Mg metal with less energy consumption and no direct CO2 emission. An antenna structure consisting of dolomite as the Mg source and a ferrosilicon antenna as the reducing material was used to confine microwave energy emitted from a magnetron installed in a microwave oven to produce a practical amount of pure Mg metal. This microwave Pidgeon process with an antenna configuration made it possible to produce Mg with an energy consumption of 58.6 GJ/t, corresponding to a 68.6% reduction when compared to the conventional method. PMID:28401910

  20. Carbon nanotube: the inside story.

    PubMed

    Ando, Yoshinori

    2010-06-01

    Carbon nanotubes (CNTs) were serendipitously discovered as a byproduct of fullerenes by direct current (DC) arc discharge, and today this is one of the most sought-after materials in nanotechnology research. In this brief review, I begin with the history of the discovery of CNTs and focus on CNTs produced by arc discharge in a hydrogen atmosphere, which is little explored outside my laboratory. DC arc discharge evaporation of a pure graphite rod in pure hydrogen gas results in multi-walled carbon nanotubes (MWCNTs) of high crystallinity in the cathode deposit. As-grown MWCNTs have very narrow inner diameters. Raman spectra of these MWCNTs show a high-intensity G-band, an unusual high-frequency radial breathing mode at 570 cm−1, and a new characteristic peak near 1850 cm−1. Exciting carbon nanowires (CNWs), consisting of a linear carbon chain in the center of MWCNTs, are also produced. Arc evaporation of a graphite rod containing metal catalysts results in single-wall carbon nanotubes (SWCNTs) spread throughout the whole chamber like macroscopic webs. Two kinds of arc method have been developed to produce SWCNTs: the arc plasma jet (APJ) and Ferrum-Hydrogen (FH) arc methods. Some new purification methods for as-produced SWCNTs are reviewed. Finally, double-walled carbon nanotubes (DWCNTs) are also described.

  1. 7 CFR 46.27 - Types of broker operations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... groupings by method of operation. The usual operation of brokers consists of the negotiation of the purchase.... Frequently, brokers never see the produce they are quoting for sale or negotiating for purchase by the buyer... operations are typified by the fact that they act as the buyer's representative in negotiating purchases at...

  2. Dictionary-based fiber orientation estimation with improved spatial consistency.

    PubMed

    Ye, Chuyang; Prince, Jerry L

    2018-02-01

    Diffusion magnetic resonance imaging (dMRI) has enabled in vivo investigation of white matter tracts. Fiber orientation (FO) estimation is a key step in tract reconstruction and has been a popular research topic in dMRI analysis. In particular, the sparsity assumption has been used in conjunction with a dictionary-based framework to achieve reliable FO estimation with a reduced number of gradient directions. Because image noise can have a deleterious effect on the accuracy of FO estimation, previous works have incorporated spatial consistency of FOs in the dictionary-based framework to improve the estimation. However, because FOs are only indirectly determined from the mixture fractions of dictionary atoms and not modeled as variables in the objective function, these methods do not incorporate FO smoothness directly, and their ability to produce smooth FOs could be limited. In this work, we propose an improvement to Fiber Orientation Reconstruction using Neighborhood Information (FORNI), which we call FORNI+; this method estimates FOs in a dictionary-based framework where FO smoothness is better enforced than in FORNI alone. We describe an objective function that explicitly models the actual FOs and the mixture fractions of dictionary atoms. Specifically, it consists of data fidelity between the observed signals and the signals represented by the dictionary, pairwise FO dissimilarity that encourages FO smoothness, and weighted ℓ1-norm terms that ensure the consistency between the actual FOs and the FO configuration suggested by the dictionary representation. The FOs and mixture fractions are then jointly estimated by minimizing the objective function using an iterative alternating optimization strategy. FORNI+ was evaluated on a simulation phantom, a physical phantom, and real brain dMRI data. In particular, in the real brain dMRI experiment, we have qualitatively and quantitatively evaluated the reproducibility of the proposed method. Results demonstrate that FORNI+ produces FOs with better quality compared with competing methods. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Robust Mosaicking of Stereo Digital Elevation Models from the Ames Stereo Pipeline

    NASA Technical Reports Server (NTRS)

    Kim, Tae Min; Moratto, Zachary M.; Nefian, Ara Victor

    2010-01-01

    A robust estimation method is proposed to combine multiple observations and create consistent, accurate, dense Digital Elevation Models (DEMs) from lunar orbital imagery. The NASA Ames Intelligent Robotics Group (IRG) aims to produce higher-quality terrain reconstructions of the Moon from Apollo Metric Camera (AMC) data than is currently possible. In particular, IRG makes use of a stereo vision process, the Ames Stereo Pipeline (ASP), to automatically generate DEMs from consecutive AMC image pairs. However, the DEMs currently produced by the ASP often contain errors and inconsistencies due to image noise, shadows, etc. The proposed method addresses this problem by making use of multiple observations and by considering their goodness of fit to improve both the accuracy and robustness of the estimate. The stepwise regression method is applied to estimate the relaxed weight of each observation.

  4. Accuracy and consistency of grass pollen identification by human analysts using electron micrographs of surface ornamentation

    PubMed Central

    Mander, Luke; Baker, Sarah J.; Belcher, Claire M.; Haselhorst, Derek S.; Rodriguez, Jacklyn; Thorn, Jessica L.; Tiwari, Shivangi; Urrego, Dunia H.; Wesseln, Cassandra J.; Punyasena, Surangi W.

    2014-01-01

    • Premise of the study: Humans frequently identify pollen grains at a taxonomic rank above species. Grass pollen is a classic case of this situation, which has led to the development of computational methods for identifying grass pollen species. This paper aims to provide context for these computational methods by quantifying the accuracy and consistency of human identification. • Methods: We measured the ability of nine human analysts to identify 12 species of grass pollen using scanning electron microscopy images. These are the same images that were used in computational identifications. We have measured the coverage, accuracy, and consistency of each analyst, and investigated their ability to recognize duplicate images. • Results: Coverage ranged from 87.5% to 100%. Mean identification accuracy ranged from 46.67% to 87.5%. The identification consistency of each analyst ranged from 32.5% to 87.5%, and each of the nine analysts produced considerably different identification schemes. The proportion of duplicate image pairs that were missed ranged from 6.25% to 58.33%. • Discussion: The identification errors made by each analyst, which result in a decline in accuracy and consistency, are likely related to psychological factors such as the limited capacity of human memory, fatigue and boredom, recency effects, and positivity bias. PMID:25202649
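
    The reported coverage, accuracy, and consistency metrics can be computed as simple proportions; the species labels below are hypothetical examples, not the study's data:

```python
# Hypothetical analyst identifications: labels assigned to six images,
# with a second pass over duplicate images to measure self-consistency.
truth = ["poa", "zea", "poa", "oryza", "zea", "oryza"]
pass1 = ["poa", "zea", "zea", "oryza", "zea", "poa"]   # first viewing
pass2 = ["poa", "zea", "poa", "oryza", "zea", "poa"]   # duplicate images

coverage = sum(p is not None for p in pass1) / len(truth)        # answered
accuracy = sum(p == t for p, t in zip(pass1, truth)) / len(truth)
consistency = sum(a == b for a, b in zip(pass1, pass2)) / len(truth)

print(f"coverage={coverage:.1%} accuracy={accuracy:.1%} consistency={consistency:.1%}")
```

    Note that an analyst can be highly consistent while still inaccurate, which is why the study reports both measures separately.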

  5. Dynamic and quantitative method of analyzing service consistency evolution based on extended hierarchical finite state automata.

    PubMed

    Fan, Linjun; Tang, Jun; Ling, Yunxiang; Li, Benxian

    2014-01-01

    This paper is concerned with the dynamic evolution analysis and quantitative measurement of primary factors that cause service inconsistency in service-oriented distributed simulation applications (SODSA). Traditional methods are mostly qualitative and empirical, and they do not consider the dynamic disturbances among factors in a service's evolution behaviors such as producing, publishing, calling, and maintenance. Moreover, SODSA are rapidly evolving in terms of large-scale, reusable, compositional, pervasive, and flexible features, which presents difficulties in the usage of traditional analysis methods. To resolve these problems, a novel dynamic evolution model, extended hierarchical service-finite state automata (EHS-FSA), is constructed based on finite state automata (FSA), which formally depicts the overall changing processes of service consistency states. In addition, service consistency evolution algorithms (SCEAs) based on EHS-FSA are developed to quantitatively assess these impact factors. Experimental results show that bad reusability (17.93% on average) is the biggest influential factor, noncomposition of atomic services (13.12%) is the second biggest one, and service version confusion (1.2%) is the smallest one. Compared with previous qualitative analysis, SCEAs present good effectiveness and feasibility. This research can guide the engineers of service consistency technologies toward obtaining a higher level of consistency in SODSA.

  6. Dynamic and Quantitative Method of Analyzing Service Consistency Evolution Based on Extended Hierarchical Finite State Automata

    PubMed Central

    Fan, Linjun; Tang, Jun; Ling, Yunxiang; Li, Benxian

    2014-01-01

    This paper is concerned with the dynamic evolution analysis and quantitative measurement of the primary factors that cause service inconsistency in service-oriented distributed simulation applications (SODSA). Traditional methods are mostly qualitative and empirical, and they do not consider the dynamic disturbances among factors in a service's evolution behaviors such as producing, publishing, calling, and maintenance. Moreover, SODSA are rapidly evolving in terms of large-scale, reusable, compositional, pervasive, and flexible features, which complicates the use of traditional analysis methods. To resolve these problems, a novel dynamic evolution model, extended hierarchical service-finite state automata (EHS-FSA), is constructed based on finite state automata (FSA), which formally depicts the overall changing processes of service consistency states. Service consistency evolution algorithms (SCEAs) based on EHS-FSA are then developed to quantitatively assess these impact factors. Experimental results show that bad reusability (17.93% on average) is the most influential factor, noncomposition of atomic services (13.12%) is the second, and service version confusion (1.2%) is the smallest. Compared with previous qualitative analyses, the SCEAs show good effectiveness and feasibility. This research can guide engineers of service consistency technologies toward obtaining a higher level of consistency in SODSA. PMID:24772033

  7. Reflow-oven-processing of pressureless sintered-silver interconnects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wereszczak, Andrew A.; Chen, Branndon R.; Oistad, Brian A.

    Here, a method was developed to pressurelessly fabricate strong and consistent sinterable-silver joints or interconnects using reflow oven heating. Circular sinterable-silver interconnects, having a nominal diameter of 5 mm and a thickness of 0.1 mm, were stencil printed, contact-dried, and then pressurelessly sinter-bonded to Au-plated direct-bonded-copper ceramic substrates at 250 °C in ambient air. Sintering was done in either a reflow oven or a convective oven (the latter being the conventional heating source for processing sinterable-silver). Consistently strong (>40 MPa) interconnects were produced with reflow oven heating and were as strong as those produced with convective oven heating. This is significant because reflow oven technology affords better potential for continuous mass production, and it was shown that strong sintered-silver bonds can indeed be achieved with its use.

  8. Reflow-oven-processing of pressureless sintered-silver interconnects

    DOE PAGES

    Wereszczak, Andrew A.; Chen, Branndon R.; Oistad, Brian A.

    2018-01-04

    Here, a method was developed to pressurelessly fabricate strong and consistent sinterable-silver joints or interconnects using reflow oven heating. Circular sinterable-silver interconnects, having a nominal diameter of 5 mm and a thickness of 0.1 mm, were stencil printed, contact-dried, and then pressurelessly sinter-bonded to Au-plated direct-bonded-copper ceramic substrates at 250 °C in ambient air. Sintering was done in either a reflow oven or a convective oven (the latter being the conventional heating source for processing sinterable-silver). Consistently strong (>40 MPa) interconnects were produced with reflow oven heating and were as strong as those produced with convective oven heating. This is significant because reflow oven technology affords better potential for continuous mass production, and it was shown that strong sintered-silver bonds can indeed be achieved with its use.

  9. Similar Estimates of Temperature Impacts on Global Wheat Yield by Three Independent Methods

    NASA Technical Reports Server (NTRS)

    Liu, Bing; Asseng, Senthold; Muller, Christoph; Ewart, Frank; Elliott, Joshua; Lobell, David B.; Martre, Pierre; Ruane, Alex C.; Wallach, Daniel; Jones, James W.; et al.

    2016-01-01

    The potential impact of global temperature change on global crop yield has recently been assessed with different methods. Here we show that grid-based and point-based simulations and statistical regressions (from historic records), without deliberate adaptation or CO2 fertilization effects, produce similar estimates of temperature impact on wheat yields at global and national scales. With a 1 °C global temperature increase, global wheat yield is projected to decline between 4.1% and 6.4%. Projected relative temperature impacts from different methods were similar for major wheat-producing countries China, India, USA and France, but less so for Russia. Point-based and grid-based simulations, and to some extent the statistical regressions, were consistent in projecting that warmer regions are likely to suffer more yield loss with increasing temperature than cooler regions. By forming a multi-method ensemble, it was possible to quantify 'method uncertainty' in addition to model uncertainty. This significantly improves confidence in estimates of climate impacts on global food security.

  10. Similar estimates of temperature impacts on global wheat yield by three independent methods

    NASA Astrophysics Data System (ADS)

    Liu, Bing; Asseng, Senthold; Müller, Christoph; Ewert, Frank; Elliott, Joshua; Lobell, David B.; Martre, Pierre; Ruane, Alex C.; Wallach, Daniel; Jones, James W.; Rosenzweig, Cynthia; Aggarwal, Pramod K.; Alderman, Phillip D.; Anothai, Jakarat; Basso, Bruno; Biernath, Christian; Cammarano, Davide; Challinor, Andy; Deryng, Delphine; Sanctis, Giacomo De; Doltra, Jordi; Fereres, Elias; Folberth, Christian; Garcia-Vila, Margarita; Gayler, Sebastian; Hoogenboom, Gerrit; Hunt, Leslie A.; Izaurralde, Roberto C.; Jabloun, Mohamed; Jones, Curtis D.; Kersebaum, Kurt C.; Kimball, Bruce A.; Koehler, Ann-Kristin; Kumar, Soora Naresh; Nendel, Claas; O'Leary, Garry J.; Olesen, Jørgen E.; Ottman, Michael J.; Palosuo, Taru; Prasad, P. V. Vara; Priesack, Eckart; Pugh, Thomas A. M.; Reynolds, Matthew; Rezaei, Ehsan E.; Rötter, Reimund P.; Schmid, Erwin; Semenov, Mikhail A.; Shcherbak, Iurii; Stehfest, Elke; Stöckle, Claudio O.; Stratonovitch, Pierre; Streck, Thilo; Supit, Iwan; Tao, Fulu; Thorburn, Peter; Waha, Katharina; Wall, Gerard W.; Wang, Enli; White, Jeffrey W.; Wolf, Joost; Zhao, Zhigan; Zhu, Yan

    2016-12-01

    The potential impact of global temperature change on global crop yield has recently been assessed with different methods. Here we show that grid-based and point-based simulations and statistical regressions (from historic records), without deliberate adaptation or CO2 fertilization effects, produce similar estimates of temperature impact on wheat yields at global and national scales. With a 1 °C global temperature increase, global wheat yield is projected to decline between 4.1% and 6.4%. Projected relative temperature impacts from different methods were similar for major wheat-producing countries China, India, USA and France, but less so for Russia. Point-based and grid-based simulations, and to some extent the statistical regressions, were consistent in projecting that warmer regions are likely to suffer more yield loss with increasing temperature than cooler regions. By forming a multi-method ensemble, it was possible to quantify 'method uncertainty' in addition to model uncertainty. This significantly improves confidence in estimates of climate impacts on global food security.
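
    The "multi-method ensemble" idea above amounts to treating each method's estimate as one ensemble member and summarizing their agreement. The sketch below is illustrative only: the three per-method numbers are hypothetical placeholders (the paper reports a combined range of -4.1% to -6.4% per 1 °C), and the spread statistic is one simple choice of "method uncertainty" measure.

```python
import numpy as np

# Hypothetical per-method estimates of wheat-yield decline (%) per +1 degree C
# for the three approaches named in the abstract. Numbers are invented.
estimates = {"grid_sim": -4.9, "point_sim": -5.1, "stat_reg": -5.7}

vals = np.array(list(estimates.values()))
ensemble_mean = vals.mean()
method_spread = vals.max() - vals.min()   # a simple "method uncertainty" measure

print(f"ensemble mean: {ensemble_mean:.2f}% per degree C")
print(f"method spread: {method_spread:.2f} percentage points")
```

    With more methods, the spread could be replaced by a standard deviation or a full decomposition of method versus model variance.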

  11. Utilizing formative evaluation to enhance the understanding of chemistry and the methods and procedures of science

    NASA Astrophysics Data System (ADS)

    Pizzini, Edward L.; Treagust, David F.; Cody, John

    The purpose of this study was to determine whether or not formative evaluation could facilitate goal attainment in a biochemistry course and produce desired learning outcomes consistently by altering course materials and/or instruction. Formative evaluation procedures included the administration of the Inorganic-Organic-Biological Chemistry Test Form 1974 and the Methods and Procedures of Science test to course participants over three consecutive years. A one group pretest-post-test design was used. The statistical analysis involved the use of the Wilcoxon matched-pairs signed-ranks test. The study involved 64 participants. The findings indicate that the use of formative evaluation can be effective in producing desired learning outcomes to facilitate goal attainment.
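
    The study's statistical step, a one-group pretest/post-test comparison with the Wilcoxon matched-pairs signed-ranks test, can be sketched with `scipy.stats.wilcoxon`. The scores below are fabricated for illustration; they are not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

# One-group pretest/post-test design: each participant is their own control,
# so the paired (matched-pairs) signed-ranks test applies.
pretest  = np.array([50.0, 55, 60, 45, 52, 58, 61, 49, 53, 57])
gains    = np.array([3.0, 4, 5, 6, 7, 8, 9, 10, 11, 12])   # distinct, all positive
posttest = pretest + gains

stat, p = wilcoxon(pretest, posttest)   # two-sided by default
print(f"W = {stat}, p = {p:.4g}")
# every participant improved, so the signed-rank statistic is 0 and p is small
```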

  12. Identification of an Efficient Gene Expression Panel for Glioblastoma Classification

    PubMed Central

    Zelaya, Ivette; Laks, Dan R.; Zhao, Yining; Kawaguchi, Riki; Gao, Fuying; Kornblum, Harley I.; Coppola, Giovanni

    2016-01-01

    We present here a novel genetic algorithm-based random forest (GARF) modeling technique that enables a reduction in the complexity of large gene disease signatures to highly accurate, greatly simplified gene panels. When applied to 803 glioblastoma multiforme samples, this method allowed the 840-gene Verhaak et al. gene panel (the standard in the field) to be reduced to a 48-gene classifier, while retaining 90.91% classification accuracy, and outperforming the best available alternative methods. Additionally, using this approach we produced a 32-gene panel which allows for better consistency between RNA-seq and microarray-based classifications, improving cross-platform classification retention from 69.67% to 86.07%. A webpage producing these classifications is available at http://simplegbm.semel.ucla.edu. PMID:27855170
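
    A genetic-algorithm-plus-random-forest selector of the kind the abstract describes can be sketched as follows. This is a minimal illustration in the spirit of GARF, not the authors' implementation: binary gene masks are evolved, and each mask is scored by random-forest cross-validated accuracy minus a small panel-size penalty. The population size, generation count, penalty weight, and synthetic data are all assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for an expression matrix (samples x genes) with labels.
X, y = make_classification(n_samples=150, n_features=30, n_informative=5,
                           random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    if mask.sum() == 0:
        return -1.0                                   # forbid empty panels
    rf = RandomForestClassifier(n_estimators=30, random_state=0)
    acc = cross_val_score(rf, X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - 0.002 * mask.sum()                   # prefer small panels

pop = rng.integers(0, 2, size=(10, X.shape[1]))       # random initial masks
for generation in range(4):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-5:]]            # keep the fittest half
    children = []
    for _ in range(5):
        a, b = parents[rng.integers(0, 5, size=2)]
        cut = int(rng.integers(1, X.shape[1]))
        child = np.concatenate([a[:cut], b[cut:]])    # one-point crossover
        flip = rng.random(X.shape[1]) < 0.02          # rare bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print(f"selected panel: {int(best.sum())} of {X.shape[1]} genes")
```

    Scaling this up (larger populations, more generations, class-balanced scoring) is what allows an 840-gene signature to be compressed to a small panel while retaining accuracy.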

  13. Development of a method of alignment between various SOLAR MAXIMUM MISSION experiments

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Results of an engineering study of the methods of alignment between various experiments for the solar maximum mission are described. The configuration studied consists of the instruments, mounts and instrument support platform located within the experiment module. Hardware design, fabrication methods and alignment techniques were studied with regard to optimizing the coalignment between the experiments and the fine sun sensor. The proposed hardware design was reviewed with regard to loads, stress, thermal distortion, alignment error budgets, fabrication techniques, alignment techniques and producibility. Methods of achieving comparable alignment accuracies on previous projects were also reviewed.

  14. A simplified method for extracting androgens from avian egg yolks

    USGS Publications Warehouse

    Kozlowski, C.P.; Bauman, J.E.; Hahn, D.C.

    2009-01-01

    Female birds deposit significant amounts of steroid hormones into the yolks of their eggs. Studies have demonstrated that these hormones, particularly androgens, affect nestling growth and development. In order to measure androgen concentrations in avian egg yolks, most authors follow the extraction methods outlined by Schwabl (1993. Proc. Nat. Acad. Sci. USA 90:11446-11450). We describe a simplified method for extracting androgens from avian egg yolks. Our method, which has been validated through recovery and linearity experiments, consists of a single ethanol precipitation that produces substantially higher recoveries than those reported by Schwabl.

  15. Ceramic Honeycomb Structures and Method Thereof

    NASA Technical Reports Server (NTRS)

    Cagliostro, Domenick E.; Riccitiello, Salvatore R.

    1989-01-01

    The present invention relates to a method for producing ceramic articles and the articles, the process comprising the chemical vapor deposition (CVD) and/or chemical vapor infiltration (CVI) of a honeycomb structure. Specifically, the present invention relates to a method for the production of a ceramic honeycomb structure, including: (a) obtaining a loosely woven fabric/binder wherein the fabric consists essentially of metallic, ceramic or organic fiber and the binder consists essentially of an organic or inorganic material, wherein the fabric/binder has and retains a honeycomb shape, with the proviso that when the fabric is metallic or ceramic the binder is organic only; (b) substantially evenly depositing at least one layer of a ceramic on the fabric/binder of step (a); and (c) recovering the ceramic-coated fiber honeycomb structure. In another aspect, the present invention relates to a method for the manufacture of a lightweight ceramic-ceramic composite honeycomb structure, which process comprises: (d) pyrolyzing a loosely woven fabric that is honeycomb shaped and has a high char yield and geometric integrity after pyrolysis at between about 700 and 1,100 degrees Centigrade; (e) substantially evenly depositing at least one layer of ceramic material on the pyrolyzed fabric of step (d); and (f) recovering the coated ceramic honeycomb structure. The ceramic articles produced have enhanced physical properties and are useful in aircraft and aerospace applications.

  16. Oncoprotein protein kinase

    DOEpatents

    Karin, Michael; Hibi, Masahiko; Lin, Anning

    2002-01-29

    The present invention provides an isolated polynucleotide encoding a c-Jun peptide consisting of about amino acid residues 33 to 79 as set forth in SEQ ID NO: 10, or conservative variations thereof. The invention also provides a method for producing a peptide of SEQ ID NO: 1 comprising (a) culturing a host cell containing a polynucleotide encoding a c-Jun peptide consisting of about amino acid residues 33 to 79 as set forth in SEQ ID NO: 10 under conditions which allow expression of the polynucleotide; and (b) obtaining the peptide of SEQ ID NO: 1.

  17. Verification of the Accountability Method as a Means to Classify Radioactive Wastes Processed Using THOR Fluidized Bed Steam Reforming at the Studsvik Processing Facility in Erwin, Tennessee, USA - 13087

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olander, Jonathan; Myers, Corey

    2013-07-01

    Studsvik's Processing Facility Erwin (SPFE) has been treating Low-Level Radioactive Waste using its patented THOR process for over 13 years. Studsvik has been mixing and processing wastes of the same waste classification but different chemical and isotopic characteristics for the full extent of this period as a general matter of operations. Studsvik utilizes the accountability method to track the movement of radionuclides from acceptance of waste, through processing, and finally in the classification of waste for disposal. Recently the NRC has proposed to revise the 1995 Branch Technical Position on Concentration Averaging and Encapsulation (1995 BTP on CA) with additional clarification (draft BTP on CA). The draft BTP on CA has paved the way for large-scale blending of higher-activity and lower-activity waste to produce a single waste for the purpose of classification. With the onset of blending in the waste treatment industry, there is concern from the public and state regulators as to the robustness of the accountability method and the ability of processors to prevent the inclusion of hot spots in waste. To address these concerns and verify the accountability method as applied by the SPFE, as well as the SPFE's ability to control waste package classification, testing of actual waste packages was performed. Testing consisted of a comprehensive dose rate survey of a container of processed waste. Separately, the waste package was modeled chemically and radiologically. Comparing the observed and theoretical data demonstrated that actual dose rates were lower than, but consistent with, modeled dose rates. Moreover, the distribution of radioactivity confirms that the SPFE can produce a radiologically homogeneous waste form. The results of the study demonstrate: 1) the accountability method as applied by the SPFE is valid and produces expected results; 2) the SPFE can produce a radiologically homogeneous waste; and 3) the SPFE can effectively control the waste package classification. (authors)

  18. Automated Production of Movies on a Cluster of Computers

    NASA Technical Reports Server (NTRS)

    Nail, Jasper; Le, Duong; Nail, William L.; Nail, William

    2008-01-01

    A method of accelerating and facilitating production of video and film motion-picture products, and software and generic designs of computer hardware to implement the method, are undergoing development. The method provides for automation of most of the tedious and repetitive tasks involved in editing and otherwise processing raw digitized imagery into final motion-picture products. The method was conceived to satisfy requirements, in industrial and scientific testing, for rapid processing of multiple streams of simultaneously captured raw video imagery into documentation in the form of edited video imagery and video derived data products for technical review and analysis. In the production of such video technical documentation, unlike in production of motion-picture products for entertainment, (1) it is often necessary to produce multiple video derived data products, (2) there are usually no second chances to repeat acquisition of raw imagery, (3) it is often desired to produce final products within minutes rather than hours, days, or months, and (4) consistency and quality, rather than aesthetics, are the primary criteria for judging the products. In the present method, the workflow has both serial and parallel aspects: processing can begin before all the raw imagery has been acquired, each video stream can be subjected to different stages of processing simultaneously on different computers that may be grouped into one or more cluster(s), and the final product may consist of multiple video streams. Results of processing on different computers are shared, so that workers can collaborate effectively.

  19. An evaluation of tyramide signal amplification and archived fixed and frozen tissue in microarray gene expression analysis

    PubMed Central

    Karsten, Stanislav L.; Van Deerlin, Vivianna M. D.; Sabatti, Chiara; Gill, Lisa H.; Geschwind, Daniel H.

    2002-01-01

    Archival formalin-fixed, paraffin-embedded and ethanol-fixed tissues represent a potentially invaluable resource for gene expression analysis, as they are the most widely available material for studies of human disease. Little data are available evaluating whether RNA obtained from fixed (archival) tissues could produce reliable and reproducible microarray expression data. Here we compare the use of RNA isolated from human archival tissues fixed in ethanol and formalin to frozen tissue in cDNA microarray experiments. Since an additional factor that can limit the utility of archival tissue is the often small quantities available, we also evaluate the use of the tyramide signal amplification method (TSA), which allows the use of small amounts of RNA. Detailed analysis indicates that TSA provides a consistent and reproducible signal amplification method for cDNA microarray analysis, across both arrays and the genes tested. Analysis of this method also highlights the importance of performing non-linear channel normalization and dye switching. Furthermore, archived, fixed specimens can perform well, but not surprisingly, produce more variable results than frozen tissues. Consistent results are more easily obtainable using ethanol-fixed tissues, whereas formalin-fixed tissue does not typically provide a useful substrate for cDNA synthesis and labeling. PMID:11788730

  20. Parametrization of an Orbital-Based Linear-Scaling Quantum Force Field for Noncovalent Interactions

    PubMed Central

    2015-01-01

    We parametrize a linear-scaling quantum mechanical force field called mDC for the accurate reproduction of nonbonded interactions. We provide a new benchmark database of accurate ab initio interactions between sulfur-containing molecules. A variety of nonbond databases are used to compare the new mDC method with other semiempirical, molecular mechanical, ab initio, and combined semiempirical quantum mechanical/molecular mechanical methods. It is shown that the molecular mechanical force field significantly and consistently reproduces the benchmark results with greater accuracy than the semiempirical models, while our mDC model produces errors half as large as those of the molecular mechanical force field. The comparisons between the methods are extended to the docking of drug candidates to the Cyclin-Dependent Kinase 2 protein receptor. We correlate the protein–ligand binding energies with their experimental inhibition constants and find that mDC produces the best correlation. Condensed-phase simulation of mDC water is performed and shown to produce O–O radial distribution functions similar to TIP4P-EW. PMID:24803856

  1. METAL PRODUCTION AND CASTING

    DOEpatents

    Magel, T.T.

    1958-03-01

    This patent covers a method and apparatus for collecting the molten metal produced by high-temperature metal salt reduction. It consists essentially of subjecting the reaction vessel to centrifugal force in order to force the liberated molten metal into a coherent molten mass, and allowing it to solidify there. The apparatus is particularly suitable for use with small quantities of rare metals.

  2. HOT PRESSING WITH A TEMPERATURE GRADIENT

    DOEpatents

    Hausner, H.H.

    1958-05-20

    A method is described for producing powder metal compacts with a high length to width ratio, which are of substantially uniform density. The process consists in arranging a heating coil around the die and providing a temperature gradient along the length of the die with the highest temperature at the point of the compact farthest away from the ram or plunger.

  3. Developing Mathematics Problems Based on PISA Level of Change and Relationships Content

    ERIC Educational Resources Information Center

    Ahyan, Shahibul; Zulkardi; Darmawijoyo

    2014-01-01

    This research aims to produce valid and practical PISA-level mathematics problems on change and relationships content that have a potential effect for junior high school students. A development research method developed by Akker, Gravemeijer, McKenney and Nieveen is used in this research. This development research consists of three stages;…

  4. Quantifying and reducing the differences in forest CO2 fluxes estimated by eddy covariance, biometric and chamber methods: A global synthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xingchang; Wang, Chuankuan; Bond-Lamberty, Benjamin

    Carbon dioxide (CO2) fluxes between terrestrial ecosystems and the atmosphere are primarily measured with eddy covariance (EC), biometric, and chamber methods. However, it is unclear why the estimates of CO2 fluxes, when measured using these different methods, converge at some sites but diverge at others. We synthesized a novel global dataset of forest CO2 fluxes to evaluate the consistency between EC and biometric or chamber methods for quantifying the CO2 budget in forests. The EC approach, compared with the other two methods, tended to produce a 25% higher estimate of net ecosystem production (NEP, 0.52 Mg C ha-1 yr-1), mainly resulting from lower EC-estimated Re; a 10% lower estimate of ecosystem respiration (Re, 1.39 Mg C ha-1 yr-1); and a 3% lower estimate of gross primary production (0.48 Mg C ha-1 yr-1). The discrepancies between EC and the other methods were higher at sites with complex topography and dense canopies than at those with flat topography and open canopies. Forest age also influenced the discrepancy through changes in leaf area index. The open-path EC system induced >50% of the discrepancy in NEP, presumably due to its surface-heating effect. These results provide strong evidence that EC produces biased estimates of NEP and Re in forest ecosystems. A global extrapolation suggested that the discrepancies in CO2 fluxes between methods were consistent with a global underestimation of Re, and overestimation of NEP, by the EC method. Accounting for these discrepancies would substantially improve our estimates of the terrestrial carbon budget.

  5. The space of ultrametric phylogenetic trees.

    PubMed

    Gavryushkin, Alex; Drummond, Alexei J

    2016-08-21

    The reliability of a phylogenetic inference method from genomic sequence data is ensured by its statistical consistency. Bayesian inference methods produce a sample of phylogenetic trees from the posterior distribution given sequence data. Hence the question of statistical consistency of such methods is equivalent to the consistency of the summary of the sample. More generally, statistical consistency is ensured by the tree space used to analyse the sample. In this paper, we consider two standard parameterisations of phylogenetic time-trees used in evolutionary models: inter-coalescent interval lengths and absolute times of divergence events. For each of these parameterisations we introduce a natural metric space on ultrametric phylogenetic trees. We compare the introduced spaces with existing models of tree space, and formulate and justify several formal requirements that a metric space on phylogenetic trees must possess in order to be a satisfactory space for statistical analysis. We show that only a few known constructions of the space of phylogenetic trees satisfy these requirements. However, our results suggest that these basic requirements are not enough to distinguish between the two metric spaces we introduce, and that the choice between metric spaces requires additional properties to be considered. In particular, the summary tree minimising the squared distance to the trees in the sample might differ between parameterisations. This suggests that further fundamental insight is needed into the problem of statistical consistency of phylogenetic inference methods. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
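
    The two parameterisations the abstract names are related by a simple bijection: sorted divergence times map to inter-coalescent interval lengths via successive differences. The sketch below (with invented node times, and Euclidean distance standing in for the paper's metrics) shows that the same pair of trees gets different distances under the two parameterisations, which is why the induced metric spaces can disagree on the summary tree.

```python
import numpy as np

def times_to_intervals(times):
    """Sorted divergence times -> inter-coalescent interval lengths."""
    t = np.sort(np.asarray(times, dtype=float))
    return np.diff(np.concatenate([[0.0], t]))   # first interval starts at time 0

def intervals_to_times(intervals):
    """Inverse map: cumulative sums recover the absolute divergence times."""
    return np.cumsum(intervals)

t_a = [1.0, 3.0, 6.0]     # divergence times of tree A (illustrative)
t_b = [2.0, 3.5, 5.0]     # divergence times of tree B (illustrative)

d_times = np.linalg.norm(np.array(t_a) - np.array(t_b))
d_intervals = np.linalg.norm(times_to_intervals(t_a) - times_to_intervals(t_b))
print(d_times, d_intervals)   # the two parameterisations give different distances
```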

  6. Methods of producing porous platinum-based catalysts for oxygen reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erlebacher, Jonah D.; Snyder, Joshua D.

    A porous metal that comprises platinum and has a specific surface area that is greater than 5 m2/g and less than 75 m2/g. A fuel cell includes a first electrode, a second electrode spaced apart from the first electrode, and an electrolyte arranged between the first and the second electrodes. At least one of the first and second electrodes is coated with a porous metal catalyst for oxygen reduction, and the porous metal catalyst comprises platinum and has a specific surface area that is greater than 5 m2/g and less than 75 m2/g. A method of producing a porous metal according to an embodiment of the current invention includes producing an alloy consisting essentially of platinum and nickel according to the formula PtxNi1-x, where x is at least 0.01 and less than 0.3; and dealloying the alloy in a substantially pH-neutral solution to reduce the amount of nickel in the alloy to produce the porous metal.

  7. Development of a Solid-State Fermentation System for Producing Bioethanol from Food Waste

    NASA Astrophysics Data System (ADS)

    Honda, Hiroaki; Ohnishi, Akihiro; Fujimoto, Naoshi; Suzuki, Masaharu

    Liquid fermentation is the conventional method of producing bioethanol. However, this method leaves a highly concentrated waste stream after distillation, and its further treatment requires large amounts of costly energy. Saccharification of dried raw garbage was tested for 12 types of Koji starters under the following optimum culture conditions: a temperature of 30 °C and an initial moisture content of 50%. Among all the types, Aspergillus oryzae KBN650 had the highest saccharifying power. The ethanol-producing ability of the raw garbage was investigated for 72 strains of yeast, of which Saccharomyces cerevisiae A30 had the highest ethanol yield under the following optimum conditions: a 1:1 ratio of dried garbage to saccharified garbage by weight, and an initial moisture content of 60%. Thus, the solid-state fermentation system consisted of the following four processes: moisture control, saccharification, ethanol production, and distillation. This system produced 0.6 kg of ethanol from 9.6 kg of garbage. Moreover, the ethanol yield from all sugars was calculated to be 0.37.

  8. Novel applications of the temporal kernel method: Historical and future radiative forcing

    NASA Astrophysics Data System (ADS)

    Portmann, R. W.; Larson, E.; Solomon, S.; Murphy, D. M.

    2017-12-01

    We present a new estimate of the historical radiative forcing derived from the observed global mean surface temperature and a model-derived kernel function. Current estimates of historical radiative forcing are usually derived from climate models. Despite large variability in these models, the multi-model mean tends to do a reasonable job of representing the Earth system and climate. One method of diagnosing the transient radiative forcing in these models requires model output of the top-of-atmosphere (TOA) radiative imbalance and the global mean temperature anomaly. It is difficult to apply this method to historical observations due to the lack of TOA radiative measurements before CERES. We apply the temporal kernel method (TKM) of calculating radiative forcing to the historical global mean temperature anomaly. This novel approach is compared against current regression-based methods using model outputs and is shown to produce consistent forcing estimates, giving confidence in the forcing derived from the historical temperature record. The derived TKM radiative forcing provides an estimate of the forcing time series that the average climate model needs in order to reproduce the observed temperature record. This forcing time series is found to be in good overall agreement with previous estimates but includes significant differences that will be discussed. The historical anthropogenic aerosol forcing is estimated as a residual from the TKM and found to be consistent with earlier moderate forcing estimates. In addition, this method is applied to future temperature projections to estimate the radiative forcing required to achieve those temperature goals, such as those set in the Paris agreement.
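
    The core inversion idea can be sketched as follows, under the assumption that the temperature anomaly is a discrete convolution of forcing with a causal response kernel, T[n] = Σ_k K[n-k]·F[k]; the forcing is then recovered by solving the resulting lower-triangular linear system. The exponential kernel and ramp forcing below are invented for the demonstration; they are not the model-derived kernel used in the paper.

```python
import numpy as np

n = 50
dt = 1.0
tau, lam = 8.0, 0.5                       # illustrative response time scale and sensitivity
k = (lam / tau) * np.exp(-np.arange(n) * dt / tau) * dt   # causal kernel samples

F_true = 0.04 * np.arange(n)              # illustrative ramp-up forcing (W m^-2)
K = np.array([[k[i - j] if i >= j else 0.0 for j in range(n)]
              for i in range(n)])         # lower-triangular convolution matrix
T = K @ F_true                            # "observed" temperature anomaly

F_rec = np.linalg.solve(K, T)             # invert the kernel to recover the forcing
print(np.allclose(F_rec, F_true))
```

    In practice the observed temperature record is noisy, so a regularised solve (or smoothing of the recovered series) would replace the exact inversion shown here.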

  9. Oil core microcapsules by inverse gelation technique.

    PubMed

    Martins, Evandro; Renard, Denis; Davy, Joëlle; Marquis, Mélanie; Poncelet, Denis

    2015-01-01

    A promising technique for oil encapsulation in Ca-alginate capsules by inverse gelation was proposed by Abang et al. This method consists of emulsifying a calcium chloride solution in oil and then adding it dropwise to an alginate solution to produce Ca-alginate capsules. Spherical capsules with diameters around 3 mm were produced by this technique; however, the production of smaller capsules was not demonstrated. The objective of this study is to propose a new method of oil encapsulation in a Ca-alginate membrane by inverse gelation. Optimisation of the method leads to microcapsules with diameters around 500 μm. In the search for microcapsules with improved diffusion characteristics, size reduction is an essential factor for broadening applications in the food, cosmetics, and pharmaceutical areas. This work contributes to a better understanding of the inverse gelation technique and allows the production of microcapsules with a well-defined shell-core structure.

  10. Microbially-mediated method for synthesis of non-oxide semiconductor nanoparticles

    DOEpatents

    Phelps, Tommy J.; Lauf, Robert J.; Moon, Ji Won; Rondinone, Adam J.; Love, Lonnie J.; Duty, Chad Edward; Madden, Andrew Stephen; Li, Yiliang; Ivanov, Ilia N.; Rawn, Claudia Jeanette

    2014-06-24

    The invention is directed to a method for producing non-oxide semiconductor nanoparticles, the method comprising: (a) subjecting a combination of reaction components to conditions conducive to microbially-mediated formation of non-oxide semiconductor nanoparticles, wherein said combination of reaction components comprises i) anaerobic microbes, ii) a culture medium suitable for sustaining said anaerobic microbes, iii) a metal component comprising at least one type of metal ion, iv) a non-metal component containing at least one non-metal selected from the group consisting of S, Se, Te, and As, and v) one or more electron donors that provide donatable electrons to said anaerobic microbes during consumption of the electron donor by said anaerobic microbes; and (b) isolating said non-oxide semiconductor nanoparticles, which contain at least one of said metal ions and at least one of said non-metals. The invention is also directed to non-oxide semiconductor nanoparticle compositions produced as above and having distinctive properties.

  11. Estimation of group means when adjusting for covariates in generalized linear models.

    PubMed

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for that treatment group in the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models can be seriously biased estimates of the true group means. We propose a new method to estimate the group means consistently, together with a corresponding variance estimator. Simulation showed that the proposed method produced an unbiased estimator of the group means and provided the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
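    The marginal (population-averaged) group mean discussed above can be sketched with a small numerical example. The code below is an illustrative reconstruction, not the authors' implementation: it fits a logistic regression by Newton-Raphson on simulated data and contrasts the response at the mean covariate with the mean of per-subject predictions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(0.0, 1.5, n)              # baseline covariate
g = rng.integers(0, 2, n).astype(float)  # treatment group indicator
eta_true = -1.0 + 1.2 * g + 0.8 * x
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta_true))).astype(float)

# fit logistic regression (intercept, group, covariate) by Newton-Raphson
X = np.column_stack([np.ones(n), g, x])
beta = np.zeros(3)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1.0 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))

def mean_at_mean_covariate(group):
    # "model-based" group mean: response evaluated at the mean covariate
    return 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * group + beta[2] * x.mean())))

def marginal_group_mean(group):
    # predict for every subject as if assigned to `group`, then average
    eta = beta[0] + beta[1] * group + beta[2] * x
    return float(np.mean(1.0 / (1.0 + np.exp(-eta))))
```

    Because the inverse logit is nonlinear, the two estimates differ noticeably; in a linear model they would coincide.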

  12. A new approach to characterize very-low-level radioactive waste produced at hadron accelerators.

    PubMed

    Zaffora, Biagio; Magistris, Matteo; Chevalier, Jean-Pierre; Luccioni, Catherine; Saporta, Gilbert; Ulrici, Luisa

    2017-04-01

    Radioactive waste is produced as a consequence of preventive and corrective maintenance during the operation of high-energy particle accelerators, or during associated dismantling campaigns. Radiological characterization of this waste must be performed to ensure appropriate disposal in disposal facilities. The radiological characterization of waste includes establishing the list of produced radionuclides, called the "radionuclide inventory", and estimating their activity. The present paper describes the process adopted at CERN to characterize very-low-level radioactive waste, with a focus on activated metals. The characterization method consists of measuring and estimating the activity of produced radionuclides either by experimental methods or by statistical and numerical approaches. We adapted the so-called Scaling Factor (SF) and Correlation Factor (CF) techniques to the needs of hadron accelerators, and applied them to very-low-level metallic waste produced at CERN. For each type of metal we calculated the radionuclide inventory and identified the radionuclides that contribute most to hazard factors. The methodology proposed is of general validity; it can be extended to other activated materials and used for the characterization of waste produced in particle accelerators and research centres where the activation mechanisms are comparable to those occurring at CERN. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Lagrangian numerical methods for ocean biogeochemical simulations

    NASA Astrophysics Data System (ADS)

    Paparella, Francesco; Popolizio, Marina

    2018-05-01

    We propose two closely-related Lagrangian numerical methods for the simulation of physical processes involving advection, reaction and diffusion. The methods are intended to be used in settings where the flow is nearly incompressible and the Péclet numbers are so high that resolving all the scales of motion is unfeasible. This is commonplace in ocean flows. Our methods consist of augmenting the method of characteristics, which is suitable for advection-reaction problems, with couplings among nearby particles, producing fluxes that mimic diffusion or unresolved small-scale transport. The methods conserve mass, obey the maximum principle, and allow the strength of the diffusive terms to be tuned down to zero, while avoiding unwanted numerical dissipation effects.
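    A minimal sketch of the idea, under simplifying assumptions not taken from the paper (1D periodic domain, a toy velocity field, nearest-neighbour couplings only): particles are advected along characteristics, then exchange antisymmetric pairwise fluxes of the transported scalar, which conserves total mass exactly and keeps values within their initial bounds.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
pos = np.sort(rng.uniform(0.0, 1.0, n))          # particle positions
c = np.where(np.abs(pos - 0.5) < 0.1, 1.0, 0.0)  # scalar carried by particles

def advect(pos, dt):
    # method of characteristics: move particles with a prescribed velocity
    u = 0.2 * np.sin(2 * np.pi * pos)            # toy velocity field
    return (pos + dt * u) % 1.0

def exchange(pos, c, k, dt):
    # antisymmetric pairwise fluxes between neighbouring particles,
    # mimicking diffusion; total mass is unchanged by construction
    order = np.argsort(pos)
    ci = c[order]
    flux = k * dt * np.diff(ci)                  # flux between consecutive pairs
    ci[:-1] += flux
    ci[1:] -= flux
    out = np.empty_like(c)
    out[order] = ci
    return out

total0, cmin0, cmax0 = c.sum(), c.min(), c.max()
for _ in range(200):
    pos = advect(pos, 0.005)
    c = exchange(pos, c, 20.0, 0.005)
```

    With k*dt = 0.1 each update is a convex combination of neighbouring values, so the discrete maximum principle holds exactly.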

  14. An empirical evaluation of two-stage species tree inference strategies using a multilocus dataset from North American pines

    PubMed Central

    2014-01-01

    Background As it becomes increasingly possible to obtain DNA sequences of orthologous genes from diverse sets of taxa, species trees are frequently being inferred from multilocus data. However, the behavior of many methods for performing this inference has remained largely unexplored. Some methods have been proven to be consistent given certain evolutionary models, whereas others rely on criteria that, although appropriate for many parameter values, have peculiar zones of the parameter space in which they fail to converge on the correct estimate as data sets increase in size. Results Here, using North American pines, we empirically evaluate the behavior of 24 strategies for species tree inference using three alternative outgroups (72 strategies total). The data consist of 120 individuals sampled in eight ingroup species from subsection Strobus and three outgroup species from subsection Gerardianae, spanning ∼47 kilobases of sequence at 121 loci. Each “strategy” for inferring species trees consists of three features: a species tree construction method, a gene tree inference method, and a choice of outgroup. We use multivariate analysis techniques such as principal components analysis and hierarchical clustering to identify tree characteristics that are robustly observed across strategies, as well as to identify groups of strategies that produce trees with similar features. We find that strategies that construct species trees using only topological information cluster together and that strategies that use additional non-topological information (e.g., branch lengths) also cluster together. Strategies that utilize more than one individual within a species to infer gene trees tend to produce estimates of species trees that contain clades present in trees estimated by other strategies. 
Strategies that use the minimize-deep-coalescences criterion to construct species trees tend to produce species tree estimates that contain clades that are not present in trees estimated by the Concatenation, RTC, SMRT, STAR, and STEAC methods, and that in general are more balanced than those inferred by these other strategies. Conclusions When constructing a species tree from a multilocus set of sequences, our observations provide a basis for interpreting differences in species tree estimates obtained via different approaches that have a two-stage structure in common, one step for gene tree estimation and a second step for species tree estimation. The methods explored here employ a number of distinct features of the data, and our analysis suggests that recovery of the same results from multiple methods that tend to differ in their patterns of inference can be a valuable tool for obtaining reliable estimates. PMID:24678701

  15. An empirical evaluation of two-stage species tree inference strategies using a multilocus dataset from North American pines.

    PubMed

    DeGiorgio, Michael; Syring, John; Eckert, Andrew J; Liston, Aaron; Cronn, Richard; Neale, David B; Rosenberg, Noah A

    2014-03-29

    As it becomes increasingly possible to obtain DNA sequences of orthologous genes from diverse sets of taxa, species trees are frequently being inferred from multilocus data. However, the behavior of many methods for performing this inference has remained largely unexplored. Some methods have been proven to be consistent given certain evolutionary models, whereas others rely on criteria that, although appropriate for many parameter values, have peculiar zones of the parameter space in which they fail to converge on the correct estimate as data sets increase in size. Here, using North American pines, we empirically evaluate the behavior of 24 strategies for species tree inference using three alternative outgroups (72 strategies total). The data consist of 120 individuals sampled in eight ingroup species from subsection Strobus and three outgroup species from subsection Gerardianae, spanning ∼47 kilobases of sequence at 121 loci. Each "strategy" for inferring species trees consists of three features: a species tree construction method, a gene tree inference method, and a choice of outgroup. We use multivariate analysis techniques such as principal components analysis and hierarchical clustering to identify tree characteristics that are robustly observed across strategies, as well as to identify groups of strategies that produce trees with similar features. We find that strategies that construct species trees using only topological information cluster together and that strategies that use additional non-topological information (e.g., branch lengths) also cluster together. Strategies that utilize more than one individual within a species to infer gene trees tend to produce estimates of species trees that contain clades present in trees estimated by other strategies. 
Strategies that use the minimize-deep-coalescences criterion to construct species trees tend to produce species tree estimates that contain clades that are not present in trees estimated by the Concatenation, RTC, SMRT, STAR, and STEAC methods, and that in general are more balanced than those inferred by these other strategies. When constructing a species tree from a multilocus set of sequences, our observations provide a basis for interpreting differences in species tree estimates obtained via different approaches that have a two-stage structure in common, one step for gene tree estimation and a second step for species tree estimation. The methods explored here employ a number of distinct features of the data, and our analysis suggests that recovery of the same results from multiple methods that tend to differ in their patterns of inference can be a valuable tool for obtaining reliable estimates.
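    The multivariate analysis described in this abstract (PCA followed by hierarchical clustering of strategies by tree characteristics) can be sketched on synthetic data. The feature matrix below is hypothetical; the real study used characteristics of trees inferred by 72 strategies.

```python
import numpy as np

rng = np.random.default_rng(2)
# hypothetical feature matrix: 12 strategies x 4 tree characteristics
# (first 6 rows mimic "topology-only" strategies, last 6 "branch-length" ones)
topo = rng.normal([0.8, 0.3, 1.0, 0.6], 0.05, (6, 4))
blen = rng.normal([0.5, 0.7, 2.0, 0.4], 0.05, (6, 4))
X = np.vstack([topo, blen])

# standardize, then PCA via SVD; keep the two leading component scores
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, _ = np.linalg.svd(Z, full_matrices=False)
scores = (U * s)[:, :2]

def two_clusters(points):
    # naive single-linkage agglomeration down to two clusters
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > 2:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(np.linalg.norm(points[i] - points[j])
                        for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters[b]
        del clusters[b]
    return clusters

groups = two_clusters(scores)
```

    On this synthetic input the two families of strategies separate cleanly in the first two principal components, mirroring the clustering pattern reported in the study.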

  16. Investigating the Accuracy of Point Clouds Generated for Rock Surfaces

    NASA Astrophysics Data System (ADS)

    Seker, D. Z.; Incekara, A. H.

    2016-12-01

    Point clouds produced by different techniques are widely used to model rocks and to obtain properties of rock surfaces such as roughness, volume and area. These point clouds can be generated by laser scanning and close range photogrammetry. Laser scanning is the most common method: the scanner produces a 3D point cloud at regular intervals. In close range photogrammetry, a point cloud can be produced from photographs taken under appropriate conditions, aided by developing hardware and software technology. Many photogrammetric software packages, open source or commercial, currently support the generation of point clouds. The two methods are close to each other in terms of accuracy; sufficient accuracy in the mm and cm range can be obtained with a qualified digital camera or laser scanner. In both methods, field work is completed in less time than with conventional techniques. In close range photogrammetry, any part of a rock surface can be completely represented owing to overlapping oblique photographs. Although the resulting data are similar, the two methods are quite different in terms of cost. In this study, we investigate whether a point cloud produced from photographs can be used instead of one produced by a laser scanner. For this purpose, rock surfaces with a complex and irregular shape, located on the İstanbul Technical University Ayazaga Campus, were selected as the study object. The selected object is a mixture of different rock types and consists of both partly weathered and fresh parts. The study was performed on a 30 m x 10 m portion of the rock surface. 2D (area-based) and 3D (volume-based) analyses were performed for several regions selected from the point clouds of the surface models. The analyses showed that the two point clouds are similar and can be used as alternatives to each other. This showed that a point cloud produced from photographs, which is both economical and faster to acquire, can be used in several studies instead of a point cloud produced by a laser scanner.

  17. A diffusion tensor imaging tractography algorithm based on Navier-Stokes fluid mechanics.

    PubMed

    Hageman, Nathan S; Toga, Arthur W; Narr, Katherine L; Shattuck, David W

    2009-03-01

    We introduce a fluid mechanics based tractography method for estimating the most likely connection paths between points in diffusion tensor imaging (DTI) volumes. We customize the Navier-Stokes equations to include information from the diffusion tensor and simulate an artificial fluid flow through the DTI image volume. We then estimate the most likely connection paths between points in the DTI volume using a metric derived from the fluid velocity vector field. We validate our algorithm using digital DTI phantoms based on a helical shape. Our method segmented the structure of the phantom with less distortion than was produced using implementations of heat-based partial differential equation (PDE) and streamline based methods. In addition, our method was able to successfully segment divergent and crossing fiber geometries, closely following the ideal path through a digital helical phantom in the presence of multiple crossing tracts. To assess the performance of our algorithm on anatomical data, we applied our method to DTI volumes from normal human subjects. Our method produced paths that were consistent with both known anatomy and directionally encoded color images of the DTI dataset.
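    For context, the streamline baseline that the authors compare against can be sketched in a few lines: step repeatedly along the principal eigenvector of the local diffusion tensor. The tensor field below is a synthetic phantom whose principal direction follows concentric circles; the authors' fluid-mechanics method itself requires a Navier-Stokes solver and is not reproduced here.

```python
import numpy as np

def tensor(p):
    # synthetic 2D tensor whose principal eigenvector is tangential
    # to circles around the origin (a curved "tract")
    x, y = p
    t = np.array([-y, x])
    t = t / (np.linalg.norm(t) + 1e-12)
    return np.outer(t, t) + 0.2 * np.eye(2)

def track(seed, step=0.01, n_steps=500):
    # streamline tractography: follow the principal eigenvector,
    # keeping the direction consistent between consecutive steps
    p = np.array(seed, float)
    prev = None
    path = [p.copy()]
    for _ in range(n_steps):
        w, V = np.linalg.eigh(tensor(p))
        d = V[:, np.argmax(w)]
        if prev is not None and d @ prev < 0:
            d = -d                     # avoid 180-degree flips
        p = p + step * d
        prev = d
        path.append(p.copy())
    return np.array(path)

path = track([1.0, 0.0])
```

    A well-behaved tracker should stay close to the unit circle; simple Euler stepping drifts slowly outward, which higher-order integrators reduce.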

  18. A Diffusion Tensor Imaging Tractography Algorithm Based on Navier-Stokes Fluid Mechanics

    PubMed Central

    Hageman, Nathan S.; Toga, Arthur W.; Narr, Katherine; Shattuck, David W.

    2009-01-01

    We introduce a fluid mechanics based tractography method for estimating the most likely connection paths between points in diffusion tensor imaging (DTI) volumes. We customize the Navier-Stokes equations to include information from the diffusion tensor and simulate an artificial fluid flow through the DTI image volume. We then estimate the most likely connection paths between points in the DTI volume using a metric derived from the fluid velocity vector field. We validate our algorithm using digital DTI phantoms based on a helical shape. Our method segmented the structure of the phantom with less distortion than was produced using implementations of heat-based partial differential equation (PDE) and streamline based methods. In addition, our method was able to successfully segment divergent and crossing fiber geometries, closely following the ideal path through a digital helical phantom in the presence of multiple crossing tracts. To assess the performance of our algorithm on anatomical data, we applied our method to DTI volumes from normal human subjects. Our method produced paths that were consistent with both known anatomy and directionally encoded color (DEC) images of the DTI dataset. PMID:19244007

  19. Angular distribution, kinetic energy distributions, and excitation functions of fast metastable oxygen fragments following electron impact of CO2

    NASA Technical Reports Server (NTRS)

    Misakian, M.; Mumma, M. J.; Faris, J. F.

    1975-01-01

    Dissociative excitation of CO2 by electron impact was studied using the methods of translational spectroscopy and angular distribution analysis. Earlier time-of-flight studies revealed two overlapping spectra, the slower of which was attributed to metastable CO(a3 pi) fragments. The fast peak is the focus of this study. Threshold energy, angular distribution, and improved time-of-flight measurements indicate that the fast peak actually consists of five overlapping features. The slowest of the five features is found to consist of metastable O(5S) produced by predissociation of a sigma u + state of CO2 into O(5S) + CO(a3 pi). Oxygen Rydberg fragments originating directly from a different sigma u + state are believed to make up the next fastest feature. Mechanisms for producing the three remaining features are discussed.

  20. Shear velocity estimates on the inner shelf off Grays Harbor, Washington, USA

    USGS Publications Warehouse

    Sherwood, C.R.; Lacy, J.R.; Voulgaris, G.

    2006-01-01

    Shear velocity was estimated from current measurements near the bottom off Grays Harbor, Washington between May 4 and June 6, 2001 under mostly wave-dominated conditions. A downward-looking pulse-coherent acoustic Doppler profiler (PCADP) and two acoustic Doppler velocimeters (field version; ADVFs) were deployed on a tripod at 9-m water depth. Measurements from these instruments were used to estimate shear velocity with (1) a modified eddy-correlation (EC) technique, (2) the log-profile (LP) method, and (3) an inertial-dissipation method. Although values produced by the three methods agreed reasonably well (within their broad ranges of uncertainty), there were important systematic differences. Estimates from the EC method were generally lowest, followed by those from the inertial-dissipation method. The LP method produced the highest values and the greatest scatter. We show that these results are consistent with boundary-layer theory when sediment-induced stratification is present. The EC method provides the most fundamental estimate of kinematic stress near the bottom, and stratification causes the LP method to overestimate bottom stress. These results remind us that the methods are not equivalent and that comparisons among sites and with models should be made carefully. © 2006 Elsevier Ltd. All rights reserved.
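    Of the three techniques, the log-profile (LP) method is the simplest to sketch: under the law of the wall, u(z) = (u*/κ) ln(z/z0), so regressing velocity on ln z recovers u* from the slope and z0 from the intercept. The profile below is synthetic, not the Grays Harbor data.

```python
import numpy as np

KAPPA = 0.41                        # von Karman constant
ustar_true, z0_true = 0.02, 1e-4    # shear velocity (m/s), roughness length (m)

rng = np.random.default_rng(3)
z = np.array([0.1, 0.2, 0.4, 0.8, 1.6])           # heights above bed (m)
u = ustar_true / KAPPA * np.log(z / z0_true)      # law-of-the-wall profile
u = u + rng.normal(0.0, 0.002, z.size)            # measurement noise

# u = (u*/kappa) ln z - (u*/kappa) ln z0: a linear fit of u on ln z
slope, intercept = np.polyfit(np.log(z), u, 1)
ustar = KAPPA * slope               # shear velocity from the slope
z0 = np.exp(-intercept / slope)     # roughness length from the intercept
```

    The roughness length is far more sensitive to noise than u*, which is one reason the LP method shows the greatest scatter.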

  1. A Comparison of Fabrication Techniques for Hollow Retroreflectors

    NASA Technical Reports Server (NTRS)

    Preston, Alix; Merkowitz, Stephen

    2014-01-01

    Despite the wide usage of hollow retroreflectors, there is limited literature on their fabrication techniques, and only two documented construction methods could be found. One consists of an adjustable fixture that allows for the independent alignment of each mirror, while the other consists of a modified solid retroreflector that is used as a mandrel. Although both methods were shown to produce hollow retroreflectors with arcsecond dihedral-angle errors, a comparison and analysis of the two methods could not be found, which makes it difficult to ascertain which would be better suited for precision-aligned retroreflectors. Although epoxy bonding is generally the preferred method of adhering the three mirrors, a relatively new method known as hydroxide-catalysis bonding (HCB) presents several potential advantages over epoxy bonding. HCB has been used to bond several optical components for space-based missions, but has never been applied to the construction of hollow retroreflectors. In this paper we examine the benefits and limitations of each bonding fixture, as well as present results and analysis of hollow retroreflectors made using both epoxy and HCB techniques.

  2. Measuring signal-to-noise ratio in partially parallel imaging MRI

    PubMed Central

    Goerner, Frank L.; Clarke, Geoffrey D.

    2011-01-01

    Purpose: To assess five different methods of signal-to-noise ratio (SNR) measurement for partially parallel imaging (PPI) acquisitions. Methods: Measurements were performed on a spherical phantom and three volunteers using a multichannel head coil on a clinical 3T MRI system to produce echo planar, fast spin echo, gradient echo, and balanced steady state free precession image acquisitions. Two different PPI acquisitions, generalized autocalibrating partially parallel acquisition algorithm and modified sensitivity encoding with acceleration factors (R) of 2–4, were evaluated and compared to nonaccelerated acquisitions. Five standard SNR measurement techniques were investigated, and Bland–Altman analysis was used to determine agreement between the various SNR methods. The estimated g-factor values, associated with each method of SNR calculation and PPI reconstruction method, were also subjected to assessments that considered the effects on SNR due to reconstruction method, phase encoding direction, and R-value. Results: Only two SNR measurement methods produced g-factors in agreement with theoretical expectations (g ≥ 1). Bland–Altman tests demonstrated that these two methods also gave the most similar results relative to the other three measurements. R-value was the only factor of the three we considered that showed significant influence on SNR changes. Conclusions: Non-signal methods used in SNR evaluation do not produce results consistent with expectations in the investigated PPI protocols. Two of the methods studied provided the most accurate and useful results. Of these two methods, it is recommended, when evaluating PPI protocols, that the image subtraction method be used for SNR calculations due to its relative accuracy and ease of implementation. PMID:21978049
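    The image-subtraction method recommended in the conclusions can be sketched directly: signal is taken from the mean of two repeated acquisitions, and noise from the standard deviation of their difference divided by sqrt(2), because subtraction doubles the noise variance. The images below are synthetic, not MRI data.

```python
import numpy as np

rng = np.random.default_rng(4)
true_signal, sigma = 100.0, 5.0     # true SNR = 20
shape = (128, 128)

# two repeated acquisitions of the same object with independent noise
img1 = true_signal + rng.normal(0.0, sigma, shape)
img2 = true_signal + rng.normal(0.0, sigma, shape)

# signal from the mean image; noise from the difference image,
# with sqrt(2) accounting for the doubled noise variance of a subtraction
signal = 0.5 * (img1 + img2).mean()
noise = (img1 - img2).std() / np.sqrt(2.0)
snr = signal / noise
```

    In practice the signal mean is taken over a region of interest rather than the whole image, and the two acquisitions must be registered and free of drift.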

  3. Microfluidic model of the platelet-generating organ: beyond bone marrow biomimetics

    NASA Astrophysics Data System (ADS)

    Blin, Antoine; Le Goff, Anne; Magniez, Aurélie; Poirault-Chassac, Sonia; Teste, Bruno; Sicot, Géraldine; Nguyen, Kim Anh; Hamdi, Feriel S.; Reyssat, Mathilde; Baruch, Dominique

    2016-02-01

    We present a new, rapid method for producing blood platelets in vitro from cultured megakaryocytes, based on a microfluidic device. This device consists of a wide array of VWF-coated micropillars. Such pillars act as anchors on megakaryocytes, allowing them to remain trapped in the device and subjected to hydrodynamic shear. The combined effect of anchoring and shear induces the elongation of megakaryocytes and finally their rupture into platelets and proplatelets. This process was observed with megakaryocytes from different origins and found to be robust. This original bioreactor design allows megakaryocytes to be processed at high throughput (millions per hour). Since platelets are produced in such large amounts, their extensive biological characterisation is possible and shows that platelets produced in this bioreactor are functional.

  4. Microfluidic model of the platelet-generating organ: beyond bone marrow biomimetics.

    PubMed

    Blin, Antoine; Le Goff, Anne; Magniez, Aurélie; Poirault-Chassac, Sonia; Teste, Bruno; Sicot, Géraldine; Nguyen, Kim Anh; Hamdi, Feriel S; Reyssat, Mathilde; Baruch, Dominique

    2016-02-22

    We present a new, rapid method for producing blood platelets in vitro from cultured megakaryocytes, based on a microfluidic device. This device consists of a wide array of VWF-coated micropillars. Such pillars act as anchors on megakaryocytes, allowing them to remain trapped in the device and subjected to hydrodynamic shear. The combined effect of anchoring and shear induces the elongation of megakaryocytes and finally their rupture into platelets and proplatelets. This process was observed with megakaryocytes from different origins and found to be robust. This original bioreactor design allows megakaryocytes to be processed at high throughput (millions per hour). Since platelets are produced in such large amounts, their extensive biological characterisation is possible and shows that platelets produced in this bioreactor are functional.

  5. Development of batch producible hot embossing 3D nanostructured surface-enhanced Raman scattering chip technology

    NASA Astrophysics Data System (ADS)

    Huang, Chu-Yu; Tsai, Ming-Shiuan

    2017-09-01

    The main purpose of this study is to develop a batch-producible hot embossing 3D nanostructured surface-enhanced Raman chip technology for high-sensitivity, label-free plasticizer detection. This study utilizes the self-assembled, uniform nano-hemispherical array of an AAO barrier layer as a template to create a durable nanostructured nickel mold. With the hot embossing technique and this durable nickel mold, we are able to batch produce 3D nanostructured surface-enhanced Raman scattering chips with consistent quality. In addition, because our SERS chip can be fabricated by batch processing, the fabrication cost is low. Therefore, the developed method is very promising for widespread use in rapid chemical and biomolecular detection applications.

  6. Method for catalytic destruction of organic materials

    DOEpatents

    Sealock, Jr., L. John; Baker, Eddie G.; Elliott, Douglas C.

    1997-01-01

    A method is disclosed for converting waste organic materials into an innocuous product gas. The method comprises maintaining, in a pressure vessel, in the absence of oxygen, at a temperature of 250 °C to 500 °C and a pressure of at least 50 atmospheres, a fluid organic waste material, water, and a catalyst consisting essentially of reduced nickel in an amount sufficient to catalyze a reaction of the organic waste material to produce an innocuous product gas composed primarily of methane and carbon dioxide. The methane in the product gas may be burned to preheat the organic materials.

  7. METHOD OF MAKING METAL BONDED CARBON BODIES

    DOEpatents

    Goeddel, W.V.; Simnad, M.T.

    1961-09-26

    A method of producing carbon bodies having high structural strength and low permeability is described. The method comprises mixing less than 10 wt.% of a diffusional bonding material selected from the group consisting of zirconium, niobium, molybdenum, titanium, nickel, chromium, silicon, and decomposable compounds thereof with finely divided particles of carbon or graphite. While being maintained at a mechanical pressure over 3,000 psi, the mixture is then heated uniformly to a temperature of 1500 deg C or higher, usually for less than one hour. The resulting carbon bodies have a low diffusion constant, high dimensional stability, and high mechanical strength.

  8. Computer program for design of two-dimensional supersonic turbine rotor blades with boundary-layer correction

    NASA Technical Reports Server (NTRS)

    Goldman, L. J.; Scullin, V. J.

    1971-01-01

    A FORTRAN 4 computer program for the design of two-dimensional supersonic rotor blade sections corrected for boundary-layer displacement thickness is presented. The ideal rotor is designed by the method of characteristics to produce vortex flow within the blade passage. The boundary-layer parameters are calculated by Cohen and Reshotko's method for laminar flow and Sasman and Cresci's method for turbulent flow. The program input consists essentially of the blade surface Mach number distribution and total flow conditions. The primary output is the corrected blade profile and the boundary-layer parameters.
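    The method of characteristics used by the program rests on the Prandtl-Meyer function, which relates surface Mach number to flow turning angle in isentropic supersonic flow. A minimal sketch of that relation (not the NASA program itself, and with illustrative Mach numbers):

```python
import math

GAMMA = 1.4  # ratio of specific heats for air

def prandtl_meyer(M):
    # Prandtl-Meyer function nu(M) in radians, the basis of the
    # method of characteristics for supersonic passage design
    a = math.sqrt((GAMMA + 1.0) / (GAMMA - 1.0))
    b = math.sqrt(M * M - 1.0)
    return a * math.atan(b / a) - math.atan(b)

# flow turning angle needed to expand from M = 1.5 to M = 2.5 on a surface
turn = math.degrees(prandtl_meyer(2.5) - prandtl_meyer(1.5))
```

    Characteristics are then marched through the passage so that the local turning matches the prescribed surface Mach number distribution.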

  9. Method for catalytic destruction of organic materials

    DOEpatents

    Sealock, L.J. Jr.; Baker, E.G.; Elliott, D.C.

    1997-05-20

    A method is disclosed for converting waste organic materials into an innocuous product gas. The method comprises maintaining, in a pressure vessel, in the absence of oxygen, at a temperature of 250 to 500 C and a pressure of at least 50 atmospheres, a fluid organic waste material, water, and a catalyst consisting essentially of reduced nickel in an amount sufficient to catalyze a reaction of the organic waste material to produce an innocuous product gas composed primarily of methane and carbon dioxide. The methane in the product gas may be burned to preheat the organic materials. 7 figs.

  10. Asymptotic approximation method of force reconstruction: Application and analysis of stationary random forces

    NASA Astrophysics Data System (ADS)

    Sanchez, J.

    2018-06-01

    In this paper, the asymptotic approximation method of force reconstruction is applied to a single degree-of-freedom system and analyzed. The original concepts are summarized, and the necessary probabilistic concepts are developed and applied to single degree-of-freedom systems. Then these concepts are united, and the theoretical and computational models are developed. To determine the viability of the proposed method in a probabilistic context, numerical experiments are conducted, consisting of a frequency analysis, an analysis of the effects of measurement noise, and a statistical analysis. In addition, two examples are presented and discussed.

  11. High conductivity composite metal

    DOEpatents

    Zhou, Ruoyi; Smith, James L.; Embury, John David

    1998-01-01

    Electrical conductors and methods of producing them, where the conductors possess both high strength and high conductivity. Conductors are comprised of carbon steel and a material chosen from a group consisting of copper, nickel, silver, and gold. Diffusion barriers are placed between these two materials. The components of a conductor are assembled and then the assembly is subjected to heat treating and mechanical deformation steps.

  12. Development of advanced test methods for the improvement of production standards for ceramic powders used in solid oxide fuel cells

    NASA Astrophysics Data System (ADS)

    Ward, Brian

    Solid oxide fuel cells (SOFCs) are energy conversion devices that use ceramic powders as a precursor material for their electrodes. Presently, powder manufacturers encounter difficulties in producing consistent precursor powders. Through various thermal, chemical and physical tests, such as DSC and XRD, a preliminary production standard will be developed.

  13. IR spectroscopic studies in microchannel structures

    NASA Astrophysics Data System (ADS)

    Guber, A. E.; Bier, W.

    1998-06-01

    By means of the various microengineering methods available, microreaction systems, among other devices, can be produced. These microreactors consist of microchannels in which chemical reactions take place under defined conditions. For optimum process control, continuous online analytics in the microchannels is envisaged. For this purpose, a special analytical module has been developed. It may be applied to IR spectroscopic studies at any point of the microchannel.

  14. Analysis of 21-cm tomographic data

    NASA Astrophysics Data System (ADS)

    Mellema, Garrelt; Giri, Sambit; Ghara, Raghuna

    2018-05-01

    The future SKA1-Low radio telescope will be powerful enough to produce tomographic images of the 21-cm signal from the Epoch of Reionization. Here we address how to identify ionized regions in such data sets, taking into account the resolution and noise levels associated with SKA1-Low. We describe three methods, of which one, superpixel oversegmentation, consistently performs best.

  15. High Fidelity Modeling of Field Reversed Configuration (FRC) Thrusters

    DTIC Science & Technology

    2017-04-22

    signatures which can be used for direct, non-invasive comparison with experimental diagnostics can be produced. This research will be directly... experimental campaign is critical to developing general design philosophies for low-power plasmoid formation, the complexity of non-linear plasma processes...advanced space propulsion. The work consists of numerical method development, physical model development, and systematic studies of the non-linear

  16. [Oral and written affective expression in children of low socioeconomic status].

    PubMed

    Larraguibel, M; Lolas Stepke, F

    1991-06-01

    Descriptive data on affective expression of 58 children (33 girls and 25 boys) of low socioeconomic status (Graffar index), with ages between 8 and 12, are presented. Intelligence was assessed by means of the Raven Progressive Matrices Test, all subjects exhibiting a mean level. Evaluated were the six forms of anxiety and the four hostility forms defined by the Gottschalk method of verbal content analysis. Hope scores, positive and negative, were also obtained from the same verbal samples. The oral sample consisted of speech produced spontaneously during 5 minutes, in response to a standard instruction, and the written sample consisted of brief stories produced under standardized conditions during 15 minutes. The most frequently expressed form of anxiety was separation anxiety, while the most frequently expressed form of hostility was covert hostility directed outwards. "Positive" hope was expressed more frequently than "negative" hope. Data are discussed in terms of their contribution to the establishment of population norms in Spanish-speaking populations for the psychological constructs explored. It is concluded that the method of content analysis of verbal behavior may represent a useful tool for the study of child psychology in different contexts.

  17. Solvent exchange method: a novel microencapsulation technique using dual microdispensers.

    PubMed

    Yeo, Yoon; Chen, Alvin U; Basaran, Osman A; Park, Kinam

    2004-08-01

    A new microencapsulation method called the "solvent exchange method" was developed using a dual microdispenser system. The objective of this research was to demonstrate the new method and understand how the microcapsule size is controlled by different instrumental parameters. The solvent exchange method was carried out using a dual microdispenser system consisting of two ink-jet nozzles. Reservoir-type microcapsules were generated by collision of microdrops of an aqueous and a polymer solution and subsequent formation of polymer films at the interface between the two solutions. The prepared microcapsules were characterized by microscopic methods. The ink-jet nozzles produced drops of different sizes with high accuracy according to the orifice size of the nozzle, the flow rate of the jetted solutions, and the forcing frequency of the piezoelectric transducers. In an individual microcapsule, an aqueous core was surrounded by a thin polymer membrane; thus, the size of the collected microcapsules was equivalent to that of single drops. The solvent exchange method based on a dual microdispenser system produces reservoir-type microcapsules in a homogeneous and predictable manner. Given the unique geometry of the microcapsules and the mildness of the encapsulation process, this method is expected to provide a useful alternative to existing techniques in protein microencapsulation.
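    As a rough illustration of the size control described above, a textbook mass-balance estimate (not a relation given in the paper) says that if each piezoelectric pulse ejects exactly one drop, the drop volume equals the jetted flow rate divided by the forcing frequency:

```python
import math

def drop_diameter_um(flow_rate_ul_per_s: float, frequency_hz: float) -> float:
    """First-order estimate: one drop per piezo pulse, so the drop
    volume is the jetted flow rate divided by the forcing frequency."""
    volume_ul = flow_rate_ul_per_s / frequency_hz       # uL per drop
    volume_um3 = volume_ul * 1e9                        # 1 uL = 1e9 um^3
    return (6.0 * volume_um3 / math.pi) ** (1.0 / 3.0)  # equivalent sphere diameter

# e.g. jetting 1 uL/s at 1 kHz gives drops of roughly 124 um diameter
print(round(drop_diameter_um(1.0, 1000.0), 1))
```

    Raising the forcing frequency at a fixed flow rate thus shrinks the drops, consistent with the abstract's statement that drop size follows orifice size, flow rate, and forcing frequency.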

  18. A new method for measuring low resistivity contacts between silver and YBa2Cu3O(7-x) superconductor

    NASA Technical Reports Server (NTRS)

    Hsi, Chi-Shiung; Haertling, Gene H.; Sherrill, Max D.

    1991-01-01

    Several methods of measuring contact resistivity between silver electrodes and YBa2Cu3O(7-x) superconductors were investigated, including the two-point, three-point, and lap-joint methods. The lap-joint method was found to yield the most consistent and reliable results and is proposed as a new technique for this measurement. Painting, embedding, and melting methods were used to apply the electrodes to the superconductor. Silver electrodes produced good ohmic contacts to YBa2Cu3O(7-x) superconductors, with contact resistivities as low as 1.9 × 10⁻⁹ ohm sq cm.

  19. Within-individual correlations reveal link between a behavioral syndrome, condition and cortisol in free-ranging Belding's ground squirrels

    PubMed Central

    Brooks, Katherine C.; Mateo, Jill. M.

    2014-01-01

    Animals often exhibit consistent individual differences in behavior (i.e. animal personality) and correlations between behaviors (i.e. behavioral syndromes), yet the causes of those patterns of behavioral variation remain insufficiently understood. Many authors hypothesize that state-dependent behavior produces animal personality and behavioral syndromes. However, empirical studies assessing patterns of covariation among behavioral traits and state variables have produced mixed results. New statistical methods that partition correlations into between-individual and residual within-individual correlations offer an opportunity to more fully quantify relationships among behaviors and state variables and thereby assess hypotheses of animal personality and behavioral syndromes. In a population of wild Belding's ground squirrels (Urocitellus beldingi), we repeatedly measured activity, exploration, and response to restraint behaviors alongside glucocorticoids and nutritional condition. We used multivariate mixed models to determine whether between-individual or within-individual correlations drive phenotypic relationships among traits. Squirrels had consistent individual differences for all five traits. At the between-individual level, activity and exploration were positively correlated whereas both traits negatively correlated with response to restraint, demonstrating a behavioral syndrome. At the within-individual level, condition negatively correlated with cortisol, activity and exploration. Importantly, this indicates that although behavior is state-dependent, which may play a role in animal personality and behavioral syndromes, feedback mechanisms between condition and behavior appear not to produce consistent individual differences in behavior and correlations between them. PMID:25598565

  20. Application of Wave Distribution Function Method to the ERG/PWE Data

    NASA Astrophysics Data System (ADS)

    Ota, M.; Kasahara, Y.; Matsuda, S.; Kojima, H.; Matsuoka, A.; Hikishima, M.; Kasaba, Y.; Ozaki, M.; Yagitani, S.; Tsuchiya, F.; Kumamoto, A.

    2017-12-01

    The ERG (Arase) satellite was launched on 20 December 2016 to study acceleration and loss mechanisms of relativistic electrons in the Earth's magnetosphere. The Plasma Wave Experiment (PWE), one of the science instruments on board the ERG satellite, measures the electric and magnetic fields. The PWE consists of three sub-systems: EFD (Electric Field Detector), OFA/WFC (Onboard Frequency Analyzer and Waveform Capture), and HFA (High Frequency Analyzer). The OFA/WFC measures electromagnetic field spectra and raw waveforms in the frequency range from a few Hz to 20 kHz. The OFA produces three kinds of data: OFA-SPEC (power spectrum), OFA-MATRIX (spectral matrix), and OFA-COMPLEX (complex spectrum). OFA-MATRIX records ensemble-averaged complex cross-spectra of two electric field components and of three magnetic field components. OFA-COMPLEX records instantaneous complex spectra of the electric and magnetic fields. These data are produced every 8 seconds in the nominal mode and can be used for polarization analysis and wave propagation direction finding. In general, a spectral matrix composed of cross-spectra of the observed signals is used for direction finding, and many algorithms have been proposed. For example, the Means method and the SVD method can be applied under the assumption that the spectral matrix arises from a single plane wave, while the wave distribution function (WDF) method is applicable even to data in which multiple plane waves are present simultaneously. In this presentation, we introduce the results of applying the WDF method to the ERG/PWE data.
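    As a sketch of the quantity these direction-finding methods consume, the following computes a generic ensemble-averaged cross-spectral matrix from three field components (a minimal numpy illustration, not the actual PWE onboard processing):

```python
import numpy as np

def spectral_matrix(bx, by, bz, nseg=8):
    """Ensemble-averaged cross-spectral matrix S_ij(f) = <B_i(f) B_j(f)*>:
    each signal is split into nseg segments, FFT'd, and the products of
    the spectra are averaged over the segments."""
    comps = [np.asarray(b, dtype=float) for b in (bx, by, bz)]
    n = len(comps[0]) // nseg
    specs = [np.array([np.fft.rfft(c[k * n:(k + 1) * n]) for k in range(nseg)])
             for c in comps]
    nf = specs[0].shape[1]
    S = np.empty((nf, 3, 3), dtype=complex)
    for i in range(3):
        for j in range(3):
            S[:, i, j] = np.mean(specs[i] * np.conj(specs[j]), axis=0)
    return S  # shape (n_freq, 3, 3); Hermitian at every frequency

t = np.arange(4096) / 4096.0
S = spectral_matrix(np.sin(400 * np.pi * t), np.cos(400 * np.pi * t),
                    0.1 * np.ones_like(t))
```

    Single-plane-wave methods such as Means or SVD operate directly on S at each frequency, while the WDF approach inverts S for a whole distribution of wave normals.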

  1. Self-consistent formation of electron κ distribution: 1. Theory

    NASA Astrophysics Data System (ADS)

    Yoon, Peter H.; Rhee, Tongnyeol; Ryu, Chang-Mo

    2006-09-01

    Since the early days of plasma physics research, suprathermal electrons have been observed to be generated during beam-plasma laboratory experiments. Energetic electrons, often modeled by κ distributions, are also ubiquitously observed in space. Various particle acceleration mechanisms have been proposed to explain this feature, but all previous theories rely on either qualitative analytical methods or non-self-consistent approaches. This paper discusses the self-consistent acceleration of electrons to suprathermal energies by weak turbulence processes involving Langmuir/ion-sound turbulence and the beam-plasma interaction. It is shown that the spontaneous scattering process, which is absent in purely collisionless theory, is singularly responsible for the generation of κ distributions. The conclusion is that purely collisionless Vlasov theory cannot produce the suprathermal population.

  2. Patient-Provider Concordance with Behavioral Change Goals Drives Measures of Motivational Interviewing Consistency

    PubMed Central

    Laws, M. Barton; Rose, Gary S.; Beach, Mary Catherine; Lee, Yoojin; Rogers, William S.; Velasco, Alyssa Bianca; Wilson, Ira B.

    2015-01-01

    Objective Motivational Interviewing (MI) consistent talk by a counselor is thought to produce “change talk” in clients. However, it is possible that client resistance to behavior change can produce MI inconsistent counselor behavior. Methods We applied a coding scheme which identifies all of the behavioral counseling about a given issue during a visit (“episodes”), assesses patient concordance with the behavioral goal, and labels providers’ counseling style as facilitative or directive, to a corpus of routine outpatient visits by people with HIV. Using a different data set of comparable encounters, we applied the concepts of episode and concordance, and coded using the Motivational Interviewing Treatment Integrity system. Results Patient concordance/discordance was not observed to change during any episode. Provider directiveness was strongly associated with patient discordance in the first study, and MI inconsistency was strongly associated with discordance in the second. Conclusion Observations that MI-consistent behavior by medical providers is associated with patient change talk or outcomes should be evaluated cautiously, as patient resistance may provoke MI-inconsistency. Practice Implications Counseling episodes in routine medical visits are typically too brief for client talk to evolve toward change. Providers with limited training may have particular difficulty maintaining MI consistency with resistant clients. PMID:25791372

  3. Extraction of basil leaves (ocimum canum) oleoresin with ethyl acetate solvent by using soxhletation method

    NASA Astrophysics Data System (ADS)

    Tambun, R.; Purba, R. R. H.; Ginting, H. K.

    2017-09-01

    The goal of this research is to produce oleoresin from basil leaves (Ocimum canum) by using the soxhletation method with ethyl acetate as the solvent. Basil is commonly used in culinary applications as a fresh vegetable. Basil contains essential oils and oleoresin that are used as a flavouring agent in food, in cosmetics, and as an ingredient in traditional medicine. The extraction method commonly used to obtain oleoresin is maceration, whose drawback is that it requires a large amount of solvent and a long extraction time. To resolve this problem and produce more oleoresin, we used the soxhletation method with varying combinations of extraction time and material-to-solvent ratio. The analysis covered yield, density, refractive index, and essential oil content. The best treatment for basil leaf oleoresin extraction is a material-to-solvent ratio of 1:6 (w/v) with a 6-hour extraction time. Under these conditions, the basil oleoresin has a yield of 20.152%, a density of 0.9688 g/cm3, a refractive index of 1.502, and an essential oil content of 15.77%, and the colour of the oleoresin product is dark green.

  4. A statistical evaluation of formation disturbance produced by well- casing installation methods

    USGS Publications Warehouse

    Morin, R.H.; LeBlanc, D.R.; Teasdale, W.E.

    1988-01-01

    Water-resources investigations concerned with contaminant transport through aquifers composed of very loose, unconsolidated sediments have shown that small-scale variations in aquifer characteristics can significantly affect solute transport and dispersion. Commonly, measurement accuracy and resolution have been limited by a borehole environment consisting of an annulus of disturbed sediments produced by the casing-installation method. In an attempt to quantify this disturbance and recognize its impact on the characterization of unconsolidated deposits, three installation methods were examined and compared in a sand-and-gravel outwash at a test site on Cape Cod, Massachusetts. These installation methods were: 1) casing installed in a mud-rotary hole; 2) casing installed in an augered hole; and 3) flush-joint steel casing hammer-driven from land surface. Fifteen wells were logged with epithermal neutron and natural gamma tools. The study concludes that augering is the most disruptive of the three casing-installation methods and that driving casing directly, though typically a more time-consuming operation, transmits the least amount of disturbance into the surrounding formation. -from Authors

  5. Finding Imaging Patterns of Structural Covariance via Non-Negative Matrix Factorization

    PubMed Central

    Sotiras, Aristeidis; Resnick, Susan M.; Davatzikos, Christos

    2015-01-01

    In this paper, we investigate the use of Non-Negative Matrix Factorization (NNMF) for the analysis of structural neuroimaging data. The goal is to identify the brain regions that co-vary across individuals in a consistent way, hence potentially being part of underlying brain networks or otherwise influenced by underlying common mechanisms such as genetics and pathologies. NNMF offers a directly data-driven way of extracting relatively localized co-varying structural regions, thereby transcending limitations of Principal Component Analysis (PCA), Independent Component Analysis (ICA) and other related methods that tend to produce dispersed components of positive and negative loadings. In particular, leveraging upon the well known ability of NNMF to produce parts-based representations of image data, we derive decompositions that partition the brain into regions that vary in consistent ways across individuals. Importantly, these decompositions achieve dimensionality reduction via highly interpretable ways and generalize well to new data as shown via split-sample experiments. We empirically validate NNMF in two data sets: i) a Diffusion Tensor (DT) mouse brain development study, and ii) a structural Magnetic Resonance (sMR) study of human brain aging. We demonstrate the ability of NNMF to produce sparse parts-based representations of the data at various resolutions. These representations seem to follow what we know about the underlying functional organization of the brain and also capture some pathological processes. Moreover, we show that these low dimensional representations favorably compare to descriptions obtained with more commonly used matrix factorization methods like PCA and ICA. PMID:25497684
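    A minimal sketch of the parts-based factorization described above, using scikit-learn's NMF on a synthetic non-negative "subjects × voxels" matrix (the study's own neuroimaging pipeline and data are not reproduced here):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
parts = rng.random((3, 50))        # 3 latent non-negative "region" maps
weights = rng.random((20, 3))      # 20 subjects' loadings on those maps
X = weights @ parts                # observed subjects-by-voxels data

model = NMF(n_components=3, init="nndsvd", max_iter=1000, random_state=0)
W = model.fit_transform(X)         # recovered subject loadings, >= 0
H = model.components_              # recovered region maps, >= 0
```

    Because every entry of W and H is constrained to be non-negative, the components add up rather than cancel, which is what yields the localized, parts-based regions contrasted with PCA and ICA in the abstract.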

  6. High-resolution geological mapping at 3D Environments: A case study from the fold-and-thrust belt in northern Taiwan

    NASA Astrophysics Data System (ADS)

    Chan, Y. C.; Shih, N. C.; Hsieh, Y. C.

    2016-12-01

    Geologic maps have provided fundamental information for many scientific and engineering applications in human societies. Geologic maps directly influence the reliability of research results or the robustness of engineering projects. In the past, geologic maps were mainly produced by field geologists through direct field investigations and 2D topographic maps. However, the quality of traditional geologic maps was significantly compromised by field conditions, particularly, when the map area is covered by heavy forest canopies. Recent developments in airborne LiDAR technology may virtually remove trees or buildings, thus, providing a useful data set for improving geological mapping. Because high-quality topographic information still needs to be interpreted in terms of geology, there are many fundamental questions regarding how to best apply the data set for high-resolution geological mapping. In this study, we aim to test the quality and reliability of high-resolution geologic maps produced by recent technological methods through an example from the fold-and-thrust belt in northern Taiwan. We performed the geological mapping by applying the LiDAR-derived DEM, self-developed program tools and many layers of relevant information at interactive 3D environments. Our mapping results indicate that the proposed methods will considerably improve the quality and consistency of the geologic maps. The study also shows that in order to gain consistent mapping results, future high-resolution geologic maps should be produced at interactive 3D environments on the basis of existing geologic maps.

  7. Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kutepov, A. L.; Oudovenko, V. S.; Kotliar, G.

    We present a code implementing the linearized self-consistent quasiparticle GW method (QSGW) in the LAPW basis. Our approach is based on the linearization of the self-energy around zero frequency, which distinguishes it from the existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis. This results in efficiency gains by switching to the imaginary time representation in the same way as in the space time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N³ scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals and show that our code produces results very close to the previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method.

  8. Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals

    DOE PAGES

    Kutepov, A. L.; Oudovenko, V. S.; Kotliar, G.

    2017-06-23

    We present a code implementing the linearized self-consistent quasiparticle GW method (QSGW) in the LAPW basis. Our approach is based on the linearization of the self-energy around zero frequency, which distinguishes it from the existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis. This results in efficiency gains by switching to the imaginary time representation in the same way as in the space time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N³ scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals and show that our code produces results very close to the previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method.
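    The Matsubara grid mentioned above is straightforward to construct; as a sketch, using the standard fermionic convention with ħ = k_B = 1 (an illustration of the grid itself, not code from the authors' LAPW implementation):

```python
import numpy as np

def fermionic_matsubara(T, n_max):
    """Fermionic Matsubara frequencies w_n = (2n + 1) * pi * T:
    the uniformly spaced imaginary-frequency grid on which a
    linearized self-energy can be evaluated instead of the real axis."""
    n = np.arange(n_max)
    return (2 * n + 1) * np.pi * T

w = fermionic_matsubara(T=0.01, n_max=4)
# the grid starts at pi*T and is uniformly spaced by 2*pi*T
```

    Working on this uniform imaginary-frequency grid is what permits the switch to the imaginary-time representation and the space-time-method efficiency gains the abstract describes.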

  9. Rapid detection of Shigella and enteroinvasive Escherichia coli in produce enrichments by a conventional multiplex PCR assay.

    PubMed

    Binet, Rachel; Deer, Deanne M; Uhlfelder, Samantha J

    2014-06-01

    Faster detection of contaminated foods can prevent adulterated foods from being consumed and minimize the risk of an outbreak of foodborne illness. A sensitive molecular detection method is especially important for Shigella because ingestion of as few as 10 of these bacterial pathogens can cause disease. The objectives of this study were to compare the ability of four DNA extraction methods to detect Shigella in six types of produce, post-enrichment, and to evaluate a new and rapid conventional multiplex assay that targets the Shigella ipaH, virB and mxiC virulence genes. This assay can detect fewer than two Shigella cells in pure culture, even when the pathogen is mixed with background microflora, and it can also differentiate natural Shigella strains from a control strain and eliminate false positive results due to accidental laboratory contamination. The four DNA extraction methods (boiling, PrepMan Ultra [Applied Biosystems], InstaGene Matrix [Bio-Rad], DNeasy Tissue kit [Qiagen]) detected 1.6 × 10³ Shigella CFU/ml post-enrichment, corresponding to ∼18 doublings from a single cell in 25 g of produce pre-enrichment. Lower sensitivity was obtained, depending on produce type and extraction method. The InstaGene Matrix was the most consistent and sensitive, and the multiplex assay accurately detected Shigella in less than 90 min, outperforming, to the best of our knowledge, molecular assays currently in place for this pathogen. Published by Elsevier Ltd.
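    The ∼18-doublings figure can be sanity-checked with a quick calculation. The 225 mL enrichment volume below is an assumption (the conventional 1:9 sample-to-broth ratio for a 25 g sample); it is not stated in the abstract:

```python
import math

detection_cfu_per_ml = 1.6e3        # post-enrichment detection limit from the study
enrichment_volume_ml = 225.0        # assumed broth volume for a 25 g produce sample
final_cells = detection_cfu_per_ml * enrichment_volume_ml
doublings = math.log2(final_cells)  # growth from a single starting cell
print(round(doublings, 1))          # ~18.5, consistent with the ~18 doublings cited
```

    The agreement suggests the abstract's figure is simply the binary log of the total cell count at the detection limit.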

  10. A practical material decomposition method for x-ray dual spectral computed tomography.

    PubMed

    Hu, Jingjing; Zhao, Xing

    2016-03-17

    X-ray dual spectral CT (DSCT) scans the measured object with two different x-ray spectra, and the acquired rawdata can be used to perform the material decomposition of the object. Direct calibration methods allow a faster material decomposition for DSCT and can be separated in two groups: image-based and rawdata-based. The image-based method is an approximative method, and beam hardening artifacts remain in the resulting material-selective images. The rawdata-based method generally obtains better image quality than the image-based method, but this method requires geometrically consistent rawdata. However, today's clinical dual energy CT scanners usually measure different rays for different energy spectra and acquire geometrically inconsistent rawdata sets, and thus cannot meet the requirement. This paper proposes a practical material decomposition method to perform rawdata-based material decomposition in the case of inconsistent measurement. This method first yields the desired consistent rawdata sets from the measured inconsistent rawdata sets, and then employs rawdata-based technique to perform material decomposition and reconstruct material-selective images. The proposed method was evaluated by use of simulated FORBILD thorax phantom rawdata and dental CT rawdata, and simulation results indicate that this method can produce highly quantitative DSCT images in the case of inconsistent DSCT measurements.
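    A minimal sketch of the resampling idea behind the first step, interpolating one spectrum's projections onto the other's detector positions so that each ray has a geometrically consistent (low, high) pair (an assumed simplification for illustration, not the authors' algorithm):

```python
import numpy as np

def make_consistent(s_low, pos_low, s_high, pos_high):
    """Resample the high-energy projection values onto the low-energy
    detector positions, yielding geometrically consistent rawdata pairs
    that a rawdata-based decomposition can then consume."""
    s_high_on_low = np.interp(pos_low, pos_high, s_high)
    return np.stack([s_low, s_high_on_low])

pos_low = np.linspace(0.0, 1.0, 64)            # low-kV detector positions
pos_high = np.linspace(0.0, 1.0, 48) + 0.004   # offset, coarser high-kV sampling
profile = lambda x: np.exp(-3.0 * (x - 0.5) ** 2)  # smooth projection profile
pairs = make_consistent(profile(pos_low), pos_low, profile(pos_high), pos_high)
```

    For smooth projections the interpolation error is small; a real implementation would of course resample in the scanner's actual fan- or cone-beam geometry.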

  11. Cupping - is it reproducible? Experiments about factors determining the vacuum.

    PubMed

    Huber, R; Emerich, M; Braeunig, M

    2011-04-01

    Cupping is a traditional method for treating pain that is now being investigated in clinical studies. Because the methods for producing the vacuum vary considerably, we tested their reproducibility. In a first set of experiments (study 1), four methods for producing the vacuum (lighter flame 2 cm (LF1), lighter flame 4 cm (LF2), alcohol flame (AF), and mechanical suction with a balloon (BA)) were compared in 50 trials each. The cupping glass was fitted with an outlet and stop-cock, and the vacuum was measured with a pressure gauge after the cup was set on a soft rubber pad. In a second series of experiments (study 2) we investigated the stability of pressures in 20 consecutive trials by two experienced cupping practitioners and ten beginners using method AF. In study 1 all four methods yielded consistent pressures. Large differences in magnitude were, however, observed between methods (mean pressures -200±30 hPa with LF1, -310±30 hPa with LF2, -560±30 hPa with AF, and -270±16 hPa with BA). With method BA the standard deviation was reduced by a factor of 2 compared to the flame methods. In study 2, beginners had considerably more difficulty obtaining a stable pressure than advanced cupping practitioners, showing a distinct learning curve before reaching expert levels after about 10-20 trials. Cupping is reproducible if the exact method is described in detail. Mechanical suction with a balloon has the best reproducibility. Beginners need at least 10-20 trials to produce stable pressures. Copyright © 2010 Elsevier Ltd. All rights reserved.

  12. Sol-gel synthesis and densification of aluminoborosilicate powders. Part 1: Synthesis

    NASA Technical Reports Server (NTRS)

    Bull, Jeffrey; Selvaduray, Guna; Leiser, Daniel

    1992-01-01

    Aluminoborosilicate powders high in alumina content were synthesized by the sol-gel process utilizing various methods of preparation. Properties and microstructural effects related to these syntheses were examined. After heating to 600 °C for 2 h in flowing air, the powders were amorphous, with the metal oxides comprising 87 percent of the weight and uncombusted organics the remainder. DTA of dried powders revealed a Tg at approximately 835 °C and an exotherm near 900 °C due to crystallization. Powders derived from aluminum sec-butoxide consisted of particles with a mean diameter 5 microns less than those from aluminum isopropoxide. Powders synthesized with aluminum isopropoxide produced agglomerates comprised of rod-shaped particulates, while powders made with the sec-butoxide precursor produced irregular glassy shards. Compacts formed from these powders required different loadings for equivalent densities according to the method of synthesis.

  13. Noble metal nanostructures for double plasmon resonance with tunable properties

    NASA Astrophysics Data System (ADS)

    Petr, M.; Kylián, O.; Kuzminova, A.; Kratochvíl, J.; Khalakhan, I.; Hanuš, J.; Biederman, H.

    2017-02-01

    We report and compare two vacuum-based strategies to produce Ag/Au materials characterized by double plasmon resonance peaks: magnetron sputtering and a method based on gas aggregation sources (GAS) of nanoparticles. It was observed that double plasmon resonance peaks may be achieved by both of these methods and that the intensities of the individual localized surface plasmon resonance peaks may be tuned by the deposition conditions. However, in the case of sputter deposition it was necessary to introduce a separating dielectric interlayer between the individual Ag and Au nanoparticle films, which was not necessary for films prepared by the GAS system. The differences in the optical properties of sputter-deposited bimetallic Ag/Au films and of coatings consisting of individual Ag and Au nanoparticles produced by GAS are ascribed to the diverse mechanisms of nanoparticle formation.

  14. Racial determination of origin of mourning doves in hunters' bags

    USGS Publications Warehouse

    Aldrich, J.W.; Duvall, A.J.; Geis, A.D.

    1958-01-01

    A method is described for determining the general area of production of mourning doves that are shot during the hunting season. This procedure is based on the identification of racial characteristics that can be ascertained from samples of wings. The application and utility of this method were demonstrated with data gathered from doves shot in two important dove-hunting areas, Georgia and Texas. The Georgia data indicated that the early-season kill consisted largely of the race carolinensis, while later in the season marginella and the intermediate form made up most of the bag. The wing samples from Texas indicated that birds produced outside of southern Texas made up the largest part of the bag in southern Texas during the entire season. Doves produced west of the Appalachians contributed to both the Georgia and Texas kill.

  15. Information About Cost of Goods Produced and its Usefulness for Production Engineers - A Case of SME

    NASA Astrophysics Data System (ADS)

    Maruszewska, Ewa Wanda; Strojek-Filus, Marzena; Drábková, Zita

    2017-12-01

    The article stresses the consequences of simplifications implemented in the measurement process of goods produced, which are of crucial importance to production engineers in SMEs. The authors show the variety of measurement choices available to financial employees, together with the probable outcomes in terms of valuation distortions. Using a case study, the authors emphasize the importance of close cooperation between production engineers and finance professionals, as the outputs of finance departments constitute an important input for the decision-making process of production managers. Furthermore, the demonstrated deficiencies of the methods applicable in financial reporting for measuring the value of goods produced indicate the need to incorporate more financial and non-financial data when judging the final cost of goods produced, as the simplifications applied in SMEs distort the financial information provided to production engineers.

  16. Method for forming biaxially textured articles by powder metallurgy

    DOEpatents

    Goyal, Amit; Williams, Robert K.; Kroeger, Donald M.

    2002-01-01

    A method of preparing a biaxially textured alloy article comprises the steps of preparing a mixture comprising Ni powder and at least one powder selected from the group consisting of Cr, W, V, Mo, Cu, Al, Ce, YSZ, Y, rare earths (RE), MgO, CeO2, and Y2O3; compacting the mixture; followed by heat treating and rapidly recrystallizing to produce a biaxial texture on the article. In some embodiments the alloy article further comprises electromagnetic or electro-optical devices and possesses superconducting properties.

  17. MEANS AND METHOD FOR PRODUCING A VACUUM

    DOEpatents

    Otavka, M.A.

    1960-08-01

    A new method is given for starting the operation of evapor-ion vacuum pumps. Ordinarily this type of pump is started by inducing an electric field within the vacuum chamber; however, placing such a field in the chamber at the outset may initiate a glow discharge that is harmful to the pump. The procedure consists of applying a negative electric field, during which time only gettering action takes place; subsequently, when the field is reversed after a sufficient reduction in the number of gaseous particles in the chamber, both gettering and ionization take place.

  18. Optimization of Statistical Methods Impact on Quantitative Proteomics Data.

    PubMed

    Pursiheimo, Anna; Vehmas, Anni P; Afzal, Saira; Suomi, Tomi; Chand, Thaman; Strauss, Leena; Poutanen, Matti; Rokka, Anne; Corthals, Garry L; Elo, Laura L

    2015-10-02

    As tools for quantitative label-free mass spectrometry (MS) rapidly develop, a consensus about the best practices is not apparent. In the work described here we compared popular statistical methods for detecting differential protein expression from quantitative MS data using both controlled experiments with known quantitative differences for specific proteins used as standards as well as "real" experiments where differences in protein abundance are not known a priori. Our results suggest that data-driven reproducibility-optimization can consistently produce reliable differential expression rankings for label-free proteome tools and are straightforward in their application.

  19. High conductivity composite metal

    DOEpatents

    Zhou, R.; Smith, J.L.; Embury, J.D.

    1998-01-06

    Electrical conductors and methods of producing them are disclosed, where the conductors possess both high strength and high conductivity. Conductors are comprised of carbon steel and a material chosen from a group consisting of copper, nickel, silver, and gold. Diffusion barriers are placed between these two materials. The components of a conductor are assembled and then the assembly is subjected to heat treating and mechanical deformation steps. 10 figs.

  20. Multifunctional Thermal Structures Using Cellular Contact-Aided Compliant Mechanisms

    DTIC Science & Technology

    2016-10-31

    control. During this research effort, designs of increasing sophistication consistently outstripped the ability to fabricate them. Basic questions... using non-dimensional models... In continuing design research, a topology optimization approach was crafted to maximize the thermal performance of the... methods could conceivably produce the elegant but complex material and geometric designs contemplated. Continued research is needed to improve the

  1. Multifunctional Thermal Structures Using Cellular Contact-Aided Compliant Mechanisms

    DTIC Science & Technology

    2017-01-26

    control. During this research effort, designs of increasing sophistication consistently outstripped the ability to fabricate them. Basic questions... using non-dimensional models... In continuing design research, a topology optimization approach was crafted to maximize the thermal performance of the... methods could conceivably produce the elegant but complex material and geometric designs contemplated. Continued research is needed to improve the

  2. Continuous wasteless ecologically safe technology of propylenecarbonate production in presence of phthalocyanine catalysts

    DOEpatents

    Afanasiev, Vladimir Vasilievich [Moscow, RU; Zefirov, Nikolai Serafimovich [Moscow, RU; Zalepugin, Dmitry Yurievich [Moscow, RU; Polyakov, Victor Stanislavovich [Moscow, RU; Tilkunova, Nataliya Alexandrovna [Moscow, RU; Tomilova, Larisa Godvigovna [Moscow, RU

    2009-09-08

    A continuous method of producing propylenecarbonate includes carboxylation of propylene oxide with carbon dioxide in the presence of a phthalocyanine catalyst on an inert carrier, using as the phthalocyanine catalyst at least one catalyst selected from the group consisting of unsubstituted and methyl-, ethyl-, butyl-, and tert-butyl-substituted metal phthalocyanines, including those containing counterions, and using as the carrier a hydrophobic carrier.

  3. The structure and transformation of the nanomineral schwertmannite: a synthetic analog representative of field samples

    NASA Astrophysics Data System (ADS)

    French, Rebecca A.; Monsegue, Niven; Murayama, Mitsuhiro; Hochella, Michael F.

    2014-04-01

    The phase transformation of schwertmannite, an iron oxyhydroxide sulfate nanomineral synthesized at room temperature and at 75 °C using H2O2 to drive the precipitation of schwertmannite from ferrous sulfate (Regenspurg et al. in Geochim Cosmochim Acta 68:1185-1197, 2004), was studied using high-resolution transmission electron microscopy. The results of this study suggest that schwertmannite synthesized using this method should not be described as a single phase with a repeating unit cell, but as a polyphasic nanomineral with crystalline areas spanning less than a few nanometers in diameter, within a characteristic 'pin-cushion'-like amorphous matrix. The difference in synthesis temperature affected the density of the needles on the schwertmannite surface. The needles on the higher-temperature schwertmannite displayed a dendritic morphology, whereas the needles on the room-temperature schwertmannite were more closely packed. Visible lattice fringes in the schwertmannite samples are consistent with the powder X-ray diffraction (XRD) pattern taken on the bulk schwertmannite and also matched d-spacings for goethite, indicating a close structural relationship between schwertmannite and goethite. The incomplete transformation from schwertmannite to goethite over 24 h at 75 °C was tracked using XRD and TEM. TEM images suggest that the sample collected after 24 h consists of aggregates of goethite nanocrystals. Comparing the synthetic schwertmannite in this study to a study on schwertmannite produced at 85 °C, which used ferric sulfate, reveals that synthesis conditions can result in significant differences in needle crystal structure. The bulk powder XRD patterns for the schwertmannite produced by these two methods were indistinguishable from one another. Future studies using synthetic schwertmannite should account for these differences when determining schwertmannite's structure, reactivity, and capacity to take up elements like arsenic.
The Regenspurg et al. method produces schwertmannite whose structure and morphology are consistent with those of natural schwertmannite observed by XRD and TEM in our previous study, making it an ideal synthetic method for laboratory-based mineralogical and geochemical studies that aim to be environmentally relevant.

  4. Simultaneous Gaussian and exponential inversion for improved analysis of shales by NMR relaxometry

    USGS Publications Warehouse

    Washburn, Kathryn E.; Anderssen, Endre; Vogt, Sarah J.; Seymour, Joseph D.; Birdwell, Justin E.; Kirkland, Catherine M.; Codd, Sarah L.

    2014-01-01

    Nuclear magnetic resonance (NMR) relaxometry is commonly used to provide lithology-independent porosity and pore-size estimates for petroleum resource evaluation based on fluid-phase signals. However, in shales, substantial hydrogen content is associated with both solids and fluids, and signals from both may be detected. Depending on the motional regime, the signal from the solids may be best described using either exponential or Gaussian decay functions. When the inverse Laplace transform, the standard method for analysis of NMR relaxometry results, is applied to data containing Gaussian decays, this can lead to physically unrealistic responses such as signal or porosity overcall and relaxation times that are too short to be determined using the applied instrument settings. We apply a new simultaneous Gaussian-Exponential (SGE) inversion method to simulated data and measured results obtained on a variety of oil shale samples. The SGE inversion produces more physically realistic results than the inverse Laplace transform and displays more consistent relaxation behavior at high magnetic field strengths. Residuals for the SGE inversion are consistently lower than for the inverse Laplace method and signal overcall at short T2 times is mitigated. Beyond geological samples, the method can also be applied in other fields where the sample relaxation consists of both Gaussian and exponential decays, for example in the material, medical, and food sciences.
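
    The core idea of the SGE inversion can be sketched in a few lines: model the measured decay as a linear combination of exponential kernels exp(-t/T2) and Gaussian kernels exp(-(t/T2)^2) on a grid of candidate relaxation times, then solve for the amplitudes. Everything below (time base, T2 grid, amplitudes) is hypothetical, and plain least squares stands in for the regularized, non-negative inversion an NMR processing code would actually use.

```python
import numpy as np

def sge_design_matrix(t, T2_grid):
    """Columns: exponential decays exp(-t/T2) followed by Gaussian decays exp(-(t/T2)^2)."""
    exp_cols = np.exp(-t[:, None] / T2_grid[None, :])
    gauss_cols = np.exp(-(t[:, None] / T2_grid[None, :]) ** 2)
    return np.hstack([exp_cols, gauss_cols])

# Synthetic decay (hypothetical): a fluid-like exponential component (T2 = 50 ms)
# plus a solid-like Gaussian component (T2 = 10 ms).
t = np.linspace(0.1, 200.0, 400)                   # time, ms
signal = 1.0 * np.exp(-t / 50.0) + 0.5 * np.exp(-(t / 10.0) ** 2)

T2_grid = np.array([10.0, 50.0, 120.0])            # candidate relaxation times, ms
A = sge_design_matrix(t, T2_grid)
amps, *_ = np.linalg.lstsq(A, signal, rcond=None)  # joint Gaussian + exponential fit
# amps[:3] are exponential amplitudes, amps[3:] are Gaussian amplitudes.
```

Because both decay families are fit simultaneously, the solid-like Gaussian signal is not forced onto unphysically short exponential components, which is the overcall failure mode the abstract describes.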

  5. Beam Characterization at the Neutron Radiography Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarah Morgan; Jeffrey King

    The quality of a neutron imaging beam directly impacts the quality of radiographic images produced using that beam. Fully characterizing a neutron beam, including determination of the beam’s effective length-to-diameter ratio, neutron flux profile, energy spectrum, image quality, and beam divergence, is vital for producing quality radiographic images. This project characterized the east neutron imaging beamline at the Idaho National Laboratory Neutron Radiography Reactor (NRAD). The experiments which measured the beam’s effective length-to-diameter ratio and image quality are based on American Society for Testing and Materials (ASTM) standards. An analysis of the image produced by a calibrated phantom measured the beam divergence. The energy spectrum measurements consist of a series of foil irradiations using a selection of activation foils, compared to the results produced by a Monte Carlo N-Particle (MCNP) model of the beamline. Improvement of the existing NRAD MCNP beamline model includes validation of the model’s energy spectrum and the development of enhanced image simulation methods. The image simulation methods predict the radiographic image of an object based on the foil reaction rate data obtained by placing a model of the object in front of the image plane in an MCNP beamline model.

  6. Methods for estimating population coverage of mass distribution programmes: a review of practices in relation to trachoma control.

    PubMed

    Cromwell, Elizabeth A; Ngondi, Jeremiah; McFarland, Deborah; King, Jonathan D; Emerson, Paul M

    2012-10-01

    In the context of trachoma control, population coverage with mass drug administration (MDA) using antibiotics is measured using routine data. Due to the limitations of administrative records as well as the potential for bias from incomplete or incorrect records, a literature review of coverage survey methods applied in neglected tropical disease control programmes and immunisation outreach was conducted to inform the design of coverage surveys for trachoma control. Several methods were identified, including the '30 × 7' survey method for the Expanded Programme on Immunization (EPI 30×7), other cluster random sampling (CRS) methods, lot quality assurance sampling (LQAS), purposive sampling and routine data. When compared against one another, the EPI and other CRS methods produced similar population coverage estimates, whilst LQAS, purposive sampling and use of administrative data did not generate estimates consistent with CRS. In conclusion, CRS methods present a consistent approach for MDA coverage surveys despite different methods of household selection. They merit use until standard guidelines are available. CRS methods should be used to verify population coverage derived from LQAS, purposive sampling methods and administrative reports. Copyright © 2012 Royal Society of Tropical Medicine and Hygiene. Published by Elsevier Ltd. All rights reserved.
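
    The arithmetic behind a cluster-random-sampling coverage estimate is simple to illustrate. The sketch below uses invented cluster counts (not data from the review) and computes the cluster-level mean coverage with a between-cluster standard error, which is the uncertainty measure that administrative records and purposive samples cannot supply in comparable form.

```python
import math

# Hypothetical MDA coverage survey: (treated, surveyed) per cluster, equal cluster sizes.
clusters = [(27, 30), (24, 30), (18, 30), (29, 30), (21, 30), (26, 30), (23, 30)]

p_i = [c / n for c, n in clusters]             # per-cluster coverage proportions
k = len(p_i)
p_hat = sum(p_i) / k                           # cluster-level mean coverage
# Between-cluster standard error: captures the extra variance due to clustering.
se = math.sqrt(sum((p - p_hat) ** 2 for p in p_i) / (k * (k - 1)))
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)    # approximate 95% confidence interval
```

With unequal cluster sizes a ratio estimator with weights would be needed, but the cluster-level variance idea is the same.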

  7. Matrix method for acoustic levitation simulation.

    PubMed

    Andrade, Marco A B; Perez, Nicolas; Buiochi, Flavio; Adamowski, Julio C

    2011-08-01

    A matrix method is presented for simulating acoustic levitators. A typical acoustic levitator consists of an ultrasonic transducer and a reflector. The matrix method is used to determine the potential for acoustic radiation force that acts on a small sphere in the standing wave field produced by the levitator. The method is based on the Rayleigh integral and it takes into account the multiple reflections that occur between the transducer and the reflector. The potential for acoustic radiation force obtained by the matrix method is validated by comparing the matrix method results with those obtained by the finite element method when using an axisymmetric model of a single-axis acoustic levitator. After validation, the method is applied in the simulation of a noncontact manipulation system consisting of two 37.9-kHz Langevin-type transducers and a plane reflector. The manipulation system allows control of the horizontal position of a small levitated sphere from -6 mm to 6 mm, which is done by changing the phase difference between the two transducers. The horizontal position of the sphere predicted by the matrix method agrees with the horizontal positions measured experimentally with a charge-coupled device camera. The main advantage of the matrix method is that it allows simulation of non-symmetric acoustic levitators without requiring much computational effort.

  8. The Joker: A custom Monte Carlo sampler for binary-star and exoplanet radial velocity data

    NASA Astrophysics Data System (ADS)

    Price-Whelan, Adrian M.; Hogg, David W.; Foreman-Mackey, Daniel; Rix, Hans-Walter

    2017-01-01

    Given sparse or low-quality radial-velocity measurements of a star, there are often many qualitatively different stellar or exoplanet companion orbit models that are consistent with the data. The consequent multimodality of the likelihood function leads to extremely challenging search, optimization, and MCMC posterior sampling over the orbital parameters. The Joker is a custom-built Monte Carlo sampler that can produce a posterior sampling for orbital parameters given sparse or noisy radial-velocity measurements, even when the likelihood function is poorly behaved. The method produces correct samplings in orbital parameters for data that include as few as three epochs. The Joker can therefore be used to produce proper samplings of multimodal pdfs, which are still highly informative and can be used in hierarchical (population) modeling.
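
    The Joker's core trick, drawing the nonlinear orbital parameters from their priors, fitting the linear parameters in closed form for each draw, and keeping draws via a likelihood-based rejection step, can be illustrated in miniature. The sketch below uses a circular-orbit model and made-up epochs; the real sampler handles full Keplerian orbits and marginalizes the linear parameters analytically.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sparse data: 3 RV epochs from a circular orbit, noiseless for clarity.
t_obs = np.array([0.0, 3.3, 7.1])                       # days
P_true, K_true, v0_true, phi_true = 10.0, 30.0, 5.0, 0.7
v_obs = v0_true + K_true * np.sin(2 * np.pi * t_obs / P_true + phi_true)
sigma = 0.5                                             # assumed measurement uncertainty

def chi2_for(P, phi):
    """With (P, phi) fixed, v = v0 + K*sin(...) is linear in (v0, K): fit in closed form."""
    s = np.sin(2 * np.pi * t_obs / P + phi)
    A = np.column_stack([np.ones_like(t_obs), s])
    coef, *_ = np.linalg.lstsq(A, v_obs, rcond=None)
    r = v_obs - A @ coef
    return np.sum((r / sigma) ** 2)

# Rejection step: draw the nonlinear parameters from their priors, keep by likelihood.
n_trials = 5000
P_trial = rng.uniform(1.0, 100.0, n_trials)
phi_trial = rng.uniform(0.0, 2 * np.pi, n_trials)
chi2 = np.array([chi2_for(P, phi) for P, phi in zip(P_trial, phi_trial)])
accept = rng.random(n_trials) < np.exp(-0.5 * (chi2 - chi2.min()))
posterior_P = P_trial[accept]                           # typically multimodal in period
```

With only three epochs many periods fit the data, so the accepted periods form the kind of multimodal posterior the abstract describes.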

  9. Microfluidic model of the platelet-generating organ: beyond bone marrow biomimetics

    PubMed Central

    Blin, Antoine; Le Goff, Anne; Magniez, Aurélie; Poirault-Chassac, Sonia; Teste, Bruno; Sicot, Géraldine; Nguyen, Kim Anh; Hamdi, Feriel S.; Reyssat, Mathilde; Baruch, Dominique

    2016-01-01

    We present a new, rapid method for producing blood platelets in vitro from cultured megakaryocytes based on a microfluidic device. This device consists of a wide array of VWF-coated micropillars. Such pillars act as anchors on megakaryocytes, allowing them to remain trapped in the device and subjected to hydrodynamic shear. The combined effect of anchoring and shear induces the elongation of megakaryocytes and finally their rupture into platelets and proplatelets. This process was observed with megakaryocytes from different origins and found to be robust. This original bioreactor design allows megakaryocytes to be processed at high throughput (millions per hour). Since platelets are produced in such a large amount, their extensive biological characterisation is possible and shows that platelets produced in this bioreactor are functional. PMID:26898346

  10. Method for measuring and controlling beam current in ion beam processing

    DOEpatents

    Kearney, Patrick A.; Burkhart, Scott C.

    2003-04-29

    A method for producing film thickness control of ion beam sputter deposition films. Great improvements in film thickness control are achieved by keeping the total current supplied to both the beam and suppressor grids of a radio-frequency (RF) ion beam source constant, rather than just the current supplied to the beam grid. By controlling both currents with this method, deposition rates are more stable, allowing the deposition of layers with thicknesses controlled to about 0.1%. The method is carried out by calculating deposition rates based on the total of the suppressor and beam currents and maintaining that total constant by adjusting the RF power, which gives more consistent values.
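
    A toy feedback loop illustrates the control idea: meter the combined beam-plus-suppressor current and trim the RF power to hold that total constant. The plant model, gains, and numbers below are invented for illustration and are not from the patent.

```python
class IonSourceSim:
    """Toy stand-in (invented) for an RF ion beam source: total grid current
    rises linearly with RF power and sags under a slow source drift."""
    def __init__(self, rf_power=100.0):
        self.rf_power = rf_power
        self._drift = 0.0

    def total_current_mA(self):
        """Combined beam + suppressor grid current, as the method meters it."""
        self._drift += 0.001              # slow drift accumulates per reading
        return 0.02 * self.rf_power - self._drift

def hold_total_current(src, target_mA, gain=5.0, steps=500):
    """Proportional control on the *total* current: trim RF power each step."""
    for _ in range(steps):
        error = target_mA - src.total_current_mA()
        src.rf_power += gain * error
    return src.total_current_mA()

src = IonSourceSim()
final_mA = hold_total_current(src, target_mA=2.5)
```

Regulating the total (rather than only the beam-grid current) means the controller compensates drift wherever it appears, which is the stability benefit the patent claims.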

  11. A new standardized method for quantification of humic and fulvic acids in humic ores and commercial products.

    PubMed

    Lamar, Richard T; Olk, Daniel C; Mayhew, Lawrence; Bloom, Paul R

    2014-01-01

    Increased use of humic substances in agriculture has generated intense interest among producers, consumers, and regulators for an accurate and reliable method to quantify humic acid (HA) and fulvic acid (FA) in raw ores and products. Here we present a thoroughly validated method, the new standardized method for determination of HA and FA contents in raw humate ores and in solid and liquid products produced from them. The methods used for preparation of HA and FA were adapted according to the guidelines of the International Humic Substances Society involving alkaline extraction followed by acidification to separate HA from the fulvic fraction. This is followed by separation of FA from the fulvic fraction by adsorption on a nonionic macroporous acrylic ester resin at acid pH. It differs from previous methods in that it determines HA and FA concentrations gravimetrically on an ash-free basis. Critical steps in the method, e.g., initial test portion mass, test portion to extract volume ratio, extraction time, and acidification of alkaline extract, were optimized for maximum and consistent recovery of HA and FA. The method detection limits for HA and FA were 4.62 and 4.8 mg/L, respectively. The method quantitation limits for HA and FA were 14.7 and 15.3 mg/L, respectively.

  12. Identification and Remediation of Phonological and Motor Errors in Acquired Sound Production Impairment

    PubMed Central

    Gagnon, Bernadine; Miozzo, Michele

    2017-01-01

    Purpose This study aimed to test whether an approach to distinguishing errors arising in phonological processing from those arising in motor planning also predicts the extent to which repetition-based training can lead to improved production of difficult sound sequences. Method Four individuals with acquired speech production impairment who produced consonant cluster errors involving deletion were examined using a repetition task. We compared the acoustic details of productions with deletion errors in target consonant clusters to singleton consonants. Changes in accuracy over the course of the study were also compared. Results Two individuals produced deletion errors consistent with a phonological locus of the errors, and 2 individuals produced errors consistent with a motoric locus of the errors. The 2 individuals who made phonologically driven errors showed no change in performance on a repetition training task, whereas the 2 individuals with motoric errors improved in their production of both trained and untrained items. Conclusions The results extend previous findings about a metric for identifying the source of sound production errors in individuals with both apraxia of speech and aphasia. In particular, this work may provide a tool for identifying predominant error types in individuals with complex deficits. PMID:28655044

  13. iGLASS: An Improvement to the GLASS Method for Estimating Species Trees from Gene Trees

    PubMed Central

    Rosenberg, Noah A.

    2012-01-01

    Several methods have been designed to infer species trees from gene trees while taking into account gene tree/species tree discordance. Although some of these methods provide consistent species tree topology estimates under a standard model, most either do not estimate branch lengths or are computationally slow. An exception, the GLASS method of Mossel and Roch, is consistent for the species tree topology, estimates branch lengths, and is computationally fast. However, GLASS systematically overestimates divergence times, leading to biased estimates of species tree branch lengths. By assuming a multispecies coalescent model in which multiple lineages are sampled from each of two taxa at L independent loci, we derive the distribution of the waiting time until the first interspecific coalescence occurs between the two taxa, considering all loci and measuring from the divergence time. We then use the mean of this distribution to derive a correction to the GLASS estimator of pairwise divergence times. We show that our improved estimator, which we call iGLASS, consistently estimates the divergence time between a pair of taxa as the number of loci approaches infinity, and that it is an unbiased estimator of divergence times when one lineage is sampled per taxon. We also show that many commonly used clustering methods can be combined with the iGLASS estimator of pairwise divergence times to produce a consistent estimator of the species tree topology. Through simulations, we show that iGLASS can greatly reduce the bias and mean squared error in obtaining estimates of divergence times in a species tree. PMID:22216756
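
    For the one-lineage-per-taxon case described above, the GLASS/iGLASS contrast is easy to simulate: each locus coalesces at the divergence time tau plus an Exp(1) waiting time (in coalescent units), GLASS takes the minimum over loci, and iGLASS subtracts the expected overshoot E[min of L Exp(1)] = 1/L. The parameter values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def glass_pair(coal_times):
    """GLASS pairwise estimate: minimum interspecific coalescence time over loci."""
    return np.min(coal_times)

def iglass_pair(coal_times):
    """iGLASS correction for one lineage per taxon: subtract E[min of L Exp(1)] = 1/L."""
    L = len(coal_times)
    return np.min(coal_times) - 1.0 / L

tau, L = 2.0, 50                          # true divergence (coalescent units), loci
glass_est, iglass_est = [], []
for _ in range(2000):
    # One lineage per taxon per locus: coalescence at tau + Exp(1).
    coal_times = tau + rng.exponential(1.0, size=L)
    glass_est.append(glass_pair(coal_times))
    iglass_est.append(iglass_pair(coal_times))

bias_glass = np.mean(glass_est) - tau     # approximately +1/L
bias_iglass = np.mean(iglass_est) - tau   # approximately 0
```

The simulation reproduces the abstract's claim in this simplest case: GLASS overshoots by about 1/L while the corrected estimator is essentially unbiased.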

  14. Training in paediatric clinical pharmacology in the UK

    PubMed Central

    Choonara, Imti; Dewit, Odile; Harrop, Emily; Howarth, Sheila; Helms, Peter; Kanabar, Dipak; Lenney, Warren; Rylance, George; Vallance, Patrick

    2004-01-01

    Aims To produce a training programme in paediatric clinical pharmacology. Methods A working group, consisting of clinical pharmacologists (paediatric and adult), general paediatricians and the pharmaceutical industry was established to produce the training programme. Results Following a two year training programme in general paediatrics, a three year training programme in clinical pharmacology has been established. This includes one year of research in clinical pharmacology (paediatric or adult). The other two years involve training in different aspects of paediatric clinical pharmacology and general paediatrics. Conclusion The existence of a formal training programme should result in a significant increase in the number of paediatric clinical pharmacologists. PMID:15255806

  15. Pulsed laser diffusion of thin hole-barrier contacts in high purity germanium for gamma radiation detectors

    NASA Astrophysics Data System (ADS)

    Maggioni, G.; Carturan, S.; Raniero, W.; Riccetto, S.; Sgarbossa, F.; Boldrini, V.; Milazzo, R.; Napoli, D. R.; Scarpa, D.; Andrighetto, A.; Napolitani, E.; De Salvador, D.

    2018-03-01

    A new method for the formation of hole-barrier contacts in high-purity germanium (HPGe) is described, which consists of sputter deposition of an Sb film on HPGe, followed by Sb diffusion via laser annealing of the Ge surface in the melting regime. This process gives rise to a very thin (≤100 nm) n-doped layer, as determined by SIMS measurement, while preserving the defect-free morphology of the HPGe surface. A small prototype gamma ray detector with a Sb laser-diffused contact was produced and characterized, showing low leakage currents and good spectroscopy data with different gamma ray sources.

  16. Consistent initial conditions for the Saint-Venant equations in river network modeling

    NASA Astrophysics Data System (ADS)

    Yu, Cheng-Wei; Liu, Frank; Hodges, Ben R.

    2017-09-01

    Initial conditions for flows and depths (cross-sectional areas) throughout a river network are required for any time-marching (unsteady) solution of the one-dimensional (1-D) hydrodynamic Saint-Venant equations. For a river network modeled with several Strahler orders of tributaries, comprehensive and consistent synoptic data are typically lacking and synthetic starting conditions are needed. Because of underlying nonlinearity, poorly defined or inconsistent initial conditions can lead to convergence problems and long spin-up times in an unsteady solver. Two new approaches are defined and demonstrated herein for computing flows and cross-sectional areas (or depths). These methods can produce an initial condition data set that is consistent with modeled landscape runoff and river geometry boundary conditions at the initial time. These new methods are (1) the pseudo time-marching method (PTM) that iterates toward a steady-state initial condition using an unsteady Saint-Venant solver and (2) the steady-solution method (SSM) that makes use of graph theory for initial flow rates and solution of a steady-state 1-D momentum equation for the channel cross-sectional areas. The PTM is shown to be adequate for short river reaches but is significantly slower and has occasional non-convergent behavior for large river networks. The SSM approach is shown to provide a rapid solution of consistent initial conditions for both small and large networks, albeit with the requirement that additional code must be written rather than applying an existing unsteady Saint-Venant solver.
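
    The graph-theory half of the SSM reduces to simple flow accumulation: the steady initial flow in each reach is its own lateral runoff plus the flow of every reach draining into it. The four-reach network below is hypothetical, and the companion step, solving the steady 1-D momentum equation for cross-sectional areas, is not shown.

```python
# Hypothetical reach network: downstream pointer per reach (None marks the outlet).
downstream = {"A": "C", "B": "C", "C": "D", "D": None}
runoff = {"A": 1.0, "B": 2.0, "C": 0.5, "D": 0.25}   # lateral inflow per reach, m^3/s

# Invert the pointers to get each reach's upstream tributaries.
upstream = {r: [] for r in downstream}
for reach, down in downstream.items():
    if down is not None:
        upstream[down].append(reach)

def steady_flow(reach):
    """Steady flow = local runoff + flows of all reaches draining into this one."""
    return runoff[reach] + sum(steady_flow(u) for u in upstream[reach])

flows = {r: steady_flow(r) for r in downstream}
```

Because every reach's flow is consistent with the runoff boundary conditions by construction, an unsteady solver started from such a state avoids the spin-up transients the abstract describes.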

  17. Assessing Data Quality in Emergent Domains of Earth Sciences

    NASA Astrophysics Data System (ADS)

    Darch, P. T.; Borgman, C.

    2016-12-01

    As earth scientists seek to study known phenomena in new ways, and to study new phenomena, they often develop new technologies and new methods such as embedded network sensing, or reapply extant technologies, such as seafloor drilling. Emergent domains are often highly multidisciplinary as researchers from many backgrounds converge on new research questions. They may adapt existing methods, or develop methods de novo. As a result, emerging domains tend to be methodologically heterogeneous. As these domains mature, pressure to standardize methods increases. Standardization promotes trust, reliability, accuracy, and reproducibility, and simplifies data management. However, for standardization to occur, researchers must be able to assess which of the competing methods produces the highest quality data. The exploratory nature of emerging domains discourages standardization. Because competing methods originate in different disciplinary backgrounds, their scientific credibility is difficult to compare. Instead of direct comparison, researchers attempt to conduct meta-analyses. Scientists compare datasets produced by different methods to assess their consistency and efficiency. This paper presents findings from a long-term qualitative case study of research on the deep subseafloor biosphere, an emergent domain. A diverse community converged on the study of microbes in the seafloor and those microbes' interactions with the physical environments they inhabit. Data on this problem are scarce, leading to calls for standardization as a means to acquire and analyze greater volumes of data. Lacking consistent methods, scientists attempted to conduct meta-analyses to determine the most promising methods on which to standardize. Among the factors that inhibited meta-analyses were disparate approaches to metadata and to curating data. Datasets may be deposited in a variety of databases or kept on individual scientists' servers. 
Associated metadata may be inconsistent or hard to interpret. Incentive structures, including prospects for journal publication, often favor new data over reanalyzing extant datasets. Assessing data quality in emergent domains is extremely difficult and will require adaptations in infrastructure, culture, and incentives.

  18. Evolution of Mobil's methods to evaluate exploration and producing opportunities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaynor, C.B.; Cook, D.M. Jr.

    1996-08-01

    Over the past decade, Mobil has changed significantly in size, structure, and focus to improve profitability. Concurrently, work processes and methodologies have been modified to improve resource utilization and opportunity selection. The key imperative has been recognition of the full range of hydrocarbon volume uncertainty, its risk, and its value. Exploration has focused on increasing success through improved geotechnical estimates and demonstrating value addition. The important developments were as follows: (1) A centralized Exploration and Producing team was formed to help ensure an integrated, consistent worldwide approach to prospect and field assessments. Monte Carlo simulation was instituted to recognize probability-weighted ranges of possible outcomes for prospects and fields, and hydrocarbon volume category definitions were standardized. (2) Exploration instituted a global Prospect Inventory, tracking wildcat predictions vs. results. Performance analyses led to initiatives to improve the quality and consistency of assessments. Process improvement efforts included the use of multidisciplinary teams and peer reviews. Continued overestimates of hydrocarbon volumes prompted methodology changes such as the use of "reality checks" and log-normal distributions. The communication of value predictions and additions became paramount. (3) Producing now recognizes the need for Exploration's commercial discoveries and new Producing ventures, notwithstanding the associated risk. Multidisciplinary teams of engineers and geoscientists work on post-discovery assessments to optimize field development and maximize the value of opportunities. Mobil now integrates volume and risk assessment with correlative future capital investment programs to make proactive strategic choices to maximize shareholder value.

  19. A novel endophytic Huperzine A-producing fungus, Shiraia sp. Slf14, isolated from Huperzia serrata.

    PubMed

    Zhu, D; Wang, J; Zeng, Q; Zhang, Z; Yan, R

    2010-10-01

    To characterize and identify a novel Huperzine A (HupA)-producing fungal strain, Slf14, isolated from Huperzia serrata (Thunb. ex Murray) Trev. in China. The isolation, identification and characterization of a novel endophytic fungus producing HupA specifically and consistently from the leaves of H. serrata were investigated. The fungus was identified as Shiraia sp. Slf14 by molecular and morphological methods. The HupA produced by this endophytic fungus was shown to be identical to authentic HupA as analysed by thin-layer chromatography, high-performance liquid chromatography (HPLC), LC-MS, 1H NMR, and acetylcholinesterase (AChE) inhibition activity in vitro. The amount of HupA produced by Shiraia sp. Slf14 was quantified as 327.8 μg l(-1) by HPLC, which is far higher than that of the previously reported endophytic fungi Acremonium sp., Blastomyces sp. and Botrytis sp. The production of HupA by the endophyte Shiraia sp. Slf14 is an enigmatic observation. It would be interesting to further study HupA production and regulation by the cultured endophyte in H. serrata and in axenic cultures. Although the current accumulation of HupA by the endophyte is not very high, it could provide a promising alternative approach for large-scale production of HupA. However, further strain improvement and fermentation process optimization are required to achieve consistent and dependable production. © 2010 The Authors. Journal compilation © 2010 The Society for Applied Microbiology.

  20. Efficient production of recombinant adeno-associated viral vector, serotype DJ/8, carrying the GFP gene.

    PubMed

    Hashimoto, Haruo; Mizushima, Tomoko; Chijiwa, Tsuyoshi; Nakamura, Masato; Suemizu, Hiroshi

    2017-06-15

    The purpose of this study was to establish an efficient method for the preparation of an adeno-associated viral (AAV) vector, serotype DJ/8, carrying the GFP gene (AAV-DJ/8-GFP). We compared the yields of AAV-DJ/8 vector produced by four different combination methods, consisting of two plasmid DNA transfection methods (lipofectamine and calcium phosphate co-precipitation; CaPi) and two virus DNA purification methods (iodixanol and cesium chloride; CsCl). The results showed that the highest yield of AAV-DJ/8-GFP vector was accomplished with the combination of lipofectamine transfection and iodixanol purification. The viral protein expression levels and the transduction efficacy in HEK293 and CHO cells did not differ among the four combination methods for AAV-DJ/8-GFP vectors. We confirmed that the AAV-DJ/8-GFP vector could transduce human and murine hepatocyte-derived cell lines. These results show that AAV-DJ/8-GFP, purified by the combination of lipofectamine and iodixanol, produces an efficient yield without altering the characteristics of protein expression and AAV gene transduction. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Dual initiation strip charge apparatus and methods for making and implementing the same

    DOEpatents

    Jakaboski, Juan-Carlos [Albuquerque, NM]; Todd, Steven N. [Rio Rancho, NM]; Polisar, Stephen [Albuquerque, NM]; Hughs, Chance [Tijeras, NM]

    2011-03-22

    A Dual Initiation Strip Charge (DISC) apparatus is initiated by a single initiation source and detonates a strip of explosive charge at two separate contacts. The reflections of the explosively induced stresses meet, creating a fracture that breaches a target along a generally single fracture contour while producing generally fragment-free scattering and no spallation. Methods for making and implementing a DISC apparatus provide numerous advantages over previous methods of creating explosive charges by utilizing steps for rapid prototyping; by implementing efficient steps and designs for metering consistent, repeatable, and controlled amounts of high explosive; and by utilizing readily available materials.

  2. Microbially-mediated method for synthesis of non-oxide semiconductor nanoparticles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phelps, Tommy J.; Lauf, Robert J.; Moon, Ji-Won

    The invention is directed to a method for producing non-oxide semiconductor nanoparticles, the method comprising: (a) subjecting a combination of reaction components to conditions conducive to microbially-mediated formation of non-oxide semiconductor nanoparticles, wherein said combination of reaction components comprises i) anaerobic microbes, ii) a culture medium suitable for sustaining said anaerobic microbes, iii) a metal component comprising at least one type of metal ion, iv) a non-metal component comprising at least one non-metal selected from the group consisting of S, Se, Te, and As, and v) one or more electron donors that provide donatable electrons to said anaerobic microbes during consumption of the electron donor by said anaerobic microbes; and (b) isolating said non-oxide semiconductor nanoparticles, which contain at least one of said metal ions and at least one of said non-metals. The invention is also directed to non-oxide semiconductor nanoparticle compositions produced as above and having distinctive properties.

  3. Mapping thunder sources by inverting acoustic and electromagnetic observations

    NASA Astrophysics Data System (ADS)

    Anderson, J. F.; Johnson, J. B.; Arechiga, R. O.; Thomas, R. J.

    2014-12-01

    We present a new method of locating current flow in lightning strikes by inversion of thunder recordings constrained by Lightning Mapping Array observations. First, radio frequency (RF) pulses are connected to reconstruct conductive channels created by leaders. Then, acoustic signals that would be produced by current flow through each channel are forward modeled. The recorded thunder is considered to consist of a weighted superposition of these acoustic signals. We calculate the posterior distribution of acoustic source energy for each channel with a Markov Chain Monte Carlo inversion that fits power envelopes of modeled and recorded thunder; these results show which parts of the flash carry current and produce thunder. We examine the effects of RF pulse location imprecision and atmospheric winds on quality of results and apply this method to several lightning flashes over the Magdalena Mountains in New Mexico, USA. This method will enable more detailed study of lightning phenomena by allowing researchers to map current flow in addition to leader propagation.

  4. The Chameleon Effect: characterization challenges due to the variability of nanoparticles and their surfaces

    NASA Astrophysics Data System (ADS)

    Baer, Donald R.

    2018-05-01

    Nanoparticles in a variety of forms are increasingly important in fundamental research, technological and medical applications, and environmental or toxicology studies. Physical and chemical drivers that lead to multiple types of particle instability complicate the ability to produce, appropriately characterize, and consistently deliver well-defined particles, frequently leading to inconsistencies and conflicts in the published literature. This perspective suggests that provenance information beyond that often recorded or reported, together with a set of core characterization methods, including a surface-sensitive technique, consistently applied at critical times, can serve as tools in the effort to minimize reproducibility issues.

  5. Dataset of anomalies and malicious acts in a cyber-physical subsystem.

    PubMed

    Laso, Pedro Merino; Brosset, David; Puentes, John

    2017-10-01

    This article presents a dataset produced to investigate how data and information quality estimations enable the detection of anomalies and malicious acts in cyber-physical systems. Data were acquired using a cyber-physical subsystem consisting of liquid containers for fuel or water, along with its automated control and data-acquisition infrastructure. The described data consist of temporal series representing five operational scenarios - normal, anomalies, breakdown, sabotages, and cyber-attacks - corresponding to 15 different real situations. The dataset is publicly available in the .zip file published with the article, to investigate and compare faulty-operation detection and characterization methods for cyber-physical systems.

  6. Consistency and Enhancement Processes in Understanding Emotions

    ERIC Educational Resources Information Center

    Stets, Jan E.; Asencio, Emily K.

    2008-01-01

    Many theories in the sociology of emotions assume that emotions emerge from the cognitive consistency principle. Congruence among cognitions produces good feelings whereas incongruence produces bad feelings. A work situation is simulated in which managers give feedback to workers that is consistent or inconsistent with what the workers expect to…

  7. Developing Carbon Nanotube Standards at NASA

    NASA Technical Reports Server (NTRS)

    Nikolaev, Pasha; Arepalli, Sivaram; Sosa, Edward; Gorelik, Olga; Yowell, Leonard

    2007-01-01

    Single wall carbon nanotubes (SWCNTs) are currently being produced and processed by several methods. Many researchers are continuously modifying existing methods and developing new methods to incorporate carbon nanotubes into other materials and utilize the phenomenal properties of SWCNTs. These applications require availability of SWCNTs with known properties and there is a need to characterize these materials in a consistent manner. In order to monitor such progress, it is critical to establish a means by which to define the quality of SWCNT material and develop characterization standards to evaluate nanotube quality across the board. Such characterization standards should be applicable to as-produced materials as well as processed SWCNT materials. In order to address this issue, NASA Johnson Space Center has developed a protocol for purity and dispersion characterization of SWCNTs (Ref.1). The NASA JSC group is currently working with NIST, ANSI and ISO to establish purity and dispersion standards for SWCNT material. A practice guide for nanotube characterization is being developed in cooperation with NIST (Ref.2). Furthermore, work is in progress to incorporate additional characterization methods for electrical, mechanical, thermal, optical and other properties of SWCNTs.

  8. Developing Carbon Nanotube Standards at NASA

    NASA Technical Reports Server (NTRS)

    Nikolaev, Pasha; Arepalli, Sivaram; Sosa, Edward; Gorelik, Olga; Yowell, Leonard

    2007-01-01

    Single wall carbon nanotubes (SWCNTs) are currently being produced and processed by several methods. Many researchers are continuously modifying existing methods and developing new methods to incorporate carbon nanotubes into other materials and utilize the phenomenal properties of SWCNTs. These applications require availability of SWCNTs with known properties and there is a need to characterize these materials in a consistent manner. In order to monitor such progress, it is critical to establish a means by which to define the quality of SWCNT material and develop characterization standards to evaluate nanotube quality across the board. Such characterization standards should be applicable to as-produced materials as well as processed SWCNT materials. In order to address this issue, NASA Johnson Space Center has developed a protocol for purity and dispersion characterization of SWCNTs. The NASA JSC group is currently working with NIST, ANSI and ISO to establish purity and dispersion standards for SWCNT material. A practice guide for nanotube characterization is being developed in cooperation with NIST. Furthermore, work is in progress to incorporate additional characterization methods for electrical, mechanical, thermal, optical and other properties of SWCNTs.

  9. Methods for degrading lignocellulosic materials

    DOEpatents

    Vlasenko, Elena [Davis, CA; Cherry, Joel [Davis, CA; Xu, Feng [Davis, CA

    2008-04-08

    The present invention relates to methods for degrading a lignocellulosic material, comprising: treating the lignocellulosic material with an effective amount of one or more cellulolytic enzymes in the presence of at least one surfactant selected from the group consisting of a secondary alcohol ethoxylate, fatty alcohol ethoxylate, nonylphenol ethoxylate, tridecyl ethoxylate, and polyoxyethylene ether, wherein the presence of the surfactant increases the degradation of lignocellulosic material compared to the absence of the surfactant. The present invention also relates to methods for producing an organic substance, comprising: (a) saccharifying a lignocellulosic material with an effective amount of one or more cellulolytic enzymes in the presence of at least one surfactant selected from the group consisting of a secondary alcohol ethoxylate, fatty alcohol ethoxylate, nonylphenol ethoxylate, tridecyl ethoxylate, and polyoxyethylene ether, wherein the presence of the surfactant increases the degradation of lignocellulosic material compared to the absence of the surfactant; (b) fermenting the saccharified lignocellulosic material of step (a) with one or more fermenting microorganisms; and (c) recovering the organic substance from the fermentation.

  10. Methods for degrading lignocellulosic materials

    DOEpatents

    Vlasenko, Elena [Davis, CA; Cherry, Joel [Davis, CA; Xu, Feng [Davis, CA

    2011-05-17

    The present invention relates to methods for degrading a lignocellulosic material, comprising: treating the lignocellulosic material with an effective amount of one or more cellulolytic enzymes in the presence of at least one surfactant selected from the group consisting of a secondary alcohol ethoxylate, fatty alcohol ethoxylate, nonylphenol ethoxylate, tridecyl ethoxylate, and polyoxyethylene ether, wherein the presence of the surfactant increases the degradation of lignocellulosic material compared to the absence of the surfactant. The present invention also relates to methods for producing an organic substance, comprising: (a) saccharifying a lignocellulosic material with an effective amount of one or more cellulolytic enzymes in the presence of at least one surfactant selected from the group consisting of a secondary alcohol ethoxylate, fatty alcohol ethoxylate, nonylphenol ethoxylate, tridecyl ethoxylate, and polyoxyethylene ether, wherein the presence of the surfactant increases the degradation of lignocellulosic material compared to the absence of the surfactant; (b) fermenting the saccharified lignocellulosic material of step (a) with one or more fermenting microorganisms; and (c) recovering the organic substance from the fermentation.

  11. Electrolytic method for the production of lithium using a lithium-amalgam electrode

    DOEpatents

    Cooper, John F.; Krikorian, Oscar H.; Homsy, Robert V.

    1979-01-01

    A method for recovering lithium from its molten amalgam by electrolysis of the amalgam in an electrolytic cell containing as a molten electrolyte a fused-salt consisting essentially of a mixture of two or more alkali metal halides, preferably alkali metal halides selected from lithium iodide, lithium chloride, potassium iodide and potassium chloride. A particularly suitable molten electrolyte is a fused-salt consisting essentially of a mixture of at least three components obtained by modifying an eutectic mixture of LiI-KI by the addition of a minor amount of one or more alkali metal halides. The lithium-amalgam fused-salt cell may be used in an electrolytic system for recovering lithium from an aqueous solution of a lithium compound, wherein electrolysis of the aqueous solution in an aqueous cell in the presence of a mercury cathode produces a lithium amalgam. The present method is particularly useful for the regeneration of lithium from the aqueous reaction products of a lithium-water-air battery.

  12. Methods for degrading lignocellulosic materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vlasenko, Elena; Cherry, Joel; Xu, Feng

    2008-04-08

    The present invention relates to methods for degrading a lignocellulosic material, comprising: treating the lignocellulosic material with an effective amount of one or more cellulolytic enzymes in the presence of at least one surfactant selected from the group consisting of a secondary alcohol ethoxylate, fatty alcohol ethoxylate, nonylphenol ethoxylate, tridecyl ethoxylate, and polyoxyethylene ether, wherein the presence of the surfactant increases the degradation of lignocellulosic material compared to the absence of the surfactant. The present invention also relates to methods for producing an organic substance, comprising: (a) saccharifying a lignocellulosic material with an effective amount of one or more cellulolytic enzymes in the presence of at least one surfactant selected from the group consisting of a secondary alcohol ethoxylate, fatty alcohol ethoxylate, nonylphenol ethoxylate, tridecyl ethoxylate, and polyoxyethylene ether, wherein the presence of the surfactant increases the degradation of lignocellulosic material compared to the absence of the surfactant; (b) fermenting the saccharified lignocellulosic material of step (a) with one or more fermenting microorganisms; and (c) recovering the organic substance from the fermentation.

  13. Capabilities of the Large-Scale Sediment Transport Facility

    DTIC Science & Technology

    2016-04-01

    experiments in wave/current environments. INTRODUCTION: The LSTF (Figure 1) is a large-scale laboratory facility capable of simulating conditions...comparable to low-wave energy coasts. The facility was constructed to address deficiencies in existing methods for calculating longshore sediment...transport. The LSTF consists of a 30 m wide, 50 m long, 1.4 m deep basin. Waves are generated by four digitally controlled wave makers capable of producing

  14. QUALITY CONTROL OF PHARMACEUTICALS.

    PubMed

    LEVI, L; WALKER, G C; PUGSLEY, L I

    1964-10-10

    Quality control is an essential operation of the pharmaceutical industry. Drugs must be marketed as safe and therapeutically active formulations whose performance is consistent and predictable. New and better medicinal agents are being produced at an accelerated rate. At the same time more exacting and sophisticated analytical methods are being developed for their evaluation. Requirements governing the quality control of pharmaceuticals in accordance with the Canadian Food and Drugs Act are cited and discussed.

  15. Dynamic mode decomposition for plasma diagnostics and validation.

    PubMed

    Taylor, Roy; Kutz, J Nathan; Morgan, Kyle; Nelson, Brian A

    2018-05-01

    We demonstrate the application of the Dynamic Mode Decomposition (DMD) for the diagnostic analysis of the nonlinear dynamics of a magnetized plasma in resistive magnetohydrodynamics. The DMD method is an ideal spatio-temporal matrix decomposition that correlates spatial features of computational or experimental data while simultaneously associating the spatial activity with periodic temporal behavior. DMD can produce low-rank, reduced order surrogate models that can be used to reconstruct the state of the system with high fidelity. This allows for a reduction in the computational cost and, at the same time, accurate approximations of the problem, even if the data are sparsely sampled. We demonstrate the use of the method on both numerical and experimental data, showing that it is a successful mathematical architecture for characterizing the helicity injected torus with steady inductive (HIT-SI) magnetohydrodynamics. Importantly, the DMD produces interpretable, dominant mode structures, including a stationary mode consistent with our understanding of a HIT-SI spheromak accompanied by a pair of injector-driven modes. In combination, the 3-mode DMD model produces excellent dynamic reconstructions across the domain of analyzed data.
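
    The matrix decomposition the abstract relies on can be sketched with the standard rank-r "exact DMD" algorithm, shown here recovering two oscillatory modes from a synthetic snapshot matrix (the toy signal and truncation rank are illustrative assumptions; this is not the HIT-SI data or the authors' code):

```python
import numpy as np

def exact_dmd(X, r):
    """Rank-r exact DMD of a snapshot matrix X (rows: space, cols: time)."""
    X1, X2 = X[:, :-1], X[:, 1:]                   # consecutive snapshot pairs
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh[:r].conj().T     # truncate to rank r
    Atilde = U.conj().T @ X2 @ V / s               # projected linear operator
    lam, W = np.linalg.eig(Atilde)                 # eigenvalues = mode dynamics
    Phi = X2 @ V / s @ W                           # exact DMD modes
    return Phi, lam

# Toy data: two spatial structures oscillating at frequencies 2.3 and 2.8
t = np.linspace(0, 4 * np.pi, 100)
x = np.linspace(-5, 5, 64)
X = (np.outer(1 / np.cosh(x + 3), np.exp(2.3j * t))
     + np.outer(np.tanh(x) / np.cosh(x), np.exp(2.8j * t)))
Phi, lam = exact_dmd(X, r=2)
# |lam| = 1 marks neutrally stable (purely oscillatory) modes, and
# np.angle(lam) / (t[1] - t[0]) recovers the two mode frequencies
```

    The eigenvalues' magnitudes and phases give each mode's growth rate and frequency, which is how a stationary spheromak-like mode can be separated from injector-driven oscillations.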

  16. Dynamic mode decomposition for plasma diagnostics and validation

    NASA Astrophysics Data System (ADS)

    Taylor, Roy; Kutz, J. Nathan; Morgan, Kyle; Nelson, Brian A.

    2018-05-01

    We demonstrate the application of the Dynamic Mode Decomposition (DMD) for the diagnostic analysis of the nonlinear dynamics of a magnetized plasma in resistive magnetohydrodynamics. The DMD method is an ideal spatio-temporal matrix decomposition that correlates spatial features of computational or experimental data while simultaneously associating the spatial activity with periodic temporal behavior. DMD can produce low-rank, reduced order surrogate models that can be used to reconstruct the state of the system with high fidelity. This allows for a reduction in the computational cost and, at the same time, accurate approximations of the problem, even if the data are sparsely sampled. We demonstrate the use of the method on both numerical and experimental data, showing that it is a successful mathematical architecture for characterizing the helicity injected torus with steady inductive (HIT-SI) magnetohydrodynamics. Importantly, the DMD produces interpretable, dominant mode structures, including a stationary mode consistent with our understanding of a HIT-SI spheromak accompanied by a pair of injector-driven modes. In combination, the 3-mode DMD model produces excellent dynamic reconstructions across the domain of analyzed data.

  17. Formulation design of taste-masked particles, including famotidine, for an oral fast-disintegrating dosage form.

    PubMed

    Mizumoto, Takao; Tamura, Tetsuya; Kawai, Hitoshi; Kajiyama, Atsushi; Itai, Shigeru

    2008-04-01

    In this study, the taste-masking of famotidine, applicable to any fast-disintegrating tablet, was investigated using the spray-dry method. The target characteristics of the taste-masked particles were set as follows: a dissolution rate of not more than 30% at 1 min and not less than 85% at 15 min, and a particle size of not more than 150 microm in diameter to avoid a gritty feeling in the mouth. The target dissolution profiles of spray-dried particles consisting of Aquacoat ECD30 and Eudragit NE30D or triacetin were achieved by screening formulas and appropriate lab-scale manufacturing conditions. Lab-scale testing produced taste-masked particles that met the formulation targets. On the pilot scale, spray-dried particles of the same quality in attributes such as dissolution rate and particle size were produced, and reproducibility was also confirmed. This confirmed that the spray-dry method produced the most appropriate taste-masked particles for fast-disintegrating dosage forms.

  18. Method and product for phosphosilicate slurry for use in dentistry and related bone cements

    DOEpatents

    Wagh, Arun S.; Primus, Carolyn

    2006-08-01

    The present invention is directed to magnesium phosphate ceramics and their methods of manufacture. The composition of the invention is produced by combining a mixture of a substantially dry powder component with a liquid component. The substantially dry powder component comprises a sparsely soluble oxide powder, an alkali metal phosphate powder, a sparsely soluble silicate powder, with the balance of the substantially dry powder component comprising at least one powder selected from the group consisting of bioactive powders, biocompatible powders, fluorescent powders, fluoride releasing powders, and radiopaque powders. The liquid component comprises a pH modifying agent, a monovalent alkali metal phosphate in aqueous solution, the balance of the liquid component being water. The use of calcined magnesium oxide as the oxide powder and hydroxylapatite as the bioactive powder produces a self-setting ceramic that is particularly suited for use in dental and orthopedic applications.

  19. Method for magnesium sulfate recovery

    DOEpatents

    Gay, Richard L.; Grantham, LeRoy F.

    1987-01-01

    A method of obtaining magnesium sulfate substantially free from radioactive uranium from a slag containing the same and having a radioactivity level of at least about 7000 pCi/gm. The slag is ground to a particle size of about 200 microns or less. The ground slag is then contacted with a concentrated sulfuric acid under certain prescribed conditions to produce a liquid product and a solid product. The particulate solid product and a minor amount of the liquid is then treated to produce a solid residue consisting essentially of magnesium sulfate substantially free of uranium and having a residual radioactivity level of less than 1000 pCi/gm. In accordance with the preferred embodiment of the invention, a catalyst and an oxidizing agent are used during the initial acid treatment and a final solid residue has a radioactivity level of less than about 50 pCi/gm.

  20. Magnesium fluoride recovery method

    DOEpatents

    Gay, Richard L.; McKenzie, Donald E.

    1989-01-01

    A method of obtaining magnesium fluoride substantially free from radioactive uranium from a slag containing the same and having a radioactivity level of at least about 7000 pCi/gm. The slag is ground to a particle size of about 200 microns or less. The ground slag is contacted with an acid under certain prescribed conditions to produce a liquid product and a particulate solid product. The particulate solid product is separated from the liquid and treated at least two more times with acid to produce a solid residue consisting essentially of magnesium fluoride substantially free of uranium and having a residual radioactivity level of less than about 1000 pCi/gm. In accordance with a particularly preferred embodiment of the invention a catalyst and an oxidizing agent are used during the acid treatment and preferably the acid is sulfuric acid having a strength of about 1.0 Normal.

  1. Method for magnesium sulfate recovery

    DOEpatents

    Gay, R.L.; Grantham, L.F.

    1987-08-25

    A method is described for obtaining magnesium sulfate substantially free from radioactive uranium from a slag containing the same and having a radioactivity level of at least about 7,000 pCi/gm. The slag is ground to a particle size of about 200 microns or less. The ground slag is then contacted with a concentrated sulfuric acid under certain prescribed conditions to produce a liquid product and a solid product. The particulate solid product and a minor amount of the liquid is then treated to produce a solid residue consisting essentially of magnesium sulfate substantially free of uranium and having a residual radioactivity level of less than 1,000 pCi/gm. In accordance with the preferred embodiment of the invention, a catalyst and an oxidizing agent are used during the initial acid treatment and a final solid residue has a radioactivity level of less than about 50 pCi/gm.

  2. Developing a vacuum cooking equipment prototype to produce strawberry jam and optimization of vacuum cooking conditions.

    PubMed

    Okut, Dilara; Devseren, Esra; Koç, Mehmet; Ocak, Özgül Özdestan; Karataş, Haluk; Kaymak-Ertekin, Figen

    2018-01-01

    The purpose of this study was to develop prototype cooking equipment that can work at reduced pressure and to evaluate its performance for the production of strawberry jam. The effects of vacuum cooking conditions on color, soluble solid content, reducing sugars, total sugars, HMF, and sensory properties were investigated. The vacuum cooking conditions for strawberry jam were also optimized using a Composite Rotatable Design. The optimum cooking temperature and time were determined targeting maximum soluble solid content and sensory attributes (consistency) and minimum Hue value and HMF content. The optimum vacuum cooking conditions were determined as a temperature of 74.4 °C and a cooking time of 19.8. The soluble solid contents of strawberry jam made by the vacuum process were similar to those of jam prepared by the traditional method, and the HMF contents of jams produced with the vacuum cooking method were well within the limits of the standards.
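
    Optimization with a composite rotatable design typically fits a second-order polynomial response surface to the measured responses and locates its stationary point. A hedged sketch of that procedure on synthetic data centered on the reported optimum (the design points and response function are invented for illustration, not the study's measurements):

```python
import numpy as np

def fit_quadratic_surface(T, t, y):
    """Least-squares fit of a second-order response surface
    y ~ b0 + b1*T + b2*t + b3*T^2 + b4*t^2 + b5*T*t,
    as used in response-surface methodology."""
    A = np.column_stack([np.ones_like(T), T, t, T**2, t**2, T * t])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    return b

# Hypothetical design points around the reported optimum conditions
rng = np.random.default_rng(0)
T = rng.uniform(60, 90, 30)                       # cooking temperature, °C
t = rng.uniform(10, 30, 30)                       # cooking time
y = -(T - 74.4) ** 2 - 2 * (t - 19.8) ** 2 + 100  # synthetic quality response
b = fit_quadratic_surface(T, t, y)

# Stationary point of the fitted surface: solve grad(y) = 0
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
opt = np.linalg.solve(H, -b[1:3])
# opt recovers the optimum (74.4, 19.8) of the synthetic response
```

    With real data, one would additionally check that the stationary point is a maximum (Hessian negative definite) and combine several responses into a desirability function before optimizing.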

  3. Finding imaging patterns of structural covariance via Non-Negative Matrix Factorization.

    PubMed

    Sotiras, Aristeidis; Resnick, Susan M; Davatzikos, Christos

    2015-03-01

    In this paper, we investigate the use of Non-Negative Matrix Factorization (NNMF) for the analysis of structural neuroimaging data. The goal is to identify the brain regions that co-vary across individuals in a consistent way, hence potentially being part of underlying brain networks or otherwise influenced by underlying common mechanisms such as genetics and pathologies. NNMF offers a directly data-driven way of extracting relatively localized co-varying structural regions, thereby transcending limitations of Principal Component Analysis (PCA), Independent Component Analysis (ICA) and other related methods that tend to produce dispersed components of positive and negative loadings. In particular, leveraging upon the well known ability of NNMF to produce parts-based representations of image data, we derive decompositions that partition the brain into regions that vary in consistent ways across individuals. Importantly, these decompositions achieve dimensionality reduction via highly interpretable ways and generalize well to new data as shown via split-sample experiments. We empirically validate NNMF in two data sets: i) a Diffusion Tensor (DT) mouse brain development study, and ii) a structural Magnetic Resonance (sMR) study of human brain aging. We demonstrate the ability of NNMF to produce sparse parts-based representations of the data at various resolutions. These representations seem to follow what we know about the underlying functional organization of the brain and also capture some pathological processes. Moreover, we show that these low dimensional representations favorably compare to descriptions obtained with more commonly used matrix factorization methods like PCA and ICA. Copyright © 2014 Elsevier Inc. All rights reserved.
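
    The parts-based factorization the authors exploit can be illustrated with Lee-Seung multiplicative updates, the classic NNMF algorithm (this is a generic sketch on synthetic "region x subject" data, not the paper's implementation or datasets):

```python
import numpy as np

def nmf(X, r, n_iter=500, eps=1e-9, seed=0):
    """Factor a nonnegative matrix X (regions x subjects) as X ~ W @ H with
    W, H >= 0, via Lee-Seung multiplicative updates (Frobenius loss)."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update subject coefficients
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update spatial components
    return W, H

# Toy "structural covariance": two disjoint groups of regions, each
# co-varying across 50 subjects according to its own loading pattern
rng = np.random.default_rng(1)
a, b = rng.random(50), rng.random(50)
X = (np.outer([1, 1, 1, 0, 0, 0], a)
     + np.outer([0, 0, 0, 1, 1, 1], b)
     + 0.01 * rng.random((6, 50)))
W, H = nmf(X, r=2)
# columns of W stay nonnegative and localize to one group of "regions" each,
# unlike PCA/ICA components with mixed positive and negative loadings
```

    The nonnegativity constraint is what forces the additive, parts-based decomposition the abstract contrasts with the dispersed signed loadings of PCA and ICA.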

  4. Archaeometric study on minting dies produced under papal rule in Ferrara

    NASA Astrophysics Data System (ADS)

    Monticelli, Cecilia; Balbo, Andrea; Vaccaro, Carmela; Gulinelli, Maria Teresa; Garagnani, Gian Luca

    2013-12-01

    In the Civic Museum of Palazzo Schifanoia in Ferrara, a collection of 1104 coin striking tools is stored. Among these, eight steel dies produced from the 2nd decade of the seventeenth to the half of the eighteenth century, representative of the whole period of activity of the papal mint in Ferrara, have been chosen and studied. In that period, while important innovations in the coin minting technique were introduced in Europe, Ferrara declined from the rank of ducal mint to that of peripheral minting center of the highly centralized Papal States. The dies have been characterized by metallographic, chemical, and microhardness investigations. The results suggest that the dies were obtained by a manual smithing technique consisting in hammer hot forging. The die quality improved with time. In fact, in the period 1619-1622, a hardening treatment for the engraved die end consisting in a simple local carburization coexisted with a more efficient production method, based on the application of a proper final heat treatment. This treatment induced a graded microstructure from the engraved end, with a hard martensitic or bainitic structure, to the opposite end, with a tough ferritic/pearlitic structure. From 1675 onward, the latter production method was applied on all the studied dies. The chemical analysis of the alloys suggests that they were likely obtained from iron ores with a common provenance, while the analysis of the slag inclusions suggests the adoption of a direct method of ironmaking throughout the activity period of the mint.

  5. Winter bird population studies and project prairie birds for surveying grassland birds

    USGS Publications Warehouse

    Twedt, D.J.; Hamel, P.B.; Woodrey, M.S.

    2008-01-01

    We compared 2 survey methods for assessing winter bird communities in temperate grasslands: Winter Bird Population Study surveys are area-searches that have long been used in a variety of habitats whereas Project Prairie Bird surveys employ active-flushing techniques on strip-transects and are intended for use in grasslands. We used both methods to survey birds on 14 herbaceous reforested sites and 9 coastal pine savannas during winter and compared resultant estimates of species richness and relative abundance. These techniques did not yield similar estimates of avian populations. We found Winter Bird Population Studies consistently produced higher estimates of species richness, whereas Project Prairie Birds produced higher estimates of avian abundance for some species. When it is important to identify all species within the winter bird community, Winter Bird Population Studies should be the survey method of choice. If estimates of the abundance of relatively secretive grassland bird species are desired, the use of Project Prairie Birds protocols is warranted. However, we suggest that both survey techniques, as currently employed, are deficient and recommend distance-based survey methods that provide species-specific estimates of detection probabilities be incorporated into these survey methods.

  6. [Which one is more important, raw materials or productive technology?--a case study for quality consistency control of Gegen Qinlian decoction].

    PubMed

    Zhong, Wen; Chen, Sha; Zhang, Jun; Wang, Yu-Sheng; Liu, An

    2016-03-01

    To investigate the effect of Chinese medicine raw materials and production technology on the quality consistency of Chinese patent medicines, with Gegen Qinlian decoction as an example, and to establish a suitable method for the quality consistency control of Chinese patent medicines. The results showed that the effect of production technology on quality consistency was generally not more than 5%, while the effect of raw materials could exceed 30%, indicating that the influence of raw materials was much greater than that of production technology. In this study, a blending technology was used to improve the consistency of the raw materials. As a result, the difference between the products made from the blended raw materials and the reference groups was less than 5%, thus improving the quality consistency of the finished products. The results showed that, under the current circumstances, the main factor affecting the quality consistency of Chinese patent medicines is the raw materials, so more attention should be paid to the quality of Chinese medicine raw materials. Finally, a blending technology can improve the quality consistency of Chinese patent medicines. Copyright© by the Chinese Pharmaceutical Association.

  7. The Influence of Unsteadiness on the Analysis of Pressure Gain Combustion Devices

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.; Kaemming, Tom

    2013-01-01

    Pressure gain combustion (PGC) has been the object of scientific study for over a century due to its promise of improved thermodynamic efficiency. In many recent application concepts PGC is utilized as a component in an otherwise continuous, normally steady flow system, such as a gas turbine or ram jet engine. However, PGC is inherently unsteady. Failure to account for the effects of this periodic unsteadiness can lead to misunderstanding and errors in performance calculations. This paper seeks to provide some clarity by presenting a consistent method of thermodynamic cycle analysis for a device utilizing PGC technology. The incorporation of the unsteady PGC process into the conservation equations for a continuous flow device is presented. Most importantly, the appropriate method for computing the conservation of momentum is presented. It will be shown that proper, consistent analysis of cyclic conservation principles produces representative performance predictions.

  8. Investigation of factors affecting the heater wire method of calibrating fine wire thermocouples

    NASA Technical Reports Server (NTRS)

    Keshock, E. G.

    1972-01-01

    An analytical investigation was made of a transient method of calibrating fine wire thermocouples. The system consisted of a 10 mil diameter standard thermocouple (Pt, Pt-13% Rh) and an 0.8 mil diameter chromel-alumel thermocouple attached to a 20 mil diameter electrically heated platinum wire. The calibration procedure consisted of electrically heating the wire to approximately 2500 F within about a seven-second period in an environment approximating atmospheric conditions at 120,000 feet. Rapid periodic readout of the standard and fine wire thermocouple signals permitted a comparison of the two temperature indications. An analysis was performed which indicated that the temperature distortion at the heater wire produced by the thermocouple junctions appears to be of negligible magnitude. Consequently, the calibration technique appears to be basically sound, although several practical changes which appear desirable are presented and discussed. Additional investigation is warranted to evaluate radiation effects and transient response characteristics.

  9. MS lesion segmentation using a multi-channel patch-based approach with spatial consistency

    NASA Astrophysics Data System (ADS)

    Mechrez, Roey; Goldberger, Jacob; Greenspan, Hayit

    2015-03-01

    This paper presents an automatic method for segmentation of Multiple Sclerosis (MS) in Magnetic Resonance Images (MRI) of the brain. The approach is based on similarities between multi-channel patches (T1, T2 and FLAIR). An MS lesion patch database is built using training images for which the label maps are known. For each patch in the testing image, k similar patches are retrieved from the database. The matching labels for these k patches are then combined to produce an initial segmentation map for the test case. Finally a novel iterative patch-based label refinement process based on the initial segmentation map is performed to ensure spatial consistency of the detected lesions. A leave-one-out evaluation is done for each testing image in the MS lesion segmentation challenge of MICCAI 2008. Results are shown to compete with the state-of-the-art methods on the MICCAI 2008 challenge.
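
    The patch-retrieval step described above (find the k most similar multi-channel patches in the database and combine their labels) can be sketched as simple k-nearest-neighbor label fusion (the patch sizes, channel count, and data below are hypothetical, not the MICCAI 2008 images):

```python
import numpy as np

def knn_label_fusion(test_patches, db_patches, db_labels, k=5):
    """For each test patch (flattened multi-channel vector), retrieve the
    k most similar database patches by L2 distance and average their
    center-voxel labels into a soft lesion probability."""
    # pairwise squared distances, shape (n_test, n_db)
    d2 = ((test_patches[:, None, :] - db_patches[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, :k]        # k nearest patches per query
    return db_labels[idx].mean(axis=1)         # fraction of lesion neighbors

# Hypothetical 3-channel (T1, T2, FLAIR) 3x3 patches flattened to length 27
rng = np.random.default_rng(0)
db = np.vstack([rng.normal(0, 1, (40, 27)),    # healthy patches, label 0
                rng.normal(4, 1, (40, 27))])   # lesion patches, label 1
labels = np.r_[np.zeros(40), np.ones(40)]
test = np.vstack([rng.normal(0, 1, (5, 27)),   # 5 healthy-like queries
                  rng.normal(4, 1, (5, 27))])  # 5 lesion-like queries
prob = knn_label_fusion(test, db, labels, k=5)
# low probabilities for the healthy-like queries, high for the lesion-like
```

    In the paper's pipeline an initial map built this way is then iteratively refined to enforce spatial consistency of the detected lesions; that refinement step is not sketched here.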

  10. Hydrogen-enabled microstructure and fatigue strength engineering of titanium alloys

    NASA Astrophysics Data System (ADS)

    Paramore, James D.; Fang, Zhigang Zak; Dunstan, Matthew; Sun, Pei; Butler, Brady G.

    2017-02-01

    Traditionally, titanium alloys with satisfactory mechanical properties can only be produced via energy-intensive and costly wrought processes, while titanium alloys produced using low-cost powder metallurgy methods consistently result in inferior mechanical properties, especially low fatigue strength. Herein, we demonstrate a new microstructural engineering approach for producing low-cost titanium alloys with exceptional fatigue strength via the hydrogen sintering and phase transformation (HSPT) process. The high fatigue strength presented in this work is achieved by creating wrought-like microstructures without resorting to wrought processing. This is accomplished by generating an ultrafine-grained as-sintered microstructure through hydrogen-enabled phase transformations, facilitating the subsequent creation of fatigue-resistant microstructures via simple heat treatments. The exceptional strength, ductility, and fatigue performance reported in this paper are a breakthrough in the field of low-cost titanium processing.

  11. Hydrogen-enabled microstructure and fatigue strength engineering of titanium alloys

    DOE PAGES

    Paramore, James D.; Fang, Zhigang Zak; Dunstan, Matthew; ...

    2017-02-01

    Traditionally, titanium alloys with satisfactory mechanical properties can only be produced via energy-intensive and costly wrought processes, while titanium alloys produced using low-cost powder metallurgy methods consistently result in inferior mechanical properties, especially low fatigue strength. Herein, we demonstrate a new microstructural engineering approach for producing low-cost titanium alloys with exceptional fatigue strength via the hydrogen sintering and phase transformation (HSPT) process. The high fatigue strength presented in this work is achieved by creating wrought-like microstructures without resorting to wrought processing. This is accomplished by generating an ultrafine-grained as-sintered microstructure through hydrogen-enabled phase transformations, facilitating the subsequent creation of fatigue-resistant microstructures via simple heat treatments. Finally, the exceptional strength, ductility, and fatigue performance reported in this paper are a breakthrough in the field of low-cost titanium processing.

  12. Hydrogen-enabled microstructure and fatigue strength engineering of titanium alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paramore, James D.; Fang, Zhigang Zak; Dunstan, Matthew

    Traditionally, titanium alloys with satisfactory mechanical properties can only be produced via energy-intensive and costly wrought processes, while titanium alloys produced using low-cost powder metallurgy methods consistently result in inferior mechanical properties, especially low fatigue strength. Herein, we demonstrate a new microstructural engineering approach for producing low-cost titanium alloys with exceptional fatigue strength via the hydrogen sintering and phase transformation (HSPT) process. The high fatigue strength presented in this work is achieved by creating wrought-like microstructures without resorting to wrought processing. This is accomplished by generating an ultrafine-grained as-sintered microstructure through hydrogen-enabled phase transformations, facilitating the subsequent creation of fatigue-resistant microstructures via simple heat treatments. Finally, the exceptional strength, ductility, and fatigue performance reported in this paper are a breakthrough in the field of low-cost titanium processing.

  13. Hydrogen-enabled microstructure and fatigue strength engineering of titanium alloys

    PubMed Central

    Paramore, James D.; Fang, Zhigang Zak; Dunstan, Matthew; Sun, Pei; Butler, Brady G.

    2017-01-01

    Traditionally, titanium alloys with satisfactory mechanical properties can only be produced via energy-intensive and costly wrought processes, while titanium alloys produced using low-cost powder metallurgy methods consistently result in inferior mechanical properties, especially low fatigue strength. Herein, we demonstrate a new microstructural engineering approach for producing low-cost titanium alloys with exceptional fatigue strength via the hydrogen sintering and phase transformation (HSPT) process. The high fatigue strength presented in this work is achieved by creating wrought-like microstructures without resorting to wrought processing. This is accomplished by generating an ultrafine-grained as-sintered microstructure through hydrogen-enabled phase transformations, facilitating the subsequent creation of fatigue-resistant microstructures via simple heat treatments. The exceptional strength, ductility, and fatigue performance reported in this paper are a breakthrough in the field of low-cost titanium processing. PMID:28145527

  14. Improved method of producing satisfactory sections of whole eyeball by routine histology.

    PubMed

    Arko-Boham, Benjamin; Ahenkorah, John; Hottor, Bismarck Afedo; Dennis, Esther; Addai, Frederick Kwaku

    2014-02-01

    To overcome the loss of structural integrity when eyeball sections are prepared by wax embedding, we experimentally modified the routine histological procedure and report satisfactorily well-preserved antero-posterior sections of whole eyeballs for teaching/learning purposes. Presently, histological sections of whole eyeballs are not readily available because substantial structural distortions attributable to variable consistency of tissue components (and their undesired differential shrinkage) result from routine processing. Notably, at the dehydration stage of processing, the soft, gel-like vitreous humor shrinks considerably relative to the tough fibrous sclera, causing collapse of the ocular globe. Additionally, the combined effects of fixation, dehydration, and embedding at 60°C render the eye lens too hard for microtome slicing at thicknesses suitable for light microscopy. We satisfactorily preserved intact antero-posterior sections of eyeballs via a routine paraffin wax processing procedure entailing two main modifications: (i) careful needle aspiration of vitreous humor and replacement with molten wax prior to wax infiltration; (ii) softening of the lens in the trimmed wax block by placing a drop of concentrated liquid phenol on it for 3 h during microtomy. These variations of the routine histological method produced intact whole eyeball sections with retinal detachment as the only structural distortion. The intact sections of the eyeball obtained compare well with those from the laborious, expensive, and 8-week-long celloidin method. Our method has wider potential usability than the costly freeze-drying method, which requires special skills and equipment (a cryotome) and does not produce whole eyeball sections. Copyright © 2013 Wiley Periodicals, Inc.

  15. A new statistical method for characterizing the atmospheres of extrasolar planets

    NASA Astrophysics Data System (ADS)

    Henderson, Cassandra S.; Skemer, Andrew J.; Morley, Caroline V.; Fortney, Jonathan J.

    2017-10-01

    By detecting light from extrasolar planets, we can measure their compositions and bulk physical properties. The technologies used to make these measurements are still in their infancy, and a lack of self-consistency suggests that previous observations have underestimated their systematic errors. We demonstrate a statistical method, newly applied to exoplanet characterization, which uses a Bayesian formalism to account for underestimated error bars. We use this method to compare photometry of a substellar companion, GJ 758b, with custom atmospheric models. Our method produces a probability distribution of atmospheric model parameters, including temperature, gravity, cloud model (fsed) and chemical abundance, for GJ 758b. This distribution is less sensitive to highly variant data and appropriately reflects a greater uncertainty on parameter fits.
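One common way to realize such a Bayesian treatment of underestimated error bars is to add a free variance term to each quoted uncertainty and let the data prefer a nonzero value. The sketch below illustrates that general idea on assumed toy data; it is not the authors' actual formalism, and the `log_likelihood` helper and grid search are constructions for the example.

```python
import math

def log_likelihood(data, model, sigma, b):
    """Gaussian log-likelihood with an extra variance term b**2 added to
    each quoted error bar, so underestimated errors can be absorbed."""
    ll = 0.0
    for y, m, s in zip(data, model, sigma):
        var = s**2 + b**2
        ll += -0.5 * ((y - m)**2 / var + math.log(2 * math.pi * var))
    return ll

data  = [1.0, 2.1, 2.9]
model = [1.0, 2.0, 3.0]
sigma = [0.01, 0.01, 0.01]   # quoted error bars (clearly too small for the scatter)

# Scanning the inflation term shows that some b > 0 is preferred over b = 0
best_b = max((b / 100 for b in range(1, 200)),
             key=lambda b: log_likelihood(data, model, sigma, b))
```

Because the residual scatter (about 0.1) far exceeds the quoted 0.01 error bars, the likelihood peaks at an inflation term near 0.08, broadening the resulting parameter uncertainties accordingly.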

  16. Color preservation for tone reproduction and image enhancement

    NASA Astrophysics Data System (ADS)

    Hsin, Chengho; Lee, Zong Wei; Lee, Zheng Zhan; Shin, Shaw-Jyh

    2014-01-01

    Applications based on luminance processing often face the problem of recovering the original chrominance in the output color image. A common approach to reconstructing a color image from the luminance output is to preserve the original hue and saturation. However, this approach often produces an overly colorful image, which is undesirable. We develop a color preservation method that not only retains the ratios of the input tri-chromatic values but also adjusts the output chroma in an appropriate way. Linearizing the output luminance is the key idea behind this method. In addition, a lightness difference metric and a colorfulness difference metric are proposed to evaluate the performance of color preservation methods. Evaluation shows that the proposed method performs consistently better than the existing approaches.

  17. Emerging From Water: Underwater Image Color Correction Based on Weakly Supervised Color Transfer

    NASA Astrophysics Data System (ADS)

    Li, Chongyi; Guo, Jichang; Guo, Chunle

    2018-03-01

    Underwater vision suffers from severe degradation due to selective attenuation and scattering as light propagates through water. Such degradation not only reduces the quality of underwater images but also limits the performance of vision tasks. Different from existing methods, which either ignore the wavelength dependency of the attenuation or assume a specific spectral profile, we tackle the color distortion problem of underwater images from a new perspective. In this letter, we propose a weakly supervised color transfer method to correct color distortion, which relaxes the need for paired underwater images for training and does not require knowledge of where the underwater images were taken. Inspired by Cycle-Consistent Adversarial Networks, we design a multi-term loss function including an adversarial loss, a cycle consistency loss, and an SSIM (Structural Similarity Index Measure) loss, which keeps the content and structure of the corrected result the same as the input, but with the color as if the image had been taken without the water. Experiments on underwater images captured under diverse scenes show that our method produces visually pleasing results and even outperforms state-of-the-art methods. Besides, our method can improve the performance of vision tasks.

  18. Monte Carlo errors with less errors

    NASA Astrophysics Data System (ADS)

    Wolff, Ulli; Alpha Collaboration

    2004-01-01

    We explain in detail how to estimate mean values and assess statistical errors for arbitrary functions of elementary observables in Monte Carlo simulations. The method is to estimate and sum the relevant autocorrelation functions, which is argued to produce more certain error estimates than binning techniques and hence to help toward a better exploitation of expensive simulations. An effective integrated autocorrelation time is computed which is suitable for benchmarking the efficiency of simulation algorithms with regard to specific observables of interest. A Matlab code that implements the method is offered for download. It can also combine independent runs (replica), allowing one to judge their consistency.
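The central quantity, the integrated autocorrelation time, can be estimated by summing the normalized autocorrelation function up to a summation window. The sketch below uses a fixed window for simplicity, whereas the paper advocates an automatic windowing procedure; the helper name and toy data are assumptions for the example.

```python
import random

def integrated_autocorr_time(x, window):
    """Estimate tau_int = 1/2 + sum_{t=1}^{W} rho(t) by summing the
    normalized autocorrelation function up to a fixed window W."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n
    tau = 0.5
    for t in range(1, window + 1):
        ct = sum((x[i] - mean) * (x[i + t] - mean) for i in range(n - t)) / (n - t)
        tau += ct / c0
    return tau

random.seed(0)
iid = [random.gauss(0, 1) for _ in range(5000)]
tau = integrated_autocorr_time(iid, window=20)  # near 0.5 for independent samples
# The error of the mean then follows as sqrt(2 * tau * c0 / N),
# reducing to the naive estimate when tau = 1/2.
err = (2 * tau * (sum((v - sum(iid) / len(iid)) ** 2 for v in iid) / len(iid)) / len(iid)) ** 0.5
```

For correlated chains tau exceeds 1/2, and the factor 2*tau directly quantifies how much the naive error bar must be inflated.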

  19. Lattice Cleaving: A Multimaterial Tetrahedral Meshing Algorithm with Guarantees

    PubMed Central

    Bronson, Jonathan; Levine, Joshua A.; Whitaker, Ross

    2014-01-01

    We introduce a new algorithm for generating tetrahedral meshes that conform to physical boundaries in volumetric domains consisting of multiple materials. The proposed method allows for an arbitrary number of materials, produces high-quality tetrahedral meshes with upper and lower bounds on dihedral angles, and guarantees geometric fidelity. Moreover, the method is combinatoric so its implementation enables rapid mesh construction. These meshes are structured in a way that also allows grading, to reduce element counts in regions of homogeneity. Additionally, we provide proofs showing that both element quality and geometric fidelity are bounded using this approach. PMID:24356365

  20. METHOD OF ISOTOPE CONCENTRATION

    DOEpatents

    Taylor, T.I.; Spindel, W.

    1960-02-01

    A method of concentrating N/sup 15/ in a liquid is described. Gaseous nitric oxide and at least one liquid selected from the group consisting of the aqueous oxyacids and oxides of nitrogen, wherein the atomic ratio of oxygen to nitrogen is greater than unity, are brought into intimate contact to cause an enrichment of the liquid and a depletion of the gas in N/sup 15/. The liquid is, thereafter, reacted with sulfur dioxide to produce a gas containing nitric oxide. The gas containing nitric oxide is then continuously passed in countercurrent contact with the liquid to cause further enrichment of the liquid.

  1. Structure of chitosan gels mineralized by sorption

    NASA Astrophysics Data System (ADS)

    Modrzejewska, Z.; Skwarczyńska, A.; Douglas, T. E. L.; Biniaś, D.; Maniukiewicz, W.; Sielski, J.

    2015-10-01

    The paper presents structural studies of mineralized chitosan hydrogels. The hydrogels were produced using sodium beta-glycerophosphate (Na-β-GP) as a neutralizing agent. Mineralization was performed by a "post-loading" method, which consisted of sorption of Ca ions into the gel structure. In order to obtain, within the gel structure, compounds similar to the hydroxyapatites naturally present in bone tissue, the gels were modified after sorption in pH 7 buffer and sodium hydrogen phosphate. To determine the structural properties of the gels, the following methods were used: Fourier-transform infrared spectroscopy (FTIR), X-ray diffractometry (XRD) and scanning electron microscopy (SEM).

  2. A Predictive Model for Toxicity Effects Assessment of Biotransformed Hepatic Drugs Using Iterative Sampling Method.

    PubMed

    Tharwat, Alaa; Moemen, Yasmine S; Hassanien, Aboul Ella

    2016-12-09

    Measuring toxicity is one of the main steps in drug development. Hence, there is a high demand for computational models to predict the toxicity effects of potential drugs. In this study, we used a dataset that consists of four toxicity effects: mutagenic, tumorigenic, irritant and reproductive effects. The proposed model consists of three phases. In the first phase, rough set-based methods are used to select the most discriminative features for reducing the classification time and improving the classification performance. Due to the imbalanced class distribution, in the second phase, different sampling methods such as Random Under-Sampling, Random Over-Sampling and the Synthetic Minority Oversampling Technique are used to address the problem of imbalanced datasets. The ITerative Sampling (ITS) method is proposed to avoid the limitations of those methods. The ITS method has two steps. The first step (the sampling step) iteratively modifies the prior distribution of the minority and majority classes. In the second step, a data cleaning method is used to remove the overlap that is produced by the first step. In the third phase, a Bagging classifier is used to classify an unknown drug as toxic or non-toxic. The experimental results showed that the proposed model performed well in classifying the unknown samples according to all toxic effects in the imbalanced datasets.
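The two-step structure of ITS (resample, then clean the overlap) can be illustrated schematically. The code below is a toy sketch combining plain random over-sampling with a crude Tomek-link-style cleaning step; it is not the authors' exact algorithm, and the helper name, seed, and toy points are assumptions for the example.

```python
import math
import random

def oversample_then_clean(majority, minority, seed=0):
    """Two-step illustration of resample-then-clean:
    (1) randomly over-sample the minority class up to the majority size;
    (2) drop majority points whose nearest neighbour is a minority point
        (a crude overlap-cleaning step in the spirit of Tomek links)."""
    rng = random.Random(seed)
    upsampled = minority + [rng.choice(minority)
                            for _ in range(len(majority) - len(minority))]

    def nearest_is_minority(p):
        others = [(math.dist(p, q), 1) for q in upsampled] + \
                 [(math.dist(p, q), 0) for q in majority if q != p]
        return min(others)[1] == 1

    cleaned_majority = [p for p in majority if not nearest_is_minority(p)]
    return cleaned_majority, upsampled

maj  = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0)]   # majority class (toy)
mino = [(1.05, 1.0)]                           # minority class (toy)
clean_maj, up_min = oversample_then_clean(maj, mino)
```

Here the majority point sitting inside the minority region is removed by the cleaning step, while the two well-separated majority points are kept.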

  3. Measuring air-water interfacial area for soils using the mass balance surfactant-tracer method.

    PubMed

    Araujo, Juliana B; Mainhagu, Jon; Brusseau, Mark L

    2015-09-01

    There are several methods for conducting interfacial partitioning tracer tests to measure air-water interfacial area in porous media. One such approach is the mass balance surfactant tracer method. An advantage of the mass-balance method compared to other tracer-based methods is that a single test can produce multiple interfacial area measurements over a wide range of water saturations. The mass-balance method has been used to date only for glass beads or treated quartz sand. The purpose of this research is to investigate the effectiveness and implementability of the mass-balance method for application to more complex porous media. The results indicate that interfacial areas measured with the mass-balance method are consistent with values obtained with the miscible-displacement method. This includes results for a soil, for which solid-phase adsorption was a significant component of total tracer retention. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. A facility for investigation of multiple hadrons at cosmic-ray energies

    NASA Technical Reports Server (NTRS)

    Valtonen, E.; Torsti, J. J.; Arvela, H.; Lumme, M.; Nieminen, M.; Peltonen, J.; Vainikka, E.

    1985-01-01

    An experimental arrangement for studying multiple hadrons produced in high-energy hadron-nucleus interactions is under construction at the University of Turku. The method of investigation is based on the detection of hadrons arriving simultaneously at sea level over an area of a few square meters. The apparatus consists of a hadron spectrometer with position-sensitive detectors in connection with a small air shower array. The position resolution using streamer tube detectors will be about 10 mm. Energy spectra of hadrons or groups of simultaneous hadrons produced at primary energies below 10 to the 16th power eV can be measured in the energy range 1 to 2000 GeV.

  5. Guidelines for Developing and Reporting Machine Learning Predictive Models in Biomedical Research: A Multidisciplinary View

    PubMed Central

    2016-01-01

    Background As more and more researchers are turning to big data for new opportunities of biomedical discoveries, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. Objective To attain a set of guidelines on the use of machine learning predictive models within clinical settings to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. Methods A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians were interviewed, using an iterative process in accordance with the Delphi method. Results The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. Conclusions A set of guidelines was generated to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community. PMID:27986644

  6. Maltaricin CPN, a new class IIa bacteriocin produced by Carnobacterium maltaromaticum CPN isolated from mould-ripened cheese.

    PubMed

    Hammi, I; Delalande, F; Belkhou, R; Marchioni, E; Cianferani, S; Ennahar, S

    2016-11-01

    The purpose of this study was to isolate, characterize and determine the structure and the antibacterial activities of a bacteriocin produced by Carnobacterium maltaromaticum CPN, a strain isolated from unpasteurized milk Camembert cheese. This bacteriocin, termed maltaricin CPN, was produced at higher amounts in MRS broth at temperatures between 15°C and 25°C. It was purified to homogeneity from culture supernatant by using a simple method consisting of cation-exchange and reversed-phase chromatographies. Mass spectrometry showed that maltaricin was a 4427·29 Da bacteriocin. Its amino acid sequence was determined by Edman degradation which showed that it had close similarity with bacteriocins of the class IIa. Maltaricin CPN consisted in fact of 44 unmodified amino acids including two cysteine residues at positions 9 and 14 linked by a disulphide bond. The antimicrobial activity of maltaricin CPN covered a range of bacteria, with strong activity against many species of Gram-positive bacteria, especially the food-borne pathogen Listeria monocytogenes, but no activity against Gram-negative ones. In the studied conditions, C. maltaromaticum CPN produced a new class IIa bacteriocin with strong anti-Listeria activity. The study covers the purification and the structural characterization of a new bacteriocin produced by strain C. maltaromaticum CPN isolated from Camembert cheese. Its activity against strains of L. monocytogenes and higher production rates at relatively low temperatures show potential technological applications to improve the safety of refrigerated food. © 2016 The Society for Applied Microbiology.

  7. Calculation of the neutron diffusion equation by using Homotopy Perturbation Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koklu, H., E-mail: koklu@gantep.edu.tr; Ozer, O.; Ersoy, A.

    The distribution of the neutrons in a nuclear fuel element in the nuclear reactor core can be calculated by the neutron diffusion theory. It is the basic and the simplest approximation for the neutron flux function in the reactor core. In this study, the neutron flux function is obtained by the Homotopy Perturbation Method (HPM), a new and convenient method of recent years. The one-group time-independent neutron diffusion equation is examined for the most commonly solved reactor core geometries of spherical, cubic and cylindrical shapes, in the frame of the HPM. It is observed that the HPM produces excellent results consistent with the existing literature.

  8. Male production in stingless bees: variable outcomes of queen-worker conflict.

    PubMed

    Tóth, Eva; Strassmann, Joan E; Nogueira-Neto, Paulo; Imperatriz-Fonseca, Vera L; Queller, David C

    2002-12-01

    The genetic structure of social insect colonies is predicted to affect the balance between cooperation and conflict. Stingless bees are of special interest in this respect because they are singly mated relatives of the multiply mated honeybees. Multiple mating is predicted to lead to workers policing each other's male production, with the result that virtually all males are produced by the queen, and this prediction is borne out in honeybees. Single mating by the queen, as in stingless bees, causes workers to be more related to each other's sons than to the queen's sons, so they should not police each other. We used microsatellite markers to confirm single mating in eight species of stingless bees and then tested the prediction that workers would produce males. Using a likelihood method, we found some worker male production in six of the eight species, although queens produced some males in all of them. Thus the predicted contrast with honeybees is observed, but not perfectly, perhaps because workers lack complete control or because of the costs of conflict. The data are consistent with the view that there is ongoing conflict over male production. Our method of estimating worker male production appears to be more accurate than exclusion, which sometimes underestimates the proportion of males that are worker produced.

  9. Evidence of validity of the Stress-Producing Life Events (SPLE) instrument.

    PubMed

    Rizzini, Marta; Santos, Alcione Miranda Dos; Silva, Antônio Augusto Moura da

    2018-01-01

    OBJECTIVE Evaluate the construct validity of a list of eight Stressful Life Events in pregnant women. METHODS A cross-sectional study was conducted with 1,446 pregnant women in São Luís, MA, and 1,364 pregnant women in Ribeirão Preto, SP (BRISA cohort), from February 2010 to June 2011. In the exploratory factor analysis, promax oblique rotation was used, and internal consistency was assessed using composite reliability. Construct validity was determined by means of confirmatory factor analysis with the weighted least squares estimation method adjusted by the mean and variance. RESULTS The model with the best fit in the exploratory analysis was the one that retained three factors with a cumulative variance of 61.1%. The one-factor model did not obtain a good fit in either sample in the confirmatory analysis. The three-factor model, called Stress-Producing Life Events, presented a good fit (RMSEA < 0.05; CFI/TLI > 0.90) for both samples. CONCLUSIONS The Stress-Producing Life Events constitute a second-order construct with three dimensions related to health, personal and financial aspects, and violence. This study found evidence that confirms the construct validity of a list of stressor events, entitled the Stress-Producing Life Events Inventory.

  10. Customization of the acoustic field produced by a piezoelectric array through interelement delays

    PubMed Central

    Chitnis, Parag V.; Barbone, Paul E.; Cleveland, Robin O.

    2008-01-01

    A method for producing a prescribed acoustic pressure field from a piezoelectric array was investigated. The array consisted of 170 elements placed on the inner surface of a 15 cm radius spherical cap. Each element was independently driven by using individual pulsers each capable of generating 1.2 kV. Acoustic field customization was achieved by independently controlling the time when each element was excited. The set of time delays necessary to produce a particular acoustic field was determined by using an optimization scheme. The acoustic field at the focal plane was simulated by using the angular spectrum method, and the optimization searched for the time delays that minimized the least squared difference between the magnitudes of the simulated and desired pressure fields. The acoustic field was shaped in two different ways: the −6 dB focal width was increased to different desired widths and the ring-shaped pressure distributions of various prescribed diameters were produced. For both cases, the set of delays resulting from the respective optimization schemes were confirmed to yield the desired pressure distributions by using simulations and measurements. The simulations, however, predicted peak positive pressures roughly half those obtained from the measurements, which was attributed to the exclusion of nonlinearity in the simulations. PMID:18537369
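The simplest special case of choosing interelement delays is geometric focusing, where each element is fired so that all wavefronts arrive at the focal point simultaneously; the optimization described in the abstract generalizes this to arbitrary prescribed fields. A minimal sketch follows, in which the element positions, focus location, and sound speed are assumed illustrative values.

```python
import math

def focusing_delays(elements, focus, c=1482.0):
    """Per-element firing delays that make all wavefronts arrive at the
    focal point at the same time: the farthest element fires first
    (zero delay). c is an assumed speed of sound in water (m/s)."""
    dists = [math.dist(e, focus) for e in elements]
    d_max = max(dists)
    return [(d_max - d) / c for d in dists]

# Toy 3-element aperture (metres), focus 0.15 m away on the array axis
elems = [(-0.05, 0.0), (0.0, 0.0), (0.05, 0.0)]
delays = focusing_delays(elems, (0.0, 0.15))
# The central element is closest to the focus, so it fires last (largest delay)
```

Widening the focal spot or producing ring-shaped distributions, as in the paper, amounts to searching over such delay sets so that the simulated field matches the desired one.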

  11. The visible human male: a technical report.

    PubMed Central

    Spitzer, V; Ackerman, M J; Scherzinger, A L; Whitlock, D

    1996-01-01

    The National Library of Medicine's Visible Human Male data set consists of digital magnetic resonance (MR), computed tomography (CT), and anatomic images derived from a single male cadaver. The data set is 15 gigabytes in size and is available from the National Library of Medicine under a no-cost license agreement. The history of the Visible Human Male cadaver and the methods and technology to produce the data set are described. PMID:8653448

  12. Quality Control of Pharmaceuticals

    PubMed Central

    Levi, Leo; Walker, George C.; Pugsley, L. I.

    1964-01-01

    Quality control is an essential operation of the pharmaceutical industry. Drugs must be marketed as safe and therapeutically active formulations whose performance is consistent and predictable. New and better medicinal agents are being produced at an accelerated rate. At the same time more exacting and sophisticated analytical methods are being developed for their evaluation. Requirements governing the quality control of pharmaceuticals in accordance with the Canadian Food and Drugs Act are cited and discussed. PMID:14199105

  13. Process And Apparatus To Accomplish Autothermal Or Steam Reforming Via A Reciprocating Compression Device

    DOEpatents

    Lyons, K. David; James, Robert; Berry, David A.; Gardner, Todd

    2004-09-21

    The invention provides a method and apparatus for producing a synthesis gas from a variety of hydrocarbons. The apparatus consists of a semi-batch, non-constant-volume reactor that generates a synthesis gas. It feeds mixtures of air, steam, and hydrocarbons into a cylinder, where work is performed on the fluid by a piston to adiabatically raise its temperature without heat transfer from an external source.
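The adiabatic temperature rise from piston compression can be estimated with the ideal-gas relation T2 = T1 * r**(gamma - 1), where r = V1/V2 is the compression ratio. The numbers below are illustrative assumptions, not values from the patent.

```python
def adiabatic_temperature(T1, compression_ratio, gamma=1.4):
    """Ideal-gas temperature after reversible adiabatic compression:
    T2 = T1 * r**(gamma - 1), with r = V1/V2."""
    return T1 * compression_ratio ** (gamma - 1)

# Air-like mixture at 300 K compressed 10:1 (gamma ~ 1.4, assumed)
T2 = adiabatic_temperature(300.0, 10.0)  # roughly 754 K
```

This is why mechanical compression alone can bring the charge to reforming temperatures without an external heat source.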

  14. An EBSD Investigation of Ultrafine-Grain Titanium for Biomedical Applications

    DTIC Science & Technology

    2015-09-21

    angular pressing (ECAP) using a Conform scheme followed by rod drawing. The microstructure was found to be bimodal, consisting of relatively coarse...produced for medical implants. The UFG material was obtained by equal-channel angular pressing (ECAP) using a Conform scheme followed by rod drawing...1–6]. The method is based on severe plastic deformation (SPD) and typically includes warm equal-channel angular pressing (ECAP) followed by either cold

  15. The Effect of Pinyin Input Experience on the Link Between Semantic and Phonology of Chinese Character in Digital Writing.

    PubMed

    Chen, Jingjun; Luo, Rong; Liu, Huashan

    2017-08-01

    With the development of ICT, digital writing is becoming much more common in everyday life. Unlike English, where words are typed letter by letter, Chinese characters are usually entered by typing phonetic alphabets (Pinyin) and then selecting the intended glyph from candidates offered by the input-method software. Because this process does not require users to produce the orthographic spelling themselves, it differs from the traditional model of written language production based on handwriting. Much of the research in this domain has found that using the Pinyin input method is beneficial to Chinese character recognition, but only a small part has explored the effects of an individual's Pinyin input experience on the Chinese character production process. We ask whether using a Pinyin input method strengthens the semantic-phonology linkage or the semantic-orthography linkage in the Chinese character mental lexicon. Recording the RT and accuracy of participants completing semantic-syllable and semantic-glyph consistency judgments, we found that the accuracy of semantic-syllable consistency judgments in the high Pinyin-input-experience group was higher than in the low-experience group, and RT showed the reverse pattern. There were no significant differences on semantic-glyph consistency judgments between the two groups. We conclude that using the Pinyin input method in Chinese digital writing can strengthen the semantic-phonology linkage without weakening the semantic-orthography linkage in the mental lexicon, which means that the Pinyin input method is beneficial to lexical processing in Chinese cognition.

  16. Example-Based Image Colorization Using Locality Consistent Sparse Representation.

    PubMed

    Bo Li; Fuchen Zhao; Zhuo Su; Xiangguo Liang; Yu-Kun Lai; Rosin, Paul L

    2017-11-01

    Image colorization aims to produce a natural looking color image from a given gray-scale image, which remains a challenging problem. In this paper, we propose a novel example-based image colorization method exploiting a new locality consistent sparse representation. Given a single reference color image, our method automatically colorizes the target gray-scale image by sparse pursuit. For efficiency and robustness, our method operates at the superpixel level. We extract low-level intensity features, mid-level texture features, and high-level semantic features for each superpixel, which are then concatenated to form its descriptor. The collection of feature vectors for all the superpixels from the reference image composes the dictionary. We formulate colorization of target superpixels as a dictionary-based sparse reconstruction problem. Inspired by the observation that superpixels with similar spatial location and/or feature representation are likely to match spatially close regions from the reference image, we further introduce a locality promoting regularization term into the energy formulation, which substantially improves the matching consistency and subsequent colorization results. Target superpixels are colorized based on the chrominance information from the dominant reference superpixels. Finally, to further improve coherence while preserving sharpness, we develop a new edge-preserving filter for chrominance channels with the guidance from the target gray-scale image. To the best of our knowledge, this is the first work on sparse pursuit image colorization from single reference images. Experimental results demonstrate that our colorization method outperforms the state-of-the-art methods, both visually and quantitatively using a user study.

  17. Experimental Study of the Shock Waves Produced by Condenser Discharge in a Gas Tube (thesis); ETUDE EXPERIMENTALE DES ONDES DE CHOC PRODUITES PAR DECHARGES D'UN CONDENSATEUR DANS UN TUBE A GAZ (thesis)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Der Agobian, R.

    1964-10-31

    The shock waves produced by condenser discharge in a gas tube were investigated. The study was limited to wave velocities less than five times the speed of sound, propagated in gas at low pressure (several mm Hg). A method was designed and perfected for the detection of shock waves that are insufficiently rapid to produce gas ionization. This method consisted of the creation of an autonomous plasma, before the arrival of the wave, which was then modified by the wave passage. Two methods were used for the detection of phenomena accompanying the passage of the shock waves: an optical method and a radioelectric method. The qualitative study of the modifications produced on the wave passage showed the remarkable correlation existing between the results obtained by the two methods. The experimental results on the propagation laws for shock waves in a low-diameter tube agreed with theory. The variations of the coefficient of recombination were determined as a function of the electron temperature, and the results were in good agreement with the Bates theory. It was shown that the electron gas of the plasma had the same increase of density as a neutral gas during the passage of a shock wave. The variations of the frequency of electron collisions on passage of the shock wave could be explained by considering electron-ion collisions with respect to electron-atom collisions. (J.S.R.)

  18. Achieving Consistent Multiple Daily Low-Dose Bacillus anthracis Spore Inhalation Exposures in the Rabbit Model

    PubMed Central

    Barnewall, Roy E.; Comer, Jason E.; Miller, Brian D.; Gutting, Bradford W.; Wolfe, Daniel N.; Director-Myska, Alison E.; Nichols, Tonya L.; Taft, Sarah C.

    2012-01-01

    Repeated low-level exposures to biological agents could occur before or after the remediation of an environmental release. This is especially true for persistent agents such as B. anthracis spores, the causative agent of anthrax. Studies were conducted to examine aerosol methods needed for consistent daily low aerosol concentrations to deliver a low dose (less than 10^6 colony forming units (CFU)) of B. anthracis spores, and included a pilot feasibility characterization study, an acute exposure study, and a multiple 15-day exposure study. This manuscript focuses on the state-of-the-science aerosol methodologies used to generate and aerosolize consistent daily low aerosol concentrations and resultant low inhalation doses to rabbits. The pilot feasibility characterization study determined that the aerosol system was consistent and capable of producing very low aerosol concentrations. In the acute, single-day exposure experiment, targeted inhaled doses of 1 × 10^2, 1 × 10^3, 1 × 10^4, and 1 × 10^5 CFU were used. In the multiple daily exposure experiment, rabbits were exposed on multiple days to targeted inhaled doses of 1 × 10^2, 1 × 10^3, and 1 × 10^4 CFU. In all studies, targeted inhaled doses remained consistent from rabbit to rabbit and day to day. The aerosol system produced aerosolized spores within the optimal mass median aerodynamic diameter particle size range to reach deep lung alveoli. Consistency of the inhaled dose was aided by monitoring and recording respiratory parameters during the exposure with real-time plethysmography. Overall, the presented results show that the animal aerosol system was stable and highly reproducible between different studies and over multiple exposure days. PMID:22919662
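
    The role plethysmography plays in dose consistency comes down to simple bookkeeping: the inhaled dose is the aerosol concentration multiplied by the volume of air actually breathed during the exposure. The numbers below are invented placeholders, not values from the study.

```python
# Standard inhaled-dose relation for aerosol exposures (not the
# paper's exact protocol): dose = concentration x breathed volume.
aerosol_conc_cfu_per_l = 2.5      # CFU per litre of chamber air (assumed)
minute_volume_l = 1.2             # respiratory minute volume from plethysmography (assumed)
exposure_min = 30                 # exposure duration in minutes (assumed)

inhaled_dose_cfu = aerosol_conc_cfu_per_l * minute_volume_l * exposure_min
print(inhaled_dose_cfu)           # 90.0 CFU for these placeholder values
```

    Measuring the minute volume per animal in real time is what lets the targeted dose stay consistent from rabbit to rabbit despite breathing differences.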

  19. Landsat Image Map Production Methods at the U. S. Geological Survey

    USGS Publications Warehouse

    Kidwell, R.D.; Binnie, D.R.; Martin, S.

    1987-01-01

    To maintain consistently high quality in satellite image map production, the U. S. Geological Survey (USGS) has developed standard procedures for the photographic and digital production of Landsat image mosaics, and for lithographic printing of multispectral imagery. This paper gives a brief review of the photographic, digital, and lithographic procedures currently in use for producing image maps from Landsat data. It is shown that consistency in the printing of image maps is achieved by standardizing the materials and procedures that affect the image detail and color balance of the final product. Densitometric standards are established by printing control targets using the pressplates, inks, pre-press proofs, and paper to be used for printing.

  20. Process for the regeneration of metallic catalysts

    DOEpatents

    Katzer, James R.; Windawi, Hassan

    1981-01-01

    A method for the regeneration of metallic hydrogenation catalysts from the class consisting of Ni, Rh, Pd, Ir, Pt and Ru poisoned with sulfur, with or without accompanying carbon deposition, comprising subjecting the catalyst to exposure to oxygen gas in a concentration of about 1-10 ppm. intermixed with an inert gas of the group consisting of He, A, Xe, Kr, N.sub.2 and air substantially free of oxygen to an extent such that the total oxygen molecular throughput is in the range of about 10 to 20 times that of the hydrogen sulfide molecular exposure producing the catalyst poisoning while maintaining the temperature in the range of about 300.degree. to 500.degree. C.

  1. Adaptive Self Tuning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, Matthew; Draelos, Timothy; Knox, Hunter

    2017-05-02

    The AST software includes numeric methods to 1) adjust STA/LTA signal detector trigger level (TL) values and 2) filter detections for a network of sensors. AST adapts TL values to the current state of the environment by leveraging cooperation within a neighborhood of sensors. The key metric that guides the dynamic tuning is consistency of each sensor with its nearest neighbors: TL values are automatically adjusted on a per-station basis to be more or less sensitive to produce consistent agreement of detections in its neighborhood. The AST algorithm adapts in near real-time to changing conditions in an attempt to automatically self-tune a signal detector to identify (detect) only signals from events of interest.
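
    The neighborhood-consistency idea can be sketched in a few lines. This is a toy illustration of the concept, not the AST implementation: the centered STA/LTA windows, the all-to-all "neighborhood", and the fixed adjustment step are all assumptions.

```python
import numpy as np

def sta_lta(x, ns=5, nl=50):
    """Classic STA/LTA detection ratio via moving averages of |x|."""
    ax = np.abs(x)
    sta = np.convolve(ax, np.ones(ns) / ns, mode="same")
    lta = np.convolve(ax, np.ones(nl) / nl, mode="same")
    return sta / np.maximum(lta, 1e-12)

def tune_trigger_levels(detected, tl, step=0.1):
    """One adaptation step: nudge each station's trigger level (TL)
    toward agreement with its neighbors (here: all other stations).

    A station that detects alone becomes less sensitive (TL up); one
    that misses a detection its neighbors saw becomes more sensitive.
    """
    tl = tl.copy()
    for i, d in enumerate(detected):
        frac = np.delete(detected, i).mean()   # neighborhood agreement
        if d and frac < 0.5:
            tl[i] += step                      # lone detector: desensitize
        elif not d and frac >= 0.5:
            tl[i] -= step                      # lone misser: sensitize
    return tl

sig = np.concatenate([np.zeros(100), np.ones(10), np.zeros(100)])
print(sta_lta(sig).max() > 2.0)                # the onset stands out

tl = np.array([2.0, 2.0, 2.0, 2.0])
detected = np.array([True, False, False, False])   # station 0 triggers alone
tl = tune_trigger_levels(detected, tl)
print(tl)                                      # station 0's TL raised, others unchanged
```

    Iterating this step drives each station's TL toward settings whose detections agree with the neighborhood, which is the consistency metric the abstract describes.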

  2. Asynchronous oscillations of rigid rods drive viscous fluid to swirl

    NASA Astrophysics Data System (ADS)

    Hayashi, Rintaro; Takagi, Daisuke

    2017-12-01

    We present a minimal system for generating flow at low Reynolds number by oscillating a pair of rigid rods in silicone oil. Experiments show that oscillating them in phase produces no net flow, but a phase difference alone can generate rich flow fields. Tracer particles follow complex trajectory patterns consisting of small orbital movements every cycle and then drifting or swirling in larger regions after many cycles. Observations are consistent with simulations performed using the method of regularized Stokeslets, which reveal complex three-dimensional flow structures emerging from simple oscillatory actuation. Our findings reveal the basic underlying flow structure around oscillatory protrusions such as hairs and legs as commonly featured on living and nonliving bodies.
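
    The method of regularized Stokeslets evaluates Stokes flow driven by smoothed point forces. Below is a minimal sketch of a Cortez-type regularized Stokeslet kernel; the regularization radius and test points are illustrative, and this is not the authors' simulation code.

```python
import numpy as np

def reg_stokeslet_velocity(x, f, mu=1.0, eps=0.01):
    """Velocity at points x (N x 3) due to a regularized Stokeslet of
    strength f at the origin (Cortez-type blob, regularization radius
    eps). Sketch of the kernel behind the method of regularized
    Stokeslets, not the simulation used in the paper."""
    r2 = np.sum(x**2, axis=1)
    d = (r2 + eps**2) ** 1.5
    fx = x @ f                                        # f . x at each point
    u = np.outer((r2 + 2 * eps**2) / d, f) + x * (fx / d)[:, None]
    return u / (8 * np.pi * mu)

f = np.array([1.0, 0.0, 0.0])            # point force along +x
pts = np.array([[1.0, 0.0, 0.0],         # on the force axis
                [0.0, 1.0, 0.0]])        # perpendicular, same distance
u = reg_stokeslet_velocity(pts, f)
# Far from the blob, the speed on the force axis is about twice the
# speed at the same distance perpendicular to it, as for a Stokeslet.
print(u[0, 0] / u[1, 0])
```

    Summing such kernels over force points distributed along each rod reproduces the slow viscous flow fields that the simulations compare against the experiments.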

  3. Preparation and composition of superconducting copper oxides based on Ga-O layers

    DOEpatents

    Dabrowski, Bogdan; Vaughey, J. T.; Poeppelmeier, Kenneth R.

    1994-01-01

    A high temperature superconducting material with the general formula GaSr.sub.2 Ln.sub.1-x M.sub.x Cu.sub.2 O.sub.7.+-.w wherein Ln is selected from the group consisting of La, Ce, Pr, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb and Y and M is selected from the group consisting of Ca and Sr, 0.2.ltoreq.x.ltoreq.0.4 and w is a small fraction of one. A method of preparing this high temperature superconducting material is provided which includes heating and cooling a mixture to produce a crystalline material which is subsequently fired, ground and annealed at high pressure and temperature in oxygen to establish superconductivity.

  4. The Chameleon Effect: Characterization Challenges Due to the Variability of Nanoparticles and Their Surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baer, Donald R.

    Nanoparticles in a variety of forms are of increasing importance in fundamental research, technological and medical applications, and environmental or toxicology studies. Physical and chemical drivers that lead to multiple types of particle instabilities complicate both the ability to produce and consistently deliver well defined particles and their appropriate characterization, frequently leading to inconsistencies and conflicts in the published literature. This perspective suggests that provenance information, beyond that often recorded or reported, and application of a set of core characterization methods, including a surface sensitive technique, consistently applied at critical times can serve as tools in the effort to minimize reproducibility issues.

  5. Hybrid transport and diffusion modeling using electron thermal transport Monte Carlo SNB in DRACO

    NASA Astrophysics Data System (ADS)

    Chenhall, Jeffrey; Moses, Gregory

    2017-10-01

    The iSNB (implicit Schurtz Nicolai Busquet) multigroup diffusion electron thermal transport method is adapted into an Electron Thermal Transport Monte Carlo (ETTMC) transport method to better model angular and long mean free path non-local effects. Previously, the ETTMC model had been implemented in the 2D DRACO multiphysics code and found to produce consistent results with the iSNB method. Current work is focused on a hybridization of the computationally slower but higher fidelity ETTMC transport method with the computationally faster iSNB diffusion method in order to maximize computational efficiency. Furthermore, effects on the energy distribution of the heat flux divergence are studied. Work to date on the hybrid method will be presented. This work was supported by Sandia National Laboratories and the Univ. of Rochester Laboratory for Laser Energetics.

  6. Production and characterization of europium doped sol-gel yttrium oxide

    NASA Astrophysics Data System (ADS)

    Krebs, J. K.; Hobson, Christopher; Silversmith, Ann

    2004-03-01

    Sol-gel produced materials have recently gained attention for their use in producing nanoscale dielectric materials for confinement studies. Lanthanide impurities in the dielectric enable experimenters to optically probe the structure and dynamic properties of the nanoparticle hosts. We report on an alkoxide sol-gel production method used to produce trivalent europium doped yttrium oxide. Our process follows the standard hydrolysis of an alkoxide precursor with water containing the lanthanide ions. The sol is then aged and calcined at 800 °C to produce the powder samples. X-ray diffraction confirms the structure of the powder is that of Y_2O_3. The emission and excitation spectra of the europium impurities are consistent with those of europium doped single crystal yttrium oxide, where it is known that the europium ions substitute for yttrium in the lattice. We therefore conclude that the sol-gel process enables the incorporation of europium ions into the yttrium oxide structure at temperatures far below the melting temperature. The results of preliminary dynamics measurements will also be discussed.

  7. Fabrication of stainless steel clad tubing. [gas pressure bonding

    NASA Technical Reports Server (NTRS)

    Kovach, C. W.

    1978-01-01

    The feasibility of producing stainless steel clad carbon steel tubing by a gas pressure bonding process was evaluated. Such a tube product could provide substantial chromium savings over monolithic stainless tubing in the event of a serious chromium shortage. The process consists of the initial assembly of three component tubesets from conventionally produced tubing, the formation of a strong metallurgical bond between the three components by gas pressure bonding, and conventional cold draw and anneal processing to final size. The quality of the tubes produced was excellent from the standpoint of bond strength, mechanical, and forming properties. The only significant quality problem encountered was carburization of the stainless clad by the carbon steel core which can be overcome by further refinement through at least three different approaches. The estimated cost of clad tubing produced by this process is greater than that for monolithic stainless tubing, but not so high as to make the process impractical as a chromium conservation method.

  8. Kidney cell electrophoresis in space flight: Rationale, methods, results and flow cytometry applications

    NASA Technical Reports Server (NTRS)

    Todd, P.; Morrison, Dennis R.; Barlow, Grant H.; Lewis, Marian L.; Lanham, J. W.; Cleveland, C.; Williams, K.; Kunze, M. E.; Goolsby, C. L.

    1988-01-01

    Cultures of human embryonic kidney cells consistently contain an electrophoretically separable subpopulation of cells that produce high levels of urokinase and have an electrophoretic mobility about 85 percent as high as that of the most mobile human embryonic kidney cells. This subpopulation is rich in large epithelioid cells that have relatively little internal structure. When resolution and throughput are adequate, free fluid electrophoresis can be used to isolate a broad band of low mobility cells which also produces high levels of plasminogen activators (PAs). In the course of performing this, it was discovered that all electrophoretic subpopulations of cultured human embryonic kidney cells produce some PAs and that separate subpopulations produce high quantities of different types of PA's. This information and the development of sensitive assays for this project have provided new insights into cell secretion mechanisms related to fibrinolysis. These advances would probably not have been made without the NASA program to explore fundamental questions of free fluid electrophoresis in space.

  9. GPR Imaging for Deeply Buried Objects: A Comparative Study Based on FDTD Models and Field Experiments

    NASA Technical Reports Server (NTRS)

    Tilley, Roger; Dowla, Farid; Nekoogar, Faranak; Sadjadpour, Hamid

    2012-01-01

    Conventional use of Ground Penetrating Radar (GPR) is hampered by variations in background environmental conditions, such as water content in soil, resulting in poor repeatability of results over long periods of time when the radar pulse characteristics are kept the same. Target object types might include voids, tunnels, unexploded ordnance, etc. The long-term objective of this work is to develop methods that would extend the use of GPR under various environmental and soil conditions provided an optimal set of radar parameters (such as frequency, bandwidth, and sensor configuration) are adaptively employed based on the ground conditions. Towards that objective, developing Finite Difference Time Domain (FDTD) GPR models, verified by experimental results, would allow us to develop analytical and experimental techniques to control radar parameters to obtain consistent GPR images with changing ground conditions. Reported here is an attempt at developing 2D and 3D FDTD models of buried targets verified by two different radar systems capable of operating over different soil conditions. Experimental radar data employed were from a custom-designed high-frequency (200 MHz) multi-static sensor platform capable of producing 3-D images, and a longer-wavelength (25 MHz) COTS radar (Pulse EKKO 100) capable of producing 2-D images. Our results indicate that different types of radar can produce consistent images.
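
    The core of any FDTD model is the leapfrog update of electric and magnetic fields on a staggered grid. The 1D sketch below illustrates that update for a pulse entering a "soil" half-space; the grid size, source, and permittivity are placeholders rather than the study's actual models, which additionally need absorbing boundaries and antenna descriptions.

```python
import numpy as np

# Minimal 1D FDTD (Yee leapfrog) for a wave hitting a lossless
# dielectric half-space. The half-cell-staggered E and H updates are
# the core of any FDTD GPR model; the ends of the grid here act as
# perfect reflectors instead of absorbing boundaries.
nz, nt = 200, 400
eps_r = np.ones(nz)
eps_r[100:] = 9.0                       # "soil" half-space, eps_r = 9 (assumed)
ez = np.zeros(nz)                       # electric field samples
hy = np.zeros(nz - 1)                   # magnetic field, staggered half a cell
c = 0.5                                 # Courant number (1D stability: <= 1)

for t in range(nt):
    hy += c * np.diff(ez)                           # update H from curl of E
    ez[1:-1] += (c / eps_r[1:-1]) * np.diff(hy)     # update E from curl of H
    ez[20] += np.exp(-((t - 30) / 8.0) ** 2)        # soft Gaussian source

print(float(np.abs(ez).max()))          # pulse has propagated and partially reflected
```

    Sweeping the source waveform and material parameters in such a model is what lets radar settings be matched to changing ground conditions before a field survey.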

  10. Integrated use of surface geophysical methods for site characterization — A case study in North Kingstown, Rhode Island

    USGS Publications Warehouse

    Johnson, Carole D.; Lane, John W.; Brandon, William C.; Williams, Christine A.P.; White, Eric A.

    2010-01-01

    A suite of complementary, non‐invasive surface geophysical methods was used to assess their utility for site characterization in a pilot investigation at a former defense site in North Kingstown, Rhode Island. The methods included frequency‐domain electromagnetics (FDEM), ground‐penetrating radar (GPR), electrical resistivity tomography (ERT), and multi‐channel analysis of surface‐wave (MASW) seismic. The results of each method were compared to each other and to drive‐point data from the site. FDEM was used as a reconnaissance method to assess buried utilities and anthropogenic structures; to identify near‐surface changes in water chemistry related to conductive leachate from road‐salt storage; and to investigate a resistive signature possibly caused by groundwater discharge. Shallow anomalies observed in the GPR and ERT data were caused by near‐surface infrastructure and were consistent with anomalies observed in the FDEM data. Several parabolic reflectors were observed in the upper part of the GPR profiles, and a fairly continuous reflector that was interpreted as bedrock could be traced across the lower part of the profiles. MASW seismic data showed a sharp break in shear wave velocity at depth, which was interpreted as the overburden/bedrock interface. The MASW profile indicates the presence of a trough in the bedrock surface in the same location where the ERT data indicate lateral variations in resistivity. Depths to bedrock interpreted from the ERT, MASW, and GPR profiles were similar and consistent with the depths of refusal identified in the direct‐push wells. The interpretations of data collected using the individual methods yielded non‐unique solutions with considerable uncertainty. Integrated interpretation of the electrical, electromagnetic, and seismic geophysical profiles produced a more consistent and unique estimation of depth to bedrock that is consistent with ground‐truth data at the site. This test case shows that using complementary techniques that measure different properties can be more effective for site characterization than a single‐method investigation.

  11. Molecular and functional assessment of multicellular cancer spheroids produced in double emulsions enabled by efficient airway resistance based selective surface treatment

    NASA Astrophysics Data System (ADS)

    Ma, Xiao; Leth Jepsen, Morten; Ivarsen, Anne Kathrine R.; Knudsen, Birgitta R.; Ho, Yi-Ping

    2017-09-01

    Multicellular spheroids have garnered significant attention as an in vitro three-dimensional cancer model which can mimic in vivo microenvironmental features. While microfluidically generated double emulsions have become a potential method to generate spheroids, challenges remain owing to the tedious procedures involved. Enabled by a novel ‘airway resistance’ based selective surface treatment, this study presents an easy and facile generation of double emulsions for the initiation and cultivation of multicellular spheroids in a scaffold-free format. Combined with our previously developed DNA nanosensors, intestinal spheroids produced in the double emulsions have shown elevated activity of an essential DNA modifying enzyme, topoisomerase I. The observed molecular and functional characteristics of spheroids produced in double emulsions are similar to those of counterparts produced by commercially available ultra-low attachment plates. However, the double emulsions excel in their improved uniformity and in the consistency of the results obtained by subsequent analysis of the spheroids. The presented technique is expected to ease the burden of producing spheroids and to promote the spheroid model for cancer or stem cell studies.

  12. House hold unit for the treatment of fluoride, iron, arsenic and microorganism contaminated drinking water.

    PubMed

    Dhadge, Vijaykumar L; Medhi, Chitta Ranjan; Changmai, Murchana; Purkait, Mihir Kumar

    2018-05-01

    A first-of-its-kind hybrid electrocoagulation-filtration prototype unit was fabricated for the treatment of drinking water contaminated with fluoride, iron, arsenic and microorganisms. The unit comprised three chambers: chamber A, consisting of an inlet for the water to be treated and an outlet for the treated water along with one block of aluminum electrodes; chamber B, consisting of a ceramic membrane filtration assembly at the bottom over a metallic support, which filters the flocs produced in chamber A; and chamber C, consisting of space to collect the treated water. Operating parameters were maintained at a current density of 625 A m^-2 and an electrode distance of 0.005 m. Contaminated drinking water containing a mixture of fluoride (10 mg L^-1), iron (25 mg L^-1), arsenic (200 μg L^-1) and microorganisms (35 CFU ml^-1) was used for the experiment. Removals of 98.74%, 95.65%, 93.2% and 100% were obtained for iron, arsenic, fluoride and microorganisms, respectively. The apparatus and method made it possible to efficiently treat contaminated drinking water to produce drinkable water as per WHO specifications. By-products obtained from the electrocoagulation bath were analyzed using SEM, EDX and XRD, and the results are explained. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Modelling the penumbra in Computed Tomography

    PubMed Central

    Kueh, Audrey; Warnett, Jason M.; Gibbons, Gregory J.; Brettschneider, Julia; Nichols, Thomas E.; Williams, Mark A.; Kendall, Wilfrid S.

    2016-01-01

    BACKGROUND: In computed tomography (CT), the spot geometry is one of the main sources of error in CT images. Since X-rays do not arise from a point source, artefacts are produced. In particular there is a penumbra effect, leading to poorly defined edges within a reconstructed volume. Penumbra models can be simulated given a fixed spot geometry and the known experimental setup. OBJECTIVE: This paper proposes to use a penumbra model, derived from Beer’s law, both to confirm spot geometry from penumbra data, and to quantify blurring in the image. METHODS: Two models for the spot geometry are considered; one consists of a single Gaussian spot, the other is a mixture model consisting of a Gaussian spot together with a larger uniform spot. RESULTS: The model consisting of a single Gaussian spot has a poor fit at the boundary. The mixture model (which adds a larger uniform spot) exhibits a much improved fit. The parameters corresponding to the uniform spot are similar across all powers, and further experiments suggest that the uniform spot produces only soft X-rays of relatively low-energy. CONCLUSIONS: Thus, the precision of radiographs can be estimated from the penumbra effect in the image. The use of a thin copper filter reduces the size of the effective penumbra. PMID:27232198
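
    The two spot models can be compared by convolving an ideal step edge with each spot's 1D profile, which yields the penumbra (edge-spread) profile. The sketch below uses invented profiles and weights, and omits the Beer's-law attenuation of the paper's full model.

```python
import numpy as np

def penumbra_profile(spot, x):
    """Penumbra (edge-spread) profile: an ideal step edge convolved
    with the 1D source-spot intensity profile (unit-normalized)."""
    spot = spot / spot.sum()
    step = (x >= 0).astype(float)
    return np.convolve(step, spot, mode="same")

x = np.linspace(-1, 1, 401)
gauss = np.exp(-x**2 / (2 * 0.05**2))          # compact Gaussian spot (assumed width)
uniform = (np.abs(x) < 0.4).astype(float)      # larger uniform spot (assumed width)
mixture = 0.8 * gauss / gauss.sum() + 0.2 * uniform / uniform.sum()

edge_g = penumbra_profile(gauss, x)            # single-Gaussian model
edge_m = penumbra_profile(mixture, x)          # Gaussian + uniform mixture model

# The broad uniform component widens the 10%-90% edge transition.
w10_90 = lambda e: x[np.argmax(e > 0.9)] - x[np.argmax(e > 0.1)]
print(w10_90(edge_g) < w10_90(edge_m))         # mixture gives the wider penumbra
```

    Fitting measured edge profiles with each model is how the paper distinguishes the single-Gaussian spot from the mixture, and the fitted spot width is what quantifies blurring in the radiograph.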

  14. Evaluation of a polymerase chain reaction-based system for detection of Salmonella enteritidis, Escherichia coli O157:H7, Listeria spp., and Listeria monocytogenes on fresh fruits and vegetables.

    PubMed

    Shearer, A E; Strapp, C M; Joerger, R D

    2001-06-01

    A polymerase chain reaction (PCR)-based detection system, BAX, was evaluated for its sensitivity in detecting Salmonella Enteritidis, Escherichia coli O157:H7, Listeria sp., and Listeria monocytogenes on fresh produce. Fifteen different types of produce (alfalfa sprouts, green peppers, parsley, white cabbage, radishes, onions, carrots, mushrooms, leaf lettuce, tomatoes, strawberries, cantaloupe, mango, apples, and oranges) were inoculated, in separate studies, with Salmonella Enteritidis, E. coli O157:H7, and L. monocytogenes down to the predicted level of 1 CFU per 25-g sample. Detection by BAX was compared to recovery of the inoculated bacteria by culture methods according to the Food and Drug Administration's (FDA) Bacteriological Analytical Manual (BAM). BAX was essentially as sensitive as the culture-based method in detecting Salmonella Enteritidis and L. monocytogenes and more sensitive than the culture-based method for the detection of E. coli O157:H7 on green pepper, carrot, radish, and sprout samples. Detection of the pathogenic bacteria in samples spiked with a predicted number of less than 10 CFU was possible for most produce samples, but both methods failed to detect L. monocytogenes on carrot samples and one of two mushroom and onion samples spiked with less than 100 CFU. Both BAX and the culture method were also unable to consistently recover low numbers of E. coli O157:H7 from alfalfa sprouts. The PCR method allowed detection of Salmonella Enteritidis, E. coli O157:H7, and L. monocytogenes at least 2 days earlier than the conventional culture methods.

  15. Characterization of Tubing from Advanced ODS alloy (FCRD-NFA1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maloy, Stuart Andrew; Aydogan, Eda; Anderoglu, Osman

    2016-09-20

    Fabrication methods are being developed and tested for producing fuel clad tubing of the advanced ODS 14YWT and FCRD-NFA1 ferritic alloys. Three fabrication methods were based on plastically deforming a machined thick-wall tube sample of the ODS alloys by pilgering, hydrostatic extrusion or drawing to decrease the outer diameter and wall thickness and increase the length of the final tube. The fourth fabrication method consisted of an additive manufacturing approach involving solid-state spray deposition (SSSD) of ball-milled and annealed powder of 14YWT for producing thin-wall tubes. Of the four fabrication methods, two were successful at producing tubing for further characterization: production of tubing by high-velocity oxy-fuel spray forming and production of tubing using high-temperature hydrostatic extrusion. The characterization described shows, through neutron diffraction, the texture produced during extrusion while maintaining the beneficial oxide dispersion. In this research, the parameters for innovative thermal spray deposition and hot extrusion processing methods have been developed to produce the final nanostructured ferritic alloy (NFA) tubes having approximately 0.5 mm wall thickness. The effect of different processing routes on texture and grain boundary characteristics has been investigated. It was found that hydrostatic extrusion results in a combination of plane strain and shear deformations which generate rolling textures of α- and γ-fibers on {001}<110> and {111}<110> together with a shear texture of ζ-fiber on {011}<211> and {011}<011>. On the other hand, multi-step plane strain deformation in cross directions leads to strong rolling textures of θ- and ε-fiber on {001}<110> together with a weak γ-fiber on {111}<112>. Even though the amount of equivalent strain is similar, shear deformation leads to much lower texture indexes compared to the plane strain deformations. Moreover, while 50% hot rolling brings about a large number of high-angle grain boundaries (HABs), 44% shear deformation results in a large number of low-angle boundaries (LABs), indicating incomplete recrystallization.

  16. An Evaluation of Two Methods for Generating Synthetic HL7 Segments Reflecting Real-World Health Information Exchange Transactions

    PubMed Central

    Mwogi, Thomas S.; Biondich, Paul G.; Grannis, Shaun J.

    2014-01-01

    Motivated by the need for readily available data for testing an open-source health information exchange platform, we developed and evaluated two methods for generating synthetic messages. The methods used HL7 version 2 messages obtained from the Indiana Network for Patient Care. Data from both methods were analyzed to assess how effectively the output reflected the original ‘real-world’ data. The Markov Chain method (MCM) used an algorithm based on a transition probability matrix, while the Music Box model (MBM) randomly selected messages of a particular trigger type from the original data to generate new messages. The MBM was faster, generated shorter messages, and exhibited less variation in message length. The MCM required more computational power and generated longer messages with more message length variability. Both methods exhibited adequate coverage, producing a high proportion of messages consistent with the original messages. Both methods yielded similar rates of valid messages. PMID:25954458
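
    The MCM can be sketched as follows. The tiny segment corpus here is invented for illustration; the real method estimates its transition probability matrix from actual Indiana Network for Patient Care messages and works over full HL7 v2 content, not just segment names.

```python
import numpy as np

# Toy sketch of the Markov Chain method (MCM): estimate a transition
# probability matrix over HL7 v2 segment types from example messages,
# then walk the chain to emit synthetic segment sequences.
corpus = [
    ["MSH", "PID", "PV1", "OBR", "OBX"],
    ["MSH", "PID", "OBR", "OBX", "OBX"],
    ["MSH", "PID", "PV1", "OBR", "OBX", "OBX"],
]
states = sorted({seg for msg in corpus for seg in msg} | {"END"})
idx = {s: i for i, s in enumerate(states)}
T = np.zeros((len(states), len(states)))
for msg in corpus:
    for a, b in zip(msg, msg[1:] + ["END"]):
        T[idx[a], idx[b]] += 1                  # count observed transitions
row = T.sum(axis=1, keepdims=True)
T = np.divide(T, row, out=np.zeros_like(T), where=row > 0)  # rows -> probabilities

def generate(rng, start="MSH", max_len=20):
    """Sample one synthetic segment sequence by walking the chain."""
    seq, s = [start], start
    while len(seq) < max_len:
        s = rng.choice(states, p=T[idx[s]])
        if s == "END":
            break
        seq.append(s)
    return seq

rng = np.random.default_rng(1)
print(generate(rng))    # always starts MSH -> PID, as in the corpus
```

    Because the chain reproduces transition statistics rather than copying whole messages, it can emit segment orderings never seen verbatim in the corpus, which is the coverage-versus-length trade-off the evaluation measures.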

  17. The learning of aquaponics practice in university

    NASA Astrophysics Data System (ADS)

    Agustina, T. W.; Rustaman, N. Y.; Riandi; Purwianingsih, W.

    2018-05-01

    This study aims to describe the performance capabilities of students using aquaponic technology and to assess the product and packaging of harvested kale. The aquaponics practice used a STREAM (Science Technology Religion Art Mathematics) approach. The method was an explanatory sequential mixed method. The research was conducted on one class of Biology Education students in the 6th semester. The sample was chosen purposively, with 49 students. The study instruments were a student worksheet, an observation sheet, performance and product assessment rubrics, an interview sheet, and field notes. The performance rubric indicators for the construction of the aquaponic technology covered the product, cultivation criteria, and the packing method for kale. The interview rubric addressed student constraints in constructing the aquaponics system. Based on the results, most students showed performance in designing the technology that was categorized as fair to good. Almost all students produced a very good kale harvest. Most of the students produced kale packaging that was categorized as fair. The implication of this research is that learning aquaponics with the STREAM approach can equip students with performance and product capabilities.

  18. REDUCING AMBIGUITY IN THE FUNCTIONAL ASSESSMENT OF PROBLEM BEHAVIOR

    PubMed Central

    Rooker, Griffin W.; DeLeon, Iser G.; Borrero, Carrie S. W.; Frank-Crawford, Michelle A.; Roscoe, Eileen M.

    2015-01-01

    Severe problem behavior (e.g., self-injury and aggression) remains among the most serious challenges for the habilitation of persons with intellectual disabilities and is a significant obstacle to community integration. The current standard of behavior analytic treatment for problem behavior in this population consists of a functional assessment and treatment model. Within that model, the first step is to assess the behavior–environment relations that give rise to and maintain problem behavior, a functional behavioral assessment. Conventional methods of assessing behavioral function include indirect, descriptive, and experimental assessments of problem behavior. Clinical investigators have produced a rich literature demonstrating the relative effectiveness for each method, but in clinical practice, each can produce ambiguous or difficult-to-interpret outcomes that may impede treatment development. This paper outlines potential sources of variability in assessment outcomes and then reviews the evidence on strategies for avoiding ambiguous outcomes and/or clarifying initially ambiguous results. The end result for each assessment method is a set of best practice guidelines, given the available evidence, for conducting the initial assessment. PMID:26236145

  19. Measurement method of magnetic field for the wire suspended micro-pendulum accelerometer.

    PubMed

    Lu, Yongle; Li, Leilei; Hu, Ning; Pan, Yingjun; Ren, Chunhua

    2015-04-13

    The force producer is one of the core components of a Wire Suspended Micro-Pendulum Accelerometer, and the stability of the permanent magnet in the force producer determines the consistency of the acceleration sensor's scale factor. For an assembled accelerometer, direct measurement of the magnetic field strength is not feasible, as a magnetometer probe cannot be placed inside the micro-space of the sensor. This paper proposes an indirect measurement method for the remnant magnetization of the Micro-Pendulum Accelerometer. The measurement is based on the working principle of the accelerometer, using the current output in several different scenarios to resolve the remnant magnetization of the permanent magnet. An iterative least squares algorithm was used for the adjustment of the data, owing to the nonlinearity of the problem. The calculated remnant magnetization was 1.035 T. Compared to the true value, the error was less than 0.001 T. The proposed method provides effective theoretical guidance for measuring the magnetic field of the Wire Suspended Micro-Pendulum Accelerometer, correcting the scale factor and temperature influence coefficients, etc.
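
    The iterative least squares adjustment can be illustrated with a generic Gauss-Newton solver. The saturation model and all values below are placeholders, not the accelerometer's actual current/magnetization equations; only the 1.035 T figure is borrowed from the abstract as a fitting target.

```python
import numpy as np

def gauss_newton(f, jac, theta, x, y, n_iter=20):
    """Generic Gauss-Newton iterative least squares for a nonlinear
    model y = f(theta, x); each step solves a linearized LS problem."""
    for _ in range(n_iter):
        r = y - f(theta, x)                  # current residuals
        J = jac(theta, x)                    # Jacobian of the model
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        theta = theta + step
    return theta

# Placeholder nonlinear model: y = B * (1 - exp(-k * x)).
f = lambda th, x: th[0] * (1 - np.exp(-th[1] * x))
jac = lambda th, x: np.column_stack([
    1 - np.exp(-th[1] * x),                  # d f / d B
    th[0] * x * np.exp(-th[1] * x),          # d f / d k
])

x = np.linspace(0.1, 5, 30)                  # "measurement scenarios" (assumed)
true = np.array([1.035, 0.8])                # target "remnant magnetization" 1.035 T
y = f(true, x)                               # noiseless synthetic observations
est = gauss_newton(f, jac, np.array([1.0, 1.0]), x, y)
print(est)                                   # converges to ~[1.035, 0.8] here
```

    Because the measurement model is nonlinear in the unknowns, a single linear solve is not enough; repeating the linearize-and-solve step is exactly the role of the iterative least squares adjustment in the paper.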

  20. Large size three-dimensional video by electronic holography using multiple spatial light modulators

    PubMed Central

    Sasaki, Hisayuki; Yamamoto, Kenji; Wakunami, Koki; Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori

    2014-01-01

    In this paper, we propose a new method of using multiple spatial light modulators (SLMs) to increase the size of three-dimensional (3D) images that are displayed using electronic holography. The scalability of images produced by the previous method had an upper limit that was derived from the path length of the image-readout part. We were able to produce larger colour electronic holographic images with a newly devised space-saving image-readout optical system for multiple reflection-type SLMs. This optical system is designed so that the path length of the image-readout part is half that of the previous method. It consists of polarization beam splitters (PBSs), half-wave plates (HWPs), and polarizers. We used 16 (4 × 4) 4K×2K-pixel SLMs for displaying holograms. The experimental device we constructed was able to perform 20 fps video reproduction in colour of full-parallax holographic 3D images with a diagonal image size of 85 mm and a horizontal viewing-zone angle of 5.6 degrees. PMID:25146685

  1. Large size three-dimensional video by electronic holography using multiple spatial light modulators.

    PubMed

    Sasaki, Hisayuki; Yamamoto, Kenji; Wakunami, Koki; Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori

    2014-08-22

    In this paper, we propose a new method of using multiple spatial light modulators (SLMs) to increase the size of three-dimensional (3D) images that are displayed using electronic holography. The scalability of images produced by the previous method had an upper limit that was derived from the path length of the image-readout part. We were able to produce larger colour electronic holographic images with a newly devised space-saving image-readout optical system for multiple reflection-type SLMs. This optical system is designed so that the path length of the image-readout part is half that of the previous method. It consists of polarization beam splitters (PBSs), half-wave plates (HWPs), and polarizers. We used 16 (4 × 4) 4K×2K-pixel SLMs for displaying holograms. The experimental device we constructed was able to perform 20 fps video reproduction in colour of full-parallax holographic 3D images with a diagonal image size of 85 mm and a horizontal viewing-zone angle of 5.6 degrees.

  2. Equivalence of Laptop and Tablet Administrations of the Minnesota Multiphasic Personality Inventory-2 Restructured Form.

    PubMed

    Menton, William H; Crighton, Adam H; Tarescavage, Anthony M; Marek, Ryan J; Hicks, Adam D; Ben-Porath, Yossef S

    2017-06-01

    The present study investigated the comparability of laptop computer- and tablet-based administration modes for the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF). Employing a counterbalanced within-subjects design, the MMPI-2-RF was administered via both modes to a sample of college undergraduates (N = 133). Administration modes were compared in terms of mean scale scores, internal consistency, test-retest consistency, external validity, and administration time. Mean scores were generally similar, and scores produced via both methods appeared approximately equal in terms of internal consistency and test-retest consistency. Scores from the two modalities also evidenced highly similar patterns of associations with external criteria. Notably, tablet administration of the MMPI-2-RF was substantially longer than laptop administration in the present study (mean difference 7.2 minutes, Cohen's d = .95). Overall, results suggest that varying administration mode between laptop and tablet has a negligible influence on MMPI-2-RF scores, providing evidence that these modes of administration can be considered psychometrically equivalent.

  3. Physical-chemical evaluation of hydraulic fracturing chemicals in the context of produced water treatment.

    PubMed

    Camarillo, Mary Kay; Domen, Jeremy K; Stringfellow, William T

    2016-12-01

    Produced water is a significant waste stream that can be treated and reused; however, the removal of production chemicals, such as those added in hydraulic fracturing, must be addressed. One motivation for treating and reusing produced water is that current disposal methods, typically consisting of deep well injection and percolation in infiltration pits, are being limited. Furthermore, oil and gas production often occurs in arid regions where there is demand for new water sources. In this paper, hydraulic fracturing chemical additive data from California are used as a case study in which physical-chemical and biodegradation data are summarized and used to screen for appropriate produced water treatment technologies. The data indicate that hydraulic fracturing chemicals are largely treatable; however, data are missing for 24 of the 193 chemical additives identified. More than one-third of the organic chemicals have data indicating biodegradability, suggesting biological treatment would be effective. Adsorption-based methods and partitioning of chemicals into oil for subsequent separation are expected to be effective for approximately one-third of the chemicals. Volatilization-based treatment methods (e.g. air stripping) will only be effective for approximately 10% of chemicals. Reverse osmosis is a good catch-all, with over 70% of organic chemicals expected to be removed efficiently. Other technologies such as electrocoagulation and advanced oxidation are promising but lack demonstration. Chemicals of most concern due to prevalence, toxicity, and lack of data include propargyl alcohol, 2-mercaptoethyl alcohol, tetrakis hydroxymethyl-phosphonium sulfate, thioglycolic acid, 2-bromo-3-nitrilopropionamide, formaldehyde polymers, polymers of acrylic acid, quaternary ammonium compounds, and surfactants (e.g. ethoxylated alcohols). Future studies should examine the fate of hydraulic fracturing chemicals in produced water treatment trains to demonstrate removal and clarify interactions between upstream and downstream processes. Copyright © 2016 Elsevier Ltd. All rights reserved.
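
    As a rough illustration of this kind of screening, the hypothetical routine below routes a chemical to candidate treatment technologies from basic physical-chemical properties. The thresholds and property keys are invented placeholders, not the paper's criteria; the naphthalene property values are approximate literature figures.

```python
# Hypothetical screening sketch: route a chemical to candidate produced-water
# treatment technologies from basic physical-chemical properties. Thresholds
# are illustrative placeholders, not the criteria used in the paper.
def screen(chem):
    routes = []
    if chem.get("readily_biodegradable"):
        routes.append("biological treatment")
    if chem.get("log_kow", 0.0) > 3.0:            # hydrophobic -> sorbs/partitions
        routes.append("adsorption / oil partitioning")
    if chem.get("henry_atm_m3_mol", 0.0) > 1e-3:  # volatile -> strippable
        routes.append("air stripping")
    if not routes:
        routes.append("reverse osmosis (catch-all)")
    return routes

# Naphthalene: log Kow ~ 3.3, Henry's constant ~ 4.4e-4 atm*m^3/mol
r = screen({"name": "naphthalene", "log_kow": 3.3, "henry_atm_m3_mol": 4.4e-4})
```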

  4. Microfluidic model of the platelet-generating organ: beyond bone marrow biomimetics

    NASA Astrophysics Data System (ADS)

    Reyssat, Mathilde; Blin, Antoine; Le Goff, Anne; Magniez, Aurelie; Poirault-Chassac, Sonia; Teste, Bruno; Sicot, Geraldine; Nguyen, Kim Anh; Hamdi, Feriel S.; Baruch, Dominique

    2015-11-01

    We present a new, rapid method for producing blood platelets in vitro from cultured megakaryocytes, based on a microfluidic device. This device consists of a wide array of VWF-coated micropillars. These pillars act as anchors on megakaryocytes, allowing them to remain trapped in the device and subjected to hydrodynamic shear. The combined effect of anchoring and shear induces the elongation of megakaryocytes and finally their rupture into platelets and proplatelets. This process was observed with megakaryocytes from different origins and found to be robust. This original bioreactor design allows megakaryocytes to be processed at high throughput (millions per hour), with a platelet yield four times higher than in control experiments. Since platelets are produced in such large amounts, their extensive biological characterization is possible. Fluorescence microscopy observations, flow cytometry, and aggregometry results indicate that platelets produced in this bioreactor are functional.

  5. Electroformation of Janus and patchy capsules

    NASA Astrophysics Data System (ADS)

    Rozynek, Zbigniew; Mikkelsen, Alexander; Dommersnes, Paul; Fossum, Jon Otto

    2014-05-01

    Janus and patchy particles have designed heterogeneous surfaces that consist of two or several patches with different materials properties. These particles are emerging as building blocks for a new class of soft matter and functional materials. Here we introduce a route for forming heterogeneous capsules by producing highly ordered jammed colloidal shells of various shapes with domains of controlled size and composition. These structures combine the functionalities offered by Janus or patchy particles, and those given by permeable shells such as colloidosomes. The simple assembly route involves the synergetic action of electro-hydrodynamic flow and electro-coalescence. We demonstrate that the method is robust and straightforwardly extendable to production of multi-patchy capsules. This forms a starting point for producing patchy colloidosomes with domains of anisotropic chemical surface properties, permeability or mixed liquid-solid phase domains, which could be exploited to produce functional emulsions, light and hollow supra-colloidosome structures, or scaffolds.

  6. Microflora distributions in paleosols: a method for calculating the validity of radiocarbon-dated surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahaney, W.C.; Boyer, M.G.

    1986-08-01

    Microflora (bacteria and fungi) distributions in several paleosols from Mount Kenya, East Africa, provide important information about contamination of buried soil horizons dated by radiocarbon. High counts of bacteria and fungi in buried soils provide evidence for contamination by plant root effects or ground water movement. Profiles with decreasing counts versus depth appear to produce internally consistent and accurate radiocarbon dates. Profiles with disjunct or bimodal distributions of microflora at various depths produce internally inconsistent chronological sequences of radiocarbon-dated buried surfaces. Preliminary results suggest that numbers up to 5 × 10² g⁻¹ for bacteria in buried A horizons do not appear to affect the validity of ¹⁴C dates. Beyond this threshold value, contamination appears to produce younger dates, the difference between true age and ¹⁴C age increasing with the amount of microflora contamination.

  7. Development of a robust chromatographic method for the detection of chlorophenols in cork oak forest soils.

    PubMed

    McLellan, Iain; Hursthouse, Andrew; Morrison, Calum; Varela, Adélia; Pereira, Cristina Silva

    2014-02-01

    A major concern for the cork and wine industry is 'cork taint', which is associated with chloroanisoles, the microbial degradation metabolites of chlorophenols. The use of chlorophenolic compounds as pesticides within cork forests was prohibited in 1993 in the European Union (EU) following the introduction of industry guidance. However, cork produced outside the EU is still thought to be affected, and simple, robust methods for chlorophenol analysis are required for wider environmental assessment by industry and local environmental regulators. Soil samples were collected from three common-use forests in Tunisia and from one privately owned forest in Sardinia, providing examples of varied management practice and degree of human intervention. These provided challenge samples for the optimisation of a HPLC-UV detection method. It produced recoveries consistently >75% against a soil CRM (ERM-CC008) for pentachlorophenol. The optimised method, with ultraviolet (diode array) detection, is able to separate and quantify 16 different chlorophenols at field concentrations greater than the limits of detection, which range from 6.5 to 191.3 μg/kg (dry weight). Application to a range of field samples demonstrated the absence of widespread contamination in forest soils at sites sampled in Sardinia and Tunisia.

  8. Radiometric Correction of Multitemporal Hyperspectral Uas Image Mosaics of Seedling Stands

    NASA Astrophysics Data System (ADS)

    Markelin, L.; Honkavaara, E.; Näsi, R.; Viljanen, N.; Rosnell, T.; Hakala, T.; Vastaranta, M.; Koivisto, T.; Holopainen, M.

    2017-10-01

    Novel miniaturized multi- and hyperspectral imaging sensors on board unmanned aerial vehicles have recently shown great potential in various environmental monitoring and measuring tasks such as precision agriculture and forest management. These systems can be used to collect dense 3D point clouds and spectral information over small areas such as single forest stands or sample plots. Accurate radiometric processing and atmospheric correction are required when data sets from different dates and sensors, collected in varying illumination conditions, are combined. The performance of a novel radiometric block adjustment method, developed at the Finnish Geospatial Research Institute, is evaluated with a multitemporal hyperspectral data set of seedling stands collected during spring and summer 2016. Illumination conditions during the campaigns varied from bright to overcast. We use two different methods to produce homogeneous image mosaics and hyperspectral point clouds: image-wise relative correction and image-wise relative correction with BRDF. Radiometric datasets are converted to reflectance using reference panels, and changes in reflectance spectra are analysed. The tested methods improved image mosaic homogeneity by 5% to 25%. Results show that the evaluated method can produce consistent reflectance mosaics and reflectance spectra shapes between different areas and dates.
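
    The reference-panel conversion step can be sketched as a standard empirical line fit; this is a generic technique, not the block-adjustment implementation, and the digital-number (DN) and reflectance values below are made up.

```python
# Generic empirical-line sketch of the reference-panel step (made-up DN and
# reflectance values; this is not the block-adjustment implementation).
def empirical_line(dn_panels, refl_panels):
    """Fit reflectance = gain * DN + offset from two reference panels."""
    (dn1, dn2), (r1, r2) = dn_panels, refl_panels
    gain = (r2 - r1) / (dn2 - dn1)
    return gain, r1 - gain * dn1

# Dark (10%) and bright (50%) panels seen at DN 520 and 3400 in the image.
gain, offset = empirical_line((520.0, 3400.0), (0.10, 0.50))
refl = gain * 1960.0 + offset     # reflectance of an arbitrary pixel
```

    The same two-point fit is repeated per band; the block-adjustment method additionally ties overlapping images together so that illumination changes between flight lines are compensated.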

  9. Aerosol synthesis of nano and micro-scale zero valent metal particles from oxide precursors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, Jonathan; Luhrs, Claudia; Lesman, Zayd

    2010-01-01

    In this work a novel aerosol method, derived from the batch Reduction/Expansion Synthesis (RES) method, for the production of nano/micro-scale metal particles from oxides and hydroxides is presented. In the Aerosol-RES (A-RES) method, an aerosol consisting of a physical mixture of urea and metal oxides or hydroxides is passed through a heated oven (1000 °C) with a residence time on the order of 1 second, producing pure (zero valent) metal particles. It appears that the process is flexible regarding metal or alloy identity, allows control of particle size, and can be readily scaled to very large throughput. Current work is focused on creating nanoparticles of metal and metal alloy using this method. Although this is primarily a report on observations, some key elements of the chemistry are clear. In particular, the reducing species produced by urea decomposition are the primary agents responsible for reduction of oxides and hydroxides to metal. It is also likely that the rapid expansion that takes place when solid/liquid urea decomposes to form gas species influences the final morphology of the particles.

  10. Incorporation of catalytic dehydrogenation into Fischer-Tropsch synthesis to lower carbon dioxide emissions

    DOEpatents

    Huffman, Gerald P

    2012-09-18

    A method for producing liquid fuels includes the steps of gasifying a starting material selected from the group consisting of coal, biomass, carbon nanotubes and mixtures thereof to produce a syngas, subjecting that syngas to Fischer-Tropsch synthesis (FTS) to produce a hydrocarbon product stream, separating that hydrocarbon product stream into C1-C4 hydrocarbons and C5+ hydrocarbons to be used as liquid fuels, and subjecting the C1-C4 hydrocarbons to catalytic dehydrogenation (CDH) to produce hydrogen and carbon nanotubes. The hydrogen produced by CDH is recycled to be mixed with the syngas incident to the FTS reactor in order to raise the hydrogen to carbon monoxide ratio of the syngas to values of 2 or higher, which is required to produce liquid hydrocarbon fuels. This is accomplished with little or no production of carbon dioxide, a greenhouse gas. The carbon is captured in the form of a potentially valuable by-product, multi-walled carbon nanotubes (MWNT), while huge emissions of carbon dioxide are avoided and the very large quantities of water employed for the water-gas shift in traditional FTS systems are saved.
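
    The hydrogen make-up implied by raising the H2:CO ratio is simple stoichiometric arithmetic; the inlet composition below is an illustrative assumption (coal-derived syngas often runs near H2:CO of 0.7), not a figure from the patent.

```python
# Back-of-envelope sketch of the ratio adjustment: how many moles of recycled
# H2 must be added to a syngas stream to reach a target H2:CO ratio.
# The inlet composition is illustrative, not taken from the patent.
def h2_makeup(n_h2, n_co, target_ratio=2.0):
    """Moles of H2 to add so that (n_h2 + x) / n_co == target_ratio."""
    return max(0.0, target_ratio * n_co - n_h2)

x = h2_makeup(n_h2=0.7, n_co=1.0)   # assumed coal-derived syngas, H2:CO ~ 0.7
```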

  11. Analysing the 21 cm signal from the epoch of reionization with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Shimabukuro, Hayato; Semelin, Benoit

    2017-07-01

    The 21 cm signal from the epoch of reionization should be observed within the next decade. While a simple statistical detection is expected with Square Kilometre Array (SKA) pathfinders, the SKA will hopefully produce a full 3D mapping of the signal. To extract from the observed data constraints on the parameters describing the underlying astrophysical processes, inversion methods must be developed. For example, the Markov Chain Monte Carlo method has been successfully applied. Here, we test another possible inversion method: artificial neural networks (ANNs). We produce a training set that consists of 70 individual samples. Each sample is made of the 21 cm power spectrum at different redshifts produced with the 21cmFast code plus the value of three parameters used in the seminumerical simulations that describe astrophysical processes. Using this set, we train the network to minimize the error between the parameter values it produces as an output and the true values. We explore the impact of the architecture of the network on the quality of the training. Then we test the trained network on a new set of 54 test samples with different values of the parameters. We find that the quality of the parameter reconstruction depends on the sensitivity of the power spectrum to the different parameters at a given redshift, that including thermal noise and sample variance decreases the quality of the reconstruction, and that using the power spectrum at several redshifts as an input to the ANN improves the quality of the reconstruction. We conclude that ANNs are a viable inversion method whose main strength is that they require a sparse exploration of the parameter space and thus should be usable with full numerical simulations.
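
    A minimal sketch of this kind of ANN inversion, with a toy forward model in place of 21cmFast: a one-hidden-layer network is trained by gradient descent to recover the parameter that generated a synthetic spectrum. Everything here (the exponential forward model, network size, training schedule) is an invented stand-in, not the configuration used in the paper.

```python
import numpy as np

# Toy sketch of ANN-based inversion. The forward model, network size and
# training schedule are invented illustrations, not the paper's setup.
rng = np.random.default_rng(0)
k = np.arange(1, 11)

def spectrum(theta):
    return np.exp(-theta * k)              # stand-in "power spectrum"

theta_train = rng.uniform(0.1, 1.0, 70)    # 70 training samples, as in the text
X = np.stack([spectrum(t) for t in theta_train])
y = theta_train[:, None]

# One-hidden-layer network trained by batch gradient descent on MSE.
W1 = rng.normal(0.0, 0.5, (10, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1));  b2 = np.zeros(1)
lr = 0.1
for _ in range(20000):
    H = np.tanh(X @ W1 + b1)               # forward pass
    pred = H @ W2 + b2
    err = pred - y
    dH = (err @ W2.T) * (1.0 - H**2)       # backprop through tanh
    W2 -= lr * (H.T @ err) / len(X); b2 -= lr * err.mean(0)
    W1 -= lr * (X.T @ dH) / len(X); b1 -= lr * dH.mean(0)

est = float(np.tanh(spectrum(0.5) @ W1 + b1) @ W2 + b2)  # invert a new spectrum
```

    The trained network recovers the generating parameter of an unseen spectrum to within a few percent, which is the essence of the inversion approach described above.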

  12. The separate universe approach to soft limits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kenton, Zachary; Mulryne, David J., E-mail: z.a.kenton@qmul.ac.uk, E-mail: d.mulryne@qmul.ac.uk

    We develop a formalism for calculating soft limits of n -point inflationary correlation functions using separate universe techniques. Our method naturally allows for multiple fields and leads to an elegant diagrammatic approach. As an application we focus on the trispectrum produced by inflation with multiple light fields, giving explicit formulae for all possible single- and double-soft limits. We also investigate consistency relations and present an infinite tower of inequalities between soft correlation functions which generalise the Suyama-Yamaguchi inequality.

  13. Cavitation Enhancing Nanodroplets Mediate Efficient DNA Fragmentation in a Bench Top Ultrasonic Water Bath

    PubMed Central

    Malc, Ewa P.; Jayakody, Chatura N.; Tsuruta, James K.; Mieczkowski, Piotr A.; Janzen, William P.; Dayton, Paul A.

    2015-01-01

    A perfluorocarbon nanodroplet formulation is shown to be an effective cavitation enhancement agent, enabling rapid and consistent fragmentation of genomic DNA in a standard ultrasonic water bath. This nanodroplet-enhanced method produces genomic DNA libraries and next-generation sequencing results indistinguishable from DNA samples fragmented in dedicated commercial acoustic sonication equipment, and with higher throughput. This technique thus enables widespread access to fast bench-top genomic DNA fragmentation. PMID:26186461

  14. Comparison of Pixel-Based and Object-Based Classification Using Parameters and Non-Parameters Approach for the Pattern Consistency of Multi Scale Landcover

    NASA Astrophysics Data System (ADS)

    Juniati, E.; Arrofiqoh, E. N.

    2017-09-01

    Information on land cover can be extracted from remote sensing data by digital classification. In practice, some analysts prefer visual interpretation to retrieve land cover information; however, it is highly influenced by the subjectivity and knowledge of the interpreter, and it is time-consuming. Digital classification can be done in several ways, depending on the mapping approach and the assumptions made about the data distribution. This study compared several classification methods applied to different data types at the same location. The data used were Landsat 8 satellite imagery, SPOT 6 imagery, and orthophotos. In practice, these data are used to produce land cover maps at 1:50,000 scale for Landsat, 1:25,000 for SPOT, and 1:5,000 for orthophotos, but with visual interpretation to retrieve the information. A maximum likelihood classifier (MLC), a pixel-based parametric approach, was applied to the data, as was an artificial neural network classifier, a pixel-based non-parametric approach. Moreover, the study applied object-based classifiers to the data. The classification system implemented is the land cover classification of the Indonesian topographic map. Classification was applied to each data source and was expected to recognize the land cover pattern and to assess the consistency of the maps produced from each data set. Furthermore, the study analyses the benefits and limitations of each method.
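
    A minimal sketch of the pixel-based, parametric approach named above, a Gaussian maximum likelihood classifier; the training pixels, band values, and class labels are synthetic, not from the study.

```python
import numpy as np

# Gaussian maximum likelihood classifier sketch. Training pixels and classes
# are synthetic two-band (NIR, red) reflectances, not data from the study.
def mlc_fit(X, y):
    """Per-class mean and covariance (with a small ridge for stability)."""
    stats = {}
    for c in np.unique(y):
        Xc = X[y == c]
        stats[c] = (Xc.mean(0), np.cov(Xc.T) + 1e-6 * np.eye(X.shape[1]))
    return stats

def mlc_predict(stats, x):
    """Assign the class with the highest Gaussian log-likelihood."""
    best, best_ll = None, -np.inf
    for c, (mu, cov) in stats.items():
        d = x - mu
        ll = -0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.solve(cov, d))
        if ll > best_ll:
            best, best_ll = c, ll
    return best

rng = np.random.default_rng(1)
water = rng.normal([0.05, 0.02], 0.01, (50, 2))   # low NIR and red reflectance
forest = rng.normal([0.40, 0.05], 0.02, (50, 2))  # high NIR, low red
X = np.vstack([water, forest])
y = np.array([0] * 50 + [1] * 50)                 # 0 = water, 1 = forest
stats = mlc_fit(X, y)
label = mlc_predict(stats, np.array([0.42, 0.06]))  # forest-like pixel
```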

  15. Rapid, low-cost photogrammetry to monitor volcanic eruptions: an example from Mount St. Helens, Washington, USA

    USGS Publications Warehouse

    Diefenbach, Angela K.; Crider, Juliet G.; Schilling, Steve P.; Dzurisin, Daniel

    2012-01-01

    We describe a low-cost application of digital photogrammetry using commercially available photogrammetric software and oblique photographs taken with an off-the-shelf digital camera to create sequential digital elevation models (DEMs) of a lava dome that grew during the 2004–2008 eruption of Mount St. Helens (MSH) volcano. Renewed activity at MSH provided an opportunity to devise and test this method, because it could be validated against other observations of this well-monitored volcano. The datasets consist of oblique aerial photographs (snapshots) taken from a helicopter using a digital single-lens reflex camera. Twelve sets of overlapping digital images of the dome taken during 2004–2007 were used to produce DEMs and to calculate lava dome volumes and extrusion rates. Analyses of the digital images were carried out using photogrammetric software to produce three-dimensional coordinates of points identified in multiple photos. The evolving morphology of the dome was modeled by comparing successive DEMs. Results were validated by comparison to volume measurements derived from traditional vertical photogrammetric surveys by the US Geological Survey Cascades Volcano Observatory. Our technique was significantly less expensive and required less time than traditional vertical photogrammetric techniques; yet, it consistently yielded volume estimates within 5% of the traditional method. This technique provides an inexpensive, rapid assessment tool for tracking lava dome growth or other topographic changes at restless volcanoes.
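
    The volume-change step described above reduces to differencing two co-registered DEMs and integrating the elevation change over the cell area. The grid, cell size, and lobe thickness below are fabricated for illustration, not the Mount St. Helens data.

```python
import numpy as np

# Sketch of DEM differencing for volume change. The 5 m cell size and the
# 12 m-thick synthetic lava lobe are fabricated, not the MSH survey data.
def volume_change(dem_t0, dem_t1, cell_size):
    """Net volume gained between surveys (m^3) from two co-registered DEMs."""
    return float((dem_t1 - dem_t0).sum() * cell_size**2)

dem0 = np.zeros((100, 100))                    # earlier survey surface
dem1 = np.zeros((100, 100))
dem1[40:60, 40:60] = 12.0                      # 20 x 20 cells of new lava
dv = volume_change(dem0, dem1, cell_size=5.0)  # 400 cells * 12 m * 25 m^2
```

    Dividing successive volume differences by the time between surveys gives the extrusion rate tracked in the study.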

  16. A New Shape Description Method Using Angular Radial Transform

    NASA Astrophysics Data System (ADS)

    Lee, Jong-Min; Kim, Whoi-Yul

    Shape is one of the primary low-level image features in content-based image retrieval. In this paper we propose a new shape description method that consists of a rotationally invariant angular radial transform descriptor (IARTD). The IARTD is a feature vector that combines the magnitude and aligned phases of the angular radial transform (ART) coefficients. A phase correction scheme is employed to produce the aligned phase so that the IARTD is invariant to rotation. The distance between two IARTDs is defined by combining differences in the magnitudes and aligned phases. In an experiment using the MPEG-7 shape dataset, the proposed method outperforms existing methods; the average BEP of the proposed method is 57.69%, while the average BEPs of the invariant Zernike moments descriptor and the traditional ART are 41.64% and 36.51%, respectively.
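
    The rotation-invariance property that the IARTD relies on can be demonstrated on a ring of angular samples: a rotation multiplies each angular-frequency coefficient by a pure phase, leaving magnitudes unchanged. The plain FFT below is a stand-in for the angular part of the ART, not the descriptor itself.

```python
import numpy as np

# Toy demonstration of rotation invariance: rotating a pattern shifts only the
# phases of its angular-frequency coefficients, so magnitudes are unchanged.
# A plain FFT over one ring of samples stands in for the angular part of ART.
theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
f = np.cos(3 * theta) + 0.5 * np.sin(theta)    # arbitrary angular profile
f_rot = np.roll(f, 7)                          # rotation = circular sample shift
mag = np.abs(np.fft.fft(f))
mag_rot = np.abs(np.fft.fft(f_rot))
```

    The IARTD goes further by also keeping the phases, after aligning them so that the combined feature remains rotation invariant while retaining more shape information than magnitudes alone.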

  17. M-Adapting Low Order Mimetic Finite Differences for Dielectric Interface Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGregor, Duncan A.; Gyrya, Vitaliy; Manzini, Gianmarco

    2016-03-07

    We consider the problem of reducing numerical dispersion for an electromagnetic wave in a domain with two materials separated by a flat interface in 2D, with a factor of two difference in wave speed. The computational mesh in the homogeneous parts of the domain away from the interface consists of square elements. Here the method construction is based on the m-adaptation construction for a homogeneous domain, which leads to fourth-order numerical dispersion (vs. second order in the non-optimized method). The size of the elements in the two domains also differs by a factor of two, so as to preserve the same value of the Courant number in each. Near the interface where the two meshes merge, the mesh with larger elements consists of degenerate pentagons. We demonstrate that prior to m-adaptation the accuracy of the method falls from second to first order due to the breaking of symmetry in the mesh. Next we develop an m-adaptation framework for the interface region and devise an optimization criterion. We prove that for the interface problem m-adaptation cannot produce an increase in method accuracy. This is in contrast to a homogeneous medium, where m-adaptation can increase accuracy by two orders.

  18. Ambiguous taxa: Effects on the characterization and interpretation of invertebrate assemblages

    USGS Publications Warehouse

    Cuffney, T.F.; Bilger, Michael D.; Haigler, A.M.

    2007-01-01

    Damaged and immature specimens often result in macroinvertebrate data that contain ambiguous parent-child pairs (i.e., abundances associated with multiple related levels of the taxonomic hierarchy such as Baetis pluto and the associated ambiguous parent Baetis sp.). The choice of method used to resolve ambiguous parent-child pairs may have a very large effect on the characterization of invertebrate assemblages and the interpretation of responses to environmental change because very large proportions of taxa richness (73-78%) and abundance (79-91%) can be associated with ambiguous parents. To address this issue, we examined 16 variations of 4 basic methods for resolving ambiguous taxa: RPKC (remove parent, keep child), MCWP (merge child with parent), RPMC (remove parent or merge child with parent depending on their abundances), and DPAC (distribute parents among children). The choice of method strongly affected assemblage structure, assemblage characteristics (e.g., metrics), and the ability to detect responses along environmental (urbanization) gradients. All methods except MCWP produced acceptable results when used consistently within a study. However, the assemblage characteristics (e.g., values of assemblage metrics) differed widely depending on the method used, and data should not be combined unless the methods used to resolve ambiguous taxa are well documented and are known to be comparable. The suitability of the methods was evaluated and compared on the basis of 13 criteria that considered conservation of taxa richness and abundance, consistency among samples, methods, and studies, and effects on the interpretation of the data. Methods RPMC and DPAC had the highest suitability scores regardless of whether ambiguous taxa were resolved for each sample separately or for a group of samples. Method MCWP gave consistently poor results. Methods MCWP and DPAC approximate the use of family-level identifications and operational taxonomic units (OTU), respectively. Our results suggest that restricting identifications to the family level is not a good method of resolving ambiguous taxa, whereas generating OTUs works well provided that documentation issues are addressed. © 2007 by The North American Benthological Society.
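
    As a hypothetical sketch of the DPAC option, the helper below distributes an ambiguous parent's abundance among its children in proportion to the children's own abundances. The counts and the second taxon name are illustrative, not data from the study.

```python
# Hypothetical DPAC sketch: distribute an ambiguous parent's abundance among
# its children pro rata. Counts and the second taxon name are illustrative.
def dpac(parent_count, child_counts):
    """Return child counts with the parent's abundance distributed pro rata."""
    total = sum(child_counts.values())
    return {name: n + parent_count * n / total
            for name, n in child_counts.items()}

# 10 specimens identifiable only as Baetis sp., split among two child taxa.
resolved = dpac(10, {"Baetis pluto": 30, "Baetis flavistriga": 10})
```

    Unlike MCWP, this keeps the children's taxonomic resolution while conserving total abundance, which matches the criteria on which DPAC scored well.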

  19. Validation of projective mapping as potential sensory screening tool for application by the honeybush herbal tea industry.

    PubMed

    Moelich, Erika Ilette; Muller, Magdalena; Joubert, Elizabeth; Næs, Tormod; Kidd, Martin

    2017-09-01

    Honeybush herbal tea is produced from the endemic South African Cyclopia species. Plant material subjected to a high-temperature oxidation step ("fermentation") forms the bulk of production. Production lags behind demand forcing tea merchants to use blends of available material to supply local and international markets. The distinct differences in the sensory profiles of the herbal tea produced from the different Cyclopia species require that special care is given to blending to ensure a consistent, high quality product. Although conventional descriptive sensory analysis (DSA) is highly effective in providing a detailed sensory profile of herbal tea infusions, industry requires a method that is more time- and cost-effective. Recent advances in sensory science have led to the development of rapid profiling methodologies. The question is whether projective mapping can successfully be used for the sensory characterisation of herbal tea infusions. Trained assessors performed global and partial projective mapping to determine the validity of this technique for the sensory characterisation of infusions of five Cyclopia species. Similar product configurations were obtained when comparing results of DSA and global and partial projective mapping. Comparison of replicate sessions showed RV coefficients >0.8. A similarity index, based on multifactor analysis, was calculated to determine assessor repeatability. Global projective mapping, demonstrated to be a valid method for providing a broad sensory characterisation of Cyclopia species, is thus suitable as a rapid quality control method of honeybush infusions. Its application by the honeybush industry could improve the consistency of the sensory profile of blended products. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Evaluation of a flow cytometry method to determine size and real refractive index distributions in natural marine particle populations.

    PubMed

    Agagliate, Jacopo; Röttgers, Rüdiger; Twardowski, Michael S; McKee, David

    2018-03-01

    A flow cytometric (FC) method was developed to retrieve particle size distributions (PSDs) and real refractive index (n_r) information in natural waters. Geometry and signal response of the sensors within the flow cytometer (CytoSense, CytoBuoy b.v., Netherlands) were characterized to form a scattering inversion model based on Mie theory. The procedure produced a mesh of diameter and n_r isolines where each particle is assigned the diameter and n_r values of the closest node, producing PSDs and particle real refractive index distributions. The method was validated using polystyrene bead standards of known diameter and polydisperse suspensions of oil with known n_r, and subsequently applied to natural samples collected across a broad range of UK shelf seas. FC PSDs were compared with independent PSDs produced from data of two LISST-100X instruments (type B and type C). PSD slopes and features were found to be consistent between the FC and the two LISST-100X instruments, but LISST concentrations were found to be in disagreement with FC concentrations and with each other. FC n_r values were found to agree with expected refractive index values of typical marine particle components across all samples considered. The determination of particle size and refractive index distributions enabled by the FC method has potential to facilitate identification of the contribution of individual subpopulations to the bulk inherent optical properties and biogeochemical properties of the particle population.
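
    The nearest-node assignment at the heart of the method can be sketched with a toy lookup table. A real implementation would populate the table from Mie theory and the characterized sensor response; the forward model below (signal growing with diameter and refractive-index contrast) is an invented placeholder.

```python
# Simplified sketch of the inversion mesh: each measured particle is assigned
# the (diameter, n_r) of the nearest node in a lookup table. The toy forward
# model below is an invented placeholder for the Mie-theory computation.
def nearest_node(signal, table):
    """table: {(diameter_um, n_r): modeled_signal}; return the closest node."""
    return min(table, key=lambda node: abs(table[node] - signal))

# Toy forward model: signal grows with size and contrast against water (1.33).
table = {(d, nr): d**2 * (nr - 1.33)
         for d in (0.5, 1.0, 2.0, 4.0) for nr in (1.38, 1.45, 1.55)}

node = nearest_node(0.48, table)   # measured scatter signal for one particle
```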

  1. Microporous Ti implant compact coated with hydroxyapatite produced by electro-discharge-sintering and electrostatic-spray-deposition.

    PubMed

    Jo, Y J; Kim, Y H; Jo, Y H; Seong, J G; Chang, S Y; Van Tyne, C J; Lee, W H

    2014-11-01

    A single pulse of 1.5 kJ per 0.7 g of atomized spherical Ti powder, delivered from a 300 μF capacitor, was applied to produce a porous-surfaced Ti implant compact by electro-discharge-sintering (EDS). A solid core surrounded by a porous layer was self-consolidated by a discharge in the middle of the compact within 122 μs. The average pore size, porosity, and compressive yield strength of the EDS Ti compact were estimated to be about 68.2 μm, 25.5%, and 266.4 MPa, respectively. Hydroxyapatite (HAp) coatings on the Ti compact were produced by the electrostatic-spray-deposition (ESD) method. The as-deposited HAp coating had a porous structure and consisted of HAp particles uniformly distributed on the porous Ti structure. On heat treatment at 700 °C, the HAp particles agglomerated and melted to form a highly smooth and homogeneous HAp thin film consisting of equiaxed nano-scale grains. Porous-surfaced Ti implant compacts coated with a highly crystalline apatite phase were successfully obtained using the EDS and ESD techniques.

  2. Diffraction effects in mechanically chopped laser pulses

    NASA Astrophysics Data System (ADS)

    Gambhir, Samridhi; Singh, Mandip

    2018-06-01

    A mechanical beam chopper consists of a rotating disc of regularly spaced wide slits which allow light to pass through them. A continuous light beam, after passing through the rotating disc, is switched-on and switched-off periodically, and a series of optical pulses are produced. The intensity of each pulse is expected to rise and fall smoothly with time. However, a careful study has revealed that the edges of mechanically chopped laser light pulses consist of periodic intensity undulations which can be detected with a photo detector. In this paper, it is shown that the intensity undulations in mechanically chopped laser pulses are produced by diffraction of light from the rotating disc, and a detailed explanation is given of the intensity undulations in mechanically chopped laser pulses. An experiment presented in this paper provides an efficient method to capture a one dimensional diffraction profile of light from a straight sharp-edge in the time domain. In addition, the experiment accurately measures wavelengths of three different laser beams from the undulations in mechanically chopped laser light pulses.
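    The straight-edge (Fresnel) diffraction profile that produces these undulations can be computed with nothing but the standard library. This is a textbook sketch under the usual Fresnel approximation, not the paper's analysis; v is the dimensionless edge coordinate.

```python
import math

def fresnel_c_s(v, n=2000):
    """Fresnel integrals C(v), S(v) by the trapezoidal rule."""
    if v == 0:
        return 0.0, 0.0
    h = v / n
    c = s = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0
        arg = math.pi * t * t / 2.0
        c += w * math.cos(arg)
        s += w * math.sin(arg)
    return c * h, s * h

def edge_intensity(v):
    """Relative intensity I/I0 at dimensionless coordinate v behind a
    straight edge (v > 0 on the illuminated side): the classic
    I = 0.5 * [(C(v) + 1/2)^2 + (S(v) + 1/2)^2] Fresnel result."""
    C, S = fresnel_c_s(v)
    return 0.5 * ((C + 0.5) ** 2 + (S + 0.5) ** 2)

# Deep shadow -> ~0, geometric edge -> 0.25, far into the light -> ~1,
# approached through decaying oscillations around 1.
```

    Sweeping v and plotting edge_intensity(v) reproduces the undulation pattern; in the experiment, the chopper blade's moving edge maps this spatial profile into the time domain.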

  3. Too much ado about instrumental variable approach: is the cure worse than the disease?

    PubMed

    Baser, Onur

    2009-01-01

    To review the efficacy of instrumental variable (IV) models in addressing a variety of assumption violations to ensure standard ordinary least squares (OLS) estimates are consistent. IV models gained popularity in outcomes research because of their ability to consistently estimate average causal effects even in the presence of unmeasured confounding. However, for this consistent estimation to be achieved, several conditions must hold. In this article, we provide an overview of the IV approach, examine possible tests to check the prerequisite conditions, and illustrate how weak instruments may produce inconsistent and inefficient results. We use two IVs and apply Shea's partial R-square method, the Anderson canonical correlation test, and the Cragg-Donald test to check for weak instruments. Hall-Peixe tests are applied to see whether any of these instruments are redundant in the analysis. A total of 14,952 asthma patients from the MarketScan Commercial Claims and Encounters Database were examined in this study. Patient health care was provided under a variety of fee-for-service, fully capitated, and partially capitated health plans, including preferred provider organizations, point-of-service plans, indemnity plans, and health maintenance organizations. We used the controller-reliever copay ratio and physician practice/prescribing patterns as instruments. We demonstrated that the former was a weak and redundant instrument producing inconsistent and inefficient estimates of the effect of treatment. The results were worse than those from standard regression analysis. Despite the obvious benefits of IV models, the method should not be used blindly. Several strong conditions are required for these models to work, and each of them should be tested. Otherwise, the bias and precision of the results will be statistically worse than the results achieved by simply using standard OLS.
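    A minimal illustration of the weak-instrument diagnostic and the single-instrument IV estimator, on simulated (hypothetical) data rather than the MarketScan claims used in the study; the rule-of-thumb threshold F > 10 is a common convention, not taken from this article.

```python
import random

def _center(v):
    m = sum(v) / len(v)
    return [x - m for x in v]

def first_stage_f(z, x):
    """F statistic of the instrument in the first-stage regression of x
    on z (single instrument, so F = t^2). A common rule of thumb flags
    F < 10 as a weak instrument."""
    zc, xc = _center(z), _center(x)
    szz = sum(t * t for t in zc)
    b = sum(a * c for a, c in zip(zc, xc)) / szz
    resid = [xi - b * zi for zi, xi in zip(zc, xc)]
    s2 = sum(r * r for r in resid) / (len(x) - 2)
    return b * b * szz / s2

def iv_estimate(z, x, y):
    """Single-instrument IV (2SLS) slope: cov(z, y) / cov(z, x)."""
    zc = _center(z)
    return (sum(a * b for a, b in zip(zc, _center(y)))
            / sum(a * b for a, b in zip(zc, _center(x))))

# Simulated data: u is an unmeasured confounder of x and y, z is a
# strong, valid instrument. OLS of y on x would be biased, while the
# IV slope recovers the true causal effect of 2.0.
random.seed(0)
n = 5000
z = [random.gauss(0, 1) for _ in range(n)]
u = [random.gauss(0, 1) for _ in range(n)]
x = [zi + ui + random.gauss(0, 1) for zi, ui in zip(z, u)]
y = [2.0 * xi + 3.0 * ui + random.gauss(0, 1) for xi, ui in zip(x, u)]
```

    With a weak instrument (small cov(z, x)), the same estimator's denominator is near zero and its variance explodes, which is the failure mode the abstract describes.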

  4. Large area and structured epitaxial graphene produced by confinement controlled sublimation of silicon carbide

    PubMed Central

    de Heer, Walt A.; Berger, Claire; Ruan, Ming; Sprinkle, Mike; Li, Xuebin; Hu, Yike; Zhang, Baiqian; Hankinson, John; Conrad, Edward

    2011-01-01

    After the pioneering investigations into graphene-based electronics at Georgia Tech, great strides have been made in developing epitaxial graphene on silicon carbide (EG) as a new electronic material. EG has not only demonstrated its potential for large-scale applications, it has also become an important material for fundamental two-dimensional electron gas physics. It was long known that graphene mono- and multilayers grow on SiC crystals at high temperatures in ultrahigh vacuum. At these temperatures, silicon sublimes from the surface and the carbon-rich surface layer transforms to graphene. However, the quality of the graphene produced in ultrahigh vacuum is poor due to the high sublimation rates at relatively low temperatures. The Georgia Tech team developed growth methods involving encapsulating the SiC crystals in graphite enclosures, thereby sequestering the evaporated silicon and bringing the growth process closer to equilibrium. In this confinement controlled sublimation (CCS) process, very high-quality graphene is grown on both polar faces of the SiC crystals. Since 2003, over 50 publications have used CCS-grown graphene, where it is known as “furnace grown” graphene. Graphene multilayers grown on the carbon-terminated face of SiC, using the CCS method, were shown to consist of decoupled high-mobility graphene layers. The CCS method is now applied on structured silicon carbide surfaces to produce high-mobility nano-patterned graphene structures, thereby demonstrating that EG is a viable contender for next-generation electronics. Here we present for the first time the CCS method, which outperforms other epitaxial graphene production methods. PMID:21960446

  5. Effects of time-shifted data on flight determined stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Steers, S. T.; Iliff, K. W.

    1975-01-01

    Flight data were shifted in time by various increments to assess the effects of time shifts on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even if there was a considerable time shift in the data. Time shifts degraded the estimates of the derivatives, but the degradation was in a consistent rather than a random pattern. Time shifts in the control variables caused the most degradation, and the lateral-directional rotary derivatives were affected the most by time shifts in any variable.

  6. Advances in the control of wine spoilage by Zygosaccharomyces and Dekkera/Brettanomyces.

    PubMed

    Zuehlke, J M; Petrova, B; Edwards, C G

    2013-01-01

    Understanding the characteristics of yeast spoilage, as well as the available control technologies, is vital to producing consistent, high-quality wine. Zygosaccharomyces bailii contamination may result in refermentation and CO2 production in sweet wines or grape juice concentrate, whereas Brettanomyces bruxellensis spoilage often contributes off-odors and flavors to red wines. Early detection of these yeasts by selective/differential media or genetic methods is important to minimize potential spoilage. More established methods of microbial control include sulfur dioxide, dimethyl dicarbonate, and filtration. Current research is focused on the use of chitosan, pulsed electric fields, low electric current, and ultrasonics as means to protect wine quality.

  7. [A New Simple Technique for Producing Labeled Monoclonal Antibodies for Antibody Pair Screening in Sandwich-ELISA].

    PubMed

    Zaripov, M M; Afanasieva, G V; Glukhova, X A; Trizna, Y A; Glukhov, A S; Beletsky, I P; Prusakova, O V

    2015-01-01

    A simple and fast method for obtaining biotin-labeled monoclonal antibodies was developed using the content of hybridoma culture supernatant, which is sufficient to select antibody pairs in sandwich ELISA. The method consists of chemical biotinylation of antigen-bound antibodies in a well of an ELISA plate. Using the Vaccinia virus A27L protein as an example target, it was shown that the yield of biotinylated reactant is sufficient to set up a comprehensive sandwich ELISA for a moderate-sized panel of up to 25 monoclonal antibodies with the aim of determining candidate pairs. The technique is a cheap and effective solution since it avoids obtaining preparative amounts of antibodies.

  8. A general engineering scenario for concurrent engineering environments

    NASA Astrophysics Data System (ADS)

    Mucino, V. H.; Pavelic, V.

    The paper describes an engineering method scenario that categorizes the various activities and tasks into blocks, seen as subjects that consume and produce data and information. These methods, tools, and associated utilities interact with other engineering tools by exchanging information in such a way that a relationship between customers and suppliers of engineering data is clearly established, while data-exchange consistency is maintained throughout the design process. The events and data transactions are presented in the form of flowcharts in which data transactions represent the connections between the various blocks, which in turn represent the engineering activities developed for the particular task required in the concurrent engineering environment.

  9. Off disk-center potential field calculations using vector magnetograms

    NASA Technical Reports Server (NTRS)

    Venkatakrishnan, P.; Gary, G. Allen

    1989-01-01

    A potential field calculation for off disk-center vector magnetograms that uses all three components of the measured field is investigated. There is no need either for interpolation of grid points between the image plane and the heliographic plane or for an extension or a truncation to a heliographic rectangle. Hence, the method provides the maximum information content from the photospheric field as well as the most consistent potential field independent of the viewing angle. The introduction of polarimetric noise makes the extrapolation procedure less tolerant than line-of-sight extrapolation, but the resultant standard deviation is still small enough for the practical utility of this method.

  10. Spacecraft inertia estimation via constrained least squares

    NASA Technical Reports Server (NTRS)

    Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.

    2006-01-01

    This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs (linear matrix inequalities). The resulting minimization problem is a semidefinite optimization that can be solved efficiently, with guaranteed convergence to the global optimum, by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.
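    A scalar sketch of the idea, assuming a single-axis torque = I·α model: the unconstrained least squares estimate is projected onto the physical bound I > 0. The paper's actual formulation constrains the full 3×3 inertia matrix with LMIs and needs a semidefinite programming solver, which is beyond a standard-library sketch.

```python
def estimate_principal_inertia(alpha, torque, i_min=1e-6):
    """Least-squares estimate of one principal moment of inertia from
    samples of torque = I * alpha (angular acceleration), projecting
    the result onto the physical bound I >= i_min. A scalar stand-in
    for the paper's LMI-constrained semidefinite program."""
    num = sum(a * t for a, t in zip(alpha, torque))
    den = sum(a * a for a in alpha)
    return max(num / den, i_min)

# Noisy single-axis test data with true inertia I = 2.0 (illustrative).
alpha_data = [1.0, 2.0, 3.0]
torque_data = [2.1, 3.9, 6.0]
inertia = estimate_principal_inertia(alpha_data, torque_data)
```

    The projection step is the one-dimensional analogue of constraining the estimate to the feasible set; with matrix-valued constraints the projection becomes an LMI-constrained semidefinite program.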

  11. Determination of relative ion chamber calibration coefficients from depth-ionization measurements in clinical electron beams

    NASA Astrophysics Data System (ADS)

    Muir, B. R.; McEwen, M. R.; Rogers, D. W. O.

    2014-10-01

    A method is presented to obtain ion chamber calibration coefficients relative to secondary standard reference chambers in electron beams using depth-ionization measurements. Results are obtained as a function of depth and average electron energy at depth in 4, 8, 12 and 18 MeV electron beams from the NRC Elekta Precise linac. The PTW Roos, Scanditronix NACP-02, PTW Advanced Markus and NE 2571 ion chambers are investigated. The challenges and limitations of the method are discussed. The proposed method produces useful data at shallow depths. At depths past the reference depth, small shifts in positioning or drifts in the incident beam energy affect the results, thereby providing a built-in test of incident electron energy drifts and/or chamber set-up. Polarity corrections for ion chambers as a function of average electron energy at depth agree with literature data. The proposed method produces results consistent with those obtained using the conventional calibration procedure while gaining much more information about the behavior of the ion chamber with similar data acquisition time. Measurement uncertainties in calibration coefficients obtained with this method are estimated to be less than 0.5%. These results open up the possibility of using depth-ionization measurements to yield chamber ratios which may be suitable for primary standards-level dissemination.
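    The core bookkeeping of a relative calibration from depth-ionization data can be sketched as follows; the readings and reference coefficient are illustrative placeholders, and real use requires the corrections and matched conditions discussed in the paper.

```python
def relative_calibration(depths, m_test, m_ref, n_ref):
    """Depth-by-depth calibration coefficient of a test chamber against
    a reference chamber: both ideally measure the same dose at a given
    depth, so N_test(z) = N_ref * M_ref(z) / M_test(z). A flat plateau
    over depth gives the coefficient; drift with depth flags energy or
    set-up problems, as the abstract notes."""
    return {z: n_ref * mr / mt for z, mt, mr in zip(depths, m_test, m_ref)}

# Illustrative readings (nC) at three depths and a made-up N_ref.
coeffs = relative_calibration(
    depths=[1.0, 2.0, 3.0],
    m_test=[10.0, 9.0, 7.0],
    m_ref=[10.2, 9.2, 7.1],
    n_ref=0.05,
)
```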

  12. Building a composite score of general practitioners' intrinsic motivation: a comparison of methods.

    PubMed

    Sicsic, Jonathan; Le Vaillant, Marc; Franc, Carine

    2014-04-01

    Pay-for-performance programmes have been widely implemented in primary care, but few studies have investigated their potential adverse effects on the intrinsic motivation of general practitioners (GPs), even though intrinsic motivation may be a key determinant of quality in health care. Our aim was to compare methods for developing a composite score of GPs' intrinsic motivation and to select the one most consistent with self-reported data. The data come from a postal survey of French GPs practicing in private practice. Using a set of variables selected to characterize the dimensions of intrinsic motivation, three alternative composite scores were calculated based on a multiple correspondence analysis (MCA), a confirmatory factor analysis (CFA) and a two-parameter logistic model (2-PLM). Weighted kappa coefficients were used to evaluate variation in GPs' ranks according to each method. The three methods produced similar results for both the estimation of the indicators' weights and the order of the GP rank lists. All weighted kappa coefficients were >0.80. The CFA and 2-PLM produced the most similar results. There was little difference among the three methods' results, validating our measure of GPs' intrinsic motivation. The 2-PLM appeared theoretically and empirically more robust for establishing the intrinsic motivation score. JEL codes: C38, C43, I18.
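    The agreement measure used to compare the rank lists, linear-weighted Cohen's kappa, can be computed directly; treating the scores as ordinal rank bands 0..n_cat−1 is an assumption for illustration.

```python
from collections import Counter

def weighted_kappa(r1, r2, n_cat):
    """Linear-weighted Cohen's kappa between two ordinal ratings of the
    same subjects (categories 0..n_cat-1), e.g. GP rank bands produced
    by two scoring methods. 1.0 means perfect agreement."""
    n = len(r1)
    obs = Counter(zip(r1, r2))
    p1, p2 = Counter(r1), Counter(r2)
    disagree = sum(abs(i - j) * obs[(i, j)] / n
                   for i in range(n_cat) for j in range(n_cat))
    expected = sum(abs(i - j) * p1[i] * p2[j] / (n * n)
                   for i in range(n_cat) for j in range(n_cat))
    return 1.0 - disagree / expected

# Two hypothetical rank-band assignments of six GPs.
k = weighted_kappa([0, 0, 1, 1, 2, 2], [0, 1, 1, 2, 2, 2], n_cat=3)
```

    Values above 0.80, as reported in the abstract, indicate that the methods rank the GPs almost identically.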

  13. Assessing the accuracy of cranial and pelvic ageing methods on human skeletal remains from a modern Greek assemblage.

    PubMed

    Xanthopoulou, Panagiota; Valakos, Efstratios; Youlatos, Dionisios; Nikita, Efthymia

    2018-05-01

    The present study tests the accuracy of commonly adopted ageing methods based on the morphology of the pubic symphysis, auricular surface and cranial sutures. These methods are examined both in their traditional form as well as in the context of transition analysis using the ADBOU software in a modern Greek documented collection consisting of 140 individuals who lived mainly in the second half of the twentieth century and come from cemeteries in the area of Athens. The auricular surface overall produced the most accurate age estimates in our material, with different methods based on this anatomical area showing varying degrees of success for different age groups. The pubic symphysis produced accurate results primarily for young adults and the same applied to cranial sutures but the latter appeared completely inappropriate for older individuals. The use of transition analysis through the ADBOU software provided less accurate results than the corresponding traditional ageing methods in our sample. Our results are in agreement with those obtained from validation studies based on material from across the world, but certain differences identified with other studies on Greek material highlight the importance of taking into account intra- and inter-population variability in age estimation. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Improved search for a Higgs boson produced in association with Z → l+l− in pp̄ collisions at √s = 1.96 TeV.

    PubMed

    Aaltonen, T; González, B Alvarez; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Appel, J A; Apresyan, A; Arisawa, T; Artikov, A; Asaadi, J; Ashmanskas, W; Auerbach, B; Aurisano, A; Azfar, F; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Barria, P; Bartos, P; Bauce, M; Bauer, G; Bedeschi, F; Beecher, D; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Bland, K R; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Brigliadori, L; Brisuda, A; Bromberg, C; Brucken, E; Bucciantonio, M; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Cabrera, S; Calancha, C; Camarda, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carls, B; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Corbo, M; Cordelli, M; Cox, C A; Cox, D J; Crescioli, F; Almenar, C Cuenca; Cuevas, J; Culbertson, R; Dagenhart, D; d'Ascenzo, N; Datta, M; de Barbaro, P; De Cecco, S; De Lorenzo, G; Dell'Orso, M; Deluca, C; Demortier, L; Deng, J; Deninno, M; Devoto, F; d'Errico, M; Di Canto, A; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Dorigo, T; Ebina, K; Elagin, A; Eppig, A; Erbacher, R; Errede, D; Errede, S; Ershaidat, N; Eusebi, R; Fang, H C; Farrington, S; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Frank, M J; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garcia, J E; Garfinkel, A F; Garosi, P; Gerberich, H; Gerchtein, E; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Ginsburg, C M; Giokaris, N; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, 
V; Glenzinski, D; Gold, M; Goldin, D; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; da Costa, J Guimaraes; Gunay-Unalan, Z; Haber, C; Hahn, S R; Halkiadakis, E; Hamaguchi, A; Han, J Y; Happacher, F; Hara, K; Hare, D; Hare, M; Harr, R F; Hatakeyama, K; Hays, C; Heck, M; Heinrich, J; Herndon, M; Hewamanage, S; Hidas, D; Hocker, A; Hopkins, W; Horn, D; Hou, S; Hughes, R E; Hurwitz, M; Husemann, U; Hussain, N; Hussein, M; Huston, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jang, D; Jayatilaka, B; Jeon, E J; Jha, M K; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Junk, T R; Kamon, T; Karchin, P E; Kato, Y; Ketchum, W; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, H W; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Klimenko, S; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Krumnack, N; Kruse, M; Krutelyov, V; Kuhr, T; Kurata, M; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; Lecompte, T; Lee, E; Lee, H S; Lee, J S; Lee, S W; Leo, S; Leone, S; Lewis, J D; Lin, C-J; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, Q; Liu, T; Lockwitz, S; Lockyer, N S; Loginov, A; Lucchesi, D; Lueck, J; Lujan, P; Lukens, P; Lungu, G; Lys, J; Lysak, R; Madrak, R; Maeshima, K; Makhoul, K; Maksimovic, P; Malik, S; Manca, G; Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Martínez, M; Martínez-Ballarín, R; Mastrandrea, P; Mathis, M; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzione, A; Mesropian, C; Miao, T; Mietlicki, D; Mitra, A; Miyake, H; Moed, S; Moggi, N; Mondragon, M N; Moon, C S; Moore, R; Morello, M J; Morlock, J; Fernandez, P Movilla; Mukherjee, A; Muller, Th; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Naganoma, J; 
Nakano, I; Napier, A; Nett, J; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norniella, O; Nurse, E; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Ortolan, L; Griso, S Pagan; Pagliarone, C; Palencia, E; Papadimitriou, V; Paramonov, A A; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pilot, J; Pitts, K; Plager, C; Pondrom, L; Potamianos, K; Poukhov, O; Prokoshin, F; Pronko, A; Ptohos, F; Pueschel, E; Punzi, G; Pursley, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Renton, P; Rescigno, M; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Ruffini, F; Ruiz, A; Russ, J; Rusu, V; Safonov, A; Sakumoto, W K; Santi, L; Sartori, L; Sato, K; Saveliev, V; Savoy-Navarro, A; Schlabach, P; Schmidt, A; Schmidt, E E; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sforza, F; Sfyrla, A; Shalhout, S Z; Shears, T; Shekhar, R; Shepard, P F; Shimojima, M; Shiraishi, S; Shochet, M; Shreyber, I; Simonenko, A; Sinervo, P; Sissakian, A; Sliwa, K; Smith, J R; Snider, F D; Soha, A; Somalwar, S; Sorin, V; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Strycker, G L; Sudo, Y; Sukhanov, A; Suslov, I; Takemasa, K; Takeuchi, Y; Tang, J; Tecchio, M; Teng, P K; Thom, J; Thome, J; Thompson, G A; Thomson, E; Ttito-Guzmán, P; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Trovato, M; Tu, Y; Turini, N; Ukegawa, F; Uozumi, S; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Vidal, M; Vila, I; Vilar, R; Vogel, M; Volpi, G; Wagner, P; Wagner, R L; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Wilbur, S; Wick, F; Williams, H H; Wilson, J S; Wilson, P; Winer, B L; Wittich, P; 
Wolbers, S; Wolfe, H; Wright, T; Wu, X; Wu, Z; Yamamoto, K; Yamaoka, J; Yang, T; Yang, U K; Yang, Y C; Yao, W-M; Yeh, G P; Yi, K; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanetti, A; Zeng, Y; Zucchelli, S

    2010-12-17

    We search for the standard model Higgs boson produced with a Z boson in 4.1 fb⁻¹ of integrated luminosity collected with the CDF II detector at the Tevatron. In events consistent with the decay of the Higgs boson to a bottom-quark pair and the Z boson to electrons or muons, we set 95% credibility level upper limits on the ZH production cross section multiplied by the H → bb̄ branching ratio. Improved analysis methods enhance signal sensitivity by 20% relative to previous searches. At a Higgs boson mass of 115 GeV/c² we set a limit of 5.9 times the standard model cross section.

  15. Method to produce furandicarboxylic acid (FDCA) from 5-hydroxymethylfurfural (HMF)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumesic, James A.; Motagamwala, Ali Hussain

    A process to produce furandicarboxylic acid (FDCA). The process includes the steps of reacting a C6 sugar-containing reactant in a reaction solution comprising a first organic solvent selected from the group consisting of beta-, gamma-, and delta-lactones, hydrofurans, hydropyrans, and combinations thereof, in the presence of an acid catalyst for a time and under conditions wherein at least a portion of the C6 sugar present in the reactant is converted to 5-(hydroxymethyl)furfural (HMF); oxidizing the HMF into FDCA with or without separating the HMF from the reaction solution; and extracting the FDCA by adding an aprotic organic solvent having a dipole moment of about 1.0 D or less to the reaction solution.

  16. FEM: Feature-enhanced map

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Afonine, Pavel V.; Moriarty, Nigel W.; Mustyakimov, Marat

    A method is presented that modifies a 2mF_obs − DF_model σ_A-weighted map such that the resulting map can strengthen a weak signal, if present, and can reduce model bias and noise. The method consists of first randomizing the starting map and filling in missing reflections using multiple methods. This is followed by restricting the map to regions with convincing density and the application of sharpening. The final map is then created by combining a series of histogram-equalized intermediate maps. In the test cases shown, the maps produced in this way are found to have increased interpretability and decreased model bias compared with the starting 2mF_obs − DF_model σ_A-weighted map.
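    One ingredient of the procedure, histogram equalization of a map, can be illustrated with a rank-based version. This is a simplification: real implementations bin the voxel values and handle ties explicitly.

```python
def histogram_equalize(values):
    """Rank-based histogram equalization: map each value to its
    empirical-CDF position in [0, 1], flattening the value histogram.
    A simplified stand-in for the equalization of intermediate maps."""
    order = sorted(range(len(values)), key=values.__getitem__)
    out = [0.0] * len(values)
    for rank, idx in enumerate(order):
        out[idx] = rank / (len(values) - 1)
    return out
```

    The final feature-enhanced map is then a combination of several such equalized intermediate maps.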

  17. Electronic structure of the Cu + impurity center in sodium chloride

    NASA Astrophysics Data System (ADS)

    Chermette, H.; Pedrini, C.

    1981-08-01

    The multiple-scattering Xα method is used to describe the electronic structure of Cu+ in sodium chloride. Several improvements are made to the conventional Xα calculation. In particular, the cluster approximation is used, taking into account the external lattice potential. The 'transition state' procedure is applied in order to obtain the various multiplet levels. The fine electronic structure of the impurity centers is obtained after a calculation of the spin-orbit interactions. These results are compared with those given by a modified charge-consistent extended Hückel method (Fenske-type calculation), and the merits of each method are discussed. The present calculation produces good quantitative agreement with experiment, mainly concerning the optical excitations and the emission mechanism of the Cu+ luminescent centers in NaCl.

  18. The protonation of N2O reexamined - A case study on the reliability of various electron correlation methods for minima and transition states

    NASA Technical Reports Server (NTRS)

    Martin, J. M. L.; Lee, Timothy J.

    1993-01-01

    The protonation of N2O and the intramolecular proton transfer in N2OH(+) are studied using various basis sets and a variety of methods, including second-order many-body perturbation theory (MP2), singles and doubles coupled cluster (CCSD), the augmented coupled cluster CCSD(T), and complete active space self-consistent field (CASSCF) methods. For geometries, MP2 leads to serious errors even for HNNO(+); for the transition state, only CCSD(T) produces a reliable geometry due to serious nondynamical correlation effects. The proton affinity at 298.15 K is estimated at 137.6 kcal/mol, in close agreement with recent experimental determinations of 137.3 +/- 1 kcal/mol.

  19. FEM: feature-enhanced map

    PubMed Central

    Afonine, Pavel V.; Moriarty, Nigel W.; Mustyakimov, Marat; Sobolev, Oleg V.; Terwilliger, Thomas C.; Turk, Dusan; Urzhumtsev, Alexandre; Adams, Paul D.

    2015-01-01

    A method is presented that modifies a 2mF_obs − DF_model σ_A-weighted map such that the resulting map can strengthen a weak signal, if present, and can reduce model bias and noise. The method consists of first randomizing the starting map and filling in missing reflections using multiple methods. This is followed by restricting the map to regions with convincing density and the application of sharpening. The final map is then created by combining a series of histogram-equalized intermediate maps. In the test cases shown, the maps produced in this way are found to have increased interpretability and decreased model bias compared with the starting 2mF_obs − DF_model σ_A-weighted map. PMID:25760612

  20. FEM: Feature-enhanced map

    DOE PAGES

    Afonine, Pavel V.; Moriarty, Nigel W.; Mustyakimov, Marat; ...

    2015-02-26

    A method is presented that modifies a 2mF_obs − DF_model σ_A-weighted map such that the resulting map can strengthen a weak signal, if present, and can reduce model bias and noise. The method consists of first randomizing the starting map and filling in missing reflections using multiple methods. This is followed by restricting the map to regions with convincing density and the application of sharpening. The final map is then created by combining a series of histogram-equalized intermediate maps. In the test cases shown, the maps produced in this way are found to have increased interpretability and decreased model bias compared with the starting 2mF_obs − DF_model σ_A-weighted map.

  1. Identifying people from gait pattern with accelerometers

    NASA Astrophysics Data System (ADS)

    Ailisto, Heikki J.; Lindholm, Mikko; Mantyjarvi, Jani; Vildjiounaite, Elena; Makela, Satu-Marja

    2005-03-01

    Protecting portable devices is becoming more important, not only because of the value of the devices themselves, but because of the value of the data in them and their capability for transactions, including m-commerce and m-banking. An unobtrusive and natural method for identifying the carrier of portable devices is presented. The method uses acceleration signals produced by sensors embedded in the portable device. When the user carries the device, the acceleration signal is compared with the stored template signal. The method consists of finding individual steps, normalizing and averaging them, aligning them with the template, and computing the cross-correlation, which is used as a measure of similarity. An equal error rate of 6.4% is achieved in tentative experiments with 36 test subjects.
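    The final similarity step can be sketched as a normalized cross-correlation between the stored template step and a measured step, assuming both have already been segmented, resampled to equal length, and aligned; the 0.7 accept threshold is a made-up illustration, not the paper's operating point.

```python
def normalized_cross_correlation(template, signal):
    """Zero-lag normalized cross-correlation between the stored step
    template and a measured step (both mean-removed); the similarity
    measure used for the identity decision. Range is [-1, 1]."""
    t_mean = sum(template) / len(template)
    s_mean = sum(signal) / len(signal)
    t = [x - t_mean for x in template]
    s = [x - s_mean for x in signal]
    num = sum(a * b for a, b in zip(t, s))
    den = (sum(a * a for a in t) * sum(b * b for b in s)) ** 0.5
    return num / den

def accept(template, signal, threshold=0.7):
    """Accept the carrier as the enrolled user if similarity exceeds a
    threshold (0.7 is illustrative, not the paper's value)."""
    return normalized_cross_correlation(template, signal) >= threshold
```

    Sweeping the threshold trades false accepts against false rejects; the equal error rate reported in the abstract is the point where the two rates coincide.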

  2. IRRADIATION METHOD OF CONVERTING ORGANIC COMPOUNDS

    DOEpatents

    Allen, A.O.; Caffrey, J.M. Jr.

    1960-10-11

    A method is given for changing the distribution of organic compounds from that produced by the irradiation of bulk alkane hydrocarbons. This method consists of depositing an alkane hydrocarbon on the surface of a substrate material and irradiating it with gamma radiation at a dose rate of more than 100,000 rads. The substrate material may be a metal, metal salts, metal oxides, or carbons having a surface area in excess of 1 m²/g. The hydrocarbons are deposited in layers of from 0.1 to 10 monolayers on the surfaces of these substrates and irradiated. The product yields are found to vary from those which result from the irradiation of bulk hydrocarbons in that there is an increase in the quantity of branched hydrocarbons.

  3. UNCLES: method for the identification of genes differentially consistently co-expressed in a specific subset of datasets.

    PubMed

    Abu-Jamous, Basel; Fa, Rui; Roberts, David J; Nandi, Asoke K

    2015-06-04

    Collective analysis of the increasingly emerging gene expression datasets is required. The recently proposed binarisation of consensus partition matrices (Bi-CoPaM) method can combine clustering results from multiple datasets to identify the subsets of genes which are consistently co-expressed in all of the provided datasets in a tuneable manner. However, results validation and parameter setting are issues that complicate the design of such methods. Moreover, although it is a common practice to test methods by application to synthetic datasets, the mathematical models used to synthesise such datasets are usually based on approximations which may not always be sufficiently representative of real datasets. Here, we propose an unsupervised method for the unification of clustering results from multiple datasets using external specifications (UNCLES). This method has the ability to identify the subsets of genes consistently co-expressed in a subset of datasets while being poorly co-expressed in another subset of datasets, and to identify the subsets of genes consistently co-expressed in all given datasets. We also propose the M-N scatter plots validation technique and adopt it to set the parameters of UNCLES, such as the number of clusters, automatically. Additionally, we propose an approach for the synthesis of gene expression datasets using real data profiles in a way which combines the ground-truth knowledge of synthetic data and the realistic expression values of real data, and therefore overcomes the problem of faithfulness of synthetic expression data modelling. By application to those datasets, we validate UNCLES while comparing it with other conventional clustering methods, and, of particular relevance, biclustering methods. We further validate UNCLES by application to a set of 14 real genome-wide yeast datasets, as it produces focused clusters that conform well to known biological facts. 
Furthermore, in-silico-based hypotheses regarding the function of a few previously unknown genes in those focused clusters are drawn. The UNCLES method, the M-N scatter plots technique, and the expression data synthesis approach will have wide application for the comprehensive analysis of genomic and other sources of multiple complex biological datasets. Moreover, the derived in-silico-based biological hypotheses represent subjects for future functional studies.
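
    The consensus step of Bi-CoPaM-style methods can be illustrated in a few lines. The sketch below assumes the cluster labels have already been aligned across datasets (real pipelines must first relabel clusters); it builds a fuzzy consensus partition matrix from several hard clusterings and applies maximum-value binarisation. It conveys the general idea only, not the published algorithm.

```python
import numpy as np

def consensus_binarise(partitions, n_clusters):
    """Fuzzy consensus of several hard clusterings, binarised by
    maximum membership (one simple Bi-CoPaM-style binarisation)."""
    n_genes = len(partitions[0])
    # Fuzzy consensus partition matrix: rows = clusters, cols = genes.
    C = np.zeros((n_clusters, n_genes))
    for p in partitions:
        for g, k in enumerate(p):
            C[k, g] += 1.0
    C /= len(partitions)
    # Maximum-value binarisation: each gene goes to its best cluster only.
    B = np.zeros_like(C, dtype=bool)
    B[np.argmax(C, axis=0), np.arange(n_genes)] = True
    return C, B

# Toy example: three datasets clustered the same 6 genes into 2 clusters
# (labels assumed pre-aligned across datasets).
parts = [
    [0, 0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1, 1],
    [0, 0, 0, 1, 1, 0],
]
C, B = consensus_binarise(parts, n_clusters=2)
```

    Genes consistently assigned to the same cluster in every dataset get a consensus membership of 1.0, while genes with mixed assignments are resolved to their majority cluster.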

  4. Prediction of gas production using well logs, Cretaceous of north-central Montana

    USGS Publications Warehouse

    Hester, T.C.

    1999-01-01

    Cretaceous gas sands underlie much of east-central Alberta and southern Saskatchewan, eastern Montana, western North Dakota, and parts of South Dakota and Wyoming. Estimates of recoverable biogenic methane from these rocks in the United States are as high as 91 TCF. In northern Montana, current production is localized around a few major structural features, while vast areas in between these structures are not being exploited. Although the potential for production exists, the lack of commercial development is due to three major factors: 1) the lack of pipeline infrastructure; 2) the lack of predictable and reliable rates of production; and 3) the difficulty in recognizing and selecting potentially productive gas-charged intervals. Unconventional (tight), continuous-type reservoirs, such as those in the Cretaceous of the northern Great Plains, are not well suited for conventional methods of formation evaluation. Pay zones frequently consist only of thinly laminated intervals of sandstone, silt, shale stringers, and disseminated clay. Potential producing intervals are commonly unrecognizable on well logs, and thus are overlooked. To aid in the identification and selection of potential producing intervals, a calibration system is developed here that empirically links the 'gas effect' to gas production. The calibration system combines the effects of porosity, water saturation, and clay content into a single 'gas-production index' (GPI) that relates the in-situ rock with production potential. The fundamental method for isolating the gas effect for calibration is a crossplot of neutron porosity minus density porosity vs gamma-ray intensity. Well-log and gas-production data used for this study consist of 242 perforated intervals from 53 gas-producing wells. Interval depths range from about 250 to 2400 ft. Gas volumes in the peak calendar year of production range from about 4 to 136 MMCF. Nine producing formations are represented. 
Producing-interval data show that porosity and gas production are closely linked to clay volume. Highest porosities and maximum gas production occur together at an intermediate clay content of about 12% (60 API). As clay volume exceeds 35% (130 API), minimum porosity required for production increases rapidly, and the number of potential producing intervals declines. Gas production from intervals where clay volume exceeds 50% is rare. Effective porosities of less than about 8% are probably inadequate for commercial gas production in these rocks regardless of clay content.
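
    The crossplot logic described above can be sketched numerically. The gamma-ray endpoints and cutoffs below are hypothetical placeholders (the paper's calibration is empirical and field-specific); the sketch only shows the sign convention of the gas effect and a linear clay-volume proxy.

```python
import numpy as np

def gas_effect(phi_neutron, phi_density):
    # Gas suppresses the neutron porosity and inflates the density
    # porosity, so neutron minus density goes negative in gas zones.
    return np.asarray(phi_neutron, float) - np.asarray(phi_density, float)

def clay_volume_from_gr(gr_api, gr_clean=20.0, gr_shale=140.0):
    # Linear gamma-ray index as a crude clay-volume proxy
    # (endpoints are hypothetical; real ones are picked per field).
    v = (np.asarray(gr_api, float) - gr_clean) / (gr_shale - gr_clean)
    return np.clip(v, 0.0, 1.0)

def candidate(phi_n, phi_d, gr_api):
    # Flag candidate producing intervals: visible gas effect and clay
    # volume below the ~50% limit noted in the producing-interval data.
    return (gas_effect(phi_n, phi_d) < 0) & (clay_volume_from_gr(gr_api) < 0.5)
```

    A calibrated GPI would additionally weight porosity and water saturation, which this toy version omits.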

  5. Impervious surface mapping with Quickbird imagery

    PubMed Central

    Lu, Dengsheng; Hetrick, Scott; Moran, Emilio

    2010-01-01

    This research selects two study areas with different urban developments, sizes, and spatial patterns to explore the suitable methods for mapping impervious surface distribution using Quickbird imagery. The selected methods include per-pixel based supervised classification, segmentation-based classification, and a hybrid method. A comparative analysis of the results indicates that per-pixel based supervised classification produces a large number of “salt-and-pepper” pixels, and segmentation based methods can significantly reduce this problem. However, neither method can effectively solve the spectral confusion of impervious surfaces with water/wetland and bare soils and the impacts of shadows. In order to accurately map impervious surface distribution from Quickbird images, manual editing is necessary and may be the only way to extract impervious surfaces from the confused land covers and the shadow problem. This research indicates that the hybrid method consisting of thresholding techniques, unsupervised classification and limited manual editing provides the best performance. PMID:21643434
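
    The thresholding step of such a hybrid workflow can be sketched as follows; the band combination, the NDVI cutoff and the brightness cutoff are illustrative assumptions, not values from the study.

```python
import numpy as np

def threshold_impervious(band_nir, band_red, ndvi_cut=0.3, bright_cut=0.25):
    # Hypothetical two-step thresholding: mask vegetation with NDVI,
    # then call bright, non-vegetated pixels impervious. Water, bare
    # soil and shadow confusion would still need manual editing.
    nir = np.asarray(band_nir, float)
    red = np.asarray(band_red, float)
    ndvi = (nir - red) / (nir + red + 1e-9)
    brightness = (nir + red) / 2.0
    return (ndvi < ndvi_cut) & (brightness > bright_cut)
```

    In practice the thresholds would be tuned per scene, and the result refined by unsupervised classification and limited manual editing as the abstract describes.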

  6. Activation of mesocorticolimbic reward circuits for assessment of relief of ongoing pain: a potential biomarker of efficacy.

    PubMed

    Xie, Jennifer Y; Qu, Chaoling; Patwardhan, Amol; Ossipov, Michael H; Navratilova, Edita; Becerra, Lino; Borsook, David; Porreca, Frank

    2014-08-01

    Preclinical assessment of pain has increasingly explored operant methods that may allow behavioral assessment of ongoing pain. In animals with incisional injury, peripheral nerve block produces conditioned place preference (CPP) and activates the mesolimbic dopaminergic reward pathway. We hypothesized that activation of this circuit could serve as a neurochemical output measure of relief of ongoing pain. Medications commonly used clinically, including gabapentin and nonsteroidal anti-inflammatory drugs (NSAIDs), were evaluated in models of post-surgical (1 day after incision) or neuropathic (14 days after spinal nerve ligation [SNL]) pain to determine whether the clinical efficacy profile of these drugs in these pain conditions was reflected by extracellular dopamine (DA) release in the nucleus accumbens (NAc) shell. Microdialysis was performed in awake rats. Basal DA levels were not significantly different between experimental groups, and no significant treatment effects were seen in sham-operated animals. Consistent with clinical observation, spinal clonidine produced CPP and a dose-related increase in net NAc DA release in SNL rats. Gabapentin, commonly used to treat neuropathic pain, produced increased NAc DA in rats with SNL but not in animals with incisional injury. In contrast, ketorolac or naproxen produced increased NAc DA in animals with incisional but not neuropathic pain. Increased extracellular NAc DA release was consistent with CPP and was observed selectively with treatments commonly used clinically for post-surgical or neuropathic pain. Evaluation of NAc DA efflux in animal pain models may represent an objective neurochemical assay that may serve as a biomarker of efficacy for novel pain-relieving mechanisms. Copyright © 2014 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.

  7. Computerized tomography with total variation and with shearlets

    NASA Astrophysics Data System (ADS)

    Garduño, Edgar; Herman, Gabor T.

    2017-04-01

    To reduce the x-ray dose in computerized tomography (CT), many constrained optimization approaches have been proposed aiming at minimizing a regularizing function that measures a lack of consistency with some prior knowledge about the object that is being imaged, subject to a (predetermined) level of consistency with the detected attenuation of x-rays. One commonly investigated regularizing function is total variation (TV), while other publications advocate the use of some type of multiscale geometric transform in the definition of the regularizing function; a particular recent choice is the shearlet transform. Proponents of the shearlet transform in the regularizing function claim that the reconstructions so obtained are better than those produced using TV for texture preservation (but may be worse for noise reduction). In this paper we report results related to this claim. In our reported experiments using simulated CT data collection of the head, reconstructions whose shearlet transform has a small ℓ1-norm are not more efficacious than reconstructions that have a small TV value. Our experiments for making such comparisons use the recently developed superiorization methodology for both regularizing functions. Superiorization is an automated procedure for turning an iterative algorithm for producing images that satisfy a primary criterion (such as consistency with the observed measurements) into its superiorized version that will produce results that, according to the primary criterion, are as good as those produced by the original algorithm, but in addition are superior to them according to a secondary (regularizing) criterion. The method presented for superiorization involving the ℓ1-norm of the shearlet transform is novel and is quite general: it can be used for any regularizing function that is defined as the ℓ1-norm of a transform specified by the application of a matrix.
Because in the previous literature the split Bregman algorithm is used for similar purposes, a section is included comparing the results of the superiorization algorithm with the split Bregman algorithm.
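
    The TV regularizing function referred to above has a compact discrete definition. A minimal sketch of the isotropic version using forward differences (a common choice; the paper's exact discretization may differ):

```python
import numpy as np

def total_variation(img):
    """Isotropic total variation of a 2-D image: sum over pixels of the
    Euclidean norm of the forward-difference gradient."""
    dx = np.diff(img, axis=1)  # horizontal differences
    dy = np.diff(img, axis=0)  # vertical differences
    # Pad so both gradient components share a common shape.
    dx = np.pad(dx, ((0, 0), (0, 1)))
    dy = np.pad(dy, ((0, 1), (0, 0)))
    return float(np.sum(np.hypot(dx, dy)))

# A piecewise-constant image has low TV; edges and noise raise it.
flat = np.zeros((32, 32))
edge = flat.copy()
edge[:, 16:] = 1.0
```

    In a superiorization loop, steps along the negative subgradient of such a function are interleaved with the iterations of the reconstruction algorithm.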

  8. State estimation improves prospects for ocean research

    NASA Astrophysics Data System (ADS)

    Stammer, Detlef; Wunsch, C.; Fukumori, I.; Marshall, J.

    Rigorous global ocean state estimation methods can now be used to produce dynamically consistent time-varying model/data syntheses, the results of which are being used to study a variety of important scientific problems. Figure 1 shows a schematic of a complete ocean observing and synthesis system that includes global observations and state-of-the-art ocean general circulation models (OGCM) run on modern computer platforms. A global observing system is described in detail in Smith and Koblinsky [2001], and the present status of ocean modeling and anticipated improvements are addressed by Griffies et al. [2001]. Here, the focus is on the third component of state estimation: the synthesis of the observations and a model into a unified, dynamically consistent estimate.

  9. Preparation and composition of superconducting copper oxides based on Ga-O layers

    DOEpatents

    Dabrowski, B.; Vaughey, J.T.; Poeppelmeier, K.R.

    1994-12-20

    A high temperature superconducting material with the general formula GaSr₂Ln₁₋ₓMₓCu₂O₇±w, wherein Ln is selected from the group consisting of La, Ce, Pr, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb and Y, M is selected from the group consisting of Ca and Sr, 0.2 ≤ x ≤ 0.4, and w is a small fraction of one. A method of preparing this high temperature superconducting material is provided which includes heating and cooling a mixture to produce a crystalline material which is subsequently fired, ground and annealed at high pressure and temperature in oxygen to establish superconductivity. 14 figures.

  10. Microscopy with multimode fibers

    NASA Astrophysics Data System (ADS)

    Moser, Christophe; Papadopoulos, Ioannis; Farahi, Salma; Psaltis, Demetri

    2013-04-01

    Microscopes are usually thought of as comprising imaging elements such as objectives and eye-piece lenses. A different type of microscope, used for endoscopy, consists of waveguiding elements such as fiber bundles, where each fiber in the bundle transports the light corresponding to one pixel in the image. Recently, a new type of microscope has emerged that exploits the large number of propagating modes in a single multimode fiber. We have successfully produced fluorescence images of neural cells with sub-micrometer resolution via a 200 micrometer core multimode fiber. The method for achieving imaging consists of using digital phase conjugation to reproduce a focal spot at the tip of the multimode fiber. The image is formed by scanning the focal spot digitally and collecting the fluorescence point by point.

  11. Comparison of blueberry powder produced via foam-mat freeze-drying versus spray-drying: evaluation of foam and powder properties.

    PubMed

    Darniadi, Sandi; Ho, Peter; Murray, Brent S

    2018-03-01

    Blueberry juice powder was developed via foam-mat freeze-drying (FMFD) and spray-drying (SD) with addition of maltodextrin (MD) and whey protein isolate (WPI) at weight ratios of MD/WPI = 0.4 to 3.2 (with a fixed solids content of 5 wt% for FMFD and 10 wt% for SD). Feed rates of 180 and 360 mL h-1 were tested in SD. The objective was to evaluate the effect of the drying methods and carrier agents on the physical properties of the corresponding blueberry powders and reconstituted products. Ratios of MD/WPI = 0.4, 1.0 and 1.6 produced highly stable foams most suitable for FMFD. FMFD gave high yields and low bulk density powders with flake-like particles of large size that were also dark purple with high red values. SD gave low powder recoveries; the powders had higher bulk density and faster rehydration times, consisting of smooth, spherical and smaller particles than in FMFD powders. The SD powders were bright purple but less red than FMFD powders. Solubility was greater than 95% for both FMFD and SD powders. The FMFD method is thus a feasible way of producing blueberry juice powder and gives products retaining more characteristics of the original juice than SD. © 2017 Society of Chemical Industry.

  12. Development and Implementation of a Coagulation Factor Testing Method Utilizing Autoverification in a High-volume Clinical Reference Laboratory Environment

    PubMed Central

    Riley, Paul W.; Gallea, Benoit; Valcour, Andre

    2017-01-01

    Background: Testing coagulation factor activities requires that multiple dilutions be assayed and analyzed to produce a single result. The slope of the line created by plotting measured factor concentration against sample dilution is evaluated to discern the presence of inhibitors giving rise to nonparallelism. Moreover, samples producing results on initial dilution falling outside the analytic measurement range of the assay must be tested at additional dilutions to produce reportable results. Methods: The complexity of this process has motivated a large clinical reference laboratory to develop advanced computer algorithms with automated reflex testing rules to complete coagulation factor analysis. A method was developed for autoverification of coagulation factor activity using expert rules implemented on an off-the-shelf, commercially available data manager system integrated into an automated coagulation platform. Results: Here, we present an approach allowing for the autoverification and reporting of factor activity results with greatly diminished technologist effort. Conclusions: To the best of our knowledge, this is the first report of its kind providing a detailed procedure for implementation of autoverification expert rules as applied to coagulation factor activity testing. Advantages of this system include ease of training for new operators, minimization of technologist time spent, reduction of staff fatigue, minimization of unnecessary reflex tests, optimization of turnaround time, and assurance of the consistency of the testing and reporting process. PMID:28706751
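
    A toy version of such reflex rules might look as follows. The measuring range, slope tolerance and decision messages are invented for illustration; real rules are assay- and analyzer-specific.

```python
import numpy as np

# Hypothetical autoverification rules for a factor-activity assay: fit
# measured activity against log dilution, and report only if the
# dilutions agree (parallelism) and the mean lies inside the range.
AMR_LOW, AMR_HIGH = 1.0, 150.0   # analytic measurement range, % (assumed)
MAX_SLOPE = 0.15                 # tolerated trend across dilutions (assumed)

def evaluate_factor(dilutions, activities):
    x = np.log2(np.asarray(dilutions, float))
    y = np.asarray(activities, float)
    mean_activity = float(y.mean())
    # Normalised trend across dilutions; a strong trend suggests an
    # inhibitor producing nonparallelism.
    slope = np.polyfit(x, y / mean_activity, 1)[0]
    if not (AMR_LOW <= mean_activity <= AMR_HIGH):
        return "reflex: retest at additional dilutions"
    if abs(slope) > MAX_SLOPE:
        return "hold: possible inhibitor (non-parallelism)"
    return f"report: {mean_activity:.1f}%"
```

    Concordant dilutions autoverify; discordant or out-of-range results trigger a hold or reflex, mirroring the decision structure the abstract describes.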

  13. Adeno-associated virus vectors can be efficiently produced without helper virus.

    PubMed

    Matsushita, T; Elliger, S; Elliger, C; Podsakoff, G; Villarreal, L; Kurtzman, G J; Iwaki, Y; Colosi, P

    1998-07-01

    The purpose of this work was to develop an efficient method for the production of adeno-associated virus (AAV) vectors in the absence of helper virus. The adenovirus regions that mediate AAV vector replication were identified and assembled into a helper plasmid. These included the VA, E2A and E4 regions. When this helper plasmid was cotransfected into 293 cells, along with plasmids encoding the AAV vector and the rep and cap genes, AAV vector was produced as efficiently as when using adenovirus infection as a source of help. CMV-driven constructs expressing the E4orf6 and the 72-Mr E2A proteins were able to functionally replace the E4 and E2A regions, respectively. Therefore the minimum set of genes required to produce AAV helper activity equivalent to that provided by adenovirus infection consists of, or is a subset of, the following genes: the E4orf6 gene, the 72-Mr E2A protein gene, the VA RNA genes and the E1 region. AAV vector preparations made with adenovirus and by the helper virus-free method were essentially indistinguishable with respect to particle density, particle to infectivity ratio, capsimer ratio and efficiency of muscle transduction in vivo. Only AAV vector preparations made by the helper virus-free method were not reactive with anti-adenovirus sera.

  14. Optimal Control Based Stiffness Identification of an Ankle-Foot Orthosis Using a Predictive Walking Model

    PubMed Central

    Sreenivasa, Manish; Millard, Matthew; Felis, Martin; Mombaur, Katja; Wolf, Sebastian I.

    2017-01-01

    Predicting the movements, ground reaction forces and neuromuscular activity during gait can be a valuable asset to the clinical rehabilitation community, both to understand pathology and to plan effective intervention. In this work we use an optimal control method to generate predictive simulations of pathological gait in the sagittal plane. We construct a patient-specific model corresponding to a 7-year-old child with gait abnormalities and identify the optimal spring characteristics of an ankle-foot orthosis that minimize muscle effort. Our simulations include the computation of foot-ground reaction forces, as well as the neuromuscular dynamics using computationally efficient muscle torque generators and excitation-activation equations. The optimal control problem (OCP) is solved with a direct multiple shooting method. The solution of this problem consists of physically consistent synthetic neural excitation commands, muscle activations and whole-body motion. Our simulations produced similar changes to the gait characteristics as those recorded on the patient. The orthosis-equipped model was able to walk faster with more extended knees. Notably, our approach can be easily tuned to simulate weakened muscles, produces physiologically realistic ground reaction forces and smooth muscle activations and torques, and can be implemented on a standard workstation to produce results within a few hours. These results are an important contribution toward bridging the gap between research methods in computational neuromechanics and day-to-day clinical rehabilitation. PMID:28450833

  15. Automated segmentation and geometrical modeling of the tricuspid aortic valve in 3D echocardiographic images.

    PubMed

    Pouch, Alison M; Wang, Hongzhi; Takabe, Manabu; Jackson, Benjamin M; Sehgal, Chandra M; Gorman, Joseph H; Gorman, Robert C; Yushkevich, Paul A

    2013-01-01

    The aortic valve has been described with variable anatomical definitions, and the consistency of 2D manual measurement of valve dimensions in medical image data has been questionable. Given the importance of image-based morphological assessment in the diagnosis and surgical treatment of aortic valve disease, there is considerable need to develop a standardized framework for 3D valve segmentation and shape representation. Towards this goal, this work integrates template-based medial modeling and multi-atlas label fusion techniques to automatically delineate and quantitatively describe aortic leaflet geometry in 3D echocardiographic (3DE) images, a challenging task that has been explored only to a limited extent. The method makes use of expert knowledge of aortic leaflet image appearance, generates segmentations with consistent topology, and establishes a shape-based coordinate system on the aortic leaflets that enables standardized automated measurements. In this study, the algorithm is evaluated on 11 3DE images of normal human aortic leaflets acquired at mid systole. The clinical relevance of the method is its ability to capture leaflet geometry in 3DE image data with minimal user interaction while producing consistent measurements of 3D aortic leaflet geometry.

  16. Small-Tip-Angle Spokes Pulse Design Using Interleaved Greedy and Local Optimization Methods

    PubMed Central

    Grissom, William A.; Khalighi, Mohammad-Mehdi; Sacolick, Laura I.; Rutt, Brian K.; Vogel, Mika W.

    2013-01-01

    Current spokes pulse design methods can be grouped into methods based either on sparse approximation or on iterative local (gradient descent-based) optimization of the transverse-plane spatial frequency locations visited by the spokes. These two classes of methods have complementary strengths and weaknesses: sparse approximation-based methods perform an efficient search over a large swath of candidate spatial frequency locations but most are incompatible with off-resonance compensation, multifrequency designs, and target phase relaxation, while local methods can accommodate off-resonance and target phase relaxation but are sensitive to initialization and suboptimal local cost function minima. This article introduces a method that interleaves local iterations, which optimize the radiofrequency pulses, target phase patterns, and spatial frequency locations, with a greedy method to choose new locations. Simulations and experiments at 3 and 7 T show that the method consistently produces single- and multifrequency spokes pulses with lower flip angle inhomogeneity compared to current methods. PMID:22392822
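
    The greedy half of such an interleaved design can be illustrated with a generic greedy column-selection (matching-pursuit-style) loop. This is a toy stand-in, not the authors' pulse-design code: candidate spatial-frequency locations become columns of a matrix, and the "local" step is reduced to re-solving the least-squares weights over the chosen locations.

```python
import numpy as np

def greedy_select(A, b, n_spokes):
    """Greedy half of an interleaved design loop: repeatedly add the
    candidate column (spatial-frequency location) that best matches the
    current residual of the fit A[:, S] w ~= b, then re-solve the
    weights over all chosen locations (simplified 'local' step)."""
    selected = []
    residual = b.astype(float)
    for _ in range(n_spokes):
        # Correlate the residual with all candidates, ignoring used ones.
        scores = np.abs(A.conj().T @ residual)
        scores[selected] = -np.inf
        selected.append(int(np.argmax(scores)))
        # Re-solve the weights over the chosen locations, analogous to
        # re-optimising the radiofrequency pulse weights.
        w, *_ = np.linalg.lstsq(A[:, selected], b, rcond=None)
        residual = b - A[:, selected] @ w
    return selected, residual
```

    The full method additionally iterates local gradient-based updates of the locations and target phase between greedy additions, which this sketch omits.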

  17. Improving the complementary methods to estimate evapotranspiration under diverse climatic and physical conditions

    NASA Astrophysics Data System (ADS)

    Anayah, F. M.; Kaluarachchi, J. J.

    2014-06-01

    Reliable estimation of evapotranspiration (ET) is important for the purpose of water resources planning and management. Complementary methods, including complementary relationship areal evapotranspiration (CRAE), advection aridity (AA) and Granger and Gray (GG), have been used to estimate ET because these methods are simple and practical in estimating regional ET using meteorological data only. However, prior studies have found limitations in these methods especially in contrasting climates. This study aims to develop a calibration-free universal method using the complementary relationships to compute regional ET in contrasting climatic and physical conditions with meteorological data only. The proposed methodology consists of a systematic sensitivity analysis using the existing complementary methods. This work used 34 global FLUXNET sites where eddy covariance (EC) fluxes of ET are available for validation. A total of 33 alternative model variations from the original complementary methods were proposed. Further analysis using statistical methods and simplified climatic class definitions produced one distinctly improved GG-model-based alternative. The proposed model produced a single-step ET formulation with results equal to or better than the recent studies using data-intensive, classical methods. Average root mean square error (RMSE), mean absolute bias (BIAS) and R2 (coefficient of determination) across 34 global sites were 20.57 mm month-1, 10.55 mm month-1 and 0.64, respectively. The proposed model showed a step forward toward predicting ET in large river basins with limited data and requiring no calibration.
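
    The validation statistics quoted above are standard and easy to compute. A small sketch, using one common definition of R² (1 − SS_res/SS_tot; the study may define its coefficient of determination differently) and reading BIAS as the mean absolute difference:

```python
import numpy as np

def validation_stats(et_model, et_obs):
    """RMSE, mean absolute bias and R^2 between modelled ET and
    eddy-covariance observations (units as given, e.g. mm/month)."""
    m = np.asarray(et_model, float)
    o = np.asarray(et_obs, float)
    rmse = float(np.sqrt(np.mean((m - o) ** 2)))
    bias = float(np.mean(np.abs(m - o)))       # mean absolute bias
    ss_res = float(np.sum((o - m) ** 2))
    ss_tot = float(np.sum((o - o.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot
    return rmse, bias, r2
```

    Applied monthly at each FLUXNET site, such statistics allow the alternative model variations to be ranked as described.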

  18. Controlling protected designation of origin of wine by Raman spectroscopy.

    PubMed

    Mandrile, Luisa; Zeppa, Giuseppe; Giovannozzi, Andrea Mario; Rossi, Andrea Mario

    2016-11-15

    In this paper, a Fourier transform Raman spectroscopy method to authenticate the provenance of wine for food traceability applications was developed. In particular, due to the specific chemical fingerprint of the Raman spectrum, it was possible to discriminate different wines produced in the Piedmont area (North West Italy) according to i) grape variety, ii) production area and iii) ageing time. In order to create a consistent training set, more than 300 samples from tens of different producers were analyzed, and a chemometric treatment of raw spectra was applied. A discriminant analysis method was employed in the classification procedures, providing a classification capability (percentage of correct answers) of 90% for validation of grape variety and geographical provenance, and a classification capability of 84% for ageing time. The present methodology was applied successfully to raw materials without any preliminary treatment of the sample, providing a response in a very short time. Copyright © 2016 Elsevier Ltd. All rights reserved.
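
    The train/classify structure of discriminant classification of spectral fingerprints can be sketched with a minimal numpy-only nearest-centroid classifier. The study used a proper discriminant analysis on preprocessed FT-Raman spectra, so this stand-in conveys only the overall structure, on invented data.

```python
import numpy as np

def fit_centroids(spectra, labels):
    # Training step: one mean spectrum per class.
    return {c: spectra[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(spectrum, centroids):
    # Assign to the class whose mean spectrum is nearest (Euclidean).
    return min(centroids, key=lambda c: np.linalg.norm(spectrum - centroids[c]))

# Toy two-class "spectra" with two spectral channels each.
spectra = np.array([[1.0, 0.0], [1.1, 0.0], [0.0, 1.0], [0.0, 1.1]])
labels = np.array(["Barolo", "Barbera", "Barolo", "Barbera"][:2] * 2)
labels = np.array(["Barolo", "Barolo", "Barbera", "Barbera"])
centroids = fit_centroids(spectra, labels)
```

    A real chemometric pipeline would first baseline-correct and normalise the spectra and project them onto discriminant components before classifying.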

  19. Simulations of Turbulent Flow Over Complex Terrain Using an Immersed-Boundary Method

    NASA Astrophysics Data System (ADS)

    DeLeon, Rey; Sandusky, Micah; Senocak, Inanc

    2018-02-01

    We present an immersed-boundary method to simulate high-Reynolds-number turbulent flow over the complex terrain of Askervein and Bolund Hills under neutrally-stratified conditions. We reconstruct both the velocity and the eddy-viscosity fields in the terrain-normal direction to produce turbulent stresses as would be expected from the application of a surface-parametrization scheme based on Monin-Obukhov similarity theory. We find that it is essential to be consistent in the underlying assumptions for the velocity reconstruction and the eddy-viscosity relation to produce good results. To this end, we reconstruct the tangential component of the velocity field using a logarithmic velocity profile and adopt the mixing-length model in the near-surface turbulence model. We use a linear interpolation to reconstruct the normal component of the velocity to enforce the impermeability condition. Our approach works well for both the Askervein and Bolund Hills when the flow is attached to the surface, but shows slight disagreement in regions of flow recirculation, despite capturing the flow reversal.
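
    The logarithmic reconstruction of the tangential velocity can be written down directly. The sketch below assumes a rough-wall log law u(z) = (u_τ/κ) ln(z/z₀) with hypothetical values; the actual immersed-boundary implementation also reconstructs the eddy viscosity consistently and interpolates the normal component linearly.

```python
import numpy as np

KAPPA = 0.41  # von Karman constant

def log_law_reconstruct(u_tan_ref, z_ref, z_image, z0):
    """Reconstruct the tangential velocity at an image point a distance
    z_image from the wall, given a reference value u_tan_ref at z_ref,
    assuming a logarithmic profile u(z) = (u_tau/kappa) * ln(z/z0)."""
    u_tau = KAPPA * u_tan_ref / np.log(z_ref / z0)  # implied friction velocity
    return u_tau / KAPPA * np.log(z_image / z0)
```

    The same implied friction velocity can then feed a mixing-length eddy viscosity near the surface, keeping the two reconstructions consistent as the abstract emphasises.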

  1. Polypropylene Oil as a Fuel for Ni-YSZ | YSZ | LSCF Solid Oxide Fuel Cell

    NASA Astrophysics Data System (ADS)

    Pratiwi, Andini W.; Rahmawati, Fitria; Rochman, Refada A.; Syahputra, Rahmat J. E.; Prameswari, Arum P.

    2018-01-01

    This research aims to convert polypropylene plastic into polypropylene oil through pyrolysis and to use the polypropylene oil as fuel for a solid oxide fuel cell (SOFC) to produce electricity. The materials for the SOFC single cell are Ni-YSZ, YSZ and LSCF as anode, electrolyte and cathode, respectively. YSZ is yttria-stabilized zirconia, LSCF is a commercial La0.6Sr0.4Co0.2Fe0.8O3, and the Ni-YSZ is a composite of YSZ with nickel powder. LSCF and Ni-YSZ slurries were coated on both sides of the YSZ electrolyte pellet by screen printing. The results show that the produced polypropylene oil consists of C8 to C27 hydrocarbon chains. A single-cell performance test at 673 K, 773 K and 873 K with polypropylene oil as fuel found a maximum power density of 1.729 μW cm-2 at 673 K, with an open circuit voltage of 9.378 mV.

  2. Inferring bread doneness with air-pulse/ultrasonic ranging measurements of the loaf elastic response

    NASA Astrophysics Data System (ADS)

    Faeth, Loren Elbert

    This research marks the discovery of a method by which bread doneness may be determined based on the elastic properties of the loaf as it bakes. The purpose of the study was to determine if changes in bread characteristics could be detected by non-contact methods during baking, as the basis for improved control of the baking process. Current control of the baking process is based on temperature and dwell time, which are determined by experience to produce a product which is approximately ``done.'' There is no direct measurement of the property of interest, doneness. An ultrasonic measurement system was developed to measure the response of the loaf to an external stimulus. ``Doneness,'' as reflected in the internal elastic consistency of the bakery product, is assessed in less than 1/2 second, and requires no closer approach to the moving bakery product than about 2 inches. The system is designed to be compatible with strapped bread pans in a standard traveling-tray commercial oven.

  3. YoTube: Searching Action Proposal Via Recurrent and Static Regression Networks

    NASA Astrophysics Data System (ADS)

    Zhu, Hongyuan; Vial, Romain; Lu, Shijian; Peng, Xi; Fu, Huazhu; Tian, Yonghong; Cao, Xianbin

    2018-06-01

    In this paper, we present YoTube, a novel network fusion framework for searching action proposals in untrimmed videos, where each action proposal corresponds to a spatio-temporal video tube that potentially locates one human action. Our method consists of a recurrent YoTube detector and a static YoTube detector, where the recurrent YoTube explores the regression capability of RNNs for candidate bounding box prediction using learnt temporal dynamics, and the static YoTube produces the bounding boxes using rich appearance cues in a single frame. Both networks are trained using RGB and optical flow in order to fully exploit the rich appearance, motion and temporal context, and their outputs are fused to produce accurate and robust proposal boxes. Action proposals are finally constructed by linking these boxes using dynamic programming with a novel trimming method to handle the untrimmed video effectively and efficiently. Extensive experiments on the challenging UCF-101 and UCF-Sports datasets show that our proposed technique obtains superior performance compared with the state-of-the-art.
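
    The dynamic-programming linking step can be illustrated with a Viterbi-style pass that maximizes summed detection scores plus an IoU continuity bonus between consecutive frames; the weighting and the trimming of untrimmed videos in the actual method are omitted here.

```python
def iou(a, b):
    # Intersection-over-union of boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def link_tube(frames, lam=1.0):
    """frames: per frame, a list of (box, score) candidates. Returns the
    index of the chosen box in each frame (Viterbi-style linking)."""
    best = [[s for _, s in frames[0]]]  # best path score ending at each box
    back = []                           # backpointers for traceback
    for t in range(1, len(frames)):
        row, ptr = [], []
        for box, score in frames[t]:
            vals = [best[t - 1][j] + lam * iou(frames[t - 1][j][0], box)
                    for j in range(len(frames[t - 1]))]
            j = max(range(len(vals)), key=vals.__getitem__)
            row.append(score + vals[j])
            ptr.append(j)
        best.append(row)
        back.append(ptr)
    # Trace back the highest-scoring path.
    idx = max(range(len(best[-1])), key=best[-1].__getitem__)
    path = [idx]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]
```

    With two candidate boxes per frame, the pass links the spatially overlapping pair into one tube rather than jumping between distant detections.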

  4. Consistent Condom Use Increases the Colonization of Lactobacillus crispatus in the Vagina

    PubMed Central

    Ma, Liyan; Lv, Zhi; Su, Jianrong; Wang, Jianjie; Yan, Donghui; Wei, Jingjuan; Pei, Shuang

    2013-01-01

    Background Non-hormonal contraception methods have been widely used, but their effects on colonization by vaginal lactobacilli remain unclear. Objective To determine the association between non-hormonal contraception methods and vaginal lactobacilli on women’s reproductive health. Methods The cross-sectional study included 164 healthy women between 18–45 years of age. The subjects were divided into different groups on the basis of the different non-hormonal contraception methods used by them. At the postmenstrual visit (day 21 or 22 of the menstrual cycle), vaginal swabs were collected for determination of Nugent score, quantitative culture and real-time polymerase chain reaction (PCR) of vaginal lactobacilli. The prevalence, colony counts and 16S rRNA gene expression of the Lactobacillus strains were compared between the different groups by Chi-square and ANOVA statistical analysis methods. Results A Nugent score of 0–3 was more common in the condom group (93.1%) than in the group that used an intrauterine device (IUD) (75.4%), (p = 0.005). The prevalence of H2O2-producing Lactobacillus was significantly higher in the condom group (82.3%) than in the IUD group (68.2%), (p = 0.016). There was a significant difference in colony count (mean ± standard error (SE), log10 colony-forming units (CFU)/ml) of H2O2-producing Lactobacillus between condom users (7.81±0.14) and IUD users (6.54±0.14), (p < 0.001). The 16S rRNA gene expression (mean ± SE, log10 copies/ml) of Lactobacillus crispatus was significantly higher in the condom group (8.09±0.16) than in the IUD group (6.03±0.18), (p < 0.001). Conclusion Consistent condom use increases the colonization of Lactobacillus crispatus in the vagina and may protect against both bacterial vaginosis (BV) and human immunodeficiency virus (HIV). PMID:23894682
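
    The group comparisons reported above rest on the Pearson chi-square test, which for a 2×2 table reduces to a closed form. The sketch below uses invented counts, not the study's data.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]], e.g. carriers/non-carriers of
    H2O2-producing lactobacilli in two contraception groups."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
```

    The statistic is compared against a chi-square distribution with one degree of freedom to obtain the p-values quoted in the abstract.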

  5. Spectral Cauchy Characteristic Extraction: Gravitational Waves and Gauge Free News

    NASA Astrophysics Data System (ADS)

    Handmer, Casey; Szilagyi, Bela; Winicour, Jeff

    2015-04-01

    We present a fast, accurate spectral algorithm for the characteristic evolution of the full non-linear vacuum Einstein field equations in the Bondi framework. Developed within the Spectral Einstein Code (SpEC), we demonstrate how spectral Cauchy characteristic extraction produces gravitational News without confounding gauge effects. We explain several numerical innovations and demonstrate speed, stability, accuracy, exponential convergence, and consistency with existing methods. We highlight its capability to deliver physical insights in the study of black hole binaries.

  6. Microgravity processing of particulate reinforced metal matrix composites

    NASA Technical Reports Server (NTRS)

    Morel, Donald E.; Stefanescu, Doru M.; Curreri, Peter A.

    1989-01-01

    The elimination of such gravity-related effects as buoyancy-driven sedimentation can yield more homogeneous microstructures in composite materials whose individual constituents have widely differing densities. A comparison of composite samples consisting of particulate ceramics in a nickel aluminide matrix solidified under gravity levels ranging from 0.01 to 1.8 G indicates that the G force normal to the growth direction plays a fundamental role in determining the distribution of the reinforcement in the matrix. Composites with extremely uniform microstructures can be produced by these methods.

  7. Multi-assortment rhythmic production planning and control

    NASA Astrophysics Data System (ADS)

    Skolud, B.; Krenczyk, D.; Zemczak, M.

    2015-11-01

    A method for production planning in a repetitive manufacturing system, which allows for estimating the possibility of processing work orders in due time, is presented. The differences between two approaches are presented: the first is the one-piece flow developed at Toyota; the second, developed by the authors, consists in defining sufficient conditions to filter all solutions and provide a set of admissible solutions for both the client and the producer. In the paper, attention is focused on buffer allocation. Illustrative examples are presented.

  8. Defense Small Business Innovation Research Program (SBIR). Volume 3. Air Force Abstracts of Phase 1 Awards

    DTIC Science & Technology

    1990-01-01

    THERE WILL BE A CONTINUING NEED FOR A SENSITIVE, RAPID, AND ECONOMICAL TESTING PROCEDURE CAPABLE OF DETECTING DEFECTS AND PROVIDING FEEDBACK FOR QUALITY...SOLUTIONS. THE DKF METHOD PROVIDES OPTIMAL OR NEAR-OPTIMAL ACCURACY, REDUCE PROCESSING BURDEN, AND IMPROVE FAULT TOLERANCE. THE DKF/MMAE ( DMAE ) TECHNIQUES...DEVICES FOR B-SiC IS TO BE ABLE TO CONSISTENTLY PRODUCE INTRINSIC FILMS WITH VERY LOW DEFECTS AND TO DEVELOP SCHOTTKY AND OHMIC CONTACT MATERIALS THAT WILL

  9. The impact of fatigue on latent print examinations as revealed by behavioral and eye gaze testing.

    PubMed

    Busey, Thomas; Swofford, Henry J; Vanderkolk, John; Emerick, Brandi

    2015-06-01

    Eye tracking and behavioral methods were used to assess the effects of fatigue on performance in latent print examiners. Eye gaze was measured both before and after a fatiguing exercise involving fine-grained examination decisions. The eye tracking tasks used similar images, often laterally reversed versions of previously viewed prints, which holds image detail constant while minimizing prior recognition. These methods, as well as a within-subject design with fine grained analyses of the eye gaze data, allow fairly strong conclusions despite a relatively small subject population. Consistent with the effects of fatigue on practitioners in other fields such as radiology, behavioral performance declined with fatigue, and the eye gaze statistics suggested a smaller working memory capacity. Participants also terminated the search/examination process sooner when fatigued. However, fatigue did not produce changes in inter-examiner consistency as measured by the Earth Mover Metric. Implications for practice are discussed. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  10. Automatic load forecasting. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, D.J.; Vemuri, S.

    A method which lends itself to on-line forecasting of hourly electric loads is presented and the results of its use are compared to models developed using the Box-Jenkins method. The method consists of processing the historical hourly loads with a sequential least-squares estimator to identify a finite order autoregressive model which in turn is used to obtain a parsimonious autoregressive-moving average model. A procedure is also defined for incorporating temperature as a variable to improve forecasts where loads are temperature dependent. The method presented has several advantages in comparison to the Box-Jenkins method including much less human intervention and improved model identification. The method has been tested using three-hourly data from the Lincoln Electric System, Lincoln, Nebraska. In the exhaustive analyses performed on this data base this method produced significantly better results than the Box-Jenkins method. The method also proved to be more robust in that greater confidence could be placed in the accuracy of models based upon the various measures available at the identification stage.
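
    The core identification step, fitting a finite-order autoregressive model to the load series by least squares, can be sketched on synthetic data. This is a plain batch least-squares illustration; the paper's method uses a sequential (recursive) estimator and then reduces the AR model to a parsimonious ARMA form, neither of which is reproduced here.

    ```python
    import random

    random.seed(42)

    # Synthetic series from a known AR(1) process: x_t = 0.8 * x_{t-1} + e_t
    x = [0.0]
    for _ in range(5000):
        x.append(0.8 * x[-1] + random.gauss(0.0, 1.0))

    # Ordinary least-squares estimate of the AR(1) coefficient:
    # phi_hat = sum(x_t * x_{t-1}) / sum(x_{t-1}^2)
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    phi_hat = num / den
    print(round(phi_hat, 2))  # close to the true value 0.8
    ```

    For order p > 1 the same idea yields the normal equations of a p-column regression; a recursive estimator updates phi_hat one observation at a time, which is what makes the approach suitable for on-line use.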

  11. Evaluation of an Improved U.S. Food and Drug Administration Method for the Detection of Cyclospora cayetanensis in Produce Using Real-Time PCR.

    PubMed

    Murphy, Helen R; Lee, Seulgi; da Silva, Alexandre J

    2017-07-01

    Cyclospora cayetanensis is a protozoan parasite that causes human diarrheal disease associated with the consumption of fresh produce or water contaminated with C. cayetanensis oocysts. In the United States, foodborne outbreaks of cyclosporiasis have been linked to various types of imported fresh produce, including cilantro and raspberries. An improved method was developed for identification of C. cayetanensis in produce at the U.S. Food and Drug Administration. The method relies on a 0.1% Alconox produce wash solution for efficient recovery of oocysts, a commercial kit for DNA template preparation, and an optimized TaqMan real-time PCR assay with an internal amplification control for molecular detection of the parasite. A single laboratory validation study was performed to assess the method's performance and compare the optimized TaqMan real-time PCR assay and a reference nested PCR assay by examining 128 samples. The samples consisted of 25 g of cilantro or 50 g of raspberries seeded with 0, 5, 10, or 200 C. cayetanensis oocysts. Detection rates for cilantro seeded with 5 and 10 oocysts were 50.0 and 87.5%, respectively, with the real-time PCR assay and 43.7 and 94.8%, respectively, with the nested PCR assay. Detection rates for raspberries seeded with 5 and 10 oocysts were 25.0 and 75.0%, respectively, with the real-time PCR assay and 18.8 and 68.8%, respectively, with the nested PCR assay. All unseeded samples were negative, and all samples seeded with 200 oocysts were positive. Detection rates using the two PCR methods were statistically similar, but the real-time PCR assay is less laborious and less prone to amplicon contamination and allows monitoring of amplification and analysis of results, making it more attractive to diagnostic testing laboratories. The improved sample preparation steps and the TaqMan real-time PCR assay provide a robust, streamlined, and rapid analytical procedure for surveillance, outbreak response, and regulatory testing of foods for detection of C. cayetanensis.

  12. Search for dark matter produced in association with a Higgs boson decaying to two bottom quarks at ATLAS

    NASA Astrophysics Data System (ADS)

    Cheng, Yangyang

    This thesis presents a search for dark matter production in association with a Higgs boson decaying to a pair of bottom quarks, using data from 20.3 fb-1 of proton-proton collisions at a center-of-mass energy of 8 TeV collected by the ATLAS detector at the LHC. The dark matter particles are assumed to be Weakly Interacting Massive Particles, and can be produced in pairs at collider experiments. Events with large missing transverse energy are selected when produced in association with high momentum jets, of which at least two are identified as jets containing b-quarks consistent with those from a Higgs boson decay. To maintain good detector acceptance and selection efficiency of the signal across a wide kinematic range, two methods of Higgs boson reconstruction are used. The Higgs boson is reconstructed either as a pair of small-radius jets both containing b-quarks, called the "resolved" analysis, or as a single large-radius jet with substructure consistent with a high momentum bb system, called the "boosted" analysis. The resolved analysis is the focus of this thesis. The observed data are found to be consistent with the expected Standard Model backgrounds. The result from the resolved analysis is interpreted using a simplified model with a Z' gauge boson decaying into different Higgs bosons predicted in a two-Higgs-doublet model, of which the heavy pseudoscalar Higgs decays into a pair of dark matter particles. Exclusion limits are set in regions of parameter space for this model. Model-independent upper limits are also placed on the visible cross-sections for events with a Higgs boson decaying into bb and large missing transverse momentum with thresholds ranging from 150 GeV to 400 GeV.

  13. Agricultural Production. Ohio's Competency Analysis Profile.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. Vocational Instructional Materials Lab.

    This list consists of essential competencies from the following specialized Ohio Competency Analysis Profiles: Beef and Sheep Producers; Crop Producer; Dairy Producer; Poultry Producer; and Swine Producer. Developed through a modified DACUM (Developing a Curriculum) process involving business, industry, labor, and community agency representatives…

  14. Novel wine yeast with mutations in YAP1 that produce less acetic acid during fermentation.

    PubMed

    Cordente, Antonio G; Cordero-Bueso, Gustavo; Pretorius, Isak S; Curtin, Christopher D

    2013-02-01

    Acetic acid, a byproduct formed during yeast alcoholic fermentation, is the main component of volatile acidity (VA). When present in high concentrations in wine, acetic acid imparts an undesirable 'vinegary' character that results in a significant reduction in quality and sales. Previously, it has been shown that saké yeast strains resistant to the antifungal cerulenin produce significantly lower levels of VA. In this study, we used a classical mutagenesis method to isolate a series of cerulenin-resistant strains, derived from a commercial diploid wine yeast. Four of the selected strains showed a consistent low-VA production phenotype after small-scale fermentation of different white and red grape musts. Specific mutations in YAP1, a gene encoding a transcription factor required for oxidative stress tolerance, were found in three of the four low-VA strains. When integrated into the genome of a haploid wine strain, the mutated YAP1 alleles partially reproduced the low-VA production phenotype of the diploid cerulenin-resistant strains, suggesting that YAP1 might play a role in (regulating) acetic acid production during fermentation. This study offers prospects for the development of low-VA wine yeast starter strains that could assist winemakers in their effort to consistently produce wine to definable quality specifications. © 2012 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.

  15. Simultaneous penile-vaginal intercourse orgasm is associated with satisfaction (sexual, life, partnership, and mental health).

    PubMed

    Brody, Stuart; Weiss, Petr

    2011-03-01

    Previous multivariate research found that satisfaction was associated positively with frequency of specifically penile-vaginal intercourse (PVI; as opposed to other sexual activities) as well as with vaginal orgasm. The contribution to satisfaction of simultaneous orgasm produced by PVI merited direct examination in a large representative sample. To examine the associations of aspects of satisfaction (sexual, life, own mental health, partner relationship) with consistency of simultaneous orgasm produced by PVI (as well as with PVI frequency and vaginal orgasm consistency). A representative sample of Czechs (N = 1,570) aged 35-65 years completed a survey on aspects of satisfaction, PVI frequency, vaginal orgasm consistency, and consistency of simultaneous orgasm produced by PVI (the latter being a specially timed version of vaginal orgasm for women). Analysis of variance of satisfaction components (LiSat scale items) from age and the sexual behaviors. For both sexes, all aspects of satisfaction were associated with simultaneous PVI orgasm consistency and with PVI frequency (except female life satisfaction). All aspects of satisfaction were also associated with vaginal orgasm consistency. Multivariate analyses indicated that PVI frequency and simultaneous orgasm consistency make independent contributions to the aspects of satisfaction for both sexes. For both sexes, PVI frequency and simultaneous orgasm produced by PVI (as well as vaginal orgasm for women) are associated with greater life, sexual, partnership, and mental health satisfaction. Greater support for these specific aspects of sexual activity is warranted. © 2010 International Society for Sexual Medicine.

  16. Getter materials for cracking ammonia

    DOEpatents

    Boffito, Claudio; Baker, John D.

    1999-11-02

    A method is provided for cracking ammonia to produce hydrogen. The method includes the steps of passing ammonia over an ammonia-cracking catalyst which is an alloy including (1) alloys having the general formula Zr.sub.1-x Ti.sub.x M.sub.1 M.sub.2, wherein M.sub.1 and M.sub.2 are selected independently from the group consisting of Cr, Mn, Fe, Co, and Ni, and x is between about 0.0 and about 1.0 inclusive; and between about 20% and about 50% Al by weight. In another aspect, the method of the invention is used to provide methods for operating hydrogen-fueled internal combustion engines and hydrogen fuel cells. In still another aspect, the present invention provides a hydrogen-fueled internal combustion engine and a hydrogen fuel cell including the above-described ammonia-cracking catalyst.

  17. Functional Wigner representation of quantum dynamics of Bose-Einstein condensate

    NASA Astrophysics Data System (ADS)

    Opanchuk, B.; Drummond, P. D.

    2013-04-01

    We develop a method of simulating the full quantum field dynamics of multi-mode multi-component Bose-Einstein condensates in a trap. We use the truncated Wigner representation to obtain a probabilistic theory that can be sampled. This method produces c-number stochastic equations which may be solved using conventional stochastic methods. The technique is valid for large mode occupation numbers. We give a detailed derivation of methods of functional Wigner representation appropriate for quantum fields. Our approach describes spatial evolution of spinor components and properly accounts for nonlinear losses. Such techniques are applicable to calculating the leading quantum corrections, including effects such as quantum squeezing, entanglement, EPR correlations, and interactions with engineered nonlinear reservoirs. By using a consistent expansion in the inverse density, we are able to explain an inconsistency in the nonlinear loss equations found by earlier authors.
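
    As a toy illustration of the sampling idea (a single mode only, not the paper's multi-mode quantum-field treatment): the Wigner function of a coherent state is a Gaussian that can be sampled directly, and symmetrically ordered moments such as ⟨|α|²⟩ recover the photon number after subtracting the vacuum half-quantum. All numbers below are illustrative.

    ```python
    import random

    random.seed(1)
    alpha0 = 2.0               # coherent amplitude; mean photon number = |alpha0|^2 = 4
    n_samples = 200_000
    acc = 0.0
    for _ in range(n_samples):
        # Wigner function of a coherent state: Gaussian, variance 1/4 per quadrature
        re = alpha0 + random.gauss(0.0, 0.5)
        im = random.gauss(0.0, 0.5)
        acc += re * re + im * im
    # Symmetric (Weyl) ordering: <|alpha|^2>_W = <n> + 1/2
    n_est = acc / n_samples - 0.5
    print(round(n_est, 2))  # ≈ 4.0
    ```

    In the truncated Wigner method proper, each sampled amplitude would then be evolved under c-number stochastic equations of Gross-Pitaevskii form, and observables are averaged over trajectories with the same ordering corrections.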

  18. Alternate methods for high level pyrotechnic shock simulation

    NASA Astrophysics Data System (ADS)

    Gray, Phillip J., Sr.

    Two effective methods to recreate a realistic pyrotechnic shock are presented. The first method employs a resonant beam and is used for SRS levels of 12,000 G or more. The test unit is at one end of the beam and a hammer strikes the opposite end causing a shock to be transmitted to the other end of the fixture. The second method is based on a standard shaker system with a resonant beam to amplify the input signal. The engineer defines the duration of the shock signal induced to the vibration amplifier using the GenRad 2514 controller. The shock signal is then input via the shaker to the resonant beam, which amplifies the signal to produce the desired response at the end of the fixture. The shock response spectrum stays within a +/-6 dB tolerance with levels as high as 3000 G peak. These methods are repeatable, reliable, cost-effective, and consistent with a real pyroevent.
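
    A shock response spectrum (SRS) like the one held to ±6 dB above is conventionally computed by driving a bank of single-degree-of-freedom oscillators with the measured base acceleration and recording each oscillator's peak absolute-acceleration response. A rough sketch using simple semi-implicit Euler integration and a half-sine test pulse (all values illustrative; production SRS software uses ramp-invariant digital filters instead):

    ```python
    import math

    def srs_peaks(accel, dt, freqs, q=10.0):
        """Peak absolute-acceleration response of SDOF oscillators (maximax SRS)."""
        zeta = 1.0 / (2.0 * q)           # damping ratio for the assumed Q
        peaks = []
        for fn in freqs:
            wn = 2.0 * math.pi * fn
            z = v = 0.0                  # relative displacement / velocity
            peak = 0.0
            for a in accel:
                # base-excited SDOF: z'' + 2*zeta*wn*z' + wn^2*z = -a(t)
                v += dt * (-a - 2.0 * zeta * wn * v - wn * wn * z)
                z += dt * v
                # absolute acceleration of the mass = -(2*zeta*wn*z' + wn^2*z)
                peak = max(peak, abs(2.0 * zeta * wn * v + wn * wn * z))
            peaks.append(peak)
        return peaks

    dt = 1.0e-6                                                          # 1 us step
    pulse = [100.0 * math.sin(math.pi * i / 1000) for i in range(1000)]  # 100 G, 1 ms half-sine
    signal = pulse + [0.0] * 4000                                        # ring-down window
    low, high = srs_peaks(signal, dt, [100.0, 10000.0])
    ```

    Well above the pulse's frequency content the SRS approaches the peak input level (about 100 G here), while well below it the response rolls off, which is why a single G figure like "3000 G peak" is always quoted together with a frequency band.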

  19. Laser notching ceramics for reliable fracture toughness testing

    DOE PAGES

    Barth, Holly D.; Elmer, John W.; Freeman, Dennis C.; ...

    2015-09-19

    A new method for notching ceramics was developed using a picosecond laser for fracture toughness testing of alumina samples. The test geometry incorporated a single-edge-V-notch that was notched using picosecond laser micromachining. This method has been used in the past for cutting ceramics, and is known to remove material with little to no thermal effect on the surrounding material matrix. This study showed that laser-assisted-machining for fracture toughness testing of ceramics was reliable, quick, and cost effective. In order to assess the laser notched single-edge-V-notch beam method, fracture toughness results were compared to results from other more traditional methods, specificallymore » surface-crack in flexure and the chevron notch bend tests. Lastly, the results showed that picosecond laser notching produced precise notches in post-failure measurements, and that the measured fracture toughness results showed improved consistency compared to traditional fracture toughness methods.« less

  20. Combined electron-beam and coagulation purification of molasses distillery slops. Features of the method, technical and economic evaluation of large-scale facility

    NASA Astrophysics Data System (ADS)

    Pikaev, A. K.; Ponomarev, A. V.; Bludenko, A. V.; Minin, V. N.; Elizar'eva, L. M.

    2001-04-01

    The paper summarizes the results obtained from the study on combined electron-beam and coagulation method for purification of molasses distillery slops from distillery produced ethyl alcohol by fermentation of grain, potato, beet and some other plant materials. The method consists in preliminary mixing of industrial wastewater with municipal wastewater, electron-beam treatment of the mixture and subsequent coagulation. Technical and economic evaluation of large-scale facility (output of 7000 m 3 day -1) with two powerful cascade electron accelerators (total maximum beam power of 400 kW) for treatment of the wastewater by the above method was carried out. It was calculated that the cost of purification of the wastes is equal to 0.25 US$ m -3 that is noticeably less than in the case of the existing method.

  1. Discrete conservation laws and the convergence of long time simulations of the mkdv equation

    NASA Astrophysics Data System (ADS)

    Gorria, C.; Alejo, M. A.; Vega, L.

    2013-02-01

    Pseudospectral collocation methods and finite difference methods have been used for approximating an important family of soliton-like solutions of the mKdV equation. These solutions present a structural instability which makes it difficult to approximate their evolution over long time intervals with enough accuracy. The standard numerical methods do not guarantee convergence to the proper solution of the initial value problem and often fail by approaching solutions associated with different initial conditions. In this framework, numerical schemes that preserve the discrete invariants related to some conservation laws of this equation produce better results than methods which only take care of a high consistency order. Pseudospectral spatial discretization appears as the most robust of the numerical methods, but finite difference schemes are useful in order to analyze the role played by the conservation of the invariants in the convergence.
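
    The invariants in question are integrals conserved by the mKdV flow, e.g. the L2 invariant ∫u² dx. A small sketch that evaluates this invariant numerically for the one-soliton solution u(x,t) = √c sech(√c (x − ct)) of the focusing mKdV equation u_t + 6u²u_x + u_xxx = 0 at two times, confirming it stays constant (which is the property that invariant-preserving schemes enforce at the discrete level):

    ```python
    import math

    def l2_invariant(t, c=1.0, n=4000, xmin=-40.0, xmax=40.0):
        """Riemann-sum approximation of the integral of u(x,t)^2 over [xmin, xmax]."""
        dx = (xmax - xmin) / n
        s = 0.0
        for i in range(n):
            x = xmin + i * dx
            u = math.sqrt(c) / math.cosh(math.sqrt(c) * (x - c * t))
            s += u * u * dx
        return s

    i0, i1 = l2_invariant(0.0), l2_invariant(5.0)
    print(round(i0, 4), round(i1, 4))  # both ≈ 2*sqrt(c) = 2.0
    ```

    A conservative scheme keeps the discrete analogue of this sum constant at every time step; monitoring its drift is a cheap diagnostic of whether a long-time simulation is still tracking the true soliton.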

  2. An Improved Image Ringing Evaluation Method with Weighted Sum of Gray Extreme Value

    NASA Astrophysics Data System (ADS)

    Yang, Ling; Meng, Yanhua; Wang, Bo; Bai, Xu

    2018-03-01

    Blind image restoration algorithms usually produce ringing that is most obvious at the edges. The ringing phenomenon is mainly affected by noise, the type of restoration algorithm, and the accuracy of the blur kernel estimated during restoration. Based on the physical mechanism of ringing, a method for evaluating the ringing in blindly restored images is proposed. The method extracts the overshoot and ripple regions of the restored image and computes weighted statistics of the regional gradient values. With weights set through multiple experiments, edge information is used to characterize the edge details, determine the weights, and quantify the severity of the ringing effect, yielding an evaluation method for the ringing caused by blind restoration. The experimental results show that the method can effectively evaluate the ringing effect in images restored by different restoration algorithms and with different restoration parameters. The evaluation results are consistent with visual evaluation results.
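
    A heavily simplified 1-D analogue of the idea: near an edge, overshoot above the bright plateau and ripple below the dark plateau are extracted and summed with weights. The plateau levels, weights, and profiles below are assumptions for illustration only; the paper's metric operates on 2-D gradient statistics around detected edges.

    ```python
    def ringing_score(profile, lo=0.0, hi=1.0, w_over=1.0, w_ripple=1.0):
        """Weighted sum of overshoot above hi and ripple below lo along a 1-D edge profile."""
        over = sum(v - hi for v in profile if v > hi)
        ripple = sum(lo - v for v in profile if v < lo)
        return w_over * over + w_ripple * ripple

    clean = [0.0, 0.0, 0.5, 1.0, 1.0]      # ideal step edge: no excursions
    ringy = [0.0, -0.08, 0.5, 1.15, 0.95]  # same edge with overshoot and ripple
    print(ringing_score(clean), round(ringing_score(ringy), 2))  # 0.0 0.23
    ```

    The weights let the metric penalize overshoot and ripple differently, mirroring how the paper tunes its weights against visual evaluation experiments.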

  3. Domain Decomposition By the Advancing-Partition Method for Parallel Unstructured Grid Generation

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.; Zagaris, George

    2009-01-01

    A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.

  4. A study of cell electrophoresis as a means of purifying growth hormone secreting cells

    NASA Technical Reports Server (NTRS)

    Plank, Lindsay D.; Hymer, W. C.; Kunze, M. Elaine; Marks, Gary M.; Lanham, J. Wayne

    1983-01-01

    Growth hormone secreting cells of the rat anterior pituitary are heavily laden with granules of growth hormone and can be partially purified on the basis of their resulting high density. Two methods of preparative cell electrophoresis were investigated as methods of enhancing the purification of growth hormone producing cells: density gradient electrophoresis and continuous flow electrophoresis. Both methods provided a two- to four-fold enrichment in growth hormone production per cell relative to that achieved by previous methods. Measurements of electrophoretic mobilities by two analytical methods, microscopic electrophoresis and laser-tracking electrophoresis, revealed very little distinction between unpurified anterior pituitary cell suspensions and somatotroph-enriched cell suspensions. Predictions calculated on the basis of analytical electrophoretic data are consistent with the hypothesis that sedimentation plays a significant role in both types of preparative electrophoresis and the electrophoretic mobility of the growth hormone secreting subpopulation of cells remains unknown.

  5. Domain Decomposition By the Advancing-Partition Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2008-01-01

    A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nur, Adrian; Rahmawati, Alifah; Ilmi, Noor Izzati

    Synthesis of nanosized hydroxyapatite (HA) by an electrochemical pulsed direct current (PDC) method has been studied. The aim of this work is to study the influence of various PDC parameters (initial pH, electrode distance, duty cycle, frequency, and amplitude) on the particle surface area of HA powders. The electrochemical synthesis was carried out in a Ca²⁺/EDTA⁴⁻/PO₄³⁻ solution at concentrations of 0.25/0.25/0.15 M for 24 h. The electrochemical cell consisted of two rectangular carbon electrodes connected to a function generator to produce the PDC. There were two treatments for the particles after electrosynthesis: without aging, and aged for 2 days at 40 °C. For both cases, the particles were filtered and washed with demineralized water to eliminate impurities and unreacted reactants. The particles were then dried at 100 °C for 2 days. The dried particles were characterized by X-ray diffraction, a surface area analyzer, scanning electron microscopy (SEM), Fourier transform infrared spectra, and thermogravimetric and differential thermal analysis. HA particles can be produced when the initial pH > 6. The aging process has a significant effect on the produced HA particles. SEM images showed that the powders consisted of agglomerates composed of fine crystallites with plate-like and spherical morphologies. The surface area of the HA particles is in the range of 25–91 m²/g. The largest particle surface area of HA was produced at 4 cm electrode distance, 80% duty cycle, 0.1 Hz frequency, 9 V amplitude, and with the aging process.

  7. Production of beta-gamma coincidence spectra of individual radioxenon isotopes for improved analysis of nuclear explosion monitoring data

    NASA Astrophysics Data System (ADS)

    Haas, Derek Anderson

    Radioactive xenon gas is a fission product released in the detonation of nuclear devices that can be detected in atmospheric samples far from the detonation site. In order to improve the capabilities of radioxenon detection systems, this work produces beta-gamma coincidence spectra of individual isotopes of radioxenon. Previous methods of radioxenon production consisted of the removal of mixed isotope samples of radioxenon gas released from fission of contained fissile materials such as 235U. In order to produce individual samples of the gas, isotopically enriched stable xenon gas is irradiated with neutrons. The detection of the individual isotopes is also modeled using Monte Carlo simulations to produce spectra. The experiment shows that samples of 131mXe, 133 Xe, and 135Xe with a purity greater than 99% can be produced, and that a sample of 133mXe can be produced with a relatively low amount of 133Xe background. These spectra are compared to models and used as essential library data for the Spectral Deconvolution Analysis Tool (SDAT) to analyze atmospheric samples of radioxenon for evidence of nuclear events.

  8. Comparison of 3 Methods to Assess Urine Specific Gravity in Collegiate Wrestlers.

    PubMed

    Stuempfle, Kristin J.; Drury, Daniel G.

    2003-12-01

    OBJECTIVE: To investigate the reliability and validity of refractometry, hydrometry, and reagent strips in assessing urine specific gravity in collegiate wrestlers. DESIGN AND SETTING: We assessed the reliability of refractometry, hydrometry, and reagent strips between 2 trials and among 4 testers. The validity of hydrometry and reagent strips was assessed by comparison with refractometry, the criterion measure for urine specific gravity. SUBJECTS: Twenty-one National Collegiate Athletic Association Division III collegiate wrestlers provided fresh urine samples. MEASUREMENTS: Four testers measured the specific gravity of each urine sample 6 times: twice by refractometry, twice by hydrometry, and twice by reagent strips. RESULTS: Refractometer measurements were consistent between trials (R = .998) and among testers; hydrometer measurements were consistent between trials (R = .987) but not among testers; and reagent-strip measurements were not consistent between trials or among testers. Hydrometer (1.018 +/- 0.006) and reagent-strip (1.017 +/- 0.007) measurements were significantly higher than refractometer (1.015 +/- 0.006) measurements. Intraclass correlation coefficients were moderate between refractometry and hydrometry (R = .869) and low between refractometry and reagent strips (R = .573). The hydrometer produced 28% false positives and 2% false negatives, and reagent strips produced 15% false positives and 9% false negatives. CONCLUSIONS: Only the refractometer should be used to determine urine specific gravity in collegiate wrestlers during the weight-certification process.

  9. Comparison of 3 Methods to Assess Urine Specific Gravity in Collegiate Wrestlers

    PubMed Central

    Drury, Daniel G.

    2003-01-01

    Objective: To investigate the reliability and validity of refractometry, hydrometry, and reagent strips in assessing urine specific gravity in collegiate wrestlers. Design and Setting: We assessed the reliability of refractometry, hydrometry, and reagent strips between 2 trials and among 4 testers. The validity of hydrometry and reagent strips was assessed by comparison with refractometry, the criterion measure for urine specific gravity. Subjects: Twenty-one National Collegiate Athletic Association Division III collegiate wrestlers provided fresh urine samples. Measurements: Four testers measured the specific gravity of each urine sample 6 times: twice by refractometry, twice by hydrometry, and twice by reagent strips. Results: Refractometer measurements were consistent between trials (R = .998) and among testers; hydrometer measurements were consistent between trials (R = .987) but not among testers; and reagent-strip measurements were not consistent between trials or among testers. Hydrometer (1.018 ± 0.006) and reagent-strip (1.017 ± 0.007) measurements were significantly higher than refractometer (1.015 ± 0.006) measurements. Intraclass correlation coefficients were moderate between refractometry and hydrometry (R = .869) and low between refractometry and reagent strips (R = .573). The hydrometer produced 28% false positives and 2% false negatives, and reagent strips produced 15% false positives and 9% false negatives. Conclusions: Only the refractometer should be used to determine urine specific gravity in collegiate wrestlers during the weight-certification process. PMID:14737213
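
    False-positive and false-negative tallies like those above follow from comparing each instrument to the criterion measure (refractometry) at a pass/fail cutoff. A sketch with an assumed hypohydration cutoff of 1.020 and made-up paired readings, not the study's data:

    ```python
    CUTOFF = 1.020  # assumed pass/fail specific-gravity threshold

    def fp_fn_rates(criterion, test, cutoff=CUTOFF):
        """Rates at which `test` flags samples the criterion would not, and vice versa."""
        fp = sum(1 for c, t in zip(criterion, test) if c < cutoff <= t)
        fn = sum(1 for c, t in zip(criterion, test) if t < cutoff <= c)
        n = len(criterion)
        return fp / n, fn / n

    refract = [1.012, 1.018, 1.021, 1.015, 1.025]  # criterion readings (illustrative)
    hydro   = [1.015, 1.022, 1.024, 1.017, 1.019]  # hydrometer readings (illustrative)
    fp, fn = fp_fn_rates(refract, hydro)
    print(fp, fn)  # 0.2 0.2
    ```

    With the study's reported bias (hydrometer readings running about 0.003 higher than refractometer readings), false positives dominate, which matches the 28% vs. 2% asymmetry above.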

  10. [Consistency and firmness in rearing practices of parents of adolescent drug addicts].

    PubMed

    Dukanović, Boro

    2007-01-01

    This paper deals with the increasingly popular theoretical approaches to drug addiction as a "family disease". Many theories claim that the etiopathogenic role of the family environment is of crucial importance in juvenile drug consumption. The main objective is to understand, through consideration of one part of family dynamics, the etiological potential of juvenile drug addiction. This work focuses only on certain aspects of parents' upbringing and variations in their upbringing practices, namely their consistency and strictness, which have been shown to be reflected in dysfunctional family systems producing juvenile drug addicts. The author considers two essential aspects of upbringing practice and, using relevant evaluation tools for these standards, attempts to measure their importance in relation to adolescents who suffer from drug addiction (experimental group) and those who never had any contact with narcotics (control group). A part of the EMBU questionnaire was applied, which captures the adolescent's memory of his/her parents' upbringing practices in childhood and the pre-juvenile period. Only two items, which deal with the parents' firmness and consistency, were selected for this method. Examination was carried out individually, and the obtained data were analyzed with the chi-square test. The results indicate distinctive differences in the consistency and strictness of the upbringing of drug addicts in comparison with the control group.

  11. Increased scientific rigor will improve reliability of research and effectiveness of management

    USGS Publications Warehouse

    Sells, Sarah N.; Bassing, Sarah B.; Barker, Kristin J.; Forshee, Shannon C.; Keever, Allison; Goerz, James W.; Mitchell, Michael S.

    2018-01-01

    Rigorous science that produces reliable knowledge is critical to wildlife management because it increases accurate understanding of the natural world and informs management decisions effectively. Application of a rigorous scientific method based on hypothesis testing minimizes unreliable knowledge produced by research. To evaluate the prevalence of scientific rigor in wildlife research, we examined 24 issues of the Journal of Wildlife Management from August 2013 through July 2016. We found 43.9% of studies did not state or imply a priori hypotheses, which are necessary to produce reliable knowledge. We posit that this is due, at least in part, to a lack of common understanding of what rigorous science entails, how it produces more reliable knowledge than other forms of interpreting observations, and how research should be designed to maximize inferential strength and usefulness of application. Current primary literature does not provide succinct explanations of the logic behind a rigorous scientific method or readily applicable guidance for employing it, particularly in wildlife biology; we therefore synthesized an overview of the history, philosophy, and logic that define scientific rigor for biological studies. A rigorous scientific method includes 1) generating a research question from theory and prior observations, 2) developing hypotheses (i.e., plausible biological answers to the question), 3) formulating predictions (i.e., facts that must be true if the hypothesis is true), 4) designing and implementing research to collect data potentially consistent with predictions, 5) evaluating whether predictions are consistent with collected data, and 6) drawing inferences based on the evaluation. Explicitly testing a priori hypotheses reduces overall uncertainty by reducing the number of plausible biological explanations to only those that are logically well supported. 
Such research also draws inferences that are robust to idiosyncratic observations and unavoidable human biases. Offering only post hoc interpretations of statistical patterns (i.e., a posteriori hypotheses) adds to uncertainty because it increases the number of plausible biological explanations without determining which have the greatest support. Further, post hoc interpretations are strongly subject to human biases. Testing hypotheses maximizes the credibility of research findings, makes the strongest contributions to theory and management, and improves reproducibility of research. Management decisions based on rigorous research are most likely to result in effective conservation of wildlife resources. 

  12. Process for producing a clean hydrocarbon fuel from high calcium coal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kindig, J.K.

    A method is described for substantially reducing the amount of at least one insoluble fluoride-forming species, selected from the group consisting of Group IA species and Group IIA species, present in a coal feed material. The method comprises: forming a slurry of the coal feed; a fluoride acid in an amount to produce a first molar concentration of free fluoride ions; at least one fluoride-complexing species, the total of all fluoride-complexing species in the slurry being present in an amount to produce a second molar concentration, the second molar concentration being at least equal to that amount such that the ratio of the first molar concentration to the second molar concentration is substantially equal to the stoichiometric ratio of fluoride in at least one tightly-bound complex ion, so as to form tightly-bound complex ions with substantially all free fluoride ions in the slurry to produce a leached coal product and a spent leach liquor; and separating the leached coal product from the spent leach liquor.

  13. A new statistical distance scale for planetary nebulae

    NASA Astrophysics Data System (ADS)

    Ali, Alaa; Ismail, H. A.; Alsolami, Z.

    2015-05-01

    In the first part of this article we discuss the consistency among different individual distance methods for Galactic planetary nebulae, while in the second part we develop a new statistical distance scale based on a calibrating sample of well-determined distances. A set of 315 planetary nebulae with individual distances was extracted from the literature. Inspecting the data set indicates that the accuracy of the distances varies among the individual methods and also among the different sources where the same individual method was applied. Therefore, we derive a reliable weighted mean distance for each object by considering the influence of the distance error and the weight of each individual method. The results reveal that the individual methods are consistent with each other, except the gravity method, which produces larger distances than the other individual methods. From the initial data set, we construct a standard calibrating sample consisting of 82 objects. This sample is restricted to objects with distances determined from at least two different individual methods, except for a few objects with trusted distances determined from the trigonometric, spectroscopic, and cluster-membership methods. In addition to its well-determined distances, this sample shows many advantages over those used in prior distance scales. The sample is used to recalibrate the mass-radius and radio surface brightness temperature-radius relationships. An average error of ˜30% is estimated for the new distance scale. The new distance scale is compared with the most widely used statistical scales in the literature; the results show that it is roughly similar to the majority of them, within a ˜±20% difference. Furthermore, the new scale yields a weighted mean distance to the Galactic center of 7.6±1.35 kpc, in good agreement with the recent measurement of Malkin (2013).
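    The error-weighted averaging described above can be illustrated with a minimal inverse-variance-weighting sketch. This is a generic illustration, not the paper's exact scheme (which also folds in a per-method weight), and the distance values below are hypothetical:

```python
# Inverse-variance weighted mean of several distance estimates for one
# nebula. Each estimate is weighted by 1/sigma^2, so precise methods
# dominate the combined value.

def weighted_mean_distance(distances, errors):
    """Combine individual distance estimates (kpc) using 1/sigma^2 weights."""
    weights = [1.0 / (e * e) for e in errors]
    total_w = sum(weights)
    mean = sum(w * d for w, d in zip(weights, distances)) / total_w
    # Standard error of the weighted mean.
    err = (1.0 / total_w) ** 0.5
    return mean, err

# Hypothetical estimates from three individual methods:
d, e = weighted_mean_distance([7.4, 7.9, 7.6], [0.5, 1.0, 0.8])
```

The tightest estimate (sigma = 0.5 kpc) pulls the combined distance toward its value, which is the behavior the weighting is designed to produce.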

  14. Processing and mechanical characterization of alumina laminates

    NASA Astrophysics Data System (ADS)

    Montgomery, John K.

    2002-08-01

    Single-phase ceramics that combine property gradients or steps in monolithic bodies are sought as alternatives to ceramic composites made of dissimilar materials. This work describes novel processing methods to produce stepped-density (or laminated) single-phase alumina bodies that maintain their mechanical integrity. One arrangement consists of a stiff, dense bulk material with a thin, flaw-tolerant, porous exterior layer. Another configuration consists of a lightweight, low-density bulk material with a thin, hard, wear-resistant exterior layer. Alumina laminates with strong interfaces were successfully produced in this work using two different direct-casting processes. Gelcasting is a useful near-net-shape processing technique that has been combined with several techniques, such as reaction bonding of aluminum oxide and the use of starch as a fugitive filler, to successfully produce stepped-density alumina laminates. The other direct-casting process developed in this work is thermoreversible gelcasting (TRG), a reversible gelation process that has been used to produce near-net-shape dense ceramic bodies; individual layers can also be stacked together and heated to produce laminates. Bilayer laminate samples were produced with varied thicknesses of the porous and dense layers. Because of the difference in modulus and hardness, transverse cracking was found upon Hertzian contact when the dense layer was on the exterior. In the opposite arrangement, compacted damage zones formed in the porous material and no damage occurred in the underlying dense layer. The flaw-tolerant behavior of the porous exterior/dense underlayer was examined by measuring biaxial strength as a function of Vickers indentation load. The thinnest layer of porous material was found to give the greatest flaw tolerance, and higher strength was exhibited at large indentation loads compared with dense monoliths. The calculated stresses on the surfaces and at the interface explained why, for the thinnest configuration, failure initiates at the interface between the layers rather than at the sample surface.

  15. Ocular vestibular evoked myogenic potentials elicited with vibration applied to the teeth.

    PubMed

    Parker-George, Jennifer C; Bell, Steven L; Griffin, Michael J

    2016-01-01

    This study investigated whether the method for eliciting vibration-induced oVEMPs could be improved by applying vibration directly to the teeth, and how vibration-induced oVEMP responses depend on the duration of the applied vibration. In 10 participants, a hand-held shaker was used to present 100-Hz vibration tone pips to the teeth via a customised bite-bar or to other parts of the head. oVEMP potentials were recorded in response to vibration in three orthogonal directions and five stimulus durations (10-180 ms). The oVEMP responses were analysed in terms of the peak latency onset, peak-to-peak amplitude, and the quality of the trace. Vibration applied to the teeth via the bite-bar produced oVEMPs that were more consistent, of higher quality and of greater amplitude than those evoked by vibration applied to the head. Longer duration stimuli produced longer duration oVEMP responses. One cycle duration stimuli produced responses that were smaller in amplitude and lower quality than the longer stimulus durations. Application of vibration via the teeth using a bite-bar is an effective means of producing oVEMPs. A 1-cycle stimulus is not optimal to evoke an oVEMP because it produces less robust responses than those of longer stimulus duration. A positive relationship between the duration of the stimulus and the response is consistent with the notion that the vibration-induced oVEMP is an oscillatory response to the motion of the head, rather than being a simple reflex response that occurs when the stimulus exceeds a threshold level of stimulation. Applying acceleration to the teeth through a bite-bar elicits clearer oVEMP responses than direct application to other parts of the head and has potential to improve clinical measurements. A 100-Hz 1-cycle stimulus produces less robust oVEMP responses than longer 100-Hz stimuli. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  16. Computing and Applying Atomic Regulons to Understand Gene Expression and Regulation

    DOE PAGES

    Faria, José P.; Davis, James J.; Edirisinghe, Janaka N.; ...

    2016-11-24

    Understanding gene function and regulation is essential for the interpretation, prediction, and ultimate design of cell responses to changes in the environment. A multitude of technologies, abstractions, and interpretive frameworks have emerged to answer the challenges presented by genome function and regulatory network inference. Here, we propose a new approach for producing biologically meaningful clusters of coexpressed genes, called Atomic Regulons (ARs), based on expression data, gene context, and functional relationships. We demonstrate this new approach by computing ARs for Escherichia coli, which we compare with the coexpressed gene clusters predicted by two prevalent existing methods: hierarchical clustering and k-means clustering. We test the consistency of ARs predicted by all methods against expected interactions predicted by the Context Likelihood of Relatedness (CLR) mutual information based method, finding that the ARs produced by our approach show better agreement with CLR interactions. We then apply our method to compute ARs for four other genomes: Shewanella oneidensis, Pseudomonas aeruginosa, Thermus thermophilus, and Staphylococcus aureus. We compare the AR clusters from all genomes to study the similarity of coexpression among a phylogenetically diverse set of species, identifying subsystems that show remarkable similarity over wide phylogenetic distances. We also study the sensitivity of our method for computing ARs to the expression data used in the computation, showing that our new approach requires less data than competing approaches to converge to a near final configuration of ARs. We go on to use our sensitivity analysis to identify the specific experiments that lead most rapidly to the final set of ARs for E. coli. As a result, this analysis produces insights into improving the design of gene expression experiments.
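    Comparing two clusterings of the same genes, as done above for ARs versus k-means or hierarchical clusters, is commonly quantified by pair-counting agreement measures. The sketch below computes the plain Rand index (the paper's CLR-based consistency test is a different, mutual-information-based comparison); the gene labels are hypothetical:

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Fraction of item pairs on which two clusterings agree:
    a pair counts as agreement if both clusterings put the two items
    in the same cluster, or both put them in different clusters."""
    n = len(labels_a)
    agree = 0
    for i, j in combinations(range(n), 2):
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        if same_a == same_b:
            agree += 1
    return agree / (n * (n - 1) / 2)

# Two hypothetical clusterings of six genes:
ri = rand_index([0, 0, 1, 1, 2, 2], [0, 0, 1, 1, 1, 2])
```

A value of 1.0 means the two clusterings are identical up to relabeling; values near 0.5 indicate little more than chance agreement for balanced clusterings.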

  17. Computing and Applying Atomic Regulons to Understand Gene Expression and Regulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faria, José P.; Davis, James J.; Edirisinghe, Janaka N.

    Understanding gene function and regulation is essential for the interpretation, prediction, and ultimate design of cell responses to changes in the environment. A multitude of technologies, abstractions, and interpretive frameworks have emerged to answer the challenges presented by genome function and regulatory network inference. Here, we propose a new approach for producing biologically meaningful clusters of coexpressed genes, called Atomic Regulons (ARs), based on expression data, gene context, and functional relationships. We demonstrate this new approach by computing ARs for Escherichia coli, which we compare with the coexpressed gene clusters predicted by two prevalent existing methods: hierarchical clustering and k-means clustering. We test the consistency of ARs predicted by all methods against expected interactions predicted by the Context Likelihood of Relatedness (CLR) mutual information based method, finding that the ARs produced by our approach show better agreement with CLR interactions. We then apply our method to compute ARs for four other genomes: Shewanella oneidensis, Pseudomonas aeruginosa, Thermus thermophilus, and Staphylococcus aureus. We compare the AR clusters from all genomes to study the similarity of coexpression among a phylogenetically diverse set of species, identifying subsystems that show remarkable similarity over wide phylogenetic distances. We also study the sensitivity of our method for computing ARs to the expression data used in the computation, showing that our new approach requires less data than competing approaches to converge to a near final configuration of ARs. We go on to use our sensitivity analysis to identify the specific experiments that lead most rapidly to the final set of ARs for E. coli. As a result, this analysis produces insights into improving the design of gene expression experiments.

  18. A weighted belief-propagation algorithm for estimating volume-related properties of random polytopes

    NASA Astrophysics Data System (ADS)

    Font-Clos, Francesc; Massucci, Francesco Alessandro; Pérez Castillo, Isaac

    2012-11-01

    In this work we introduce a novel weighted message-passing algorithm based on the cavity method for estimating volume-related properties of random polytopes, properties which are relevant in various research fields ranging from metabolic networks, to neural networks, to compressed sensing. Rather than adopting the usual approach of approximating the real-valued cavity marginal distributions by a few parameters, we propose an algorithm that faithfully represents the entire marginal distribution. We describe various alternatives for implementing the algorithm and benchmark the theoretical findings with concrete applications to random polytopes. The results obtained with our approach are in very good agreement with the estimates produced by the Hit-and-Run algorithm, which is known to produce uniform sampling.
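    The Hit-and-Run benchmark mentioned above is itself simple to sketch: from a point inside the polytope {x : Ax <= b}, pick a random direction, find the feasible chord along it, and jump to a uniform point on that chord. The following is a minimal illustration on the unit square, not the authors' benchmarking code:

```python
import math
import random

def hit_and_run(A, b, x, steps, rng=random.Random(0)):
    """Hit-and-Run sampler over the polytope {x : A x <= b}.
    Minimal sketch; assumes the starting point x is strictly interior."""
    d = len(x)
    for _ in range(steps):
        # Random direction on the unit sphere.
        u = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(c * c for c in u))
        u = [c / norm for c in u]
        # Feasible chord: for each constraint a.(x + t u) <= b_i,
        # solve for the allowed range of t.
        t_lo, t_hi = -math.inf, math.inf
        for a_row, b_i in zip(A, b):
            au = sum(a * c for a, c in zip(a_row, u))
            slack = b_i - sum(a * c for a, c in zip(a_row, x))
            if au > 1e-12:
                t_hi = min(t_hi, slack / au)
            elif au < -1e-12:
                t_lo = max(t_lo, slack / au)
        # Jump to a uniform point on the chord.
        t = rng.uniform(t_lo, t_hi)
        x = [c + t * uc for c, uc in zip(x, u)]
    return x

# Unit square [0,1]^2 written as A x <= b:
A = [[1, 0], [-1, 0], [0, 1], [0, -1]]
b = [1, 0, 1, 0]
sample = hit_and_run(A, b, [0.5, 0.5], 200)
```

With enough steps the chain mixes toward the uniform distribution on the polytope, which is why Hit-and-Run serves as a reference for volume-related estimates.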

  19. Microfluidic channel fabrication method

    DOEpatents

    Arnold, Don W.; Schoeniger, Joseph S.; Cardinale, Gregory F.

    2001-01-01

    A new channel structure for microfluidic systems and a process for fabricating this structure. In contrast to the conventional practice of fabricating fluid channels as trenches or grooves in a substrate, the fluid channels are fabricated as thin-walled raised structures on a substrate. Microfluidic devices produced in accordance with the invention are hybrid assemblies generally consisting of three layers: 1) a substrate that may or may not be an electrical insulator; 2) a middle layer of electrically conducting material, preferably silicon, joined to and extending from the substrate, which forms the channel walls and whose height defines the channel height; and 3) a top layer, joined to the top of the channel walls, that forms a cover for the channels. The channels can be defined by photolithographic techniques and are produced by etching away the material around the channel walls.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wertsching, Alan Kevin; Trantor, Troy Joseph; Ebner, Matthias Anthony

    A method and device for producing secure, high-density tritium bonded with carbon. A substrate comprising carbon is provided. A precursor is intercalated between carbon in the substrate. The precursor intercalated in the substrate is irradiated until at least a portion of the precursor, preferably a majority of the precursor, is transmuted into tritium and bonds with carbon of the substrate, forming bonded tritium. The resulting bonded tritium, tritium bonded with carbon, produces electrons via beta decay. The substrate is preferably selected from the list of substrates consisting of highly ordered pyrolytic graphite, carbon fibers, carbon nanotubes, buckminsterfullerenes, and combinations thereof. The precursor is preferably boron-10, more preferably lithium-6. Preferably, thermal neutrons are used to irradiate the precursor. The resulting bonded tritium is preferably used to generate electricity, either directly or indirectly.

  1. β-Lactamases in amoxicillin-clavulanate-resistant Escherichia coli strains isolated from a Chinese tertiary hospital.

    PubMed

    Ding, Juanjuan; Ma, Xitao; Chen, Zhuochang; Feng, Keqing

    2013-08-01

    A total of 52 strains were found resistant to amoxicillin-clavulanate by the disk diffusion method in a Chinese tertiary hospital from July 2011 to December 2011. Among these isolates, 2 possessed a phenotype consistent with production of inhibitor-resistant temoniera (TEM) β-lactamase (IRT), and the TEM-type gene was cloned into Escherichia coli JM109 cells. Both had no blaTEM mutations and were identified as TEM-1 β-lactamase producers; thus, no IRT β-lactamase was detected. Multiplex PCR showed that most of these strains produced TEM-1 enzymes, and plasmid-mediated AmpC β-lactamase and oxacillinase-1 β-lactamases are important mechanisms of resistance as well. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Nanoparticles and nanorods of silicon carbide from the residues of corn

    NASA Astrophysics Data System (ADS)

    Qadri, S. B.; Gorzkowski, E.; Rath, B. B.; Feng, J.; Qadri, S. N.; Kim, H.; Caldwell, J. D.; Imam, M. A.

    2015-01-01

    We have investigated the thermally induced transformation of various residues of the corn plant into nanoparticles and nanorods of different silicon carbide (SiC) polytypes. This has been accomplished by both microwave-induced and conventional furnace pyrolysis in excess of 1450 °C in an inert atmosphere. This simple process of producing nanoparticles of different polytypes of SiC from the corn plant opens a new method of utilizing agricultural waste to produce viable industrial products that are technologically important for nanoelectronics, molecular sensors, nanophotonics, biotechnology, and other mechanical applications. Using x-ray and Raman scattering characterization, we have demonstrated that the processed samples of corn husk, leaves, stalks, and cob consist of SiC nanostructures of the 2H, 3C, 4H, and 6H polytypes.

  3. Denudation rates determined from the accumulation of in situ-produced 10Be in the luquillo experimental forest, Puerto Rico

    USGS Publications Warehouse

    Brown, Erik Thorson; Stallard, Robert F.; Larsen, Matthew C.; Raisbeck, Grant M.; Yiou, Francoise

    1995-01-01

    We present a simple method for estimation of long-term mean denudation rates using in situ-produced cosmogenic 10Be in fluvial sediments. Procedures are discussed to account for the effects of soil bioturbation, mass wasting and attenuation of cosmic rays by biomass and by local topography. Our analyses of 10Be in quartz from bedrock outcrops, soils, mass-wasting sites and riverine sediment from the Icacos River basin in the Luquillo Experimental Forest, Puerto Rico, are used to characterize denudation for major landform elements in that basin. The 10Be concentration of a discharge-weighted average of size classes of river sediment corresponds to a long-term average denudation of ≈43 m Ma⁻¹, consistent with mass balance results.
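    The basic relation behind such estimates is that, at steady-state erosion, the cosmogenic nuclide concentration in surface quartz is inversely proportional to the denudation rate. A minimal sketch, neglecting radioactive decay and using illustrative placeholder values for the production rate, rock density, and attenuation length (not the study's calibrated values):

```python
def denudation_rate(N, P=5.0, density=2.7, attenuation=160.0):
    """Steady-state denudation rate (m per Myr) from an in situ 10Be
    concentration N (atoms per gram of quartz), neglecting decay:

        epsilon = attenuation * P / (density * N)   [cm/yr]

    P (atoms/g/yr), density (g/cm^3) and attenuation (g/cm^2) are
    illustrative placeholder values, not those used in the study."""
    eps_cm_per_yr = attenuation * P / (density * N)
    return eps_cm_per_yr * 1e4  # cm/yr -> m/Myr

# Hypothetical measured concentration of 2e5 atoms/g:
rate = denudation_rate(2.0e5)
```

Lower measured concentrations imply faster denudation, since rapidly eroding surfaces spend less time in the cosmic-ray production zone.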

  4. Production of dissolvable microneedles using an atomised spray process: effect of microneedle composition on skin penetration.

    PubMed

    McGrath, Marie G; Vucen, Sonja; Vrdoljak, Anto; Kelly, Adam; O'Mahony, Conor; Crean, Abina M; Moore, Anne

    2014-02-01

    Dissolvable microneedles offer an attractive delivery system for transdermal drug and vaccine delivery. They are most commonly formed by filling a microneedle mold with liquid formulation using vacuum or centrifugation to overcome the constraints of surface tension and solution viscosity. Here, we demonstrate a novel microneedle fabrication method employing an atomised spray technique that minimises the effects of the liquid surface tension and viscosity when filling molds. This spray method was successfully used to fabricate dissolvable microneedles (DMN) from a wide range of sugars (trehalose, fructose and raffinose) and polymeric materials (polyvinyl alcohol, polyvinylpyrrolidone, carboxymethylcellulose, hydroxypropylmethylcellulose and sodium alginate). Fabrication by spraying produced microneedles with amorphous content using single sugar compositions. These microneedles displayed sharp tips and had complete fidelity to the master silicon template. Using a method to quantify the consistency of DMN penetration into different skin layers, we demonstrate that the material of construction significantly influenced the extent of skin penetration. We demonstrate that this spraying method can be adapted to produce novel laminate-layered as well as horizontally-layered DMN arrays. To our knowledge, this is the first report documenting the use of an atomising spray, at ambient, mild processing conditions, to create dissolvable microneedle arrays that can possess novel, laminate layering. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. An evidence-based patient-centered method makes the biopsychosocial model scientific.

    PubMed

    Smith, Robert C; Fortin, Auguste H; Dwamena, Francesca; Frankel, Richard M

    2013-06-01

    To review the scientific status of the biopsychosocial (BPS) model and to propose a way to improve it. Engel's BPS model added patients' psychological and social health concerns to the highly successful biomedical model. He proposed that the BPS model could make medicine more scientific, but its use in education, clinical care, and, especially, research remains minimal. Many aver correctly that the present model cannot be defined in a consistent way for the individual patient, making it untestable and non-scientific. This stems from failing to obtain relevant BPS data systematically, so that one interviewer obtains the same information another would. Recent research by two of the authors has produced similar patient-centered interviewing methods that are repeatable and elicit just the relevant patient information needed to define the model at each visit. We propose that the field adopt these evidence-based methods as the standard for identifying the BPS model. Identifying a scientific BPS model in each patient with an agreed-upon, evidence-based patient-centered interviewing method can produce a quantum leap ahead in both research and teaching. A scientific BPS model can give us more confidence in being humanistic. In research, we can conduct more rigorous studies to inform better practices. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  6. Corrections for the geometric distortion of the tube detectors on SANS instruments at ORNL

    DOE PAGES

    He, Lilin; Do, Changwoo; Qian, Shuo; ...

    2014-11-25

    The small-angle neutron scattering instruments at Oak Ridge National Laboratory's High Flux Isotope Reactor were upgraded from the large, single-volume crossed-wire area detectors originally installed to staggered arrays of linear position-sensitive detectors (LPSDs). The specific geometry of the LPSD array requires that the traditionally employed approaches to data reduction be modified. Here, two methods for correcting the geometric distortion produced by the LPSD array are presented and compared. The first method applies a correction derived from a detector sensitivity measurement performed in the same configuration in which the samples are measured. In the second method, a solid angle correction is derived that can be applied during data reduction to data collected in any instrument configuration, in conjunction with a detector sensitivity measurement collected at a camera length long enough that the geometric distortions are negligible. Both methods produce consistent results and yield a maximum deviation of corrected data from isotropic scattering samples of less than 5% for scattering angles up to a maximum of 35°. The results are broadly applicable to any SANS instrument employing LPSD array detectors, which will become increasingly common as instruments with higher incident flux are constructed at neutron scattering facilities around the world.
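    For intuition about why off-axis pixels need a geometric correction at all, the textbook flat-detector solid-angle factor is easy to sketch. Note this is the generic flat-detector form (dOmega = A cos³θ / L²), shown only as an illustration; the paper's LPSD-specific correction differs in detail because of the tube geometry:

```python
import math

def solid_angle(x, y, L, pixel_area):
    """Solid angle subtended by a small pixel at in-plane offsets (x, y)
    on an idealized flat detector a distance L from the sample.
    Off-axis pixels are both farther away and tilted relative to the
    line of sight, giving the cos^3 dependence."""
    r2 = x * x + y * y + L * L
    cos_theta = L / math.sqrt(r2)
    return pixel_area * cos_theta ** 3 / (L * L)

# Relative correction for an off-axis pixel vs the beam-center pixel
# (distances in mm, area in mm^2; values are illustrative):
center = solid_angle(0.0, 0.0, 1000.0, 25.0)
edge = solid_angle(300.0, 0.0, 1000.0, 25.0)
ratio = edge / center
```

Even 300 mm off-axis at a 1 m camera length, the pixel subtends roughly 12% less solid angle than the center pixel, so uncorrected data from isotropic scatterers would appear to fall off with angle.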

  7. Effects of muffin processing on fumonisins from 14C-labeled toxins produced in cultured corn kernels.

    PubMed

    Avantaggiato, Giuseppina; De La Campa, Regina; Miller, J David; Visconti, Angelo

    2003-10-01

    The persistence of fumonisins during cooking is known to be affected by several factors, including thermal degradation and the presence of various ingredients in corn-based food recipes that can react with the toxin. A method for the production of corn kernels containing 14C-fumonisins was developed. The corn kernels were colonized by Fusarium verticillioides MRC 826 and supplemented with 1,2-14C-sodium acetate. The specific activity of 14C-FB1 produced made the study of its fate in cornmeal muffins possible. The double-extraction acetonitrile-water-methanol/immunoaffinity column/o-phthaldialdehyde high-performance liquid chromatography (HPLC) method was used to determine FB1 levels in cornmeal muffins. Reductions in FB1 levels in muffins spiked with 14C-labeled and unlabeled FB1 (43 and 48%, respectively) were similar, indicating that the extraction method was efficient and consistent with previous reports. However, with the labeled corn kernel material, recovery levels based on the 14C counts for the eluate from an immunoaffinity column were much higher (90%). This finding indicates that some fumonisin-related compounds other than FB1 that were present in the cornmeal were recognized by the antibodies but not by the HPLC method.

  8. Short communication: Development of a rapid laboratory method to polymerize lactose to nondigestible carbohydrates.

    PubMed

    Kuechel, A F; Schoenfuss, T C

    2018-04-01

    Nondigestible carbohydrates with a degree of polymerization between 3 and 10 (oligosaccharides) are commonly used as dietary fiber ingredients in the food industry, once they have been confirmed to have positive effects on human health by regulatory authorities. These carbohydrates are produced through chemical or enzymatic synthesis. Polylactose, a polymerization product of lactose and glucose, has been produced by reactive extrusion using a twin-screw extruder, with citric acid as the catalyst. Trials using powdered cheese whey permeate as the lactose source for this reaction were unsuccessful. The development of a laboratory method was necessary to investigate the effect of ingredients present in permeate powder that could be inhibiting polymerization. A Mars 6 Microwave Digestion System (CEM Corp., Matthews, NC) was used to heat and polymerize the sugars. The temperatures had to be lowered from extrusion conditions to produce a caramel-like product and not decompose the sugars. Small amounts of water had to be added to the reaction vessels to allow consistent heating of sugars between vessels. Elevated levels of water (22.86 and 28.57%, vol/wt) and calcium phosphate (0.928 and 1.856%, wt/wt) reduced the oligosaccharide yield in the laboratory method. Increasing the citric acid (catalyst) concentration increased the oligosaccharide yield for the pure sugar blend and when permeate powder was used. The utility of the laboratory method to predict oligosaccharide yields was confirmed during extrusion trials of permeate when this increased acid catalyst concentration resulted in similar oligosaccharide concentrations. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  9. Production of al-si alloy feedstocks using the solvent hot mixing method

    NASA Astrophysics Data System (ADS)

    Ni, J. Q.; Han, K. Q.; Yu, M. H.

    2018-05-01

    Powder injection molding is a promising low-cost technique for net shape processing of metal and ceramic components. This study aimed to investigate a new method for preparing aluminium (Al) – silicon (Si) alloy feedstock using the solvent hot mixing process. For this purpose, micron-sized Al-Si (20 wt. %) alloy powder was mixed with a binder consisting of 55 wt. % carnauba wax, 45 wt. % high-density polyethylene, and 3 wt. % stearic acid in a hot xylene bath. The scanning electron microscopy technique, thermogravimetric analysis, density measurement and torque measurements were used to verify the homogeneity of the feedstock. Moreover, the feedstock was chosen to perform the molding, debinding cycle and sintering. An Al-Si (20 wt. %) alloy part was successfully produced using this new method.

  10. Decompositions of large-scale biological systems based on dynamical properties.

    PubMed

    Soranzo, Nicola; Ramezani, Fahimeh; Iacono, Giovanni; Altafini, Claudio

    2012-01-01

    Given a large-scale biological network represented as an influence graph, in this article we investigate possible decompositions of the network aimed at highlighting specific dynamical properties. The first decomposition we study consists in finding a maximal directed acyclic subgraph of the network, which dynamically corresponds to searching for a maximal open-loop subsystem of the given system. Another dynamical property investigated is strong monotonicity. We propose two methods to deal with this property, both aimed at decomposing the system into strongly monotone subsystems, but with different structural characteristics: one method tends to produce a single large strongly monotone component, while the other typically generates a set of smaller disjoint strongly monotone subsystems. Original heuristics for the methods investigated are described in the article.
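    Finding a maximum directed acyclic subgraph is NP-hard in general, which is why the article develops heuristics. As an illustration only (this is a classic textbook heuristic, not the authors' algorithm): fix an arbitrary vertex order, split the edges into forward and backward sets, each of which is necessarily acyclic, and keep the larger set, retaining at least half of the edges.

```python
def acyclic_subgraph(nodes, edges):
    """Greedy 1/2-approximation for the maximum directed acyclic
    subgraph: relative to a fixed vertex order, every edge is either
    forward or backward; each set alone is acyclic, so keep the
    larger one."""
    pos = {v: i for i, v in enumerate(nodes)}
    forward = [(u, v) for (u, v) in edges if pos[u] < pos[v]]
    backward = [(u, v) for (u, v) in edges if pos[u] > pos[v]]
    return forward if len(forward) >= len(backward) else backward

# Toy influence graph with one feedback loop a -> b -> c -> a.
nodes = ["a", "b", "c"]
edges = [("a", "b"), ("b", "c"), ("c", "a")]
kept = acyclic_subgraph(nodes, edges)
```

Discarding the complementary edge set opens every feedback loop, which is the dynamical sense in which the retained subsystem is open-loop.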

  11. GHz laser-free time-resolved transmission electron microscopy: A stroboscopic high-duty-cycle method

    DOE PAGES

    Qiu, Jiaqi; Zhu, Yimei; Ha, Gwanghui; ...

    2015-11-10

    In this study, a device and a method for producing ultrashort electron pulses with GHz repetition rates via pulsing an input direct current (dc) electron beam are provided. The device and the method are based on an electromagnetic-mechanical pulser (EMMP) that consists of a series of transverse deflecting cavities and magnetic quadrupoles. The EMMP modulates and chops the incoming dc electron beam, converts it into pico- and sub-picosecond electron pulse sequences (pulse trains) at >1 GHz repetition rates, and controllably manipulates the resulting pulses. Ultimately, it leads to negligible electron pulse phase-space degradation compared to the incoming dc beam parameters. The temporal pulse length and repetition rate of the EMMP are continuously tunable over wide ranges.

  12. Cold isopressing method

    DOEpatents

    Chen, Jack C.; Stawisuck, Valerie M.; Prasad, Ravi

    2003-01-01

    A cold isopressing method in which two or more layers of material are formed within an isopressing mold. One of the layers consists of a tape-cast film. The layers are isopressed within the isopressing mold, thereby to laminate the layers and to compact the tape-cast film. The isopressing mold can be of cylindrical configuration with the layers being coaxial cylindrical layers. The materials used in forming the layers can contain green ceramic materials and the resultant structure can be fired and sintered as necessary and in accordance with known methods to produce a finished composite, ceramic structure. Further, such green ceramic materials can be of the type that are capable of conducting hydrogen or oxygen ions at high temperature with the object of utilizing the finished composite ceramic structure as a ceramic membrane element.

  13. Automated Quantification of Arbitrary Arm-Segment Structure in Spiral Galaxies

    NASA Astrophysics Data System (ADS)

    Davis, Darren Robert

    This thesis describes a system that, given approximately-centered images of spiral galaxies, produces quantitative descriptions of spiral galaxy structure without the need for per-image human input. This structure information consists of a list of spiral arm segments, each associated with a fitted logarithmic spiral arc and a pixel region. This list-of-arcs representation allows description of arbitrary spiral galaxy structure: the arms do not need to be symmetric, may have forks or bends, and, more generally, may be arranged in any manner with a consistent spiral-pattern center (non-merging galaxies have a sufficiently well-defined center). Such flexibility is important in order to accommodate the myriad structure variations observed in spiral galaxies. From the arcs produced by our method it is possible to calculate measures of spiral galaxy structure such as winding direction, winding tightness, arm counts, asymmetry, or other values of interest (including user-defined measures). In addition to providing information about the spiral arm "skeleton" of each galaxy, our method can enable analyses of brightness within individual spiral arms, since we provide the pixel regions associated with each spiral arm segment. For winding direction, arm tightness, and arm count, comparable information is available (to various extents) from previous efforts; to the extent that such information is available, we find strong correspondence with our output. We also characterize the changes to (and invariances in) our output as a function of modifications to important algorithm parameters. By enabling generation of extensive data about spiral galaxy structure from large-scale sky surveys, our method will enable new discoveries and tests regarding the nature of galaxies and the universe, and will facilitate subsequent work to automatically fit detailed brightness models of spiral galaxies.
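    A logarithmic spiral arc satisfies r = r0·exp(b·θ), so fitting one to an arm segment reduces to linear least squares in (θ, ln r). The sketch below is illustrative only; the function name and the simple unweighted fit are assumptions, not the thesis' actual pipeline.

```python
import math

def fit_log_spiral(thetas, radii):
    """Least-squares fit of ln(r) = ln(r0) + b*theta. Returns (r0, b);
    the sign of b gives the winding direction and atan(|b|) the pitch
    angle (winding tightness)."""
    n = len(thetas)
    ys = [math.log(r) for r in radii]
    tbar = sum(thetas) / n
    ybar = sum(ys) / n
    b = (sum((t - tbar) * (y - ybar) for t, y in zip(thetas, ys))
         / sum((t - tbar) ** 2 for t in thetas))
    r0 = math.exp(ybar - b * tbar)
    return r0, b

# Noise-free points sampled from a known spiral recover its parameters.
thetas = [0.1 * k for k in range(20)]
radii = [2.0 * math.exp(0.25 * t) for t in thetas]
r0, b = fit_log_spiral(thetas, radii)
```

From such per-segment fits, summary measures like winding direction and tightness follow directly, consistent with the measures listed in the abstract.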

  14. Surface Dielectric Barrier Discharge Jet for Skin Disinfection

    NASA Astrophysics Data System (ADS)

    Creyghton, Yves; Meijer, Rogier; Verweij, Paul; van der Zanden, Frank; Leenders, Paul

    A consortium consisting of the research institute TNO, the medical university and hospital St Radboud and two industrial enterprises is working on a non-thermal plasma treatment method for hand disinfection. The group is seeking cooperation, in particular in the field of validation methods and potential standardization for plasma-based disinfection procedures. The present paper describes technical progress in plasma source development together with initial microbiological data. Particular properties of the sheet-shaped plasma volume are the possibility of treating large irregular surfaces in a short period of time, effective transfer of plasma-produced species to the surface, and high controllability of the nature of the plasma species by means of temperature conditioning.

  15. Image based SAR product simulation for analysis

    NASA Technical Reports Server (NTRS)

    Domik, G.; Leberl, F.

    1987-01-01

    SAR product simulation serves to predict SAR image gray values for various flight paths. Input typically consists of a digital elevation model and backscatter curves. A new product simulation method is described that also employs a real SAR image as input; this can be denoted 'image-based simulation'. Different methods to perform this SAR prediction are presented and their advantages and disadvantages discussed. Ascending and descending orbit images from NASA's SIR-B experiment were used for verification of the concept: input images from ascending orbits were converted into images from a descending orbit, and the results are compared to the available real imagery to verify that the prediction technique produces meaningful image data.

  16. Silicon microfabricated beam expander

    NASA Astrophysics Data System (ADS)

    Othman, A.; Ibrahim, M. N.; Hamzah, I. H.; Sulaiman, A. A.; Ain, M. F.

    2015-03-01

    The feasibility, design and development methods of a silicon microfabricated beam expander are described. Silicon bulk micromachining fabrication technology is used to produce the features of the structure. A high-precision complex 3-D shape for the expander can be formed by exploiting the predictable anisotropic wet etching characteristics of single-crystal silicon in aqueous potassium hydroxide (KOH) solution. The beam expander consists of two elements: a micromachined silicon reflector chamber and a micro-Fresnel zone plate. The micro-Fresnel element is patterned using lithographic methods. The reflector chamber element has a depth of 40 µm, a diameter of 15 mm and gold-coated surfaces. The impact of chamber depth, chamber diameter and absorption on performance is discussed.

  17. The use of power tools in the insertion of cortical bone screws.

    PubMed

    Elliott, D

    1992-01-01

    Cortical bone screws are commonly used in fracture surgery; most patterns are non-self-tapping and require a thread to be pre-cut. This is traditionally performed using hand tools rather than their powered counterparts. The reasons given usually imply that power tools are more dangerous and cut a less precise thread, but there is no evidence to support this supposition. A series of experiments has been performed showing that the thread pattern cut with either method is identical and that over-penetration with the powered tap is easy to control. The conclusion reached is that both methods produce consistently reliable results, but the use of power tools is much faster.

  18. Single-reactor process for producing liquid-phase organic compounds from biomass

    DOEpatents

    Dumesic, James A.; Simonetti, Dante A.; Kunkes, Edward L.

    2015-12-08

    Disclosed is a method for preparing liquid fuel and chemical intermediates from biomass-derived oxygenated hydrocarbons. The method includes the steps of reacting in a single reactor an aqueous solution of a biomass-derived, water-soluble oxygenated hydrocarbon reactant, in the presence of a catalyst comprising a metal selected from the group consisting of Cr, Mn, Fe, Co, Ni, Cu, Mo, Tc, Ru, Rh, Pd, Ag, W, Re, Os, Ir, Pt, and Au, at a temperature, and a pressure, and for a time sufficient to yield a self-separating, three-phase product stream comprising a vapor phase, an organic phase containing linear and/or cyclic mono-oxygenated hydrocarbons, and an aqueous phase.

  19. Single-reactor process for producing liquid-phase organic compounds from biomass

    DOEpatents

    Dumesic, James A [Verona, WI; Simonetti, Dante A [Middleton, WI; Kunkes, Edward L [Madison, WI

    2011-12-13

    Disclosed is a method for preparing liquid fuel and chemical intermediates from biomass-derived oxygenated hydrocarbons. The method includes the steps of reacting in a single reactor an aqueous solution of a biomass-derived, water-soluble oxygenated hydrocarbon reactant, in the presence of a catalyst comprising a metal selected from the group consisting of Cr, Mn, Fe, Co, Ni, Cu, Mo, Tc, Ru, Rh, Pd, Ag, W, Re, Os, Ir, Pt, and Au, at a temperature, and a pressure, and for a time sufficient to yield a self-separating, three-phase product stream comprising a vapor phase, an organic phase containing linear and/or cyclic mono-oxygenated hydrocarbons, and an aqueous phase.

  20. Reduction of Unsteady Forcing in a Vaned, Contra-Rotating Transonic Turbine Configuration

    NASA Technical Reports Server (NTRS)

    Clark, John

    2010-01-01

    HPT blade unsteadiness in the presence of a downstream vane consistent with contra-rotation is characterized by strong interaction at the first harmonic of downstream vane passing. An existing stage-and-one-half transonic turbine rig design was used as a baseline to investigate means of reducing such blade-vane interaction. The methods assessed included aerodynamic shaping of the HPT blades, 3D stacking of the downstream vane, and steady pressure-side blowing. Of the methods assessed, a combination of vane bowing and steady pressure-side blowing produced the most favorable result. Transonic turbine experiments are planned to assess predictive accuracy for the baseline turbine and any design improvements.

  1. A biochemical method for assessing the neurotoxic effects of misonidazole in the rat.

    PubMed Central

    Rose, G. P.; Dewar, A. J.; Stratford, I. J.

    1980-01-01

    A proven biochemical method for assessing chemically induced neurotoxicity has been applied to the study of the toxic effects of misonidazole (MISO) in the rat. This involves the fluorimetric measurement of beta-glucuronidase and beta-galactosidase activities in homogenates of rat nervous tissue. The tissues analysed were sciatic/posterior tibial nerve (SPTN) cut into 4 sections, trigeminal ganglia and cerebellum. MISO administered i.p. to Wistar rats in doses greater than 300 mg/kg/day for 7 consecutive days produced maximal increases in both beta-glucuronidase and beta-galactosidase activities in the SPTN at 4 weeks (140-180% of control values). The highest increases were associated with the most distal section of the nerve. Significant enzyme-activity changes were also found in the trigeminal ganglia and cerebellum of MISO-dosed rats. The greatest activity occurred 4-5 weeks after dosing, and was dose-related. It is concluded that, in the rat, MISO can produce biochemical changes consistent with a dying-back peripheral neuropathy, and biochemical changes suggestive of cerebellar damage. This biochemical approach would appear to offer a convenient quantitative method for the detection of neurotoxic effects of other potential radio-sensitizing drugs. PMID:7459223

  2. Near‐surface void detection using a seismic landstreamer and horizontal velocity and attenuation tomography

    USGS Publications Warehouse

    Buckley, Sean F.; Lane, John W.

    2012-01-01

    The detection and characterization of subsurface voids plays an important role in the study of karst formations and clandestine tunnels. Horizontal velocity and attenuation tomography (HVAT) using offset‐fan shooting and a towed seismic land streamer is a simple, rapid, minimally invasive method that shows promise for detecting near‐surface voids and providing information on the orientation of linear voids. HVAT surveys were conducted over a known subsurface steam tunnel on the University of Connecticut Depot Campus, Storrs, Connecticut. First‐arrival travel‐time and amplitude data were used to produce two‐dimensional (2D) horizontal (map view) velocity and attenuation tomograms. In addition, attenuation tomograms were produced based on normalized total trace energy (TTE). Both the velocity and TTE attenuation tomograms depict an anomaly consistent with the location and orientation of the known tunnel; the TTE method, however, requires significantly less processing time, and therefore may provide a path forward to semi‐automated, near real‐time detection of near‐surface voids. Further study is needed to assess the utility of the HVAT method to detect deeper voids and the effects of a more complex geology on HVAT results.
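    The appeal of the total trace energy (TTE) measure is that it needs no first-arrival picking, which is what makes near real-time processing plausible. The exact normalization used in the survey is not stated here, so the sketch below assumes a simple per-gather normalization by the maximum trace energy; trace values and names are illustrative.

```python
def total_trace_energy(trace):
    """Total trace energy: sum of squared sample amplitudes."""
    return sum(a * a for a in trace)

def normalized_tte(traces):
    """Normalize each trace's energy by the largest energy in the
    gather, so attenuation anomalies (e.g. raypaths crossing a void)
    show up as low relative values."""
    energies = [total_trace_energy(t) for t in traces]
    emax = max(energies)
    return [e / emax for e in energies]

# A raypath passing near a void arrives attenuated (smaller amplitudes).
traces = [[1.0, -1.0, 0.5], [0.2, -0.2, 0.1]]
rel = normalized_tte(traces)
```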

  3. Cross-borehole flowmeter tests for transient heads in heterogeneous aquifers.

    PubMed

    Le Borgne, Tanguy; Paillet, Frederick; Bour, Olivier; Caudal, Jean-Pierre

    2006-01-01

    Cross-borehole flowmeter tests have been proposed as an efficient method to investigate preferential flowpaths in heterogeneous aquifers, which is a major task in the characterization of fractured aquifers. Cross-borehole flowmeter tests are based on the idea that changing the pumping conditions in a given aquifer will modify the hydraulic head distribution in large-scale flowpaths, producing measurable changes in the vertical flow profiles in observation boreholes. However, inversion of flow measurements to derive flowpath geometry and connectivity and to characterize their hydraulic properties is still a subject of research. In this study, we propose a framework for cross-borehole flowmeter test interpretation that is based on a two-scale conceptual model: discrete fractures at the borehole scale and zones of interconnected fractures at the aquifer scale. We propose that the two problems may be solved independently. The first inverse problem consists of estimating the hydraulic head variations that drive the transient borehole flow observed in the cross-borehole flowmeter experiments. The second inverse problem is related to estimating the geometry and hydraulic properties of large-scale flowpaths in the region between pumping and observation wells that are compatible with the head variations deduced from the first problem. To solve the borehole-scale problem, we treat the transient flow data as a series of quasi-steady flow conditions and solve for the hydraulic head changes in individual fractures required to produce these data. The consistency of the method is verified using field experiments performed in a fractured-rock aquifer.

  4. Potential application of the consistency approach for vaccine potency testing.

    PubMed

    Arciniega, J; Sirota, L A

    2012-01-01

    The Consistency Approach offers the possibility of reducing the number of animals used for a potency test. However, it is critical to assess the effect that such reduction may have on assay performance. Consistency of production, sometimes referred to as consistency of manufacture or manufacturing, is an old concept implicit in regulation, which aims to ensure the uninterrupted release of safe and effective products. Consistency of manufacture can be described in terms of process capability, or the ability of a process to produce output within specification limits. For example, the standard method for potency testing of inactivated rabies vaccines is a multiple-dilution vaccination challenge test in mice that gives a quantitative, although highly variable estimate. On the other hand, a single-dilution test that does not give a quantitative estimate, but rather shows if the vaccine meets the specification has been proposed. This simplified test can lead to a considerable reduction in the number of animals used. However, traditional indices of process capability assume that the output population (potency values) is normally distributed, which clearly is not the case for the simplified approach. Appropriate computation of capability indices for the latter case will require special statistical considerations.
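    Process capability can be made concrete with the traditional index Cpk: the distance from the process mean to the nearer specification limit, in units of three standard deviations. As the abstract cautions, this formula presumes normally distributed output. A minimal sketch; the numbers are illustrative, not rabies-vaccine specifications.

```python
def cpk(mu, sigma, lsl, usl):
    """Traditional process capability index Cpk, assuming normally
    distributed output: min(USL - mu, mu - LSL) / (3 * sigma)."""
    return min(usl - mu, mu - lsl) / (3.0 * sigma)

# A process centered between the limits with modest spread.
value = cpk(mu=5.0, sigma=0.5, lsl=2.0, usl=8.0)
```

A Cpk of about 1.33 or more is conventionally taken to indicate a capable process; the pass/fail output of the simplified single-dilution test is not normally distributed, which is exactly why the abstract calls for special statistical treatment.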

  5. Measurement of separate cosmic-ray electron and positron spectra with the fermi large area telescope.

    PubMed

    Ackermann, M; Ajello, M; Allafort, A; Atwood, W B; Baldini, L; Barbiellini, G; Bastieri, D; Bechtol, K; Bellazzini, R; Berenji, B; Blandford, R D; Bloom, E D; Bonamente, E; Borgland, A W; Bouvier, A; Bregeon, J; Brigida, M; Bruel, P; Buehler, R; Buson, S; Caliandro, G A; Cameron, R A; Caraveo, P A; Casandjian, J M; Cecchi, C; Charles, E; Chekhtman, A; Cheung, C C; Chiang, J; Ciprini, S; Claus, R; Cohen-Tanugi, J; Conrad, J; Cutini, S; de Angelis, A; de Palma, F; Dermer, C D; Digel, S W; do Couto E Silva, E; Drell, P S; Drlica-Wagner, A; Favuzzi, C; Fegan, S J; Ferrara, E C; Focke, W B; Fortin, P; Fukazawa, Y; Funk, S; Fusco, P; Gargano, F; Gasparrini, D; Germani, S; Giglietto, N; Giommi, P; Giordano, F; Giroletti, M; Glanzman, T; Godfrey, G; Grenier, I A; Grove, J E; Guiriec, S; Gustafsson, M; Hadasch, D; Harding, A K; Hayashida, M; Hughes, R E; Jóhannesson, G; Johnson, A S; Kamae, T; Katagiri, H; Kataoka, J; Knödlseder, J; Kuss, M; Lande, J; Latronico, L; Lemoine-Goumard, M; Llena Garde, M; Longo, F; Loparco, F; Lovellette, M N; Lubrano, P; Madejski, G M; Mazziotta, M N; McEnery, J E; Michelson, P F; Mitthumsiri, W; Mizuno, T; Moiseev, A A; Monte, C; Monzani, M E; Morselli, A; Moskalenko, I V; Murgia, S; Nakamori, T; Nolan, P L; Norris, J P; Nuss, E; Ohno, M; Ohsugi, T; Okumura, A; Omodei, N; Orlando, E; Ormes, J F; Ozaki, M; Paneque, D; Parent, D; Pesce-Rollins, M; Pierbattista, M; Piron, F; Pivato, G; Porter, T A; Rainò, S; Rando, R; Razzano, M; Razzaque, S; Reimer, A; Reimer, O; Reposeur, T; Ritz, S; Romani, R W; Roth, M; Sadrozinski, H F-W; Sbarra, C; Schalk, T L; Sgrò, C; Siskind, E J; Spandre, G; Spinelli, P; Strong, A W; Takahashi, H; Takahashi, T; Tanaka, T; Thayer, J G; Thayer, J B; Tibaldo, L; Tinivella, M; Torres, D F; Tosti, G; Troja, E; Uchiyama, Y; Usher, T L; Vandenbroucke, J; Vasileiou, V; Vianello, G; Vitale, V; Waite, A P; Winer, B L; Wood, K S; Wood, M; Yang, Z; Zimmer, S

    2012-01-06

    We measured separate cosmic-ray electron and positron spectra with the Fermi Large Area Telescope. Because the instrument does not have an onboard magnet, we distinguish the two species by exploiting Earth's shadow, which is offset in opposite directions for opposite charges due to Earth's magnetic field. We estimate and subtract the cosmic-ray proton background using two different methods that produce consistent results. We report the electron-only spectrum, the positron-only spectrum, and the positron fraction between 20 and 200 GeV. We confirm that the fraction rises with energy in the 20-100 GeV range. The three new spectral points between 100 and 200 GeV are consistent with a fraction that is continuing to rise with energy.

  6. Measurement of Separate Cosmic-Ray Electron and Positron Spectra with the Fermi Large Area Telescope

    NASA Technical Reports Server (NTRS)

    Ferrara, E. C.; Harding, A. K.; McEnery, J. E.; Moiseev, A. A.; Ackermann, M.

    2012-01-01

    We measured separate cosmic-ray electron and positron spectra with the Fermi Large Area Telescope. Because the instrument does not have an onboard magnet, we distinguish the two species by exploiting Earth's shadow, which is offset in opposite directions for opposite charges due to Earth's magnetic field. We estimate and subtract the cosmic-ray proton background using two different methods that produce consistent results. We report the electron-only spectrum, the positron-only spectrum, and the positron fraction between 20 and 200 GeV. We confirm that the fraction rises with energy in the 20-100 GeV range. The three new spectral points between 100 and 200 GeV are consistent with a fraction that is continuing to rise with energy.

  7. A network-based method to evaluate quality of reproducibility of differential expression in cancer genomics studies

    PubMed Central

    Geng, Haijiang; Li, Zhihui; Li, Jiabing; Lu, Tao; Yan, Fangrong

    2015-01-01

    BACKGROUND Personalized cancer treatments depend on the determination of a patient's genetic status according to known genetic profiles for which targeted treatments exist. Such genetic profiles must be scientifically validated before they are applied to the general patient population. Reproducibility of the findings that support such genetic profiles is a fundamental challenge in validation studies. The percentage of overlapping genes (POG) criterion and derivative methods produce unstable and misleading results. Furthermore, in a complex disease, comparisons between different tumor subtypes can produce high POG scores that do not capture the consistencies in the functions. RESULTS We focused on the quality rather than the quantity of the overlapping genes. We defined the rank value of each gene according to importance or quality by PageRank, on the basis of a particular topological structure. Then, we used the p-value of the rank-sum of the overlapping genes (PRSOG) to evaluate the quality of reproducibility. Though the POG scores were low in different studies of the same disease, the PRSOG was statistically significant, which suggests that sets of differentially expressed genes might be highly reproducible. CONCLUSIONS Evaluations of eight datasets from breast cancer, lung cancer and four other disorders indicate that the quality-based PRSOG method performs better than a quantity-based method. Our analysis of the components of the sets of overlapping genes supports the utility of the PRSOG method. PMID:26556852
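    The PRSOG idea can be sketched as follows: after PageRank assigns each of N genes a rank (1 = most important), ask how improbably small the rank-sum of the k overlapping genes is under random selection of k genes. The normal approximation below is an assumption for illustration; the PageRank step itself and the authors' exact null model are omitted.

```python
import math

def rank_sum_pvalue(overlap_ranks, n_total):
    """One-sided p-value that the summed ranks of the overlapping
    genes are smaller (i.e. the genes more important) than expected
    when drawing the same number of ranks uniformly from 1..n_total
    without replacement; normal approximation to that null."""
    k = len(overlap_ranks)
    s = sum(overlap_ranks)
    mean = k * (n_total + 1) / 2.0
    var = k * (n_total - k) * (n_total + 1) / 12.0
    z = (s - mean) / math.sqrt(var)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # P(S <= s)

# Overlapping genes that all sit near the top of a 1000-gene ranking:
# a small overlap can still be highly significant, the paper's point.
p = rank_sum_pvalue([1, 3, 7, 12, 20], n_total=1000)
```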

  8. Prediction of the thermal environment and thermal response of simple panels exposed to radiant heat

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.; Ash, Robert L.

    1989-01-01

    A method of predicting the radiant heat flux distribution produced by a bank of tubular quartz heaters was applied to a radiant system consisting of a single unreflected lamp irradiating a flat metallic incident surface. In this manner, the method was experimentally verified for various radiant system parameter settings and used as a source of input for a finite element thermal analysis. Two finite element thermal analyses were applied to a thermal system consisting of a thin metallic panel exposed to radiant surface heating. A two-dimensional steady-state finite element thermal analysis algorithm, based on Galerkin's Method of Weighted Residuals (GFE), was formulated specifically for this problem and was used in comparison to the thermal analyzers of the Engineering Analysis Language (EAL). Both analyses allow conduction, convection, and radiation boundary conditions. Differences in the respective finite element formulation are discussed in terms of their accuracy and resulting comparison discrepancies. The thermal analyses are shown to perform well for the comparisons presented here with some important precautions about the various boundary condition models. A description of the experiment, corresponding analytical modeling, and resulting comparisons are presented.

  9. Use of artificial intelligence in the production of high quality minced meat

    NASA Astrophysics Data System (ADS)

    Kapovsky, B. R.; Pchelkina, V. A.; Plyasheshnik, P. I.; Dydykin, A. S.; Lazarev, A. A.

    2017-09-01

    A design for an automatic line for minced meat production, based on a new production technology built around an innovative meat milling method, is proposed. This method achieves the necessary degree of raw material comminution at the raw material preparation stage, which intensifies production by making the traditional meat mass comminution equipment unnecessary. To ensure consistent product quality, on-line automatic control of the technological process for minced meat production is envisaged. This control system has been developed using artificial intelligence methods and technologies. The system is trainable during operation, adapts to changes in the characteristics of the processed raw material and to external impacts that affect its operation, and manufactures meat shavings with minimal dispersion of the typical particle size. The control system includes equipment for express analysis of the chemical composition of the minced meat and of its temperature after comminution. In this case, the minced meat production process can be controlled strictly as a function of time, which excludes subjective factors in assessing the degree of finished product readiness. This will allow finished meat products of consistent, targeted high quality to be produced.

  10. Revealing Dimensions of Thinking in Open-Ended Self-Descriptions: An Automated Meaning Extraction Method for Natural Language

    PubMed Central

    2008-01-01

    A new method for extracting common themes from written text is introduced and applied to 1,165 open-ended self-descriptive narratives. Drawing on a lexical approach to personality, the most commonly used adjectives within narratives written by college students were identified using computerized text analytic tools. A factor analysis on the use of these adjectives in the self-descriptions produced a 7-factor solution consisting of psychologically meaningful dimensions. Some dimensions were unipolar (e.g., a Negativity factor, wherein most loaded items were negatively valenced adjectives); others were dimensional in that semantically opposite words clustered together (e.g., a Sociability factor, wherein terms such as shy, outgoing, reserved, and loud all loaded in the same direction). The factors exhibited modest reliability across different types of writing samples and were correlated with self-reports and behaviors consistent with the dimensions. Similar analyses with additional content words (adjectives, adverbs, nouns, and verbs) yielded additional psychological dimensions associated with physical appearance, school, relationships, etc., in which people contextualize their self-concepts. The results suggest that the meaning extraction method is a promising strategy for determining the dimensions along which people think about themselves. PMID:18802499

  11. Hierarchical mutual information for the comparison of hierarchical community structures in complex networks

    NASA Astrophysics Data System (ADS)

    Perotti, Juan Ignacio; Tessone, Claudio Juan; Caldarelli, Guido

    2015-12-01

    The quest for a quantitative characterization of community and modular structure of complex networks produced a variety of methods and algorithms to classify different networks. However, it is not clear if such methods provide consistent, robust, and meaningful results when considering hierarchies as a whole. Part of the problem is the lack of a similarity measure for the comparison of hierarchical community structures. In this work we give a contribution by introducing the hierarchical mutual information, which is a generalization of the traditional mutual information and makes it possible to compare hierarchical partitions and hierarchical community structures. The normalized version of the hierarchical mutual information should behave analogously to the traditional normalized mutual information. Here the correct behavior of the hierarchical mutual information is corroborated on an extensive battery of numerical experiments. The experiments are performed on artificial hierarchies and on the hierarchical community structure of artificial and empirical networks. Furthermore, the experiments illustrate some of the practical applications of the hierarchical mutual information, namely the comparison of different community detection methods and the study of the consistency, robustness, and temporal evolution of the hierarchical modular structure of networks.
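    For flat partitions, the traditional normalized mutual information that the hierarchical measure generalizes can be computed directly. The geometric-mean normalization used below is one common convention (an assumption here, since several normalizations exist in the literature).

```python
import math
from collections import Counter

def nmi(labels_a, labels_b):
    """Normalized mutual information between two flat partitions,
    given as per-node community labels; normalized by the geometric
    mean of the two partition entropies."""
    n = len(labels_a)
    pa = Counter(labels_a)
    pb = Counter(labels_b)
    pab = Counter(zip(labels_a, labels_b))
    mi = sum(c / n * math.log((c / n) / ((pa[a] / n) * (pb[b] / n)))
             for (a, b), c in pab.items())
    ha = -sum(c / n * math.log(c / n) for c in pa.values())
    hb = -sum(c / n * math.log(c / n) for c in pb.values())
    return mi / math.sqrt(ha * hb)

# Identical partitions (up to relabeling) give NMI = 1.
score = nmi([0, 0, 1, 1], [1, 1, 0, 0])
```

The hierarchical mutual information introduced in the paper extends this comparison from single partitions to whole hierarchies of partitions.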

  12. A PDE Sensitivity Equation Method for Optimal Aerodynamic Design

    NASA Technical Reports Server (NTRS)

    Borggaard, Jeff; Burns, John

    1996-01-01

    The use of gradient based optimization algorithms in inverse design is well established as a practical approach to aerodynamic design. A typical procedure uses a simulation scheme to evaluate the objective function (from the approximate states) and its gradient, then passes this information to an optimization algorithm. Once the simulation scheme (CFD flow solver) has been selected and used to provide approximate function evaluations, there are several possible approaches to the problem of computing gradients. One popular method is to differentiate the simulation scheme and compute design sensitivities that are then used to obtain gradients. Although this black-box approach has many advantages in shape optimization problems, one must compute mesh sensitivities in order to compute the design sensitivity. In this paper, we present an alternative approach using the PDE sensitivity equation to develop algorithms for computing gradients. This approach has the advantage that mesh sensitivities need not be computed. Moreover, when it is possible to use the CFD scheme for both the forward problem and the sensitivity equation, then there are computational advantages. An apparent disadvantage of this approach is that it does not always produce consistent derivatives. However, for a proper combination of discretization schemes, one can show asymptotic consistency under mesh refinement, which is often sufficient to guarantee convergence of the optimal design algorithm. In particular, we show that when asymptotically consistent schemes are combined with a trust-region optimization algorithm, the resulting optimal design method converges. We denote this approach as the sensitivity equation method. The sensitivity equation method is presented, convergence results are given and the approach is illustrated on two optimal design problems involving shocks.
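    The sensitivity equation idea can be seen on a toy problem far simpler than a CFD flow solver: differentiate the governing equation with respect to the design parameter, then integrate the resulting linear equation alongside the state with the same discretization scheme. The ODE, names and step size below are illustrative assumptions, not the paper's aerodynamic setting.

```python
def solve_with_sensitivity(a, dt=1e-3, t_end=1.0):
    """For the model ODE u' = -a*u, u(0) = 1, differentiating with
    respect to the parameter a gives the sensitivity equation
    s' = -a*s - u, s(0) = 0. Both are advanced with the same forward
    Euler scheme, so the computed gradient is consistent with the
    discrete state."""
    u, s = 1.0, 0.0
    for _ in range(int(round(t_end / dt))):
        # Simultaneous update: s uses the old value of u.
        u, s = u + dt * (-a * u), s + dt * (-a * s - u)
    return u, s

# Exact solution: u(1) = exp(-a), du/da at t=1 is -exp(-a).
u, s = solve_with_sensitivity(a=2.0)
```

Because state and sensitivity share one scheme, the computed derivative converges to the exact derivative as dt shrinks; this asymptotic consistency under mesh refinement is the property the paper uses to prove convergence of the trust-region design loop.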

  13. GAMA/H-ATLAS: a meta-analysis of SFR indicators - comprehensive measures of the SFR-M* relation and cosmic star formation history at z < 0.4

    NASA Astrophysics Data System (ADS)

    Davies, L. J. M.; Driver, S. P.; Robotham, A. S. G.; Grootes, M. W.; Popescu, C. C.; Tuffs, R. J.; Hopkins, A.; Alpaslan, M.; Andrews, S. K.; Bland-Hawthorn, J.; Bremer, M. N.; Brough, S.; Brown, M. J. I.; Cluver, M. E.; Croom, S.; da Cunha, E.; Dunne, L.; Lara-López, M. A.; Liske, J.; Loveday, J.; Moffett, A. J.; Owers, M.; Phillipps, S.; Sansom, A. E.; Taylor, E. N.; Michalowski, M. J.; Ibar, E.; Smith, M.; Bourne, N.

    2016-09-01

    We present a meta-analysis of star formation rate (SFR) indicators in the Galaxy And Mass Assembly (GAMA) survey, producing 12 different SFR metrics and determining the SFR-M* relation for each. We compare and contrast published methods to extract the SFR from each indicator, using a well-defined local sample of morphologically selected spiral galaxies, which excludes sources that potentially have large recent changes to their SFR. The different methods are found to yield SFR-M* relations with inconsistent slopes and normalizations, suggesting differences between calibration methods. The recovered SFR-M* relations also have a large range in scatter which, as the SFRs of the targets may be considered constant over the different time-scales, suggests differences in the accuracy with which methods correct for attenuation in individual targets. We then recalibrate all SFR indicators to provide new, robust and consistent luminosity-to-SFR calibrations, finding that the most consistent slopes and normalizations of the SFR-M* relations are obtained when recalibrated using the radiation transfer method of Popescu et al. These new calibrations can be used to directly compare SFRs across different observations, epochs and galaxy populations. We then apply our calibrations to the GAMA II equatorial data set and explore the evolution of star formation in the local Universe. We determine the evolution of the normalization to the SFR-M* relation over 0 < z < 0.35, finding trends consistent with previous estimates at 0.3 < z < 1.2. We then provide the definitive z < 0.35 cosmic star formation history, SFR-M* relation and its evolution over the last 3 billion years.

  14. Nicotine Vapor Method to Induce Nicotine Dependence in Rodents.

    PubMed

    Kallupi, Marsida; George, Olivier

    2017-07-05

    Nicotine, the main addictive component of tobacco, induces potentiation of brain stimulation reward, increases locomotor activity, and induces conditioned place preference. Nicotine cessation produces a withdrawal syndrome that can be relieved by nicotine replacement therapy. In the last decade, the market for electronic cigarettes has flourished, especially among adolescents. The nicotine vaporizer, or electronic nicotine delivery system, is a battery-operated device that allows the user to simulate the experience of tobacco smoking without inhaling smoke. The device is designed as an alternative to conventional cigarettes and emits vaporized nicotine that is inhaled by the user. This report describes a procedure for vaporizing nicotine in air to produce blood nicotine levels in rodents that are clinically relevant, i.e., comparable to those observed in humans, and sufficient to produce dependence. We also describe how to construct the apparatus to deliver nicotine vapor in a stable, reliable, and consistent manner, as well as how to analyze air for nicotine content. © 2017 by John Wiley & Sons, Inc.

  15. Re-cycling of sugar-ash: a raw feed material for rotary kilns.

    PubMed

    Kantiranis, Nikolaos

    2004-01-01

    Large amounts of sugar-ash, a material rich in calcium carbonate, are produced as a by-product in the Greek Sugar Industry. This work explores the possibility of re-cycling sugar-ash for use in the lime industry. A representative sample of sugar-ash from the Plati Imathias sugar plant was studied by PXRD, TG/DTG, calcination experiments at temperatures between 650 and 1150 degrees C and experiments to determine the quality of the quicklime produced at temperatures between 850 and 1150 degrees C following methods described in ASTM C110 standard. The sugar-ash was found to consist of 90 wt% calcium rich minerals (calcite and monohydrocalcite) and 10 wt% amorphous material. Traces of quartz were also detected. The quicklime of highest quality was produced at 950 degrees C. It is concluded that this "useless" material (sugar-ash) can be re-cycled for use in rotary kilns in the lime industry at calcination temperatures up to 950-1000 degrees C.

  16. SEPARATION OF GASES BY DIFFUSION

    DOEpatents

    Peierls, R.E.; Simon, F.E.; Arms, H.S.

    1960-12-13

    A method and apparatus are given for the separation of mixtures of gaseous or vaporous media by diffusion through a permeable membrane. The apparatus consists principally of a housing member having an elongated internal chamber dissected longitudinally by a permeable membrane. Means are provided for producing a pressure difference between opposite sides of the membrane to cause a flow of the media in the chamber therethrough. This pressure difference is alternated between opposite sides of the membrane to produce an oscillating flow through the membrane. Additional means is provided for producing flow parallel to the membrane in opposite directions on the two sides thereof and of the same frequency and in phase with the alternating pressure difference. The lighter molecules diffuse through the membrane more readily than the heavier molecules and the parallel flow effects a net transport of the lighter molecules in one direction and the heavier molecules in the opposite direction within the chamber. By these means a concentration gradient along the chamber is established.

  17. [Feedforward control strategy and its application in quality improvement of ethanol precipitation process of danhong injection].

    PubMed

    Yan, Bin-Jun; Guo, Zheng-Tai; Qu, Hai-Bin; Zhao, Bu-Chang; Zhao, Tao

    2013-06-01

    In this work, a feedforward control strategy based on the concept of quality by design was established for the manufacturing process of traditional Chinese medicine, to reduce the impact of quality variation in raw materials on the drug. In the research, the ethanol precipitation process of Danhong injection was taken as an application case of the established method. A Box-Behnken design of experiments was conducted. Mathematical models relating the attributes of the concentrate, the process parameters, and the quality of the supernatants produced were established. An optimization model for calculating the best process parameters from the attributes of the concentrate was then built. The quality of the supernatants produced by ethanol precipitation with optimized and non-optimized process parameters was compared. The results showed that using the feedforward control strategy for process-parameter optimization can control the quality of the supernatants effectively. The proposed feedforward control strategy can enhance the batch-to-batch consistency of the supernatants produced by ethanol precipitation.

  18. A Method for Calculating the Area of Zostera marina Leaves from Digital Images with Noise Induced by Humidity Content

    PubMed Central

    Leal-Ramirez, Cecilia

    2014-01-01

    Despite the ecological importance of eelgrass, anthropogenic influences have nowadays produced deleterious effects in many meadows worldwide. Transplantation plots are commonly used as a feasible remediation scheme. The characterization of eelgrass biomass and its dynamics is an important input for the assessment of the overall status of both natural and transplanted populations. Particularly in restoration plots, it is desirable to obtain nondestructive assessments of these variables. Allometric models allow the expression of above-ground biomass and productivity of eelgrass in terms of leaf area, which provides cost-effective and nondestructive assessments. Leaf area in eelgrass can be conveniently obtained as the product of associated length and width. Although these variables can be directly measured on most sampled leaves, digital image methods could be adapted in order to simplify measurements. Nonetheless, since width-to-length ratios in eelgrass leaves can be very small, noise induced by leaf humidity content can produce misidentification of pixels along the peripheral contour of leaf images. In this paper, we present a procedure aimed at producing consistent estimations of eelgrass leaf area in the presence of the aforementioned noise effects. Our results show that digital image procedures can provide reliable, nondestructive estimations of eelgrass leaf area. PMID:24892089
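    As a concrete illustration of this kind of estimation, the following NumPy sketch (a hypothetical pipeline, not the paper's actual procedure) despeckles a binary leaf mask with a 3x3 majority filter before measuring area, both as a raw pixel count and as the length-width product used by the allometric approach:

```python
import numpy as np

def despeckle(mask):
    """3x3 majority vote to remove isolated misclassified pixels (toy denoiser)."""
    p = np.pad(mask.astype(int), 1)
    votes = sum(p[i:i + mask.shape[0], j:j + mask.shape[1]]
                for i in range(3) for j in range(3))
    return votes >= 5  # majority of the 9-pixel neighbourhood

def leaf_area(mask, mm_per_px=0.1):
    """Area (mm^2) from a cleaned binary mask, plus the bounding-box
    length x width product used as the allometric leaf-area proxy."""
    clean = despeckle(mask)
    rows, cols = np.nonzero(clean)
    length = (rows.max() - rows.min() + 1) * mm_per_px
    width = (cols.max() - cols.min() + 1) * mm_per_px
    return clean.sum() * mm_per_px**2, length * width

# Synthetic "leaf": a thin 100 x 4 pixel strip, with salt noise standing in
# for humidity-induced pixel misclassification along the contour
rng = np.random.default_rng(0)
mask = np.zeros((120, 20), dtype=bool)
mask[10:110, 8:12] = True
noise = rng.random(mask.shape) < 0.02
area_px, area_lw = leaf_area(mask ^ noise)
```

    Because the leaf is a narrow strip, even a few misclassified contour pixels would noticeably bias the width estimate if they were not filtered out first.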

  19. Soft-Bake Purification of SWCNTs Produced by Pulsed Laser Vaporization

    NASA Technical Reports Server (NTRS)

    Yowell, Leonard; Nikolaev, Pavel; Gorelik, Olga; Allada, Rama Kumar; Sosa, Edward; Arepalli, Sivaram

    2013-01-01

    The "soft-bake" method is a simple and reliable initial purification step first proposed by researchers at Rice University for single-walled carbon nanotubes (SWCNT) produced by high-pressure carbon monoxide disproportionation (HiPco). Soft-baking consists of annealing as-produced (raw) SWCNT at low temperatures in humid air, in order to degrade the heavy graphitic shells that surround metal particle impurities. Once these shells are cracked open by the expansion and slow oxidation of the metal particles, the metal impurities can be digested through treatment with hydrochloric acid. The soft-baking of SWCNT produced by pulsed-laser vaporization (PLV) is not straightforward, because the larger average SWCNT diameters (~1.4 nm) and heavier graphitic shells surrounding metal particles call for increased temperatures during soft-bake. A critical part of the technology development therefore focused on optimizing the temperature so that effective cracking of the graphitic shells is balanced against maintaining a reasonable yield. Once the ideal temperature was determined, a number of samples of raw SWCNT were purified using the soft-bake method. An important benefit of this process is the reduced time and effort required for soft-bake versus the standard purification route for SWCNT. The total time spent purifying samples by soft-bake is one week per batch, which equates to a factor-of-three reduction in the time required for purification as compared to the standard acid purification method. Reduction of the number of steps also appears to be an important factor in improving reproducibility of yield and purity of SWCNT, as small deviations are likely to get amplified over the course of a complicated multi-step purification process.

  20. Revegetation of Acid Rock Drainage (ARD) Producing Slope Surface Using Phosphate Microencapsulation and Artificial Soil

    NASA Astrophysics Data System (ADS)

    Kim, Jae Gon

    2017-04-01

    Oxidation of sulfides produces acid rock drainage (ARD) when the sulfides are exposed to an oxidizing environment by construction and mining activities. ARD causes acidification and metal contamination of soil, surface water and groundwater, damage to plants, deterioration of the landscape and reduction of slope stability. Revegetation of the slope surface is one of the commonly adopted strategies to reduce erosion and to increase slope stability. However, revegetation of ARD-producing slope surfaces frequently fails because of their high acidity and toxic metal content. We developed a revegetation method consisting of phosphate microencapsulation and artificial soil in the laboratory. The method was applied on an ARD-producing slope on which a previous revegetation attempt using soil coverage and seeding had failed, and plant growth was monitored for one year. A phosphate solution was applied to the sulfide-containing rock to form a stable Fe-phosphate mineral on the sulfide surfaces, which acts as a physical barrier preventing oxidants such as oxygen and Fe3+ ions from reaching the sulfide surface. After the microencapsulation, two artificial soil layers were constructed. The first layer, containing organic matter, dolomite powder and soil, was constructed at 2 cm thickness to neutralize the rising acidic capillary water from the subsurface and to remove dissolved oxygen from the percolating rain water. Finally, the second layer, containing seeds, organic matter, nutrients and soil, was constructed at 3 cm thickness on top. After application of the method, the pH of the soil below the artificial soil layers increased and ARD production from the rock fragments was reduced. Plant growth was normal, whereas in the previous revegetation trial the plants had died two months after germination. No soil erosion occurred from the slope during the one-year field test.

  1. The method ADAMONT v1.0 for statistical adjustment of climate projections applicable to energy balance land surface models

    NASA Astrophysics Data System (ADS)

    Verfaillie, Deborah; Déqué, Michel; Morin, Samuel; Lafaysse, Matthieu

    2017-11-01

    We introduce the method ADAMONT v1.0 to adjust and disaggregate daily climate projections from a regional climate model (RCM) using an observational dataset at hourly time resolution. The method uses a refined quantile mapping approach for statistical adjustment and an analogous method for sub-daily disaggregation. The method ultimately produces adjusted hourly time series of temperature, precipitation, wind speed, humidity, and short- and longwave radiation, which can in turn be used to force any energy balance land surface model. While the method is generic and can be employed for any appropriate observation time series, here we focus on the description and evaluation of the method in the French mountainous regions. The observational dataset used here is the SAFRAN meteorological reanalysis, which covers the entire French Alps split into 23 massifs, within which meteorological conditions are provided for several 300 m elevation bands. In order to evaluate the skills of the method itself, it is applied to the ALADIN-Climate v5 RCM using the ERA-Interim reanalysis as boundary conditions, for the time period from 1980 to 2010. Results of the ADAMONT method are compared to the SAFRAN reanalysis itself. Various evaluation criteria are used for temperature and precipitation but also snow depth, which is computed by the SURFEX/ISBA-Crocus model using the meteorological driving data from either the adjusted RCM data or the SAFRAN reanalysis itself. The evaluation addresses in particular the time transferability of the method (using various learning/application time periods), the impact of the RCM grid point selection procedure for each massif/altitude band configuration, and the intervariable consistency of the adjusted meteorological data generated by the method. Results show that the performance of the method is satisfactory, with similar or even better evaluation metrics than alternative methods. 
However, results for air temperature are generally better than for precipitation. Results in terms of snow depth are satisfactory, which can be viewed as indicating a reasonably good intervariable consistency of the meteorological data produced by the method. In terms of temporal transferability (evaluated over time periods of only 15 years), results depend on the learning period. Regarding grid-point selection, a more complex technique that accounts for altitudinal as well as horizontal proximity to the SAFRAN massif-centre/altitude couples generally degrades evaluation metrics at high altitudes compared with a simpler selection method based on horizontal distance alone.
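    The quantile-mapping step at the heart of such adjustment methods can be sketched in a few lines. The example below is a minimal empirical quantile mapping in NumPy, not ADAMONT's refined variant (which operates per variable, season, and massif/elevation band); the synthetic "observation" and "model" series are assumptions for illustration only:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping: locate each future model value within the
    historical model CDF, then read off the observed quantile function."""
    mq = np.sort(model_hist)
    oq = np.sort(obs_hist)
    # CDF position of each future value in the historical model distribution
    p = np.searchsorted(mq, model_future, side="right") / len(mq)
    p = np.clip(p, 0.0, 1.0)
    return np.quantile(oq, p)

rng = np.random.default_rng(1)
obs = rng.normal(10.0, 3.0, 5000)                    # reference series
model = obs * 0.5 + 5.0 + rng.normal(0, 0.3, 5000)   # biased, damped model
adjusted = quantile_map(model, obs, model)           # bias-adjusted series
```

    After adjustment, the distribution of the model series matches that of the reference series, which is exactly the property the evaluation criteria above probe when the method is applied to the learning period itself.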

  2. Fast generation of complex modulation video holograms using temporal redundancy compression and hybrid point-source/wave-field approaches

    NASA Astrophysics Data System (ADS)

    Gilles, Antonin; Gioia, Patrick; Cozot, Rémi; Morin, Luce

    2015-09-01

    The hybrid point-source/wave-field method is a newly proposed approach for Computer-Generated Hologram (CGH) calculation, based on the slicing of the scene into several depth layers parallel to the hologram plane. The complex wave scattered by each depth layer is then computed using either a wave-field or a point-source approach according to a threshold criterion on the number of points within the layer. Finally, the complex waves scattered by all the depth layers are summed up in order to obtain the final CGH. Although outperforming both point-source and wave-field methods without producing any visible artifact, this approach has not yet been used for animated holograms, and the possible exploitation of temporal redundancies has not been studied. In this paper, we propose a fast computation of video holograms by taking into account those redundancies. Our algorithm consists of three steps. First, intensity and depth data of the current 3D video frame are extracted and compared with those of the previous frame in order to remove temporally redundant data. Then the CGH pattern for this compressed frame is generated using the hybrid point-source/wave-field approach. The resulting CGH pattern is finally transmitted to the video output and stored in the previous frame buffer. Experimental results reveal that our proposed method is able to produce video holograms at interactive rates without producing any visible artifact.
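    The hybrid method's threshold criterion can be pictured with a toy cost model (the constants and cost expressions below are illustrative assumptions, not measurements from the paper): a layer with few points is cheaper to render as a sum of spherical waves, while a densely occupied layer is cheaper via FFT-based wave-field propagation.

```python
import math

def choose_method(n_points, n_pixels, c_ps=1.0, c_wf=20.0):
    """Pick the cheaper approach for one depth layer (toy cost model):
    point-source ~ c_ps * n_points * n_pixels        (one wave per point)
    wave-field   ~ c_wf * n_pixels * log2(n_pixels)  (FFT-based propagation)
    """
    cost_ps = c_ps * n_points * n_pixels
    cost_wf = c_wf * n_pixels * math.log2(n_pixels)
    return "point-source" if cost_ps < cost_wf else "wave-field"

pixels = 1024 * 1024  # hologram resolution
sparse_layer = choose_method(5, pixels)       # few points -> point-source
dense_layer = choose_method(10000, pixels)    # many points -> wave-field
```

    In the temporally compressed scheme described above, only the layers whose data changed between frames need this decision and recomputation at all.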

  3. Mechanical, thermal and morphological characterization of polycarbonate/oxidized carbon nanofiber composites produced with a lean 2-step manufacturing process.

    PubMed

    Lively, Brooks; Kumar, Sandeep; Tian, Liu; Li, Bin; Zhong, Wei-Hong

    2011-05-01

    In this study we report the advantages of a 2-step method that incorporates an additional pre-conditioning step for rapid and precise blending of the constituents prior to the commonly used melt compounding method for preparing polycarbonate/oxidized carbon nanofiber composites. This additional step (equivalent to a manufacturing cell) involves the formation of a highly concentrated solid nano-nectar of polycarbonate/carbon nanofiber composite using a solution mixing process, followed by melt mixing with pure polycarbonate. This combined method yields excellent dispersion and improved mechanical and thermal properties as compared to the 1-step melt mixing method. The test results indicated that inclusion of carbon nanofibers into composites via the 2-step method resulted in a dramatically reduced (~48% lower) coefficient of thermal expansion compared to that of pure polycarbonate, and 30% lower than that from the 1-step processing, at the same loading of 1.0 wt%. Improvements were also found in dynamic mechanical analysis and flexural mechanical properties. The 2-step approach is more precise and leads to better dispersion, higher quality, consistency, and improved performance in critical application areas. It is also consistent with Lean Manufacturing principles, in which manufacturing cells are linked together using less of the key resources, creating a smoother production flow. Therefore, this 2-step process can be more attractive for industry.

  4. Biomolecular crystals for material applications and a mechanistic study of an iron oxide nanoparticle synthesis

    NASA Astrophysics Data System (ADS)

    Falkner, Joshua Charles

    The three projects within this work address the difficulties of controlling biomolecular crystal formats (i.e. size and shape), producing 3-D ordered composite materials from biomolecular crystal templates, and understanding the mechanism of a practical iron oxide synthesis. The unifying thread throughout these three topics is the development of methods to manipulate nanomaterials using a bottom-up approach. Biomolecular crystals are nanometer- to millimeter-sized crystals that have well-ordered mesoporous solvent channels. The overall physical dimensions of these crystals are highly dependent on crystallization conditions. The controlled growth of micro- and nano-sized protein crystals was studied to provide new pathways for creating smaller crystalline protein materials. This method produced tetragonal hen egg-white lysozyme crystals (250-100,000 nm) with near-monodisperse size distributions (<15%). With this degree of control, existing protein crystal applications such as drug delivery and analytical sensors can reach their full potential. Applications for larger crystals with inherently ubiquitous pore structures could extend to materials used for membranes or templates. In this work, the porous structure of larger cowpea mosaic virus crystals was used to template metal nanoparticle growth within the body-centered cubic crystalline network. The final composite material was found to have long-range ordering of palladium and platinum nanocrystal aggregates (10 nm) with symmetry consistent with the virus template. Nanoparticle synthesis itself is an immense field of study with an array of diverse applications. The final piece of this work investigates the mechanism behind a previously developed iron oxide synthesis to gain more understanding and direction for future synthesis strategies. The particle growth mechanism was found to proceed by the formation of a solvated iron(III) oleate complex, followed by a reduction of iron(III) to iron(II). This unstable iron(II) nucleates to form a wustite (FeO) core which serves as an epitaxial surface for the magnetite (Fe3O4) shell growth. This method produces spherical particles (6-60 nm) with relative size distributions of less than 15%.

  5. Comparison of Relative Bias, Precision, and Efficiency of Sampling Methods for Natural Enemies of Soybean Aphid (Hemiptera: Aphididae).

    PubMed

    Bannerman, J A; Costamagna, A C; McCornack, B P; Ragsdale, D W

    2015-06-01

    Generalist natural enemies play an important role in controlling soybean aphid, Aphis glycines (Hemiptera: Aphididae), in North America. Several sampling methods are used to monitor natural enemy populations in soybean, but there has been little work investigating their relative bias, precision, and efficiency. We compare five sampling methods: quadrats, whole-plant counts, sweep-netting, walking transects, and yellow sticky cards to determine the most practical methods for sampling the three most prominent species, which included Harmonia axyridis (Pallas), Coccinella septempunctata L. (Coleoptera: Coccinellidae), and Orius insidiosus (Say) (Hemiptera: Anthocoridae). We show an important time-by-sampling-method interaction, indicated by diverging community similarities within and between sampling methods as the growing season progressed. Similarly, correlations between sampling methods for the three most abundant species over multiple time periods indicated differences in relative bias between sampling methods and suggest that bias is not consistent throughout the growing season, particularly for sticky cards and whole-plant samples. Furthermore, we show that sticky cards produce strongly biased capture rates relative to the other four sampling methods. Precision and efficiency differed between sampling methods; sticky cards produced the most precise (but highly biased) results for adult natural enemies, while walking transects and whole-plant counts were the most efficient methods for detecting coccinellids and O. insidiosus, respectively. Based on bias, precision, and efficiency considerations, the most practical sampling methods for monitoring in soybean include walking transects for coccinellid detection and whole-plant counts for detection of small predators like O. insidiosus. Sweep-netting and quadrat samples are also useful for some applications, when efficiency is not paramount. © The Authors 2015.
Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. An evaluation of fossil tip-dating versus node-age calibrations in tetraodontiform fishes (Teleostei: Percomorphaceae).

    PubMed

    Arcila, Dahiana; Alexander Pyron, R; Tyler, James C; Ortí, Guillermo; Betancur-R, Ricardo

    2015-01-01

    Time-calibrated phylogenies based on molecular data provide a framework for comparative studies. Calibration methods to combine fossil information with molecular phylogenies are, however, under active development, often generating disagreement about the best way to incorporate paleontological data into these analyses. This study provides an empirical comparison of the most widely used approach, based on node-dating priors for relaxed clocks implemented in the programs BEAST and MrBayes, with two recently proposed improvements: one using a new fossilized birth-death process model for node dating (implemented in the program DPPDiv), and the other using a total-evidence or tip-dating method (implemented in MrBayes and BEAST). These methods are applied herein to tetraodontiform fishes, a diverse group of living and extinct taxa that features one of the most extensive fossil records among teleosts. Previous estimates of time-calibrated phylogenies of tetraodontiforms using node-dating methods reported disparate estimates for their age of origin, ranging from the late Jurassic to the early Paleocene (ca. 150-59 Ma). We analyzed a comprehensive dataset with 16 loci and 210 morphological characters, including 131 taxa (95 extant and 36 fossil species) representing all families of fossil and extant tetraodontiforms, under different molecular clock calibration approaches. Results from node-dating methods produced consistently younger ages than the tip-dating approaches. The older ages inferred by tip dating imply an unlikely early-late Jurassic (ca. 185-119 Ma) origin for this order and the existence of extended ghost lineages in their fossil record. Node-based methods, by contrast, produce time estimates that are more consistent with the stratigraphic record, suggesting a late Cretaceous (ca. 86-96 Ma) origin.
We show that the precision of clade age estimates using tip dating increases with the number of fossils analyzed and with the proximity of fossil taxa to the node under assessment. This study suggests that current implementations of tip dating may overestimate ages of divergence in calibrated phylogenies. It also provides a comprehensive phylogenetic framework for tetraodontiform systematics and future comparative studies. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Method: a single nucleotide polymorphism genotyping method for Wheat streak mosaic virus

    PubMed Central

    2012-01-01

    Background: The September 11, 2001 attacks on the World Trade Center and the Pentagon increased the concern about the potential for terrorist attacks on many vulnerable sectors of the US, including agriculture. The concentrated nature of crops, easily obtainable biological agents, and highly detrimental impacts make agroterrorism a potential threat. Although procedures for an effective criminal investigation and attribution following such an attack are available, important enhancements are still needed, one of which is the capability for fine discrimination among pathogen strains. The purpose of this study was to develop a molecular typing assay for use in a forensic investigation, using Wheat streak mosaic virus (WSMV) as a model plant virus. Method: This genotyping technique utilizes single-base primer extension to generate a genetic fingerprint. Fifteen single nucleotide polymorphisms (SNPs) within the coat protein and helper component-protease genes were selected as the genetic markers for this assay. Assay optimization and sensitivity testing were conducted using synthetic targets. WSMV strains and field isolates were collected from regions around the world and used to evaluate the assay for discrimination. The assay specificity was tested against a panel consisting of genetic and environmental near-neighbors. Result: Each WSMV strain or field isolate tested produced a unique SNP fingerprint, with the exception of three isolates collected within the same geographic location that produced indistinguishable fingerprints. The results were consistent among replicates, demonstrating the reproducibility of the assay. No SNP fingerprints were generated from organisms included in the near-neighbor panel, suggesting the assay is specific for WSMV. Using synthetic targets, a complete profile could be generated from as little as 7.15 fmoles of cDNA. Conclusion: The molecular typing method presented is one tool that could be incorporated into the forensic science toolbox after a thorough validation study. This method incorporates molecular biology techniques that are already well established in research and diagnostic laboratories, allowing for an easy introduction of this method into existing laboratories. Keywords: single nucleotide polymorphisms, genotyping, plant pathology, viruses, microbial forensics, single-base primer extension, SNaPshot Multiplex Kit PMID:22594601

  8. Methods for producing complex films, and films produced thereby

    DOEpatents

    Duty, Chad E.; Bennett, Charlee J. C.; Moon, Ji -Won; Phelps, Tommy J.; Blue, Craig A.; Dai, Quanqin; Hu, Michael Z.; Ivanov, Ilia N.; Jellison, Jr., Gerald E.; Love, Lonnie J.; Ott, Ronald D.; Parish, Chad M.; Walker, Steven

    2015-11-24

    A method for producing a film, the method comprising melting a layer of precursor particles on a substrate until at least a portion of the melted particles are planarized and merged to produce the film. The invention is also directed to a method for producing a photovoltaic film, the method comprising depositing particles having a photovoltaic or other property onto a substrate, and affixing the particles to the substrate, wherein the particles may or may not be subsequently melted. Also described herein are films produced by these methods, methods for producing a patterned film on a substrate, and methods for producing a multilayer structure.

  9. Qualitative Comparison of Streamflow Information Programs of the U.S. Geological Survey and Three Non-Federal Agencies

    USGS Publications Warehouse

    Norris, J. Michael; Lewis, Michael; Dorsey, Michael; Kimbrough, Robert; Holmes, Robert R.; Staubitz, Ward

    2008-01-01

    A qualitative comparison was made of the streamgaging programs of the U.S. Geological Survey (USGS) and three non-Federal agencies in terms of approximate costs and streamflow-information products produced. The three non-Federal agencies provided the USGS with detailed information on their streamgaging program and related costs, and the USGS explored, through publicly available Web sites and one-on-one discussions, the comparability of the streamflow information produced. The type and purpose of streamgages operated, the quality of streamflow record produced, and cost-accounting methods have a great effect on streamgaging costs. There are many uses of streamflow information, and the information requirements for streamgaging programs differ greatly across this range of purposes. A premise of the USGS streamgaging program is that the network must produce consistent data of sufficient quality to support the broadest range of possible uses. Other networks may have a narrower range of purposes; as a consequence, the method of operation, data-quality objectives, and information delivery may be different from those for a multipurpose network. As a result, direct comparison of the overall cost (or of the cost per streamgage) among these programs is not possible. The analysis is, nonetheless, very instructive and provides USGS program managers, agency leadership, and other agency streamgaging program managers useful insight to influence future decisions. Even though the comparison of streamgaging costs and streamflow information products was qualitative, this analysis does offer useful insights on longstanding questions of USGS streamgaging costs.

  10. Production of recombinant adeno-associated vectors using two bioreactor configurations at different scales

    PubMed Central

    Negrete, Alejandro; Kotin, Robert M.

    2007-01-01

    The conventional methods for producing recombinant adeno-associated virus (rAAV) rely on transient transfection of adherent mammalian cells. To gain acceptance and achieve current good manufacturing practice (cGMP) compliance, a clinical-grade rAAV production process should have the following qualities: simplicity, consistency, cost effectiveness, and scalability. Currently, the only viable method for producing rAAV at large scale, e.g. ≥10^16 particles per production run, utilizes Baculovirus Expression Vectors (BEVs) and insect cell suspension cultures. The previously described rAAV production in 40 L culture using a stirred-tank bioreactor requires special conditions for implementation and operation not available in all laboratories. Alternatives to producing rAAV in stirred-tank bioreactors are single-use, disposable bioreactors, e.g. Wave™. The disposable bags are purchased pre-sterilized, thereby eliminating the need for end-user sterilization and also avoiding cleaning steps between production runs, thus facilitating the production process. In this study, rAAV production in stirred-tank and Wave™ bioreactors was compared. The working volumes were 10 L and 40 L for the stirred-tank bioreactors and 5 L and 20 L for the Wave™ bioreactors. Comparable yields of rAAV, ~2×10^13 particles per liter of cell culture, were obtained in all volumes and configurations. These results demonstrate that producing rAAV at large scale using BEVs is reproducible, scalable, and independent of the bioreactor configuration. Keywords: adeno-associated vectors; large-scale production; stirred tank bioreactor; wave bioreactor; gene therapy. PMID:17606302

  11. Functional Wigner representation of quantum dynamics of Bose-Einstein condensate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Opanchuk, B.; Drummond, P. D.

    2013-04-15

    We develop a method of simulating the full quantum field dynamics of multi-mode multi-component Bose-Einstein condensates in a trap. We use the truncated Wigner representation to obtain a probabilistic theory that can be sampled. This method produces c-number stochastic equations which may be solved using conventional stochastic methods. The technique is valid for large mode occupation numbers. We give a detailed derivation of methods of functional Wigner representation appropriate for quantum fields. Our approach describes spatial evolution of spinor components and properly accounts for nonlinear losses. Such techniques are applicable to calculating the leading quantum corrections, including effects such as quantum squeezing, entanglement, EPR correlations, and interactions with engineered nonlinear reservoirs. By using a consistent expansion in the inverse density, we are able to explain an inconsistency in the nonlinear loss equations found by earlier authors.

  12. Predicting recreational water quality advisories: A comparison of statistical methods

    USGS Publications Warehouse

    Brooks, Wesley R.; Corsi, Steven R.; Fienen, Michael N.; Carvin, Rebecca B.

    2016-01-01

    Epidemiological studies indicate that fecal indicator bacteria (FIB) in beach water are associated with illnesses among people having contact with the water. In order to mitigate public health impacts, many beaches are posted with an advisory when the concentration of FIB exceeds a beach action value. The most commonly used method of measuring FIB concentration takes 18–24 h before returning a result. In order to avoid the 24 h lag, it has become common to “nowcast” the FIB concentration using statistical regressions on environmental surrogate variables. Most commonly, nowcast models are estimated using ordinary least squares regression, but other regression methods from the statistical and machine learning literature are sometimes used. This study compares 14 regression methods across 7 Wisconsin beaches to identify which consistently produces the most accurate predictions. A random forest model is identified as the most accurate, followed by multiple regression fit using the adaptive LASSO.
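    The model comparison described above can be sketched on synthetic data. The sketch below assumes scikit-learn is available, invents the surrogate variables and FIB response, and uses plain LassoCV rather than the study's adaptive LASSO; none of the Wisconsin beach data is used.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# hypothetical environmental surrogates, e.g. turbidity, rainfall, wind
X = rng.normal(size=(n, 3))
# synthetic log-FIB response with a nonlinear term a linear model cannot fit
y = 1.5 * X[:, 0] ** 2 + 0.8 * X[:, 1] + rng.normal(scale=0.5, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "lasso": LassoCV(cv=5),
}
results = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    results[name] = mean_squared_error(y_te, model.predict(X_te))
    print(f"{name}: test MSE = {results[name]:.3f}")
```

    On this synthetic set the nonlinear learner wins, mirroring the study's ranking; on real beach data the outcome depends on the predictors available.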

  13. Passive particle dosimetry. [silver halide crystal growth

    NASA Technical Reports Server (NTRS)

    Childs, C. B.

    1977-01-01

    Present methods of dosimetry are reviewed with emphasis on the processes using silver chloride crystals for ionizing particle dosimetry. Differences between the ability of various crystals to record ionizing particle paths are directly related to impurities in the range of a few ppm (parts per million). To understand the roles of these impurities in the process, a method for consistent production of high-purity silver chloride and silver bromide was developed which yields silver halides with a detectable impurity content of less than 1 ppm. This high-purity silver chloride was used in growing crystals with controlled doping. Crystals were grown by both the Czochralski method and the Bridgman method, and the Bridgman-grown crystals were used for the experiments discussed. The distribution coefficients of ten divalent cations were determined for the Bridgman crystals. The best dosimeters were made with silver chloride crystals containing 5 to 10 ppm of lead; other impurities tested did not produce proper dosimeters.

  14. Capillary electrophoresis method for the analysis of organic acids and amino acids in the presence of strongly alternating concentrations of aqueous lactic acid.

    PubMed

    Laube, Hendrik; Boden, Jana; Schneider, Roland

    2017-07-01

    During the production of bio-based bulk chemicals, such as lactic acid (LA), organic impurities have to be removed to produce a ready-to-market product. A capillary electrophoresis method for the simultaneous detection of LA and organic impurities in less than 10 min was developed. LA and organic impurities were detected using a direct UV detection method with micellar background electrolyte, which consisted of borate and sodium dodecyl sulfate. We investigated the effects of electrolyte composition and temperature on the speed, sensitivity, and robustness of the separation. A few validation parameters, such as linearity, limit of detection, and internal and external standards, were evaluated under optimized conditions. The method was applied for the detection of LA and organic impurities, including tyrosine, phenylalanine, and pyroglutamic acid, in samples from a continuous LA fermentation process from post-extraction tapioca starch and yeast extract.

  15. Pretreatment of Cellulose By Electron Beam Irradiation Method

    NASA Astrophysics Data System (ADS)

    Jusri, N. A. A.; Azizan, A.; Ibrahim, N.; Salleh, R. Mohd; Rahman, M. F. Abd

    2018-05-01

    Pretreatment of lignocellulosic biomass (LCB) to produce biofuel has been conducted using various methods, including physical, chemical, physicochemical, and biological approaches. The bioethanol conversion process typically involves several steps: pretreatment, hydrolysis, fermentation, and separation. In this project, microcrystalline cellulose (MCC) was used in place of LCB, since cellulose is the largest constituent of LCB, to investigate the effectiveness of a new pretreatment method using radiation technology. Irradiation at different doses (100 kGy to 1000 kGy) was conducted using electron beam accelerator equipment at Agensi Nuklear Malaysia. Fourier Transform Infrared Spectroscopy (FTIR) and X-Ray Diffraction (XRD) analyses were performed to further understand the effect of the suggested pretreatment step on the MCC. Through this method, namely IRR-LCB, an ideal and optimal condition for pretreatment prior to the production of biofuel from LCB may be introduced.

  16. The lowest-order weak Galerkin finite element method for the Darcy equation on quadrilateral and hybrid meshes

    NASA Astrophysics Data System (ADS)

    Liu, Jiangguo; Tavener, Simon; Wang, Zhuoran

    2018-04-01

    This paper investigates the lowest-order weak Galerkin finite element method for solving the Darcy equation on quadrilateral and hybrid meshes consisting of quadrilaterals and triangles. In this approach, the pressure is approximated by constants in element interiors and on edges. The discrete weak gradients of these constant basis functions are specified in local Raviart-Thomas spaces, specifically RT0 for triangles and unmapped RT[0] for quadrilaterals. These discrete weak gradients are used to approximate the classical gradient when solving the Darcy equation. The method produces continuous normal fluxes and is locally mass-conservative regardless of mesh quality, and it has optimal-order convergence in pressure, velocity, and normal flux when the quadrilaterals are asymptotically parallelograms. Implementation is straightforward and results in symmetric positive-definite discrete linear systems. We present numerical experiments and comparisons with other existing methods.

  17. DairyBeef: maximizing quality and profits--a consistent food safety message.

    PubMed

    Moore, D A; Kirk, J H; Klingborg, D J; Garry, F; Wailes, W; Dalton, J; Busboom, J; Sams, R W; Poe, M; Payne, M; Marchello, J; Looper, M; Falk, D; Wright, T

    2004-01-01

    To respond to meat safety and quality issues in dairy market cattle, a collaborative project team for 7 western states was established to develop educational resources providing a consistent meat safety and quality message to dairy producers, farm advisors, and veterinarians. The team produced an educational website and CD-ROM course that included videos, narrated slide sets, and on-farm tools. The objectives of this course were: 1) to help producers and their advisors understand market cattle food safety and quality issues, 2) to help maintain markets for these cows, and 3) to help producers identify ways to improve the quality of dairy cattle going to slaughter. DairyBeef: Maximizing Quality & Profits consists of 6 sections, including 4 core segments. Successful completion of quizzes following each core segment is required for participants to receive a certificate of completion. A formative evaluation of the program revealed the necessity for minor content and technological changes to the web-based course. All evaluators considered the materials relevant to dairy producers. After editing, the course became available in February 2003. Between February and May 2003, 21 individuals received certificates of completion.

  18. Automatic comic page image understanding based on edge segment analysis

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai

    2013-12-01

    Comic page image understanding aims to analyse the layout of the comic page images by detecting the storyboards and identifying the reading order automatically. It is the key technique to produce the digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and further identify the reading order of these storyboards. The proposed method is performed on a data set consisting of 2000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms the existing methods.

  19. SU-F-J-200: An Improved Method for Event Selection in Compton Camera Imaging for Particle Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mackin, D; Beddar, S; Polf, J

    2016-06-15

    Purpose: The uncertainty in the beam range in particle therapy limits the conformality of the dose distributions. Compton scatter cameras (CC), which measure the prompt gamma rays produced by nuclear interactions in the patient tissue, can reduce this uncertainty by producing 3D images confirming the particle beam range and dose delivery. However, the high intensity and short time windows of the particle beams limit the number of gammas detected. We attempt to address this problem by developing a method for filtering gamma ray scattering events from the background by applying the known gamma ray spectrum. Methods: We used a 4-stage Compton camera to record in list mode the energy deposition and scatter positions of gammas from a Co-60 source. Each CC stage contained a 4×4 array of CdZnTe crystals. To produce images, we used a back-projection algorithm and four filtering methods: basic, energy windowing, delta energy (ΔE), and delta scattering angle (Δθ). Basic filtering requires events to be physically consistent. Energy windowing requires event energy to fall within a defined range. ΔE filtering selects events with the minimum difference between the measured and a known gamma energy (1.17 and 1.33 MeV for Co-60). Δθ filtering selects events with the minimum difference between the measured scattering angle and the angle corresponding to a known gamma energy. Results: Energy window filtering reduced the FWHM from 197.8 mm for basic filtering to 78.3 mm. ΔE and Δθ filtering achieved the best results, FWHMs of 64.3 and 55.6 mm, respectively. In general, Δθ filtering selected events with scattering angles < 40°, while ΔE filtering selected events with angles > 60°. Conclusion: Filtering CC events improved the quality and resolution of the corresponding images. ΔE and Δθ filtering produced similar results, but each favored different events.
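    The ΔE selection idea can be illustrated in a few lines. This sketch is not the authors' code; the 50 keV tolerance, detector resolution, and event counts are invented for the example.

```python
import numpy as np

CO60_LINES = np.array([1.17, 1.33])  # known Co-60 gamma energies, MeV

def delta_e(energies):
    """Distance from each event's summed deposited energy to the nearest known line."""
    return np.min(np.abs(energies[:, None] - CO60_LINES[None, :]), axis=1)

def filter_events(energies, tol=0.05):
    """Keep events within an (assumed) 50 keV tolerance of a known line."""
    return energies[delta_e(energies) < tol]

rng = np.random.default_rng(1)
# synthetic list-mode data: true lines smeared by detector resolution, plus flat background
signal = rng.choice(CO60_LINES, 800) + rng.normal(0.0, 0.02, 800)
background = rng.uniform(0.2, 2.0, 200)
events = np.concatenate([signal, background])
kept = filter_events(events)
print(f"kept {kept.size} of {events.size} events")
```

    Most smeared line events survive while most of the flat background is rejected, which is the effect the filtering is after.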

  20. A New Optical Technique for Rapid Determination of Creep and Fatigue Thresholds at High Temperature.

    DTIC Science & Technology

    1984-04-01

    measurements, made far away from the crack tip, produced much smoother and more sensible results. Measurements by Macha et al (16) agree very well with... dependent upon the measurement position. It becomes independent of position far enough away from the tip; this is consistent with the results of Macha, et... D. E. Macha, W. N. Sharpe, Jr., and A. P. ..., "A Laser Interferometry Method for Experimental Stress Intensity Factor Calibration", AST

  1. Sintering of compacts of UN, (U,Pu)N, and PuN

    DOEpatents

    Tennery, V.J.; Godfrey, T.G.; Bomar, E.S.

    1973-10-16

    A method is provided for preparing a densified compact of a metal nitride selected from the group consisting of UN, (U,Pu)N, and PuN which comprises heating a green compact of at least one selected nitride in the mononitride single-phase region, as displayed by a phase diagram of the mononitride of said compact, in a nitrogen atmosphere at a pressure of nitrogen less than 760 torr. At a given temperature, this process produces a single-phase structure and a maximal sintered density as measured by mercury displacement. (Official Gazette)

  2. Analysis of wind-driven ambient noise in a shallow water environment with a sandy seabed.

    PubMed

    Knobles, D P; Joshi, S M; Gaul, R D; Graber, H C; Williams, N J

    2008-09-01

    On the New Jersey continental shelf ambient sound levels were recorded during tropical storm Ernesto that produced wind speeds up to 40 knots in early September 2006. The seabed at the position of the acoustic measurements can be approximately described as coarse sand. Differences between the ambient noise levels for the New Jersey shelf measurements and deep water reference measurements are modeled using both normal mode and ray methods. The analysis is consistent with a nonlinear frequency dependent seabed attenuation for the New Jersey site.

  3. Fatigue FEM analysis in the case of brazed aluminium alloy 3L59 used in aeronautical industry

    NASA Astrophysics Data System (ADS)

    Dimitrescu, A.; Amza, Gh; Niţoi, D. F.; Amza, C. Gh; Apostolescu, Z.

    2016-08-01

    The use, on a larger scale, of brazed aluminum alloys in the aerospace industry has led to the need for a detailed study of the behavior of these assemblies. They are built from 6061 aluminum alloy (3L59) brazed with aluminum alloy A103. Therefore, a finite element method (FEM) simulation of durability is necessary, consisting of observing gradual deterioration until failure. These studies are required prior to the stage of producing the assembly and testing it by traditional methods.

  4. A Soil-free System for Assaying Nematicidal Activity of Chemicals

    PubMed Central

    Preiser, F. A.; Babu, J. R.; Haidri, A. A.

    1981-01-01

    A biological assay system for studying the nematicidal activity of chemicals has been devised using a model consisting of cucumber (Cucumis sativus L. cv. Long Marketer) seedlings growing in the diSPo® growth-pouch apparatus. Meloidogyne incognita was used as the test organism. The response was quantified in terms of the numbers of galls produced. Statistical procedures were applied to estimate the ED50 values of currently available nematicides. This system permits accurate quantification of galling and requires much less space and effort than the currently used methods. PMID:19300800

  5. A Soil-free System for Assaying Nematicidal Activity of Chemicals.

    PubMed

    Preiser, F A; Babu, J R; Haidri, A A

    1981-10-01

    A biological assay system for studying the nematicidal activity of chemicals has been devised using a model consisting of cucumber (Cucumis sativus L. cv. Long Marketer) seedlings growing in the diSPo® growth-pouch apparatus. Meloidogyne incognita was used as the test organism. The response was quantified in terms of the numbers of galls produced. Statistical procedures were applied to estimate the ED50 values of currently available nematicides. This system permits accurate quantification of galling and requires much less space and effort than the currently used methods.

  6. The manufacture of synthetic non-sintered and degradable bone grafting substitutes.

    PubMed

    Gerike, W; Bienengräber, V; Henkel, K-O; Bayerlein, T; Proff, P; Gedrange, T; Gerber, Th

    2006-02-01

    A new synthetic bone grafting substitute (NanoBone, ARTOSS GmbH, Germany) is presented. It is produced by a new technique, the sol-gel method. This bone grafting substitute consists of nanocrystalline hydroxyapatite (HA) and nanostructured silica (SiO2). Its highly porous structure provides good osteoconductivity. In addition, the material is completely biodegraded and new bone forms in its place. It has been demonstrated that NanoBone is biodegraded by osteoclasts in a manner comparable to the natural bone remodelling process.

  7. Comparison of subjective, pharmacokinetic, and physiologic effects of marijuana smoked as joints and blunts

    PubMed Central

    Cooper, Ziva D.; Haney, Margaret

    2009-01-01

    Recent increases in marijuana smoking among the young adult population have been accompanied by the popularization of smoking marijuana as blunts instead of as joints. Blunts consist of marijuana wrapped in tobacco leaves, whereas joints consist of marijuana wrapped in cigarette paper. To date, the effects of marijuana smoked as joints and blunts have not been systematically compared. The current within-subject, randomized, double-blind, placebo-controlled study sought to directly compare the subjective, physiologic, and pharmacokinetic effects of marijuana smoked by these two methods. Marijuana blunt smokers (12 women; 12 men) were recruited and participated in a 6-session outpatient study. Participants were blindfolded and smoked three puffs from either a blunt or a joint containing marijuana with varying delta-9-tetrahydrocannabinol (THC) concentrations (0.0, 1.8, and 3.6%). Subjective, physiological (heart rate, blood pressure, carbon monoxide levels) and pharmacokinetic effects (plasma THC concentration) were monitored before and at specified time points for three hours after smoking. Joints produced greater increases in plasma THC and subjective ratings of marijuana intoxication, strength, and quality compared to blunts, and these effects were more pronounced in women compared to men. However, blunts produced equivalent increases in heart rate and higher carbon monoxide levels than joints, despite producing lower levels of plasma THC. These findings demonstrate that smoking marijuana in a tobacco leaf may increase the risks of marijuana use by enhancing carbon monoxide exposure and increasing heart rate compared to joints. PMID:19443132

  8. New contraceptive eligibility checklists for provision of combined oral contraceptives and depot-medroxyprogesterone acetate in community-based programmes.

    PubMed Central

    Stang, A.; Schwingl, P.; Rivera, R.

    2000-01-01

    Community-based services (CBS) have long used checklists to determine eligibility for contraceptive method use, in particular for combined oral contraceptives (COCs) and the 3-month injectable contraceptive depot-medroxyprogesterone acetate (DMPA). As safety information changes, however, checklists can quickly become outdated. Inconsistent checklists and eligibility criteria often cause uneven access to contraceptives. In 1996, WHO produced updated eligibility criteria for the use of all contraceptive methods. Based on these criteria, new checklists for COCs and DMPA were developed. This article describes the new checklists and their development. Several rounds of expert review produced checklists that were correct, comprehensible and consistent with the eligibility requirements. Nevertheless, field-testing of the checklists revealed that approximately half (48%) of the respondents felt that one or more questions still needed greater comprehensibility. These findings indicated the need for a checklist guide. In March 2000, WHO convened a meeting of experts to review the medical eligibility criteria for contraceptive use. The article also reflects the resulting updated checklist. PMID:10994285

  9. Designing a new type of neutron detector for neutron and gamma-ray discrimination via GEANT4.

    PubMed

    Shan, Qing; Chu, Shengnan; Ling, Yongsheng; Cai, Pingkun; Jia, Wenbao

    2016-04-01

    Design of a new type of neutron detector, consisting of a fast neutron converter, plastic scintillator, and Cherenkov detector, to discriminate 14-MeV fast neutrons and gamma rays in a pulsed n-γ mixed field and monitor their fluxes is reported in this study. Both neutrons and gamma rays can produce fluorescence in the scintillator when they are incident on the detector. However, only the secondary charged particles of the gamma rays can produce Cherenkov light in the Cherenkov detector. The neutron and gamma-ray fluxes can therefore be calculated by measuring the fluorescence and Cherenkov light. The GEANT4 Monte Carlo simulation toolkit is used to simulate the whole process occurring in the detector, whose optimum parameters are known. Analysis of the simulation results leads to a calculation method for neutron flux. This method is verified by calculating the neutron fluxes of pulsed n-γ mixed fields with different n/γ ratios, and the results show that the relative errors of all calculations are <5%. Copyright © 2016 Elsevier Ltd. All rights reserved.
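    The two-detector idea reduces to a pair of linear relations: scintillation light has neutron and gamma contributions, while Cherenkov light comes only from gamma secondaries, so two readings determine the two fluxes. The sketch below uses invented response coefficients purely to show the algebra; it is not the paper's calculation.

```python
# assumed per-unit-flux light yields (arbitrary units, for illustration only)
K_N_SCINT = 2.0   # scintillator response to neutrons
K_G_SCINT = 1.2   # scintillator response to gammas
K_G_CHER = 0.7    # Cherenkov response to gammas (neutrons contribute none)

def unfold(scint_light, cherenkov_light):
    """Recover (neutron flux, gamma flux) from the two light readings."""
    phi_g = cherenkov_light / K_G_CHER
    phi_n = (scint_light - K_G_SCINT * phi_g) / K_N_SCINT
    return phi_n, phi_g

# forward-simulate a mixed field, then recover it
phi_n_true, phi_g_true = 5.0, 3.0
S = K_N_SCINT * phi_n_true + K_G_SCINT * phi_g_true
C = K_G_CHER * phi_g_true
print(tuple(round(v, 6) for v in unfold(S, C)))  # → (5.0, 3.0)
```

    In practice the response coefficients would come from the GEANT4 calibration of the detector rather than being known constants.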

  10. Dense depth maps from correspondences derived from perceived motion

    NASA Astrophysics Data System (ADS)

    Kirby, Richard; Whitaker, Ross

    2017-01-01

    Many computer vision applications require finding corresponding points between images and using the corresponding points to estimate disparity. Today's correspondence finding algorithms primarily use image features or pixel intensities common between image pairs. Some 3-D computer vision applications, however, do not produce the desired results using correspondences derived from image features or pixel intensities. Two examples are the multimodal camera rig and the center region of a coaxial camera rig. We present an image correspondence finding technique that aligns pairs of image sequences using optical flow fields. The optical flow fields provide information about the structure and motion of the scene, which are not available in still images but can be used in image alignment. We apply the technique to a dual focal length stereo camera rig consisting of a visible light-infrared camera pair and to a coaxial camera rig. We test our method on real image sequences and compare our results with the state-of-the-art multimodal and structure from motion (SfM) algorithms. Our method produces more accurate depth and scene velocity reconstruction estimates than the state-of-the-art multimodal and SfM algorithms.

  11. Liquid phase mass production of air-stable black phosphorus/phospholipids nanocomposite with ultralow tunneling barrier

    NASA Astrophysics Data System (ADS)

    Zhang, Qiankun; Liu, Yinan; Lai, Jiawei; Qi, Shaomian; An, Chunhua; Lu, Yao; Duan, Xuexin; Pang, Wei; Zhang, Daihua; Sun, Dong; Chen, Jian-Hao; Liu, Jing

    2018-04-01

    Few-layer black phosphorus (FLBP), a recently discovered two-dimensional semiconductor, has attracted substantial attention in the scientific and technical communities due to its great potential in electronic and optoelectronic applications. However, the reactivity of FLBP flakes with ambient species limits its direct applications. Among various methods to passivate FLBP in the ambient environment, nanocomposites mixing FLBP flakes with a stable matrix may be one of the most promising approaches for industrial applications. Here, we report a simple one-step procedure to mass produce air-stable FLBP/phospholipids nanocomposite in the liquid phase. The resultant nanocomposite is found to have an ultralow tunneling barrier for charge carriers, which can be described by an Efros-Shklovskii variable range hopping mechanism. Devices made from such mass-produced FLBP/phospholipids nanocomposite show highly stable electrical conductivity and opto-electrical response in ambient conditions, indicating its promise for both electronic and optoelectronic applications. This method could also be generalized to the mass production of nanocomposites consisting of other air-sensitive 2D materials, such as FeSe, NbSe2, WTe2, etc.

  12. From diets to foods: using linear programming to formulate a nutritious, minimum-cost porridge mix for children aged 1 to 2 years.

    PubMed

    De Carvalho, Irene Stuart Torrié; Granfeldt, Yvonne; Dejmek, Petr; Håkansson, Andreas

    2015-03-01

    Linear programming has been used extensively as a tool for nutritional recommendations. Extending the methodology to food formulation presents new challenges, since not all combinations of nutritious ingredients will produce an acceptable food; it would also help in implementation and in ensuring the feasibility of the suggested recommendations. The objective was to extend the previously used linear programming methodology from diet optimization to food formulation using consistency constraints and, in addition, to exemplify usability using the case of a porridge mix formulation for emergency situations in rural Mozambique. The linear programming method was extended with a consistency constraint based on previously published empirical studies on the swelling of starch in soft porridges. The new method was exemplified using the formulation of a nutritious, minimum-cost porridge mix for children aged 1 to 2 years for use as a complete relief food, based primarily on local ingredients, in rural Mozambique. A nutritious porridge fulfilling the consistency constraints was found; however, the minimum cost was unfeasible with local ingredients only. This illustrates the challenges in formulating nutritious yet economically feasible foods from local ingredients. The high cost was caused by the high cost of mineral-rich foods. A nutritious, low-cost porridge that fulfills the consistency constraints was obtained by including supplements of zinc and calcium salts as ingredients. The optimizations were successful in fulfilling all constraints and provided a feasible porridge, showing that the extended constrained linear programming methodology provides a systematic tool for designing nutritious foods.
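    The formulation-as-LP idea, including a consistency-style constraint, can be sketched with SciPy. Every number below (the ingredients, costs, nutrient contents, and the starch cap standing in for the swelling constraint) is invented for illustration and is not the paper's data.

```python
import numpy as np
from scipy.optimize import linprog

# hypothetical ingredients: maize flour, bean flour, oil, sugar
# x[i] = fraction of a 100 g dry mix; all figures are per 100 g and invented
cost = np.array([0.05, 0.12, 0.20, 0.08])   # $ per 100 g
energy = np.array([360, 340, 880, 400])     # kcal per 100 g
protein = np.array([9, 22, 0, 0])           # g per 100 g
starch = np.array([70, 45, 0, 0])           # g per 100 g

# minimize cost s.t. energy >= 450 kcal, protein >= 12 g,
# starch <= 60 g (stand-in consistency constraint limiting swelling),
# and the fractions summing to one
A_ub = np.vstack([-energy, -protein, starch])
b_ub = np.array([-450.0, -12.0, 60.0])
res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
              A_eq=np.ones((1, 4)), b_eq=[1.0],
              bounds=[(0, None)] * 4)
print("grams per 100 g mix:", np.round(res.x * 100, 1))
print("cost of the mix: $%.3f" % res.fun)
```

    The consistency requirement enters the LP as just another linear row, which is what makes the extension from diets to foods tractable.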

  13. Monolayer nanoparticle-covered liquid marbles derived from a sol-gel coating

    NASA Astrophysics Data System (ADS)

    Li, Xiaoguang; Wang, Yiqi; Huang, Junchao; Yang, Yao; Wang, Renxian; Geng, Xingguo; Zang, Duyang

    2017-12-01

    A sol-gel coating consisting of hydrophobic SiO2 nanoparticles (NPs) was used to produce monolayer NP-covered (mNPc) liquid marbles. The simplest approach was rolling a droplet on this coating, and an identifiable signet allowed determination of the coverage ratio of the resulting liquid marble. Alternatively, the particles were squeezed onto a droplet surface with two such coatings, generating surface buckling from interfacial NP jamming, and then a liquid marble was produced via a jamming-relief process in which water was added into the buckled droplet. This process revealed an ˜7% reduction in particle distance after interfacial jamming. The mNPc liquid marbles obtained by the two methods were transparent with smooth profiles, as naked droplets, and could be advantageously used in fundamental and applied researches for their unique functions.

  14. Application of clustering methods: Regularized Markov clustering (R-MCL) for analyzing dengue virus similarity

    NASA Astrophysics Data System (ADS)

    Lestari, D.; Raharjo, D.; Bustamam, A.; Abdillah, B.; Widhianto, W.

    2017-07-01

    Dengue virus consists of 10 different constituent proteins and is classified into 4 major serotypes (DEN 1 - DEN 4). This study was designed to perform clustering of 30 protein sequences of dengue virus taken from the Virus Pathogen Database and Analysis Resource (VIPR) using the Regularized Markov Clustering (R-MCL) algorithm and then analyze the result. Implemented in Python 3.4, the R-MCL algorithm produces 8 clusters, with more than one centroid in several clusters. The number of centroids indicates the density of interaction. Protein interactions connected in a tissue form a protein complex that serves as a unit of a specific biological process. The analysis shows that R-MCL clustering produces clusters of the dengue virus family based on the similar roles of their constituent proteins, regardless of serotype.

  15. Method and composition for testing for the presence of an alkali metal

    DOEpatents

    Guon, Jerold

    1981-01-01

    A method and composition for detecting the presence of an alkali metal on the surface of a body such as a metal plate, tank, pipe or the like is provided. The method comprises contacting the surface with a thin film of a liquid composition comprising a light-colored pigment, an acid-base indicator, and a nonionic wetting agent dispersed in a liquid carrier comprising a minor amount of water and a major amount of an organic solvent selected from the group consisting of the lower aliphatic alcohols, ketones and ethers. Any alkali metal present on the surface in elemental form or as an alkali metal hydroxide or alkali metal carbonate will react with the acid-base indicator to produce a contrasting color change in the thin film, which is readily discernible by visual observation or automatic techniques.

  16. Unity PF current-source rectifier based on dynamic trilogic PWM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao Wang; Boon-Teck Ooi

    1993-07-01

    One remaining step in perfecting the stand-alone, unity power factor, regulated current-source PWM rectifier is to reduce cost by bringing the 12-valve converter (consisting of three single-phase full bridges that operate with two-level or bilogic PWM) down to the six-valve bridge. However, the six-valve topology requires a three-level or trilogic PWM strategy that can handle feedback signals. This feature was not available until now. The paper describes a general method of translating three-phase bilogic PWM signals to three-phase trilogic PWM signals. The method of translation retains the characteristics of the bilogic PWM, including the frequency bandwidth. Experiments show that the trilogic PWM signals produced by the method can handle not only stabilizing feedback signals but also signals for active filtering.
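    As a purely illustrative sketch of what a two-level-to-three-level translation involves (a naive line-to-line difference scheme, not necessarily the general method this paper develops), three bilogic phase signals can be combined into three-level line signals:

```python
import numpy as np

f_c, f_m = 1050, 50                      # assumed carrier and modulating frequencies, Hz
t = np.linspace(0.0, 1 / f_m, 2000)      # one modulating cycle
carrier = 4 * np.abs((t * f_c) % 1 - 0.5) - 1  # triangle carrier in [-1, 1]
refs = [0.8 * np.sin(2 * np.pi * f_m * t - 2 * np.pi * k / 3) for k in range(3)]

# bilogic: each phase switched to +1 or -1 by carrier comparison
bilogic = [np.where(r > carrier, 1, -1) for r in refs]
# trilogic: line-to-line differences take exactly three levels {-1, 0, +1}
trilogic = [(bilogic[k] - bilogic[(k + 1) % 3]) // 2 for k in range(3)]
print(sorted({int(v) for v in np.concatenate(trilogic)}))  # → [-1, 0, 1]
```

    The point of the sketch is only that differencing two two-level signals naturally yields a three-level one; the paper's contribution is a translation that also preserves the feedback-handling properties of the bilogic scheme.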

  17. How Many Batches Are Needed for Process Validation under the New FDA Guidance?

    PubMed

    Yang, Harry

    2013-01-01

    The newly updated FDA Guidance for Industry on Process Validation: General Principles and Practices ushers in a life cycle approach to process validation. While the guidance no longer considers the use of traditional three-batch validation appropriate, it does not prescribe the number of validation batches for a prospective validation protocol, nor does it provide specific methods to determine it. This potentially could leave manufacturers in a quandary. In this paper, I develop a Bayesian method to address the issue. By combining process knowledge gained from Stage 1 Process Design (PD) with expected outcomes of Stage 2 Process Performance Qualification (PPQ), the number of validation batches for PPQ is determined to provide a high level of assurance that the process will consistently produce future batches meeting quality standards. Several examples based on simulated data are presented to illustrate the use of the Bayesian method in helping manufacturers make risk-based decisions for Stage 2 PPQ, and they highlight the advantages of the method over traditional Frequentist approaches. The discussions in the paper lend support for a life cycle and risk-based approach to process validation recommended in the new FDA guidance.
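The paper's exact model is not given in the abstract, but the general shape of such a Bayesian calculation can be sketched with a beta-binomial setup: a Beta(a, b) prior on the per-batch pass rate p encodes Stage 1 (PD) knowledge, and the smallest number of PPQ batches n is chosen such that, if all n batches pass, the posterior probability that p exceeds a target rate reaches the desired assurance. The prior parameters, target, and assurance level below are hypothetical, and this is a sketch of the approach, not the paper's implementation.

```python
import math

def beta_tail(a, b, x, steps=20000):
    """P(p > x) for p ~ Beta(a, b), by midpoint numerical integration."""
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    h = (1.0 - x) / steps
    total = 0.0
    for i in range(steps):
        t = x + (i + 0.5) * h
        total += math.exp(log_norm + (a - 1) * math.log(t)
                          + (b - 1) * math.log1p(-t))
    return total * h

def batches_needed(a, b, target=0.90, assurance=0.95, n_max=50):
    """Smallest n such that, if all n PPQ batches pass, the posterior
    Beta(a + n, b) gives P(p > target) >= assurance."""
    for n in range(1, n_max + 1):
        if beta_tail(a + n, b, target) >= assurance:
            return n
    return None

# Hypothetical strong Stage 1 prior: Beta(30, 1)
print(batches_needed(30, 1, target=0.90, assurance=0.95))  # 1
# Hypothetical weaker Stage 1 prior: Beta(10, 1)
print(batches_needed(10, 1, target=0.90, assurance=0.95))  # 19
```

The two calls show the risk-based behaviour the abstract describes: the stronger the Stage 1 process knowledge, the fewer PPQ batches are needed to reach the same assurance, rather than a fixed three-batch rule.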

  18. Preparation of novel carbon microfiber/carbon nanofiber-dispersed polyvinyl alcohol-based nanocomposite material for lithium-ion electrolyte battery separator.

    PubMed

    Sharma, Ajit K; Khare, Prateek; Singh, Jayant K; Verma, Nishith

    2013-04-01

    A novel nanocomposite material, based on a polyvinyl alcohol precursor dispersed with a web of carbon microfibers and carbon nanofibers, is developed as a lithium (Li)-ion electrolyte battery separator. The primary synthesis steps of the separator material consist of esterification of polyvinyl acetate to produce polyvinyl alcohol gel, ball-milling of the surfactant-dispersed carbon micro-nanofibers, mixing of the milled (~500 nm) fibers into the reactant mixture at the incipience of the polyvinyl alcohol gel formation, and mixing of hydrophobic reagents along with polyethylene glycol as a plasticizer, to produce a thin film of ~25 μm. The produced film, uniformly dispersed with carbon micro-nanofibers, shows dramatically improved performance as a battery separator: the ion conductivity of the electrolyte (LiPF6)-saturated film is measured as 0.119 S cm(-1), approximately two orders of magnitude higher than that of polyvinyl alcohol. The other primary characteristics of the produced film, such as tensile strength, contact angle, and thermal stability, are also found to be superior to those of materials made from other precursors discussed in the literature, including polypropylene and polyethylene. The method of producing the films in this study is novel, simple, environmentally benign, and economically viable. Copyright © 2012 Elsevier B.V. All rights reserved.
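A conductivity figure like the 0.119 S cm⁻¹ reported above is typically obtained from an impedance measurement via σ = t / (R_b · A), where t is the film thickness, R_b the bulk resistance, and A the electrode area. The sketch below runs that arithmetic; only the ~25 μm thickness and the final conductivity come from the abstract, while the bulk resistance and electrode area are hypothetical values chosen to reproduce the reported number.

```python
# Hedged sketch: ionic conductivity from an impedance measurement,
# sigma = t / (R_b * A). Thickness and the target conductivity are from
# the abstract; R_b and A below are hypothetical illustration values.

def ionic_conductivity(thickness_cm, bulk_resistance_ohm, area_cm2):
    """Conductivity in S/cm from film thickness, bulk resistance, area."""
    return thickness_cm / (bulk_resistance_ohm * area_cm2)

t = 25e-4      # ~25 um film thickness, in cm (from the abstract)
R_b = 0.021    # hypothetical bulk resistance, ohm
A = 1.0        # hypothetical electrode area, cm^2
print(f"{ionic_conductivity(t, R_b, A):.3f} S/cm")  # 0.119 S/cm
```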

  19. Generation of a Chinese Hamster Ovary Cell Line Producing Recombinant Human Glucocerebrosidase

    PubMed Central

    Novo, Juliana Branco; Morganti, Ligia; Moro, Ana Maria; Paes Leme, Adriana Franco; Serrano, Solange Maria de Toledo; Raw, Isaias; Ho, Paulo Lee

    2012-01-01

    Impaired activity of the lysosomal enzyme glucocerebrosidase (GCR) results in the inherited metabolic disorder known as Gaucher disease. Current treatment consists of enzyme replacement therapy by administration of exogenous GCR. Although effective, it is exceptionally expensive, and patients worldwide have limited access to this medicine. In Brazil, the public healthcare system provides the drug free of charge for all Gaucher patients, at a cost on the order of $84 million per year. Production of GCR by public institutions in Brazil, however, would significantly reduce the therapy costs. Here, we describe a robust protocol for the generation of a cell line producing recombinant human GCR. The protein was expressed in CHO-DXB11 (dhfr−) cells after stable transfection and gene amplification with methotrexate. As expected, glycosylated GCR was detected by immunoblotting assay both in cell-associated (~64 and 59 kDa) and secreted (63–69 kDa) forms. Analysis of subclones allowed the selection of stable CHO cells producing a secreted functional enzyme, with a calculated productivity of 5.14 pg/cell/day for the highest producer. Although laborious, traditional methods of screening high-producing recombinant cells may represent a valuable alternative for generating expensive biopharmaceuticals in countries with limited resources. PMID:23091360
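A specific productivity in pg/cell/day, like the 5.14 figure above, is conventionally computed as the secreted protein divided by the integral of viable cell density over culture time (the cumulative cell-days, here approximated by the trapezoid rule). The sketch below shows that calculation; the time points, cell counts, and protein total are hypothetical numbers chosen so the result matches the reported figure.

```python
# Hedged sketch: specific productivity qp = protein / integral of viable
# cells over time (trapezoidal). The 5.14 pg/cell/day target is from the
# abstract; the culture data below are hypothetical.

def specific_productivity(days, cells, total_protein_pg):
    """qp in pg/cell/day from time points, viable cell counts, protein."""
    cell_days = sum((cells[i] + cells[i + 1]) / 2 * (days[i + 1] - days[i])
                    for i in range(len(days) - 1))
    return total_protein_pg / cell_days

days = [0, 1, 2, 3]            # hypothetical sampling days
cells = [1e6, 2e6, 3e6, 4e6]   # hypothetical viable cell counts
protein = 3.855e7              # hypothetical total secreted GCR, pg
print(round(specific_productivity(days, cells, protein), 2))  # 5.14
```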

  20. Quality control considerations for size exclusion chromatography with online ICP-MS: a powerful tool for evaluating the size dependence of metal-organic matter complexation.

    PubMed

    McKenzie, Erica R; Young, Thomas M

    2013-01-01

    Size exclusion chromatography (SEC), which separates molecules based on molecular volume, can be coupled with online inductively coupled plasma mass spectrometry (ICP-MS) to explore size-dependent metal-natural organic matter (NOM) complexation. To make effective use of this dual-detector analytical system, the operator should be mindful of quality control measures. Al, Cr, Fe, Se, and Sn all exhibited columnless attenuation, which indicated unintended interactions with system components. Based on signal-to-noise ratio and peak reproducibility between duplicate analyses of environmental samples, consistent peak time and height were observed for Mg, Cl, Mn, Cu, Br, and Pb. Al, V, Fe, Co, Ni, Zn, Se, Cd, Sn, and Sb were less consistent overall, but produced consistent measurements in select samples. Ultrafiltration and centrifugation produced similar peak distributions, but glass fiber filtration produced more high molecular weight (MW) peaks. Storage in glass also produced more high MW peaks than did plastic bottles.
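The two QC criteria named above, signal-to-noise ratio and peak reproducibility between duplicate injections, can be sketched as simple checks on (retention time, peak height) pairs. The tolerance and all numeric values below are hypothetical; the abstract does not state the thresholds the authors used.

```python
# Hedged sketch of the two QC checks named in the abstract. The 10%
# relative tolerance and the example peak values are hypothetical.

def snr(peak_height, baseline_noise_sd):
    """Signal-to-noise ratio of a chromatographic peak."""
    return peak_height / baseline_noise_sd

def reproducible(run1, run2, rel_tol=0.10):
    """Duplicate (time, height) peaks agree within a relative tolerance."""
    (t1, h1), (t2, h2) = run1, run2
    return abs(t1 - t2) / t1 <= rel_tol and abs(h1 - h2) / h1 <= rel_tol

print(snr(1200.0, 40.0))                           # 30.0
print(reproducible((8.2, 1200.0), (8.3, 1150.0)))  # True
```

Elements such as Mg, Cl, Mn, Cu, Br, and Pb would pass a check like `reproducible` across duplicates, while the columnless attenuation reported for Al, Cr, Fe, Se, and Sn would show up before any such check, as signal loss even without the column installed.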
