Zhang, Guodong; Brown, Eric W.; González-Escalona, Narjol
2011-01-01
Contamination of foods, especially produce, with Salmonella spp. is a major concern for public health. Several methods are available for the detection of Salmonella in produce, but their relative efficiency for detecting Salmonella in commonly consumed vegetables, often associated with outbreaks of food poisoning, needs to be confirmed. In this study, the effectiveness of three molecular methods for detection of Salmonella in six produce matrices was evaluated and compared to the FDA microbiological detection method. Samples of cilantro (coriander leaves), lettuce, parsley, spinach, tomato, and jalapeno pepper were inoculated with Salmonella serovars at two different levels (10^5 and <10^1 CFU/25 g of produce). The inoculated produce was assayed by the FDA Salmonella culture method (Bacteriological Analytical Manual) and by three molecular methods: quantitative real-time PCR (qPCR), quantitative reverse transcriptase real-time PCR (RT-qPCR), and loop-mediated isothermal amplification (LAMP). Comparable results were obtained by these four methods, which all detected as little as 2 CFU of Salmonella cells/25 g of produce. All control samples (not inoculated) were negative by the four methods. RT-qPCR detects only live Salmonella cells, obviating the danger of false-positive results from nonviable cells. False negatives (inhibition of either qPCR or RT-qPCR) were avoided by the use of either a DNA or an RNA amplification internal control (IAC). Compared to the conventional culture method, the qPCR, RT-qPCR, and LAMP assays allowed faster and equally accurate detection of Salmonella spp. in six high-risk produce commodities. PMID:21803916
Yamada, Kageto; Kashiwa, Machiko; Arai, Katsumi; Nagano, Noriyuki; Saito, Ryoichi
2016-09-01
We compared three screening methods for carbapenemase-producing Enterobacteriaceae. While the Modified-Hodge test and Carba NP test produced false-negative results for OXA-48-like and mucoid NDM producers, the carbapenem inactivation method (CIM) showed positive results for these isolates. Although the CIM required cultivation time, it is well suited for general clinical laboratories. Copyright © 2016 Elsevier B.V. All rights reserved.
A novel scalable manufacturing process for the production of hydrogel-forming microneedle arrays.
Lutton, Rebecca E M; Larrañeta, Eneko; Kearney, Mary-Carmel; Boyd, Peter; Woolfson, A David; Donnelly, Ryan F
2015-10-15
A novel manufacturing process for fabricating microneedle arrays (MN) has been designed and evaluated. The prototype is able to successfully produce 14×14 MN arrays and is easily capable of scale-up, enabling the transition from laboratory to industry and subsequent commercialisation. The method requires the custom design of metal MN master templates to produce silicone MN moulds using an injection moulding process. The MN arrays produced using this novel method were compared with those made by centrifugation, the traditional method of producing aqueous hydrogel-forming MN arrays. The results showed that there was negligible difference between the two methods, each producing MN arrays of comparable quality. Both types of MN arrays can be successfully inserted in a skin simulant. In both cases the insertion depth was approximately 60% of the needle length and the height reduction after insertion was approximately 3%. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Gusriani, N.; Firdaniza
2018-03-01
The existence of outliers in multiple linear regression analysis causes the Gaussian assumption to be unfulfilled. If the least squares method is nonetheless applied to such data, it produces a model that fails to represent the majority of the data. A regression method that is robust against outliers is therefore needed. This paper compares the Minimum Covariance Determinant (MCD) method and the TELBS method on secondary data on phytoplankton productivity, which contain outliers. Based on the robust coefficient of determination, the MCD method produces a better model than the TELBS method.
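A minimal sketch of the MCD idea, assuming scikit-learn's `MinCovDet` estimator and synthetic data (the phytoplankton data from the paper are not reproduced here): robust regression coefficients can be derived from the robust covariance matrix that MCD produces, so the fit is driven by the uncontaminated majority of the points rather than by the outliers.

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
n = 100
x = rng.normal(0.0, 1.0, n)
y = 2.0 * x + rng.normal(0.0, 0.1, n)   # true slope = 2
x[:10], y[:10] = 8.0, -8.0              # 10% gross outliers

# Ordinary least squares slope: pulled far from 2 by the outliers
ols_slope = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# MCD-based robust slope: beta = Sxy / Sxx from the robust covariance
mcd = MinCovDet(random_state=0).fit(np.column_stack([x, y]))
S = mcd.covariance_
mcd_slope = S[0, 1] / S[0, 0]

print(f"OLS slope: {ols_slope:.2f}, MCD slope: {mcd_slope:.2f}")
```

With 10% of the points replaced by a gross outlier cluster, the least squares slope is dragged away from the true value of 2, while the MCD-based slope stays close to it, which is the behaviour the abstract describes.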
Herbert, Wendy J; Davidson, Adam G; Buford, John A
2010-06-01
The pontomedullary reticular formation (PMRF) of the monkey produces motor outputs to both upper limbs. EMG effects evoked from stimulus-triggered averaging (StimulusTA) were compared with effects from stimulus trains to determine whether both stimulation methods produced comparable results. Flexor and extensor muscles of scapulothoracic, shoulder, elbow, and wrist joints were studied bilaterally in two male M. fascicularis monkeys trained to perform a bilateral reaching task. The frequency of facilitation versus suppression responses evoked in the muscles was compared between methods. Stimulus trains were more efficient (94% of PMRF sites) in producing responses than StimulusTA (55%), and stimulus trains evoked responses from more muscles per site than from StimulusTA. Facilitation (72%) was more common from stimulus trains than StimulusTA (39%). In the overall results, a bilateral reciprocal activation pattern of ipsilateral flexor and contralateral extensor facilitation was evident for StimulusTA and stimulus trains. When the comparison was restricted to cases where both methods produced a response in a given muscle from the same site, agreement was very high, at 80%. For the remaining 20%, discrepancies were accounted for mainly by facilitation from stimulus trains when StimulusTA produced suppression, which was in agreement with the under-representation of suppression in the stimulus train data as a whole. To the extent that the stimulus train method may favor transmission through polysynaptic pathways, these results suggest that polysynaptic pathways from the PMRF more often produce facilitation in muscles that would typically demonstrate suppression with StimulusTA.
A comparative rapid and sensitive method to screen l-asparaginase producing fungi.
Dhale, Mohan A; Mohan-Kumari, H Puttananjaiah
2014-07-01
Fungi are well known to produce various industrial enzymes and secondary metabolites of different colours. Fungi producing l-asparaginase are conventionally screened on medium containing phenol red (PR). The contrast between the enzyme-hydrolysed zone and unhydrolysed l-asparagine is not evident and distinct in medium containing PR or bromothymol blue (BB), owing to coloured secondary metabolite production; PR and BB therefore limit the detection and screening method. In the present investigation, an improved screening method is reported in which methyl red (MR) is incorporated as the pH indicator and compared with PR and BB. Enzyme activity was distinctly observed (red and light-yellow zones) in MR-incorporated medium compared to PR and BB. Copyright © 2014 Elsevier B.V. All rights reserved.
Saub, R; Locker, D; Allison, P
2008-09-01
To compare two methods of developing short forms of the Malaysian Oral Health Impact Profile (OHIP-M) measure. Cross-sectional data obtained using the long form of the OHIP-M were used to produce two types of OHIP-M short form, derived using two different methods, namely regression and item-frequency methods. The short version derived using the regression method is known as Reg-SOHIP(M) and that derived using the frequency method is known as Freq-SOHIP(M). Both short forms contained 14 items. These two forms were then compared in terms of their content, scores, reliability, validity and ability to distinguish between groups. Of the 14 items, only four were common to both forms. The form derived from the frequency method contained more high-prevalence items and produced higher scores than the form derived from the regression method. Both methods produced a reliable and valid measure; however, the frequency method produced a measure that was slightly better at distinguishing between groups. Regardless of the method used to produce the measures, both forms performed equally well when tested for their cross-sectional psychometric properties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vahtrus, Mikk; Šutka, Andris; Institute of Silicate Materials, Riga Technical University, P. Valdena 3/7, Riga LV-1048
2015-02-15
In this work TiO{sub 2} nanofibers produced by needle and needleless electrospinning processes from the same precursor were characterized and compared using Raman spectroscopy, transmission electron microscopy (TEM), scanning electron microscopy (SEM) and in situ SEM nanomechanical testing. Phase composition, morphology, Young's modulus and bending strength values were determined. Weibull statistics were used to evaluate and compare the uniformity of mechanical properties of nanofibers produced by the two different methods. It is shown that both methods yield nanofibers with very similar properties. - Highlights: • TiO{sub 2} nanofibers were produced by needle and needleless electrospinning processes. • Structure was studied by Raman spectroscopy and electron microscopy methods. • Mechanical properties were measured using an advanced in situ SEM cantilevered beam bending technique. • Both methods yield nanofibers with very similar properties.
Moon, Jordan R; Hull, Holly R; Tobkin, Sarah E; Teramoto, Masaru; Karabulut, Murat; Roberts, Michael D; Ryan, Eric D; Kim, So Jung; Dalbo, Vincent J; Walter, Ashley A; Smith, Abbie T; Cramer, Joel T; Stout, Jeffrey R
2007-01-01
Background Methods used to estimate percent body fat can be classified as a laboratory or field technique. However, the validity of these methods compared to multiple-compartment models has not been fully established. This investigation sought to determine the validity of field and laboratory methods for estimating percent fat (%fat) in healthy college-age women compared to the Siri three-compartment model (3C). Methods Thirty Caucasian women (21.1 ± 1.5 yrs; 164.8 ± 4.7 cm; 61.2 ± 6.8 kg) had their %fat estimated by BIA using the BodyGram™ computer program (BIA-AK) and population-specific equation (BIA-Lohman), NIR (Futrex® 6100/XL), a quadratic (SF3JPW) and linear (SF3WB) skinfold equation, air-displacement plethysmography (BP), and hydrostatic weighing (HW). Results All methods produced acceptable total error (TE) values compared to the 3C model. Both laboratory methods produced similar TE values (HW, TE = 2.4%fat; BP, TE = 2.3%fat) when compared to the 3C model, though a significant constant error (CE) was detected for HW (1.5%fat, p ≤ 0.006). The field methods produced acceptable TE values ranging from 1.8 – 3.8 %fat. BIA-AK (TE = 1.8%fat) yielded the lowest TE among the field methods, while BIA-Lohman (TE = 2.1%fat) and NIR (TE = 2.7%fat) produced lower TE values than both skinfold equations (TE > 2.7%fat) compared to the 3C model. Additionally, the SF3JPW %fat estimation equation resulted in a significant CE (2.6%fat, p ≤ 0.007). Conclusion Data suggest that the BP and HW are valid laboratory methods when compared to the 3C model to estimate %fat in college-age Caucasian women. When the use of a laboratory method is not feasible, NIR, BIA-AK, BIA-Lohman, SF3JPW, and SF3WB are acceptable field methods to estimate %fat in this population. PMID:17988393
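The two error statistics reported above can be computed directly: constant error (CE) is the mean signed difference between a method's %fat estimates and the criterion (3C) values, and total error (TE) is the root-mean-square of those differences. A minimal sketch with made-up numbers (not the study's data):

```python
import numpy as np

def constant_error(pred, crit):
    """CE: mean signed difference between method and criterion %fat."""
    return float(np.mean(np.asarray(pred) - np.asarray(crit)))

def total_error(pred, crit):
    """TE: root-mean-square difference between method and criterion %fat."""
    d = np.asarray(pred) - np.asarray(crit)
    return float(np.sqrt(np.mean(d ** 2)))

# Hypothetical %fat estimates from a field method vs 3C criterion values
method = [22.0, 25.0, 28.0, 32.0]
c3 = [20.0, 25.0, 30.0, 31.0]
print(constant_error(method, c3))  # 0.25
print(total_error(method, c3))     # 1.5
```

A method can have a near-zero CE (errors cancel on average) while still having a large TE, which is why the study reports both.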
Sim, K S; Norhisham, S
2016-11-01
A new method based on nonlinear least squares regression (NLLSR) is formulated to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. SNR estimation based on the NLLSR method is compared with three existing methods: nearest neighbourhood, first-order interpolation, and the combination of nearest neighbourhood and first-order interpolation. Samples of SEM images with different textures, contrasts and edges were used to test the performance of the NLLSR method in estimating the SNR values of the SEM images. The NLLSR method is shown to produce better estimation accuracy than the three existing methods; according to the SNR results obtained from the experiment, it yields an SNR error difference of less than approximately 1% compared with the other three methods. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
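As a rough illustration of the autocorrelation-based family of estimators that NLLSR is compared against (the paper's own NLLSR fit is not reproduced here), the sketch below estimates single-image SNR by extrapolating the noise-free zero-lag autocorrelation via first-order (linear) interpolation from lags 1 and 2; the gap between the measured and extrapolated zero-lag values estimates the noise power. The function name and test image are illustrative assumptions.

```python
import numpy as np

def snr_first_order(img):
    """Estimate image SNR (power ratio) from the row-averaged circular
    autocorrelation, extrapolating the noise-free zero-lag value by
    linear (first-order) interpolation from lags 1 and 2."""
    x = img.astype(float) - img.mean()
    n_rows, n = x.shape
    acf = np.zeros(n)
    for row in x:
        F = np.fft.rfft(row)
        acf += np.fft.irfft(F * np.conj(F), n=n) / n  # circular autocorrelation
    acf /= n_rows
    total = acf[0]                 # signal power + noise power
    s0 = 2.0 * acf[1] - acf[2]     # extrapolated noise-free peak at lag 0
    noise = max(total - s0, 1e-12)
    return s0 / noise

# Smooth test pattern plus known Gaussian noise (true SNR = 0.5 / 0.01 = 50)
rng = np.random.default_rng(1)
cols = np.arange(256)
signal = np.sin(2 * np.pi * cols / 256)           # power 0.5 per row
img = np.tile(signal, (64, 1)) + rng.normal(0, 0.1, (64, 256))
print(snr_first_order(img))
```

Because white noise decorrelates after lag 0 while a smooth signal does not, the extrapolation isolates the signal power; on this synthetic image the estimate lands near the true ratio of 50.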
Pugazhendhi, Sugandhi; Dorairaj, Arvind Prasanth
Diabetic patients are more prone to the development of foot ulcers, because their underlying tissues are exposed to colonization by various pathogenic organisms. Hence, biofilm formation plays a vital role in disease progression by conferring antibiotic resistance on the pathogens found in foot infections. The present study demonstrated the correlation of a biofilm assay with the clinical characteristics of diabetic foot infection. Clinical characteristics such as ulcer duration, size, nature, and grade were associated with biofilm production. Our results suggest that as the size of the ulcer with poor glycemic control increased, the organism was more likely to be positive for biofilm formation. A high degree of antibiotic resistance was exhibited by the biofilm-producing gram-positive isolates for erythromycin and by the gram-negative isolates for cefpodoxime. Comparisons of biofilm production using 3 different conventional methods were performed. The strong producers with the tube adherence method were also able to produce biofilm using the cover slip assay method, whereas the weak producers in the tube adherence method had difficulty producing biofilm using the other 2 methods, indicating that the tube adherence method is the best method for assessing biofilm formation. The strong production of biofilm with the conventional method was further confirmed by scanning electron microscopy analysis, because bacteria attached as a distinct layer of biofilm. Thus, a higher degree of antibiotic resistance was exhibited by biofilm producers than by non-biofilm producers. The tube adherence and cover slip assays were found to be the better methods for biofilm evaluation. Copyright © 2018 The American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
Biofilm formation by strains of Leuconostoc citreum and L. mesenteroides
USDA-ARS?s Scientific Manuscript database
Aims: To compare for the first time biofilm formation among strains of Leuconostoc citreum and L. mesenteroides that produce varying types of extracellular glucans. Methods and Results: Twelve strains of Leuconostoc sp. that produce extracellular glucans were compared for their capacity to produ...
Roger W. Perry; Ronald E. Thill; Philip A. Tappe; David G. Peitz
2004-01-01
Abstract - Recent policy changes have eliminated clearcutting as the primary pine regeneration method on Federal lands in the Southern United States. However, the effects of alternative natural regeneration methods on soft mast production are unknown. We compared plant coverage and mast production of 37 soft mast-producing plants among four...
Hydrocarbonaceous material processing methods and apparatus
Brecher, Lee E [Laramie, WY
2011-07-12
Methods and apparatus are disclosed for possibly producing pipeline-ready heavy oil from substantially non-pumpable oil feeds. The methods and apparatus may be designed to produce such pipeline-ready heavy oils in the production field. Such methods and apparatus may involve thermal soaking of liquid hydrocarbonaceous inputs in thermal environments (2) to generate, through chemical reaction, an increased distillate amount as compared with conventional boiling technologies.
Method for forming precision clockplate with pivot pins
Wild, Ronald L [Albuquerque, NM
2010-06-01
Methods are disclosed for producing a precision clockplate with rotational bearing surfaces (e.g. pivot pins). The methods comprise providing an electrically conductive blank, conventionally machining oversize features comprising bearing surfaces into the blank, optionally machining a relief on non-bearing surfaces, providing wire accesses adjacent to bearing surfaces, threading the wire of an electrical discharge machine through the accesses, and finishing the bearing surfaces by wire electrical discharge machining. The methods have been shown to produce bearing surfaces of dimensions and tolerances comparable to those produced by micro-machining methods such as LIGA, at reduced cost and complexity.
Comparison of RNA Isolation Methods From Insect Larvae
Ridgeway, J. A.; Timm, A. E.
2014-01-01
Abstract Isolating RNA from insects is becoming increasingly important in molecular entomology. Four methods, namely three commercial kits (RNeasy Mini Kit, Qiagen; SV Total RNA isolation system, Promega; TRIzol reagent, Invitrogen) and a cetyl trimethylammonium bromide (CTAB)-based method, were compared regarding their ability to isolate RNA from whole-body larvae of Thaumatotibia leucotreta (Meyrick), Thanatophilus micans (F.), Plutella xylostella (L.), and Tenebrio molitor (L.). A difference was observed among the four methods regarding RNA quality but not quantity; however, the RNA quality and quantity obtained were not dependent on the insect species. The CTAB-based method produced low-quality RNA and the TRIzol reagent produced partially degraded RNA, whereas the RNeasy Mini Kit and SV Total RNA isolation system produced RNA of consistently high quality. However, after reverse transcription to cDNA, RNA produced using all four extraction methods could be used to successfully amplify a 708 bp fragment of the cytochrome oxidase I gene. Of the four methods, the SV Total RNA isolation system showed the least DNA contamination together with the highest RNA integrity number and is thus recommended for stringent applications where high-quality RNA is required. This is the first comparison of RNA isolation methods among different insect species and the first to compare RNA isolation methods in insects in the last 20 years. PMID:25527580
Method for producing electricity using a platinum-ruthenium-palladium catalyst in a fuel cell
Gorer, Alexander
2004-01-27
A method for producing electricity using a fuel cell that utilizes a ternary alloy composition as a fuel cell catalyst, the ternary alloy composition containing platinum, ruthenium and palladium. The alloy shows increased activity as compared to well-known catalysts.
Serate, Jose; Xie, Dan; Pohlmann, Edward; ...
2015-11-14
Microbial conversion of lignocellulosic feedstocks into biofuels remains an attractive means to produce sustainable energy. It is essential to produce lignocellulosic hydrolysates in a consistent manner in order to study microbial performance in different feedstock hydrolysates. Because of the potential to introduce microbial contamination from the untreated biomass or at various points during the process, it can be difficult to control sterility during hydrolysate production. In this study, we compared hydrolysates produced from AFEX-pretreated corn stover and switchgrass using two different methods to control contamination: either by autoclaving the pretreated feedstocks prior to enzymatic hydrolysis, or by introducing antibiotics during the hydrolysis of non-autoclaved feedstocks. We then performed extensive chemical analysis, chemical genomics, and comparative fermentations to evaluate any differences between these two different methods used for producing corn stover and switchgrass hydrolysates. Autoclaving the pretreated feedstocks could eliminate the contamination for a variety of feedstocks, whereas the antibiotic gentamicin was unable to control contamination consistently during hydrolysis. Compared to the addition of gentamicin, autoclaving of biomass before hydrolysis had a minimal effect on mineral concentrations, and showed no significant effect on the two major sugars (glucose and xylose) found in these hydrolysates. However, autoclaving elevated the concentration of some furanic and phenolic compounds. Chemical genomics analyses using Saccharomyces cerevisiae strains indicated a high correlation between the AFEX-pretreated hydrolysates produced using these two methods within the same feedstock, indicating minimal differences between the autoclaving and antibiotic methods. Comparative fermentations with S. cerevisiae and Zymomonas mobilis also showed that autoclaving the AFEX-pretreated feedstocks had no significant effects on microbial performance in these hydrolysates. In conclusion, our results showed that autoclaving the pretreated feedstocks offered advantages over the addition of antibiotics for hydrolysate production. The autoclaving method produced a more consistent quality of hydrolysate.
Clare, John; McKinney, Shawn T.; DePue, John E.; Loftin, Cynthia S.
2017-01-01
It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture–recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. 
Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters.
Lee, Sang Hun; Yoo, Myung Hoon; Park, Jun Woo; Kang, Byung Chul; Yang, Chan Joo; Kang, Woo Suk; Ahn, Joong Ho; Chung, Jong Woo; Park, Hong Ju
2018-06-01
To evaluate whether video head impulse test (vHIT) gains are dependent on the measuring device and method of analysis. Prospective study. vHIT was performed in 25 healthy subjects using two devices simultaneously. vHIT gains were compared between these instruments and across five different methods of comparing position and velocity gains during head movement intervals. The two devices produced different vHIT gain results with the same method of analysis, and there were also significant differences in the vHIT gains measured using different analytical methods. The gain analysis method that compares the areas under the velocity curves (AUC) of the head and eye movements during head movements yielded lower vHIT gains than a method that compared the peak velocities of the head and eye movements; the former method also produced the vHIT gain with the smallest standard deviation among the five procedures tested in this study. vHIT gains differ in normal subjects depending on the device and method of analysis used, suggesting that it is advisable for each device to have its own normal values. Gain calculations that compare the AUC of the head and eye movements during the head movements show the smallest variance.
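A minimal sketch of the two gain definitions compared above, on synthetic head and eye velocity traces (the shapes and numbers are illustrative assumptions, not the study's recordings): the AUC gain is the ratio of areas under the rectified velocity curves over the head-movement window, while the peak gain is the ratio of maximum velocities. The two definitions disagree whenever the eye trace has a different shape from the head trace.

```python
import numpy as np

def vhit_gains(head_vel, eye_vel):
    """Return (auc_gain, peak_gain) for uniformly sampled velocity traces.
    The sampling interval cancels in the area ratio, so plain sums suffice."""
    h = np.abs(np.asarray(head_vel))
    e = np.abs(np.asarray(eye_vel))
    auc_gain = e.sum() / h.sum()     # ratio of areas under the curves
    peak_gain = e.max() / h.max()    # ratio of peak velocities
    return auc_gain, peak_gain

# Synthetic impulse: eye response is lower-peaked but broader than the head
t = np.linspace(-0.1, 0.3, 4001)
head = 200.0 * np.exp(-((t - 0.1) ** 2) / (2 * 0.02 ** 2))   # deg/s
eye = 140.0 * np.exp(-((t - 0.1) ** 2) / (2 * 0.03 ** 2))    # deg/s
auc_g, peak_g = vhit_gains(head, eye)
print(f"AUC gain {auc_g:.2f} vs peak gain {peak_g:.2f}")
```

Here the peak gain is 0.70 while the AUC gain is about 1.05 (Gaussian areas scale with amplitude times width), illustrating why the study found that the choice of analysis method shifts the measured gain.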
NASA Technical Reports Server (NTRS)
Chen, D. W.; Sengupta, S. K.; Welch, R. M.
1989-01-01
This paper compares the results of cloud-field classification derived from two simplified vector approaches, the Sum and Difference Histogram (SADH) and the Gray Level Difference Vector (GLDV), with the results produced by the Gray Level Cooccurrence Matrix (GLCM) approach described by Welch et al. (1988). It is shown that the SADH method produces accuracies equivalent to those obtained using the GLCM method, while the GLDV method fails to resolve error clusters. Compared to the GLCM method, the SADH method leads to a 31 percent saving in run time and a 50 percent saving in storage requirements, while the GLDV approach leads to a 40 percent saving in run time and an 87 percent saving in storage requirements.
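A sketch of why the sum-and-difference approach saves time and storage relative to the full co-occurrence matrix, assuming Unser-style sum and difference histograms (the function and feature choices below are illustrative, not the paper's exact feature set): for G grey levels, a GLCM needs a G×G array per displacement, whereas SADH needs only two 1-D histograms of length 2G−1, from which analogous texture features such as contrast and entropy follow directly.

```python
import numpy as np

def sadh_features(gray, dx=1, dy=0, levels=16):
    """Sum/difference-histogram texture features for a quantised image."""
    g = np.asarray(gray, dtype=int)
    a = g[: g.shape[0] - dy, : g.shape[1] - dx]   # reference pixel
    b = g[dy:, dx:]                               # displaced neighbour
    s = (a + b).ravel()                           # sums: 0 .. 2(L-1)
    d = (a - b).ravel()                           # diffs: -(L-1) .. L-1
    hs = np.bincount(s, minlength=2 * levels - 1) / s.size
    hd = np.bincount(d + levels - 1, minlength=2 * levels - 1) / d.size
    i = np.arange(2 * levels - 1)
    contrast = float(np.sum((i - (levels - 1)) ** 2 * hd))  # matches GLCM contrast
    entropy = float(-(np.sum(hs[hs > 0] * np.log2(hs[hs > 0]))
                      + np.sum(hd[hd > 0] * np.log2(hd[hd > 0]))))
    return contrast, entropy

# Horizontal checkerboard of levels 0 and 15: every neighbour pair differs by 15
cb = (np.indices((8, 8)).sum(axis=0) % 2) * 15
contrast, entropy = sadh_features(cb)
print(contrast, entropy)
```

On the checkerboard every horizontal pair sums to 15 and differs by ±15, so the contrast is 15² = 225 and the difference histogram contributes exactly one bit of entropy, which matches what the full G×G co-occurrence matrix would give at a fraction of the storage.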
Data for prediction of mechanical properties of aspen flakeboards
C. G. Carll; P. Wang
1983-01-01
This research compared two methods of producing flakeboards with uniform density distribution (which could then be used to predict bending properties of flakeboards with density gradients). One of the methods was suspected of producing weak boards because it involved exertion of high pressures on cold mats. Although differences were found in mechanical properties of...
Gambarini, Gianluca; Grande, Nicola Maria; Plotino, Gianluca; Somma, Francesco; Garala, Manish; De Luca, Massimo; Testarelli, Luca
2008-08-01
The aim of the present study was to investigate whether cyclic fatigue resistance is increased for nickel-titanium instruments manufactured by using new processes. This was evaluated by comparing instruments produced by using the twisted method (TF; SybronEndo, Orange, CA) and those using the M-wire alloy (GTX; Dentsply Tulsa-Dental Specialties, Tulsa, OK) with instruments produced by a traditional NiTi grinding process (K3, SybronEndo). Tests were performed with a specific cyclic fatigue device that evaluated cycles to failure of rotary instruments inside curved artificial canals. Results indicated that size 06-25 TF instruments showed a significant increase (p < 0.05) in the mean number of cycles to failure when compared with size 06-25 K3 files. Size 06-20 K3 instruments showed no significant increase (p > 0.05) in the mean number of cycles to failure when compared with size 06-20 GT series X instruments. The new manufacturing process produced nickel-titanium rotary files (TF) significantly more resistant to fatigue than instruments produced with the traditional NiTi grinding process. Instruments produced with M-wire (GTX) were not found to be more resistant to fatigue than instruments produced with the traditional NiTi grinding process.
Method for producing uranium atomic beam source
Krikorian, Oscar H.
1976-06-15
A method for producing a beam of neutral uranium atoms is obtained by vaporizing uranium from a compound UM.sub.x heated to produce U vapor from an M boat or from some other suitable refractory container such as a tungsten boat, where M is a metal whose vapor pressure is negligible compared to that of uranium at the vaporization temperature. The compound, for example, may be the uranium-rhenium compound, URe.sub.2. An evaporation rate in excess of about 10 times that of conventional uranium beam sources is produced.
Effect of synthesis methods on the Ca{sub 3}Co{sub 4}O{sub 9} thermoelectric ceramic performances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sotelo, A.; Rasekh, Sh.; Torres, M.A.
2015-01-15
Three different synthesis methods producing nanometric grain sizes (coprecipitation with ammonium carbonate, coprecipitation with oxalic acid, and attrition milling) have been studied to produce Ca{sub 3}Co{sub 4}O{sub 9} ceramics and compared with the classical solid state route. These three processes have produced highly reactive precursors, and all the organic material and CaCO{sub 3} have been decomposed in a single thermal treatment. Coprecipitation leads to pure Ca{sub 3}Co{sub 4}O{sub 9} phase, while attrition milling and the classical solid state route produce small amounts of Ca{sub 3}Co{sub 2}O{sub 6} secondary phase. Power factor values are similar for all three samples, being slightly lower for the ones produced by attrition milling. These values are much higher than those obtained in samples prepared by the classical solid state method, used as reference. The maximum power factor values determined at 800 °C (∼0.43 mW/K{sup 2} m) are slightly higher than the best reported values obtained in textured samples, which also show much higher density values. - Graphical abstract: Impressive rise of PF in Ca{sub 3}Co{sub 4}O{sub 9} thermoelectric materials obtained from nanometric grains. - Highlights: • Ca{sub 3}Co{sub 4}O{sub 9} has been produced by four different methods. • Precursor particle sizes influence the final performances. • Coprecipitation methods produce single Ca{sub 3}Co{sub 4}O{sub 9} phase. • Power factor reaches values comparable to high-density textured materials.
ERIC Educational Resources Information Center
Goldfinch, Judy
1996-01-01
A study compared the effectiveness of two methods (medium-size class instruction and large lectures with tutorial sessions) for teaching mathematics and statistics to first-year business students. Students and teachers overwhelmingly preferred the medium-size class method, which produced higher exam scores but had no significant effect on…
Production Methods in Industrial Microbiology.
ERIC Educational Resources Information Center
Gaden, Elmer L., Jr.
1981-01-01
Compares two methods (batch and continuous) in which microorganisms are used to produce industrial chemicals. Describes batch and continuous stirred-tank reactors and offers reasons why the batch method may be preferred. (JN)
Schubert, Peter; Culibrk, Brankica; Karwal, Simrath; Serrano, Katherine; Levin, Elena; Bu, Daniel; Bhakta, Varsha; Sheffield, William P; Goodrich, Raymond P; Devine, Dana V
2015-04-01
Pathogen inactivation (PI) technologies are currently licensed for use with platelet (PLT) and plasma components. Treatment of whole blood (WB) would be of benefit to the blood banking community by saving time and costs compared to individual component treatment. However, no paired, pool-and-split study directly assessing the impact of WB PI on the subsequently produced components has yet been reported. In a "pool-and-split" study, WB either was treated with riboflavin and ultraviolet (UV) light or was kept untreated as control. The buffy coat (BC) method produced plasma, PLT, and red blood cell (RBC) components. PLT units arising from the untreated WB study arm were treated with riboflavin and UV light on day of production and compared to PLT concentrates (PCs) produced from the treated WB units. A panel of common in vitro variables for the three types of components was used to monitor quality throughout their respective storage periods. PCs derived from the WB PI treatment were of significantly better quality than treated PLT components for most variables. RBCs produced from the WB treatment deteriorated earlier during storage than untreated units. Plasma components showed a 3% to 44% loss in activity for several clotting factors. Treatment of WB with riboflavin and UV before production of components by the BC method shows a negative impact on all three blood components. PLT units produced from PI-treated WB exhibited less damage compared to PLT component treatment. © 2014 AABB.
Moon, Jordan R; Tobkin, Sarah E; Smith, Abbie E; Roberts, Michael D; Ryan, Eric D; Dalbo, Vincent J; Lockwood, Chris M; Walter, Ashley A; Cramer, Joel T; Beck, Travis W; Stout, Jeffrey R
2008-01-01
Background: Methods used to estimate percent body fat can be classified as a laboratory or field technique. However, the validity of these methods compared to multiple-compartment models has not been fully established. The purpose of this study was to determine the validity of field and laboratory methods for estimating percent fat (%fat) in healthy college-age men compared to the Siri three-compartment model (3C). Methods: Thirty-one Caucasian men (22.5 ± 2.7 yrs; 175.6 ± 6.3 cm; 76.4 ± 10.3 kg) had their %fat estimated by bioelectrical impedance analysis (BIA) using the BodyGram™ computer program (BIA-AK) and population-specific equation (BIA-Lohman), near-infrared interactance (NIR) (Futrex® 6100/XL), four circumference-based military equations [Marine Corps (MC), Navy and Air Force (NAF), Army (A), and Friedl], air-displacement plethysmography (BP), and hydrostatic weighing (HW). Results: All circumference-based military equations (MC = 4.7% fat, NAF = 5.2% fat, A = 4.7% fat, Friedl = 4.7% fat) along with NIR (NIR = 5.1% fat) produced an unacceptable total error (TE). Both laboratory methods produced acceptable TE values (HW = 2.5% fat; BP = 2.7% fat). The BIA-AK and BIA-Lohman field methods produced acceptable TE values (2.1% fat). A significant difference was observed for the MC and NAF equations compared to both the 3C model and HW (p < 0.006). Conclusion: Results indicate that the BP and HW are valid laboratory methods when compared to the 3C model to estimate %fat in college-age Caucasian men. When the use of a laboratory method is not feasible, BIA-AK and BIA-Lohman are acceptable field methods to estimate %fat in this population. PMID:18426582
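The total error (TE) criterion used above is conventionally computed as the root mean square of the differences between a method's estimates and the criterion model. A small sketch with hypothetical %fat values (not the study's data):

```python
import math

def total_error(predicted, criterion):
    """Total error (TE) as commonly defined in body-composition validation
    studies: sqrt(sum((pred - crit)^2) / n), i.e. RMS deviation from the
    criterion (here, a 3C-model) estimate."""
    assert len(predicted) == len(criterion)
    n = len(predicted)
    return math.sqrt(sum((p - c) ** 2 for p, c in zip(predicted, criterion)) / n)

# Hypothetical %fat estimates from a field method vs. a criterion model:
pred = [14.1, 18.3, 11.9, 21.0]
crit = [15.0, 17.0, 13.5, 20.0]
te = total_error(pred, crit)
print(round(te, 2))
```

Whether a given TE is "acceptable" is then a domain judgment (the study treats roughly 2-3% fat as acceptable and ~5% fat as not).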
2010-01-01
produced by Pseudomonas fluorescens [19] Inhibition of RNA and protein synthesis by targeting the isoleucine-binding site on the isoleucyl-transfer-RNA...multidrug-resistant (MDR) bacteria. We compared two methods of determining topical antimicrobial susceptibilities. Methods: Isolates of Pseudomonas ...aeruginosa, methicillin-resistant Staphylococcus aureus (MRSA), extended spectrum beta-lactamase (ESBL) producing Klebsiella pneumoniae, and
Brooks, M.H.; Schroder, L.J.; Malo, B.A.
1985-01-01
Four laboratories were evaluated in their analysis of identical natural and simulated precipitation water samples. Interlaboratory comparability was evaluated using analysis of variance coupled with Duncan's multiple range test, and linear-regression models describing the relations between individual laboratory analytical results for natural precipitation samples. Results of the statistical analyses indicate that certain pairs of laboratories produce different results when analyzing identical samples. Analyte bias for each laboratory was examined using analysis of variance coupled with Duncan's multiple range test on data produced by the laboratories from the analysis of identical simulated precipitation samples. Bias for a given analyte produced by a single laboratory is indicated when the laboratory mean for that analyte is significantly different from the mean for the most probable analyte concentrations in the simulated precipitation samples. Ion-chromatographic methods for the determination of chloride, nitrate, and sulfate were compared with the colorimetric methods also in use during the study period. Comparisons were made using analysis of variance coupled with Duncan's multiple range test for means produced by the two methods. Analyte precision for each laboratory was estimated by calculating a pooled variance for each analyte. Estimated analyte precisions were compared using F-tests, and differences in analyte precisions for laboratory pairs are reported. (USGS)
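The precision comparison described above rests on pooled variances and F-ratios. A minimal sketch with hypothetical replicate data (not the study's measurements), using only the standard library:

```python
import statistics

def pooled_variance(groups):
    """Pooled variance across replicate sets (each a list of measurements
    of one sample), weighting each group's variance by its degrees of freedom."""
    num = sum((len(g) - 1) * statistics.variance(g) for g in groups)
    den = sum(len(g) - 1 for g in groups)
    return num / den

def f_ratio(var_a, var_b):
    """F statistic for comparing two precision estimates (larger over smaller)."""
    return max(var_a, var_b) / min(var_a, var_b)

# Hypothetical sulfate determinations (mg/L) from two laboratories,
# each analyzing the same two simulated samples in triplicate:
lab1 = [[2.01, 2.04, 1.98], [2.52, 2.49, 2.55]]
lab2 = [[2.10, 1.90, 2.05], [2.40, 2.60, 2.48]]
v1, v2 = pooled_variance(lab1), pooled_variance(lab2)
print(v1 < v2, round(f_ratio(v1, v2), 1))
```

The F-ratio would then be checked against a critical value for the pooled degrees of freedom to decide whether the two laboratories' precisions differ significantly.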
Clare, John; McKinney, Shawn T; DePue, John E; Loftin, Cynthia S
2017-10-01
It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture-recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that, if not corrected for, could substantially bias distribution estimates.
Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters. © 2017 by the Ecological Society of America.
An effective hair detection algorithm for dermoscopic melanoma images of skin lesions
NASA Astrophysics Data System (ADS)
Chakraborti, Damayanti; Kaur, Ravneet; Umbaugh, Scott; LeAnder, Robert
2016-09-01
Dermoscopic images are obtained using the method of skin surface microscopy. Pigmented skin lesions are evaluated in terms of texture features such as color and structure. Artifacts, such as hairs, bubbles, black frames, ruler-marks, etc., create obstacles that prevent accurate detection of skin lesions by both clinicians and computer-aided diagnosis. In this article, we propose a new algorithm for the automated detection of hairs, using an adaptive, Canny edge-detection method, followed by morphological filtering and an arithmetic addition operation. The algorithm was applied to 50 dermoscopic melanoma images. In order to ascertain this method's relative detection accuracy, it was compared to the Razmjooy hair-detection method [1], using segmentation error (SE), true detection rate (TDR) and false positioning rate (FPR). The new method produced 6.57% SE, 96.28% TDR and 3.47% FPR, compared to 15.751% SE, 86.29% TDR and 11.74% FPR produced by the Razmjooy method [1]. Because of the 7.27-9.99% improvement in those parameters, we conclude that the new algorithm produces much better results for detecting thick, thin, dark and light hairs. The new method proposed here also shows an appreciable difference in the rate of detecting bubbles.
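The pipeline described (edge detection, then morphological filtering, then arithmetic addition of masks) can be sketched on a toy grayscale patch. The paper's adaptive Canny detector is replaced here by a simple gradient threshold, so this illustrates only the shape of the pipeline, not the published algorithm:

```python
# Toy sketch of a hair-mask pipeline: edge detection -> morphological
# dilation -> arithmetic addition of masks. A plain gradient threshold
# stands in for the adaptive Canny detector used in the paper.

def edges(img, thresh=50):
    """Central-difference gradient magnitude, thresholded to a binary mask."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]
            gy = img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]
            out[y][x] = 1 if abs(gx) + abs(gy) >= thresh else 0
    return out

def dilate(mask):
    """3x3 morphological dilation: closes the gap across the hair core so
    the full hair width is masked, not just its two edges."""
    h, w = len(mask), len(mask[0])
    return [[1 if any(mask[j][i]
                      for j in range(max(y - 1, 0), min(y + 2, h))
                      for i in range(max(x - 1, 0), min(x + 2, w)))
             else 0
             for x in range(w)] for y in range(h)]

def add_masks(a, b):
    """Arithmetic addition (clipped to 1) merges the two masks."""
    return [[min(1, p + q) for p, q in zip(ra, rb)] for ra, rb in zip(a, b)]

# 5x5 grayscale patch with a dark vertical "hair" in the middle column:
img = [[200, 200, 40, 200, 200]] * 5
e = edges(img)                 # fires on the two borders of the hair
mask = add_masks(e, dilate(e))  # covers the whole hair width
```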
Comparative method of protein expression and isolation of EBV epitope in E. coli DH5α
NASA Astrophysics Data System (ADS)
Anyndita, Nadya V. M.; Dluha, Nurul; Himmah, Karimatul; Rifa'i, Muhaimin; Widodo
2017-11-01
Epstein-Barr virus (EBV), or human herpes virus 4 (HHV-4), is a virus that infects human B cells and can lead to nasopharyngeal carcinoma (NPC). Prevention of this disease remains out of reach, since no vaccine has yet been developed. The objective of this study was to over-produce the EBV gp350/220 epitope in E. coli DH5α using several methods. EBV epitope sequences were inserted into the pMAL-p5x vector, transformed into E. coli DH5α, and over-produced using 0.3, 1 and 2 mM IPTG. Plasmid transformation was validated using the AflIII restriction enzyme in 0.8% agarose. Periplasmic protein was isolated using two comparative methods and analyzed by SDS-PAGE. Method A produced a protein band of around 50 kDa that appeared only in the transformant. Method B failed to isolate the protein, as indicated by the absence of a protein band. In addition, varying the IPTG concentration did not change the result. It can thus be concluded that even the lowest IPTG concentration is able to induce protein expression.
Pretreatment of aqueous ammonia on oil palm empty fruit fiber (OPEFB) in production of sugar
NASA Astrophysics Data System (ADS)
Zulkiple, Nursyafiqah; Maskat, Mohamad Yusof; Hassan, Osman
2015-09-01
Oil palm empty fruit bunch (OPEFB) is an agricultural residue with the potential to become a good renewable feedstock for sugar production. This work evaluated the effectiveness of aqueous ammonia pretreatment at low temperature (soaking in aqueous ammonia, SAA) and at elevated temperature (pressurized chamber) for deconstructing the lignocellulosic feedstock prior to enzymatic hydrolysis. The ammonia pretreatments were compared against the standard NaOH method. The best tested pressurized chamber conditions were 100°C with a 3 hour retention time, 12.5% ammonium hydroxide, and 1:30 solid loading. The digestibility of the feedstock was determined by enzymatic hydrolysis using Cellic CTec2 and Cellic HTec2. The sugar produced by the pressurized chamber method within 24 hours of enzyme hydrolysis (439.90 mg/ml) was comparable to that produced by the NaOH method (351.61 mg/ml). Compared with the optimum SAA method (24 hours, 6.25% ammonium hydroxide at room temperature), the pressurized chamber method produced greater delignification and a higher sugar yield upon hydrolysis. These findings were supported by the disappearance of peaks at 1732, 1512 and 1243 cm⁻¹ in the Fourier transform infrared (FTIR) spectrum of OPEFB treated by the pressurized chamber method. XRD determination showed reduced crystallinity of OPEFB (37.23%) after pressurized chamber treatment, suggesting higher accessibility to enzymatic hydrolysis. The data obtained suggest that the pressurized chamber pretreatment method is suitable for OPEFB deconstruction to produce a high yield of sugar.
Bioactive lipids in the butter production chain from Parmigiano Reggiano cheese area.
Verardo, Vito; Gómez-Caravaca, Ana M; Gori, Alessandro; Losi, Giuseppe; Caboni, Maria F
2013-11-01
Bovine milk contains hundreds of diverse components, including proteins, peptides, amino acids, lipids, lactose, vitamins and minerals. Specifically, the lipid composition is influenced by different variables such as breed, feed and technological process. In this study the fatty acid and phospholipid compositions of different samples of butter and its by-products from the Parmigiano Reggiano cheese area, produced by industrial and traditional churning processes, were determined. The fatty acid composition of samples manufactured by the traditional method showed higher levels of monounsaturated and polyunsaturated fatty acids compared with industrial samples. In particular, the contents of n-3 fatty acids and conjugated linoleic acids were higher in samples produced by the traditional method than in samples produced industrially. Sample phospholipid composition also varied between the two technological processes. Phosphatidylethanolamine was the major phospholipid in cream, butter and buttermilk samples obtained by the industrial process as well as in cream and buttermilk samples from the traditional process, while phosphatidylcholine was the major phospholipid in traditionally produced butter. This result may be explained by the different churning processes causing different types of membrane disruption. Generally, samples produced traditionally had higher contents of total phospholipids; in particular, butter produced by the traditional method had a total phospholipid content 33% higher than that of industrially produced butter. The samples studied represent the two types of products present in the Parmigiano Reggiano cheese area, where the industrial churning process is widespread compared with the traditional processing of Reggiana cow's milk. This is because Reggiana cow's milk production is lower than that of other breeds and the traditional churning process is time-consuming and economically disadvantageous. 
However, its products have been demonstrated to contain more bioactive lipids compared with products obtained from other breeds and by the industrial process. © 2013 Society of Chemical Industry.
Aluminum transfer method for plating plastics
NASA Technical Reports Server (NTRS)
Goodrich, W. D.; Stalmach, C. J., Jr.
1977-01-01
Electroless plating technique produces plate of uniform thickness. Hardness and abrasion resistance can be increased further by heat treatment. Method results in seamless coating over many materials, has low thermal conductivity, and is relatively inexpensive compared to conventional methods.
Physics-based signal processing algorithms for micromachined cantilever arrays
Candy, James V; Clague, David S; Lee, Christopher L; Rudd, Robert E; Burnham, Alan K; Tringe, Joseph W
2013-11-19
A method of using physics-based signal processing algorithms for micromachined cantilever arrays. The methods utilize deflection of a micromachined cantilever that represents the chemical, biological, or physical element being detected. One embodiment of the method comprises the steps of modeling the deflection of the micromachined cantilever producing a deflection model, sensing the deflection of the micromachined cantilever and producing a signal representing the deflection, and comparing the signal representing the deflection with the deflection model.
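The three steps above can be sketched as follows, with an assumed single-mode harmonic deflection model standing in for the patent's physics-based model, and a normalized correlation standing in for its comparison step (both are illustrative choices, not the patented formulation):

```python
import math

# Sketch of the three claimed steps: (1) model the cantilever deflection,
# (2) acquire a measured deflection signal, (3) compare signal to model.
# The first-mode damped-harmonic model below is an assumption for illustration.

def deflection_model(t, amplitude, freq_hz, decay=0.0):
    """Assumed first-mode cantilever deflection vs. time (arbitrary units)."""
    return amplitude * math.exp(-decay * t) * math.sin(2 * math.pi * freq_hz * t)

def correlate(signal, model):
    """Normalized correlation; a value near 1.0 means the measured
    deflection matches the physics-based prediction."""
    dot = sum(s * m for s, m in zip(signal, model))
    ns = math.sqrt(sum(s * s for s in signal))
    nm = math.sqrt(sum(m * m for m in model))
    return dot / (ns * nm)

ts = [i * 1e-4 for i in range(200)]
model = [deflection_model(t, 1.0, 250.0) for t in ts]
# "Sensed" signal: the model plus a small low-frequency disturbance.
measured = [m + 0.01 * math.sin(40 * t) for t, m in zip(ts, model)]
score = correlate(measured, model)
```

A detection decision would then threshold `score` (or a bank of such scores, one per candidate analyte model).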
A comparison of analysis methods to estimate contingency strength.
Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T
2018-05-09
To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.
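One simple interval-based contingency strength index is the difference P(outcome | response) − P(outcome | no response), computed over discrete observation intervals. The sketch below uses this illustrative estimator; it is not necessarily the exact formulation of any of the four methods the study compares:

```python
# Illustrative interval-based contingency strength estimate:
# P(outcome | response interval) - P(outcome | no-response interval).

def contingency_strength(responses, outcomes):
    """responses/outcomes: per-interval 0/1 records of equal length."""
    with_r = [o for r, o in zip(responses, outcomes) if r]
    without_r = [o for r, o in zip(responses, outcomes) if not r]
    p_given_r = sum(with_r) / len(with_r) if with_r else 0.0
    p_given_not_r = sum(without_r) / len(without_r) if without_r else 0.0
    return p_given_r - p_given_not_r

# Response-dependent schedule: reinforcement mostly follows responding.
resp = [1, 0, 1, 1, 0, 0, 1, 0]
outc = [1, 0, 1, 0, 0, 0, 1, 0]
dep = contingency_strength(resp, outc)               # 0.75 - 0.0 = 0.75
# Response-independent reinforcement delivered in every interval:
noncontingent = contingency_strength(resp, [1] * 8)  # 1.0 - 1.0 = 0.0
```

Under a response-dependent schedule the index tracks the programmed dependency, while response-independent reinforcement drives it toward zero, which is exactly the sensitivity the study probes.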
Targeted Single-Shot Methods for Diffusion-Weighted Imaging in the Kidneys
Jin, Ning; Deng, Jie; Zhang, Longjiang; Zhang, Zhuoli; Lu, Guangming; Omary, Reed A.; Larson, Andrew C.
2011-01-01
Purpose To investigate the feasibility of combining the inner-volume-imaging (IVI) technique with single-shot diffusion-weighted (DW) spin-echo echo-planar imaging (SE-EPI) and DW-SPLICE (split acquisition of fast spin-echo) sequences for renal DW imaging. Materials and Methods Renal DW imaging was performed in 10 healthy volunteers using single-shot DW-SE-EPI, DW-SPLICE, targeted-DW-SE-EPI and targeted-DW-SPLICE. We compared the quantitative diffusion measurement accuracy and image quality of these targeted-DW-SE-EPI and targeted DW-SPLICE methods with conventional full FOV DW-SE-EPI and DW-SPLICE measurements in phantoms and normal volunteers. Results Compared with full FOV DW-SE-EPI and DW-SPLICE methods, targeted-DW-SE-EPI and targeted-DW-SPLICE approaches produced images of superior overall quality with fewer artifacts, less distortion and reduced spatial blurring in both phantom and volunteer studies. The ADC values measured with each of the four methods were similar and in agreement with previously published data. There were no statistically significant differences between the ADC values and intra-voxel incoherent motion (IVIM) measurements in the kidney cortex and medulla using single-shot DW-SE-EPI, targeted-DW-EPI and targeted-DW-SPLICE (p > 0.05). Conclusion Compared with full-FOV DW imaging methods, targeted-DW-SE-EPI and targeted-DW-SPLICE techniques reduced image distortion and artifacts observed in the single-shot DW-SE-EPI images, reduced blurring in DW-SPLICE images and produced comparable quantitative DW and IVIM measurements to those produced with conventional full-FOV approaches. PMID:21591023
Lu, Zhen; McKellop, Harry A
2014-03-01
This study compared the accuracy and sensitivity of several numerical methods employing spherical or plane triangles for calculating the volumetric wear of retrieved metal-on-metal hip joint implants from coordinate measuring machine measurements. Five methods, one using spherical triangles and four using plane triangles to represent the bearing and the best-fit surfaces, were assessed and compared on a perfect hemisphere model and a hemi-ellipsoid model (i.e. unworn models), computer-generated wear models and wear-tested femoral balls, with point spacings of 0.5, 1, 2 and 3 mm. The results showed that the algorithm (Method 1) employing spherical triangles to represent the bearing surface and to scale the mesh to the best-fit surfaces produced adequate accuracy for the wear volume with point spacings of 0.5, 1, 2 and 3 mm. The algorithms (Methods 2-4) using plane triangles to represent the bearing surface and to scale the mesh to the best-fit surface also produced accuracies that were comparable to that with spherical triangles. In contrast, if the bearing surface was represented with a mesh of plane triangles and the best-fit surface was taken as a smooth surface without discretization (Method 5), the algorithm produced much lower accuracy with a point spacing of 0.5 mm than Methods 1-4 with a point spacing of 3 mm.
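A crude flat-patch analogue of the plane-triangle methods conveys the idea: sum, over measurement points, the radial deviation below the best-fit sphere times a local patch area. Real implementations mesh the points into spherical or plane triangles; the geometry and numbers below are hypothetical:

```python
# Flat-patch analogue of the plane-triangle wear-volume methods:
# wear volume ~ sum over points of (deviation below best-fit sphere) x (patch area).
# Actual algorithms triangulate the CMM point cloud; this is only an illustration.

def wear_volume(radii, best_fit_radius, patch_area_mm2):
    """radii: measured radial distances (mm) at evenly spaced surface points."""
    return sum(
        max(best_fit_radius - r, 0.0) * patch_area_mm2  # count only material loss
        for r in radii
    )

# Hypothetical femoral ball: best-fit radius 14 mm, with a worn region
# 0.02 mm deep sampled at 100 points, each representing ~1 mm^2 of surface:
measured = [14.0] * 400 + [13.98] * 100
vol = wear_volume(measured, 14.0, 1.0)
```

The study's point-spacing comparison corresponds to varying how much surface each sample represents: coarser spacing means fewer, larger patches and a rougher volume estimate.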
Inspection system calibration methods
Deason, Vance A.; Telschow, Kenneth L.
2004-12-28
An inspection system calibration method includes producing two sideband signals of a first wavefront; interfering the two sideband signals in a photorefractive material, producing an output signal therefrom having a frequency and a magnitude; and producing a phase modulated operational signal having a frequency different from the output signal frequency, a magnitude, and a phase modulation amplitude. The method includes determining a ratio of the operational signal magnitude to the output signal magnitude, determining a ratio of a 1st order Bessel function of the operational signal phase modulation amplitude to a 0th order Bessel function of the operational signal phase modulation amplitude, and comparing the magnitude ratio to the Bessel function ratio.
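The final comparison above pits the measured magnitude ratio against J₁(φ)/J₀(φ) at the phase modulation amplitude φ. A sketch evaluating that Bessel ratio directly from the defining power series (a standard identity, not code from the patent):

```python
import math

# The calibration compares a signal-magnitude ratio to J1(phi)/J0(phi),
# where phi is the phase-modulation amplitude. The Bessel functions are
# evaluated here with a truncated power series (accurate for moderate phi).

def bessel_j(n, x, terms=20):
    """J_n(x) via its power series: sum_k (-1)^k (x/2)^(2k+n) / (k! (k+n)!)."""
    return sum(
        (-1) ** k * (x / 2) ** (2 * k + n)
        / (math.factorial(k) * math.factorial(k + n))
        for k in range(terms)
    )

def bessel_ratio(phi):
    """J1(phi)/J0(phi), the theoretical counterpart of the magnitude ratio."""
    return bessel_j(1, phi) / bessel_j(0, phi)

# For phi = 1.0 rad: J1 ~ 0.4401, J0 ~ 0.7652, so the ratio is ~ 0.5751.
ratio = bessel_ratio(1.0)
```

Calibration then amounts to checking that the measured operational-to-output magnitude ratio matches this Bessel ratio at the known modulation amplitude.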
Allnutt, Thomas F.; McClanahan, Timothy R.; Andréfouët, Serge; Baker, Merrill; Lagabrielle, Erwann; McClennen, Caleb; Rakotomanjaka, Andry J. M.; Tianarisoa, Tantely F.; Watson, Reg; Kremen, Claire
2012-01-01
The Government of Madagascar plans to increase marine protected area coverage by over one million hectares. To assist this process, we compare four methods for marine spatial planning of Madagascar's west coast. Input data for each method were drawn from the same variables: fishing pressure, exposure to climate change, and biodiversity (habitats, species distributions, biological richness, and biodiversity value). The first method compares visual color classifications of primary variables, the second uses binary combinations of these variables to produce a categorical classification of management actions, the third is a target-based optimization using Marxan, and the fourth is conservation ranking with Zonation. We present results from each method, and compare the latter three approaches for spatial coverage, biodiversity representation, fishing cost and persistence probability. All results included large areas in the north, central, and southern parts of western Madagascar. Achieving 30% representation targets with Marxan required twice the fish catch loss of the categorical method. The categorical classification and Zonation do not consider targets for conservation features. However, when we reduced Marxan targets to 16.3%, matching the representation level of the “strict protection” class of the categorical result, the methods show similar catch losses. The management category portfolio has complete coverage, and presents several management recommendations including strict protection. Zonation produces rapid conservation rankings across large, diverse datasets. Marxan is useful for identifying strict protected areas that meet representation targets, and minimize exposure probabilities for conservation features at low economic cost. 
We show that methods based on Zonation and a simple combination of variables can produce results comparable to Marxan for species representation and catch losses, demonstrating the value of comparing alternative approaches during initial stages of the planning process. Choosing an appropriate approach ultimately depends on scientific and political factors including representation targets, likelihood of adoption, and persistence goals. PMID:22359534
Comparing Models and Methods for the Delineation of Stream Baseflow Contribution Areas
NASA Astrophysics Data System (ADS)
Chow, R.; Frind, M.; Frind, E. O.; Jones, J. P.; Sousa, M.; Rudolph, D. L.; Nowak, W.
2016-12-01
This study addresses the delineation of areas that contribute baseflow to a stream reach, also known as stream capture zones. Such areas can be delineated using standard well capture zone delineation methods, with three important differences: (1) natural gradients are smaller compared to those produced by supply wells and are therefore subject to greater numerical errors, (2) stream discharge varies seasonally, and (3) stream discharge varies spatially. This study focuses on model-related uncertainties due to parameter non-uniqueness, discretization schemes, and particle tracking algorithms. The methodology is applied to the Alder Creek watershed in southwestern Ontario. Four different model codes are compared: HydroGeoSphere, WATFLOW, MODFLOW, and FEFLOW. In addition, two delineation methods are compared: reverse particle tracking and reverse transport, where the latter considers local-scale parameter uncertainty by using a macrodispersion term to produce a capture probability plume. The results from this study indicate that different models can calibrate acceptably well to the same data and produce very similar distributions of hydraulic head, but can produce different capture zones. The stream capture zone is found to be highly sensitive to the particle tracking algorithm. It was also found that particle tracking by itself, if applied to complex systems such as the Alder Creek watershed, would require considerable subjective judgement in the delineation of stream capture zones. Reverse transport is an alternate approach that provides probability intervals for the baseflow contribution areas. In situations where the two approaches agree, the confidence in the delineation is reinforced.
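Reverse particle tracking itself is straightforward to sketch: particles released along the stream reach are stepped backward against the velocity field, and their swept positions outline the contributing area. The uniform velocity field below is an assumed stand-in, not output from any of the four model codes compared in the study:

```python
# Minimal sketch of reverse particle tracking for stream capture-zone
# delineation. The velocity field is an assumed uniform stand-in; real
# delineations use heads/velocities from a calibrated groundwater model.

def velocity(x, y):
    """Assumed uniform steady flow (m/day) carrying water toward a stream at x = 0."""
    return (-0.5, 0.1)

def track_backward(x0, y0, dt=1.0, steps=200):
    """Explicit-Euler reverse tracking: step against the local velocity."""
    path = [(x0, y0)]
    x, y = x0, y0
    for _ in range(steps):
        vx, vy = velocity(x, y)
        x, y = x - vx * dt, y - vy * dt  # reverse in time: subtract v*dt
        path.append((x, y))
    return path

# Particles released along the stream reach (x = 0, several y positions):
paths = [track_backward(0.0, y) for y in (0.0, 50.0, 100.0)]
capture_extent = max(x for p in paths for x, _ in p)  # upstream reach of the zone
```

The study's point that results are highly sensitive to the tracking algorithm corresponds here to the choice of integrator (explicit Euler vs. higher-order schemes) and step size, which matter most where gradients are small.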
An Easy Method for Preparing Presentation Slides.
ERIC Educational Resources Information Center
Wright, Norman A.; Blevins, Dennis D.
1984-01-01
Describes a simplified method of preparing 35mm projection slides with a minimum of equipment and expertise. The quality of these slides compares favorably to professionally produced diazo slides. Twenty-five slides can easily be prepared in less than three hours. Material cost per slide is comparable to professional color slide processing. (JN)
Monotonically improving approximate answers to relational algebra queries
NASA Technical Reports Server (NTRS)
Smith, Kenneth P.; Liu, J. W. S.
1989-01-01
We present here a query processing method that produces approximate answers to queries posed in standard relational algebra. This method is monotone in the sense that the accuracy of the approximate result improves with the amount of time spent producing the result. This strategy enables us to trade the time to produce the result for the accuracy of the result. An approximate relational model that characterizes approximate relations and a partial order for comparing them is developed. Relational operators which operate on and return approximate relations are defined.
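A minimal sketch of such an approximate relational model: keep a set of certain tuples and a set of still-possible tuples, so that further processing can only grow the former and shrink the latter, which induces the partial order on accuracy. The class and method names below are illustrative, not taken from the paper:

```python
from dataclasses import dataclass, field

# An approximate relation: `certain` tuples are known to be in the answer,
# `possible` tuples have not yet been ruled out. Processing more input can
# only move tuples out of `possible` (into `certain` or out entirely), so
# the approximate answer improves monotonically with time.

@dataclass
class ApproxRelation:
    certain: set = field(default_factory=set)
    possible: set = field(default_factory=set)

    def refine(self, tup, in_answer: bool):
        """Classify one possible tuple once it has been examined."""
        self.possible.discard(tup)
        if in_answer:
            self.certain.add(tup)

    def at_least(self, other: "ApproxRelation") -> bool:
        """Partial order: self is at least as accurate as other."""
        return (self.certain >= other.certain
                and self.certain | self.possible <= other.certain | other.possible)

# Approximate selection "salary > 100", examining one tuple at a time:
r = ApproxRelation(certain=set(), possible={("ann", 120), ("bob", 90)})
earlier = ApproxRelation(certain=set(r.certain), possible=set(r.possible))
r.refine(("ann", 120), in_answer=True)
r.refine(("bob", 90), in_answer=False)
```

Stopping at any point yields a usable answer whose bounds (`certain`, `certain | possible`) bracket the exact result.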
Microwave-assisted routes for rapid and efficient modification of layered perovskites.
Akbarian-Tefaghi, S; Wiley, J B
2018-02-27
Recent advances in exploiting microwave radiation in the topochemical modification of layered oxide perovskites are presented. Such methods work well for rapid bulk synthetic steps used in the production of novel inorganic-organic hybrids (protonation, grafting, intercalation, and in situ click reactions), exfoliation to produce dispersed nanosheets, and post-exfoliation processing to rapidly vary nanosheet surface groups. Compared to traditional methods that often take days, microwave methods can produce quality products in as little as 1-2 h.
High performance bonded neo magnets using high density compaction
NASA Astrophysics Data System (ADS)
Herchenroeder, J.; Miller, D.; Sheth, N. K.; Foo, M. C.; Nagarathnam, K.
2011-04-01
This paper presents a manufacturing method called Combustion Driven Compaction (CDC) for the manufacture of isotropic bonded NdFeB magnets (bonded Neo). Magnets produced by the CDC method have densities up to 6.5 g/cm3, which is 7-10% higher than commercially available bonded Neo magnets of the same shape. The performance of an actual seat motor with a representative CDC ring magnet is presented and compared with the seat motor performance with both commercial isotropic bonded Neo and anisotropic NdFeB rings of the same geometry. The comparisons are made at both room and elevated temperatures. The airgap flux for the magnet produced by the proposed method is 6% higher than that of the commercial isotropic bonded Neo magnet. After exposure to high temperature, the motor performance with this material is comparable to that with an anisotropic NdFeB magnet, owing to the superior thermal aging stability of isotropic NdFeB powders.
NASA Astrophysics Data System (ADS)
Ketut, Caturwati Ni; Agung, Sudrajat; Mekro, Permana; Heri, Haryanto; Bachtiar
2018-01-01
The increasing volume of waste, especially in urban areas, is a growing obstacle to a comfortable and healthy environment, and good waste handling is needed so that waste can instead benefit the community. Processing organic waste in a bio-digester to produce biogas as an energy source is one such effort. This research tested the flame characteristics of biogas generated from organic waste processed in digesters with various starters: cow dung, goat manure, and leachate obtained from the landfill at Bagendung-Cilegon. The flame height and maximum flame temperature were measured at the same biogas pressure. The measurements showed that the flame produced by the bio-digester with the leachate starter had the lowest flame height, while the greatest flame height was given by biogas from the digester with cow dung as a starter. The maximum flame temperature of biogas produced with leachate as a starter reached 1027 °C, which is 7% lower than that of biogas produced with cow dung as a starter. Cow dung was thus the best starter compared to goat manure and leachate; nevertheless, leachate, although not the best starter, did work for producing biogas by the bio-digester method.
A Comparison of Component and Factor Patterns: A Monte Carlo Approach.
ERIC Educational Resources Information Center
Velicer, Wayne F.; And Others
1982-01-01
Factor analysis, image analysis, and principal component analysis are compared with respect to the factor patterns they would produce under various conditions. The general conclusion is that the three methods produce equivalent results. (Author/JKS)
Lee, C J; Park, J H; Ciesielski, T E; Thomson, J G; Persing, J A
2008-11-01
A variety of new methods for treating photoaging have recently been introduced. There has been increasing interest in comparing the relative efficacy of multiple methods for photoaging. However, the efficacy of a single method is difficult to assess from the data reported in the literature. Photoaged hairless mice were randomly divided into seven treatment groups: control, retinoids (tretinoin and adapalene), lasers (585 nm and CO(2)), and combination groups (585 nm + adapalene and CO(2) + adapalene). Biopsies were taken from the treated regions, and the results were analyzed based on the repair zone. The repair zones of the various methods for photoaging were compared. Retinoids produced a wider repair zone than the control condition. The 585-nm and CO(2) laser resurfacing produced a result equivalent to that of the control condition. A combination of these lasers with adapalene produced a wider repair zone than the lasers alone, but the combination produced a result equivalent to that of adapalene alone. Retinoids are potent stimuli for neocollagen formation. The 585-nm or CO(2) laser alone did not induce more neocollagen than the control condition. In addition, no synergistic effect was observed with the combination treatments. The repair zone of the combination treatment is mainly attributable to adapalene.
Himanshu, H; Voelklein, M A; Murphy, J D; Grant, J; O'Kiely, P
2017-08-01
The manual manometric biochemical methane potential (mBMP) test uses the increase in pressure to calculate the gas produced. This gas production may be affected by the headspace volume in the incubation bottle and by the overhead pressure measurement and release (OHPMR) frequency. The biogas and methane yields of cellulose, barley, silage and slurry were compared with three incubation bottle headspace volumes (50, 90 and 180 ml; constant 70 ml total medium) and four OHPMR frequencies (daily, each third day, weekly and solely at the end of experiment). The methane yields of barley, silage and slurry were compared with those from an automated volumetric method (AMPTS). Headspace volume and OHPMR frequency effects on biogas yield were mediated mainly through headspace pressure, with the latter having a negative effect on the biogas yield measured and relatively little effect on methane yield. Two mBMP treatments produced methane yields equivalent to AMPTS. Copyright © 2017 Elsevier Ltd. All rights reserved.
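The manometric principle above rests on a simple ideal-gas conversion from overhead pressure to gas volume. A minimal sketch, assuming ideal-gas behavior; the function name and the example numbers are illustrative, not the paper's protocol:

```python
# Sketch: converting an overhead pressure reading in a manometric BMP
# bottle to a biogas volume at standard conditions (ideal gas assumed).

R = 8.314          # gas constant, J/(mol*K)
T_STD = 273.15     # standard temperature, K
P_STD = 101_325.0  # standard pressure, Pa

def biogas_volume_std(p_overhead_pa, headspace_ml, temp_k):
    """ml of gas (at 0 C, 1 atm) implied by an overhead pressure reading."""
    headspace_m3 = headspace_ml * 1e-6
    n = p_overhead_pa * headspace_m3 / (R * temp_k)  # moles accumulated
    return n * R * T_STD / P_STD * 1e6               # back to ml at STP

# 20 kPa overhead pressure in a 90 ml headspace at mesophilic 37 C (310.15 K)
v = biogas_volume_std(20_000, 90, 310.15)
```

At a fixed pressure reading, the gas volume scales with headspace volume, which is one reason headspace size and pressure-release frequency influence the measured yield.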
NASA Astrophysics Data System (ADS)
Sun, Pei; Fang, Z. Zak; Zhang, Ying; Xia, Yang
2017-12-01
Commercial spherical Ti powders for additive manufacturing applications are produced today by melt-atomization methods at relatively high costs. A meltless production method, called granulation-sintering-deoxygenation (GSD), was developed recently to produce spherical Ti alloy powder at a significantly reduced cost. In this new process, fine hydrogenated Ti particles are agglomerated to form spherical granules, which are then sintered to dense spherical particles. After sintering, the solid fully dense spherical Ti alloy particles are deoxygenated using novel low-temperature deoxygenation processes with either Mg or Ca. This technical communication presents results of 3D printing using GSD powder and the selective laser melting (SLM) technique. The results showed that tensile properties of parts fabricated from spherical GSD Ti-6Al-4V powder by SLM are comparable with typical mill-annealed Ti-6Al-4V. The characteristics of 3D printed Ti-6Al-4V from GSD powder are also compared with those of commercial materials.
Shen, L; Levine, S H; Catchen, G L
1987-07-01
This paper describes an optimization method for determining the beta dose distribution in tissue, and it describes the associated testing and verification. The method uses electron transport theory and optimization techniques to analyze the responses of a three-element thermoluminescent dosimeter (TLD) system. Specifically, the method determines the effective beta energy distribution incident on the dosimeter system, and thus the system performs as a beta spectrometer. Electron transport theory provides the mathematical model for performing the optimization calculation. In this calculation, parameters are determined that produce calculated doses for each of the chip/absorber components in the three-element TLD system. The resulting optimized parameters describe an effective incident beta distribution. This method can be used to determine the beta dose specifically at 7 mg·cm(-2) or at any depth of interest. The doses at 7 mg·cm(-2) in tissue determined by this method are compared to those experimentally determined using an extrapolation chamber. For a great variety of pure beta sources having different incident beta energy distributions, good agreement is found. The results are also compared to those produced by a commonly used empirical algorithm. Although the optimization method produces somewhat better results, the advantage of the optimization method is that its performance is not sensitive to the specific method of calibration.
Comparison of risk assessment procedures used in OCRA and ULRA methods
Roman-Liu, Danuta; Groborz, Anna; Tokarski, Tomasz
2013-01-01
The aim of this study was to analyse the convergence of two methods by comparing exposure and the assessed risk of developing musculoskeletal disorders at 18 repetitive task workstations. The already established occupational repetitive actions (OCRA) method and the recently developed upper limb risk assessment (ULRA) method produce correlated results (R = 0.84, p = 0.0001). A discussion of the factors that influence the values of the OCRA index and ULRA's repetitive task indicator shows that both similarities and differences in the results produced by the two methods can arise from the concepts that underlie them. The assessment procedure and the mathematical calculations that the basic parameters are subjected to are crucial to the results of risk assessment; the way the basic parameters are defined influences the assessment of exposure and risk to a lesser degree. The analysis also showed that large differences in load indicator values do not always result in different risk zones. Practitioner Summary: We focused on comparing methods that, even though based on different concepts, serve the same purpose. The results proved that different methods with different assumptions can produce similar assessments of upper limb load; sharp criteria in risk assessment are not the best solution. PMID:24041375
Children's Perception of Speech Produced in a Two-Talker Background
ERIC Educational Resources Information Center
Baker, Mallory; Buss, Emily; Jacks, Adam; Taylor, Crystal; Leibold, Lori J.
2014-01-01
Purpose: This study evaluated the degree to which children benefit from the acoustic modifications made by talkers when they produce speech in noise. Method: A repeated measures design compared the speech perception performance of children (5-11 years) and adults in a 2-talker masker. Target speech was produced in a 2-talker background or in…
Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H
2017-03-01
To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples (thoracic computerized tomography to detect aortic injury, and rapid prescreening of Papanicolaou smears to detect cytological abnormalities) to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity.
Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.
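The contrast between the normal approximation and the binomial likelihood can be illustrated with a fixed-effect toy example. The data and the simple pooling below are ours, not the study's random-effects models: with a zero cell requiring a continuity correction, inverse-variance pooling of logits is pulled toward 50% relative to the exact binomial MLE.

```python
import math

# Toy sensitivity data: (true positives, diseased subjects) per study
studies = [(9, 10), (18, 20), (2, 2)]

# Normal approximation: inverse-variance pooling of logit(sensitivity),
# with a 0.5 continuity correction where a cell is zero.
num = den = 0.0
for tp, n in studies:
    fn = n - tp
    if tp == 0 or fn == 0:          # continuity correction for zero cells
        tp, fn = tp + 0.5, fn + 0.5
    logit = math.log(tp / fn)
    var = 1 / tp + 1 / fn           # delta-method variance of the logit
    num += logit / var
    den += 1 / var
sens_normal = 1 / (1 + math.exp(-num / den))

# Binomial likelihood: the fixed-effect MLE is simply the pooled proportion.
sens_binom = sum(tp for tp, n in studies) / sum(n for _, n in studies)

# The normal approximation sits closer to 50% than the exact MLE.
assert sens_normal < sens_binom
```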
Abildgaard, Anders; Tovbjerg, Sara K; Giltay, Axel; Detemmerman, Liselot; Nissen, Peter H
2018-03-26
The lactase persistence phenotype is controlled by a regulatory enhancer region upstream of the lactase (LCT) gene. In northern Europe, specifically the -13910C>T variant has been associated with lactase persistence, whereas other persistence variants, e.g. -13907C>G and -13915T>G, have been identified in Africa and the Middle East. The aim of the present study was to compare a previously developed high resolution melting (HRM) assay with a novel method based on loop-mediated isothermal amplification and melting curve analysis (LAMP-MC), with both whole blood and DNA as input material. To evaluate the LAMP-MC method, we used 100 whole blood samples and 93 DNA samples in a two-tiered study. First, we studied the ability of the LAMP-MC method to produce specific melting curves for several variants of the LCT enhancer region. Next, we performed a blinded comparison between the LAMP-MC method and our existing HRM method with clinical samples of unknown genotype. The LAMP-MC method produced specific melting curves for the variants at positions -13909, -13910 and -13913, whereas the -13907C>G and -13915T>G variants produced indistinguishable melting profiles. The LAMP-MC assay is a simple method for lactase persistence genotyping and compares well with our existing HRM method. Copyright © 2018. Published by Elsevier B.V.
Effects of cooking method, cooking oil, and food type on aldehyde emissions in cooking oil fumes.
Peng, Chiung-Yu; Lan, Cheng-Hang; Lin, Pei-Chen; Kuo, Yi-Chun
2017-02-15
Cooking oil fumes (COFs) contain a mixture of chemicals. Of these, aldehydes draw great attention because several are considered carcinogenic, and the formation of long-chain aldehydes is related to the fatty acids in cooking oils. The objectives of this research were to compare aldehyde compositions and concentrations in COFs produced by different cooking oils, cooking methods, and food types, and to suggest better cooking practices. This study compared aldehydes in COFs produced using four cooking oils (palm oil, rapeseed oil, sunflower oil, and soybean oil), three cooking methods (stir frying, pan frying, and deep frying), and two foods (potato and pork loin) in a typical kitchen. Results showed that among the cooking methods, deep frying produced the highest total aldehyde emissions, followed by pan frying and then stir frying. Sunflower oil had the highest emissions of total aldehydes, regardless of cooking method and food type, whereas rapeseed oil and palm oil had relatively lower emissions. This study suggests that using gentle cooking methods (e.g., stir frying) and oils low in unsaturated fatty acids (e.g., palm oil or rapeseed oil) can reduce the production of aldehydes in COFs, especially long-chain aldehydes such as hexanal and t,t-2,4-DDE. Copyright © 2016 Elsevier B.V. All rights reserved.
Delineating baseflow contribution areas for streams - A model and methods comparison
NASA Astrophysics Data System (ADS)
Chow, Reynold; Frind, Michael E.; Frind, Emil O.; Jones, Jon P.; Sousa, Marcelo R.; Rudolph, David L.; Molson, John W.; Nowak, Wolfgang
2016-12-01
This study addresses the delineation of areas that contribute baseflow to a stream reach, also known as stream capture zones. Such areas can be delineated using standard well capture zone delineation methods, with three important differences: (1) natural gradients are smaller compared to those produced by supply wells and are therefore subject to greater numerical errors, (2) stream discharge varies seasonally, and (3) stream discharge varies spatially. This study focuses on model-related uncertainties due to model characteristics, discretization schemes, delineation methods, and particle tracking algorithms. The methodology is applied to the Alder Creek watershed in southwestern Ontario. Four different model codes are compared: HydroGeoSphere, WATFLOW, MODFLOW, and FEFLOW. In addition, two delineation methods are compared: reverse particle tracking and reverse transport, where the latter considers local-scale parameter uncertainty by using a macrodispersion term to produce a capture probability plume. The results from this study indicate that different models can calibrate acceptably well to the same data and produce very similar distributions of hydraulic head, but can produce different capture zones. The stream capture zone is found to be highly sensitive to the particle tracking algorithm. It was also found that particle tracking by itself, if applied to complex systems such as the Alder Creek watershed, would require considerable subjective judgement in the delineation of stream capture zones. Reverse transport is an alternative and more reliable approach that provides probability intervals for the baseflow contribution areas, taking uncertainty into account. The two approaches can be used together to enhance the confidence in the final outcome.
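Reverse particle tracking itself can be sketched as backward time-stepping through a velocity field. The field and the explicit Euler stepping below are illustrative assumptions, not the calibrated Alder Creek models:

```python
import numpy as np

# Sketch: reverse particle tracking in a steady 2-D velocity field.
# Particles released along a stream reach are stepped *backwards* in time
# (against the flow); their backward paths outline the capture zone.

def velocity(x, y):
    # illustrative regional flow converging on a stream at x = 0
    return np.array([-0.5 - 0.05 * x, -0.02 * y])

def track_backward(start, dt=0.1, n_steps=500):
    p = np.array(start, dtype=float)
    for _ in range(n_steps):
        p -= dt * velocity(*p)   # reverse tracking: subtract the velocity
    return p

# Release particles along the stream reach (x = 0, several y positions);
# their backward end points delineate the baseflow contribution area.
capture_points = [track_backward((0.0, y)) for y in np.linspace(-1, 1, 5)]
```

Swapping the Euler step for a higher-order integrator (or changing dt) shifts the end points, which is consistent with the study's finding that the delineated capture zone is highly sensitive to the particle tracking algorithm.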
Geometrically derived difference formulae for the numerical integration of trajectory problems
NASA Technical Reports Server (NTRS)
Mcleod, R. J. Y.; Sanz-Serna, J. M.
1981-01-01
The term 'trajectory problem' is taken to include problems that can arise, for instance, in connection with contour plotting, or in the application of continuation methods, or during phase-plane analysis. Geometrical techniques are used to construct difference methods for these problems to produce in turn explicit and implicit circularly exact formulae. Based on these formulae, a predictor-corrector method is derived which, when compared with a closely related standard method, shows improved performance. It is found that this latter method produces spurious limit cycles, and this behavior is partly analyzed. Finally, a simple variable-step algorithm is constructed and tested.
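For orientation, a standard predictor-corrector pair (Euler predictor, trapezoidal corrector) for such problems looks like the sketch below. This is the conventional scheme class being compared against, not the circularly exact formulae derived in the paper:

```python
import math

def predictor_corrector(f, x, y, h, n_steps):
    """Euler predictor + trapezoidal corrector (one corrector pass per step)."""
    for _ in range(n_steps):
        y_pred = y + h * f(x, y)                      # predictor: explicit Euler
        y = y + h / 2 * (f(x, y) + f(x + h, y_pred))  # corrector: trapezoid rule
        x += h
    return y

# Trajectory of y' = -y from y(0) = 1; the exact value at x = 1 is exp(-1)
y1 = predictor_corrector(lambda x, y: -y, 0.0, 1.0, 0.01, 100)
```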
Design of k-Space Channel Combination Kernels and Integration with Parallel Imaging
Beatty, Philip J.; Chang, Shaorong; Holmes, James H.; Wang, Kang; Brau, Anja C. S.; Reeder, Scott B.; Brittain, Jean H.
2014-01-01
Purpose: In this work, a new method is described for producing local k-space channel combination kernels using a small amount of low-resolution multichannel calibration data. Additionally, this work describes how these channel combination kernels can be combined with local k-space unaliasing kernels produced by the calibration phase of parallel imaging methods such as GRAPPA, PARS and ARC. Methods: Experiments were conducted to evaluate both the image quality and computational efficiency of the proposed method compared to a channel-by-channel parallel imaging approach with image-space sum-of-squares channel combination. Results: Results indicate comparable image quality overall, with some very minor differences seen in reduced field-of-view imaging. It was demonstrated that this method enables a speed up in computation time on the order of 3-16X for 32-channel data sets. Conclusion: The proposed method enables high quality channel combination to occur earlier in the reconstruction pipeline, reducing computational and memory requirements for image reconstruction. PMID:23943602
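The image-space sum-of-squares baseline mentioned above can be sketched in a few lines. Array shapes, the random coil data, and the uniform toy weights are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
nc, ny, nx = 8, 64, 64   # channels, image size (illustrative)
coil_images = (rng.standard_normal((nc, ny, nx))
               + 1j * rng.standard_normal((nc, ny, nx)))

# Image-space sum-of-squares: a nonlinear, sensitivity-free combination
sos = np.sqrt((np.abs(coil_images) ** 2).sum(axis=0))

# A linear channel combination (what a k-space combination kernel
# effectively implements) instead weights channels before summing;
# uniform toy weights stand in for coil sensitivities here.
weights = np.full((nc, ny, nx), 1 / np.sqrt(nc))
combined = (np.conj(weights) * coil_images).sum(axis=0)
```

Because the linear combination is a fixed per-pixel weighting, it can be folded into k-space kernels and applied before the inverse FFT, which is what allows channel combination to move earlier in the pipeline.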
Graphical method for comparative statistical study of vaccine potency tests.
Pay, T W; Hingley, P J
1984-03-01
Producers and consumers are interested in the intrinsic characteristics of vaccine potency assays when comparatively evaluating suitable experimental designs. A graphical method is developed that represents the precision of test results, the sensitivity of those results to changes in dosage, and the relevance of the results in how they reflect the protection afforded in the host species. The graphs can be constructed from Producer's scores and Consumer's scores on each of the scales of test score, antigen dose and probability of protection against disease. A method for calculating these scores is suggested and illustrated for single and multiple component vaccines, for tests which do or do not employ a standard reference preparation, and for tests which employ quantitative or quantal systems of scoring.
Automation of POST Cases via External Optimizer and "Artificial p2" Calculation
NASA Technical Reports Server (NTRS)
Dees, Patrick D.; Zwack, Mathew R.; Michelson, Diane K.
2017-01-01
During conceptual design, speed and accuracy are often at odds. In the realm of launch vehicles specifically, optimizing the ascent trajectory requires considerable analytical power and expertise. Experienced analysts working on familiar vehicles can produce optimal trajectories in a short time frame; however, whenever either "experienced" or "familiar" does not apply, the optimization process can become quite lengthy. In order to construct a vehicle-agnostic method, an established global optimization algorithm is needed. In this work the authors develop an "artificial" error term to map arbitrary control vectors to non-zero error, by which a global method can operate. Two global methods are compared alongside Design of Experiments and random sampling, and are shown to produce results comparable to analysis done by a human expert.
Salganik, Matthew J; Fazito, Dimitri; Bertoni, Neilane; Abdo, Alexandre H; Mello, Maeve B; Bastos, Francisco I
2011-11-15
One of the many challenges hindering the global response to the human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome (AIDS) epidemic is the difficulty of collecting reliable information about the populations most at risk for the disease. Thus, the authors empirically assessed a promising new method for estimating the sizes of most at-risk populations: the network scale-up method. Using 4 different data sources, 2 of which were from other researchers, the authors produced 5 estimates of the number of heavy drug users in Curitiba, Brazil. The authors found that the network scale-up and generalized network scale-up estimators produced estimates 5-10 times higher than estimates made using standard methods (the multiplier method and the direct estimation method using data from 2004 and 2010). Given that equally plausible methods produced such a wide range of results, the authors recommend that additional studies be undertaken to compare estimates based on the scale-up method with those made using other methods. If scale-up-based methods routinely produce higher estimates, this would suggest that scale-up-based methods are inappropriate for populations most at risk of HIV/AIDS or that standard methods may tend to underestimate the sizes of these populations.
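The basic scale-up estimator referred to above divides total reported ties to the hidden population by total personal network size and scales by the general population. The survey numbers below are made-up illustrations, not the Curitiba data:

```python
# Sketch of the basic network scale-up estimator:
# hidden-population size ~= N * (total ties to the hidden population)
#                             / (total personal network size).

def scale_up_estimate(y, d, population_size):
    """y: ties each respondent reports to the hidden population;
    d: each respondent's estimated personal network size."""
    return population_size * sum(y) / sum(d)

y = [0, 1, 2, 0, 3]            # reported heavy drug users known (toy data)
d = [250, 300, 400, 200, 350]  # personal network sizes (toy data)
est = scale_up_estimate(y, d, population_size=1_800_000)
```

The generalized version mentioned in the abstract additionally adjusts for barrier and transmission effects; this sketch shows only the unadjusted ratio estimator.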
Comparison of electro-fusion and intracytoplasmic nuclear injection methods in pig cloning.
Kurome, Mayuko; Fujimura, Tatsuya; Murakami, Hiroshi; Takahagi, Yoichi; Wako, Naohiro; Ochiai, Takashi; Miyazaki, Koji; Nagashima, Hiroshi
2003-01-01
This paper methodologically compares the electro-fusion (EF) and intracytoplasmic injection (ICI) methods, as well as simultaneous fusion/activation (SA) and delayed activation (DA), in somatic nuclear transfer in pigs using fetal fibroblast cells. Comparison of the remodeling pattern of donor nuclei after nuclear transfer by ICI or EF showed that a high rate (80-100%) of premature chromosome condensation occurred in both cases, whether or not Ca2+ was present in the fusion medium. Formation of pseudo-pronuclei tended to be lower for nuclear transfer performed by the ICI method (65% vs. 85-97%, p < 0.05). In vitro developmental potential of nuclear transfer embryos reconstructed with IVM oocytes using the EF method was higher than that of those produced by the ICI method (blastocyst formation: 19 vs. 5%, p < 0.05), and it was not improved by using in vivo-matured oocytes as recipient cytoplasts. Embryos produced using the SA protocol developed to blastocysts with the same efficiency as those produced under the DA protocol (11 vs. 12%). Use of the EF method in conjunction with SA proved an efficient approach for producing cloned pigs, as evidenced by the production of a normal cloned pig fetus. However, subtle differences in nuclear remodeling patterns between the SA and DA protocols may imply variations in their nuclear reprogramming efficiency.
Acker, Jason P; Hansen, Adele L; Kurach, Jayme D R; Turner, Tracey R; Croteau, Ioana; Jenkins, Craig
2014-10-01
Canadian Blood Services has been conducting quality monitoring of red blood cell (RBC) components since 2005, a period spanning the implementation of semiautomated component production. The aim was to compare the quality of RBC components produced before and after this production method change. Data from 572 RBC units were analyzed, categorized by production method: Method 1, RBC units produced by manual production methods; Method 2, RBC units produced by semiautomated production and the buffy coat method; and Method 3, RBC units produced by semiautomated production and the whole blood filtration method. RBC units were assessed using an extensive panel of in vitro tests, encompassing regulated quality control criteria such as hematocrit (Hct), hemolysis, and hemoglobin (Hb) levels, as well as adenosine triphosphate, 2,3-diphosphoglycerate, extracellular K(+) and Na(+) levels, methemoglobin, p50, RBC indices, and morphology. Throughout the study, all RBC units met mandated Canadian Standards Association guidelines for Hb and Hct, and most (>99%) met hemolysis requirements. However, there were significant differences among RBC units produced using different methods. Hb content was significantly lower in RBC units produced by Method 2 (51.5 ± 5.6 g/unit; p < 0.001). At expiry, hemolysis was lowest in Method 2-produced RBC units (p < 0.05) and extracellular K(+) levels were lowest in units produced by Method 1 (p < 0.001). While overall quality was similar before and after the production method change, the observed differences, although small, indicate a lack of equivalency across RBC products manufactured by different methods. © 2014 AABB.
Vessel extraction in retinal images using automatic thresholding and Gabor Wavelet.
Ali, Aziah; Hussain, Aini; Wan Zaki, Wan Mimi Diyana
2017-07-01
Retinal image analysis has been widely used for early detection and diagnosis of multiple systemic diseases. Accurate vessel extraction in retinal images is a crucial step towards a fully automated diagnosis system. This work presents an efficient unsupervised method for extracting blood vessels from retinal images by combining the existing Gabor Wavelet (GW) method with automatic thresholding. The green channel is extracted from the color retinal image and used to produce a Gabor feature image using GW. Both the green channel image and the Gabor feature image undergo a vessel-enhancement step in order to highlight blood vessels. Next, the two vessel-enhanced images are transformed to binary images using automatic thresholding before being combined to produce the final vessel output. Combining the images results in a significant improvement in blood vessel extraction performance compared to using either image individually. The effectiveness of the proposed method was proven via comparative analysis with existing methods, validated using the publicly available DRIVE database.
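The automatic-thresholding-and-combination step can be sketched with a pure-NumPy Otsu threshold. The random arrays below are synthetic stand-ins for real vessel-enhanced retinal images, and the OR-combination is one plausible reading of "combined"; the paper's exact fusion rule may differ:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Threshold maximizing between-class variance of the histogram."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                 # pixel count at or below each cut
    w1 = hist.sum() - w0                 # pixel count above each cut
    m0 = np.cumsum(hist * centers)       # cumulative intensity mass
    m1 = m0[-1] - m0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros(nbins)
    between[valid] = (w0[valid] * w1[valid] *
                      (m0[valid] / w0[valid] - m1[valid] / w1[valid]) ** 2)
    return centers[np.argmax(between)]

rng = np.random.default_rng(1)
green_enhanced = rng.random((32, 32))    # stand-in: enhanced green channel
gabor_enhanced = rng.random((32, 32))    # stand-in: enhanced Gabor features

mask_green = green_enhanced > otsu_threshold(green_enhanced)
mask_gabor = gabor_enhanced > otsu_threshold(gabor_enhanced)
vessels = mask_green | mask_gabor        # combined binary vessel map
```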
Parks, Sean; Holsinger, Lisa M.; Voss, Morgan; Loehman, Rachel A.; Robinson, Nathaniel P.
2018-01-01
Landsat-based fire severity datasets are an invaluable resource for monitoring and research purposes. These gridded fire severity datasets are generally produced with pre- and post-fire imagery to estimate the degree of fire-induced ecological change. Here, we introduce methods to produce three Landsat-based fire severity metrics using the Google Earth Engine (GEE) platform: the delta normalized burn ratio (dNBR), the relativized delta normalized burn ratio (RdNBR), and the relativized burn ratio (RBR). Our methods do not rely on time-consuming a priori scene selection and instead use a mean compositing approach in which all valid pixels (e.g. cloud-free) over a pre-specified date range (pre- and post-fire) are stacked and the mean value for each pixel over each stack is used to produce the resulting fire severity datasets. This approach demonstrates that fire severity datasets can be produced with relative ease and speed compared to the standard approach, in which one pre-fire and one post-fire scene are judiciously identified and used to produce the fire severity datasets. We also validate the GEE-derived fire severity metrics using field-based fire severity plots for 18 fires in the western US. These validations are compared to Landsat-based fire severity datasets produced using only one pre- and post-fire scene, which has been the standard approach in producing such datasets since their inception. Results indicate that the GEE-derived fire severity datasets show improved validation statistics compared to parallel versions in which only one pre-fire and post-fire scene are used. We provide code and a sample geospatial fire history layer to produce dNBR, RdNBR, and RBR for the 18 fires we evaluated. Although our approach requires that a geospatial fire history layer (i.e. fire perimeters) be produced independently and prior to applying our methods, we suggest our GEE methodology can reasonably be implemented on hundreds to thousands of fires, thereby increasing opportunities for fire severity monitoring and research across the globe.
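The three severity metrics have standard definitions in the fire-severity literature, sketched below from pre- and post-fire NBR composites. The exact scaling factors and offsets used by specific products may differ slightly, and the toy pixel values are illustrative:

```python
import numpy as np

def nbr(nir, swir2):
    """Normalized burn ratio from NIR and SWIR2 reflectance."""
    return (nir - swir2) / (nir + swir2)

def severity_metrics(nbr_pre, nbr_post):
    dnbr = nbr_pre - nbr_post                               # absolute change
    rdnbr = dnbr / np.sqrt(np.maximum(np.abs(nbr_pre), 0.001))  # relativized
    rbr = dnbr / (nbr_pre + 1.001)                          # relativized burn ratio
    return dnbr, rdnbr, rbr

# Toy pixels: dense pre-fire vegetation (high NBR) burned severely
nbr_pre = np.array([0.6, 0.4])
nbr_post = np.array([-0.2, 0.1])
dnbr, rdnbr, rbr = severity_metrics(nbr_pre, nbr_post)
```

The relativized forms (RdNBR, RBR) divide the absolute change by a function of pre-fire NBR so that sparsely vegetated pixels are not systematically scored as low severity.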
ERIC Educational Resources Information Center
Pijl, Sip Jan; Koster, Marloes; Hannink, Anne; Stratingh, Anna
2011-01-01
One of the methods used most often to assess students' friendships and friendship networks is the reciprocal nomination method. However, an often heard complaint is that this technique produces rather negative outcomes. This study compares the reciprocal nomination method with another method to assess students' friendships and friendship networks:…
Inference Control Mechanism for Statistical Database: Frequency-Imposed Data Distortions.
ERIC Educational Resources Information Center
Liew, Chong K.; And Others
1985-01-01
Introduces two data distortion methods (Frequency-Imposed Distortion, Frequency-Imposed Probability Distortion) and uses a Monte Carlo study to compare their performance with that of other distortion methods (Point Distortion, Probability Distortion). Indications that data generated by these two methods produce accurate statistics and protect…
Barrows, F.T.; Lellis, W.A.
2006-01-01
Two methods were developed for the production of larval fish diets. The first method, microextrusion marumerization (MEM), has been tested in laboratory feeding trials for many years and produces particles that are palatable and water stable. The second method, particle-assisted rotational agglomeration (PARA), produced diets that have lower density than diets produced by MEM. Each method was used to produce diets in the 250- to 400- and 400- to 700-μm ranges and compared with a reference diet (Fry Feed Kyowa [FFK]) for feeding larval walleye in two experiments. The effect of substituting 4% of the fish meal with freeze-dried Artemia fines was also investigated. In the first experiment, 30-d survival was greater (P < 0.05) for fish fed a diet produced by PARA without Artemia (49.1%) than for fish fed the same diet produced by MEM (27.6%). The addition of Artemia to a diet produced by MEM did not increase survival of larval walleye. Fish fed the reference diet had 24.4% survival. In the second experiment, there was an effect of both processing method and Artemia supplementation, and an interaction of these effects, on survival. Fish fed a diet produced by PARA without Artemia supplementation had 48.4% survival, and fish fed the same diet produced by MEM had only 19.6% survival. Inclusion of 4% freeze-dried Artemia improved (P < 0.04) survival of fish fed MEM particles but not those fed PARA particles. Fish fed FFK had greater weight gain than fish fed other diets in both experiments. Data indicate that the PARA method of diet processing produces smaller, lower density particles than the MEM process and that diets produced by the PARA process support higher survival of larval walleye with low capital and operating costs. © Copyright by the World Aquaculture Society 2006.
Woksepp, Hanna; Jernberg, Cecilia; Tärnberg, Maria; Ryberg, Anna; Brolund, Alma; Nordvall, Michaela; Olsson-Liljequist, Barbro; Wisell, Karin Tegmark; Monstein, Hans-Jürg; Nilsson, Lennart E.; Schön, Thomas
2011-01-01
Methods for the confirmation of nosocomial outbreaks of bacterial pathogens are complex, expensive, and time-consuming. Recently, a method based on ligation-mediated PCR (LM/PCR) using a low denaturation temperature which produces specific melting-profile patterns of DNA products has been described. Our objective was to further develop this method for real-time PCR and high-resolution melting analysis (HRM) in a single-tube system optimized in order to achieve results within 1 day. Following the optimization of LM/PCR for real-time PCR and HRM (LM/HRM), the method was applied for a nosocomial outbreak of extended-spectrum-beta-lactamase (ESBL)-producing and ST131-associated Escherichia coli isolates (n = 15) and control isolates (n = 29), including four previous clusters. The results from LM/HRM were compared to results from pulsed-field gel electrophoresis (PFGE), which served as the gold standard. All isolates from the nosocomial outbreak clustered by LM/HRM, which was confirmed by gel electrophoresis of the LM/PCR products and PFGE. Control isolates that clustered by LM/PCR (n = 4) but not by PFGE were resolved by confirmatory gel electrophoresis. We conclude that LM/HRM is a rapid method for the detection of nosocomial outbreaks of bacterial infections caused by ESBL-producing E. coli strains. It allows the analysis of isolates in a single-tube system within a day, and the discriminatory power is comparable to that of PFGE. PMID:21956981
Robust estimation of pulse wave transit time using group delay.
Meloni, Antonella; Zymeski, Heather; Pepe, Alessia; Lombardi, Massimo; Wood, John C
2014-03-01
To evaluate the efficiency of a novel transit time (Δt) estimation method from cardiovascular magnetic resonance flow curves. Flow curves were estimated from phase contrast images of 30 patients. Our method (TT-GD: transit time group delay) operates in the frequency domain and models the ascending aortic waveform as an input passing through a discrete-component "filter," producing the observed descending aortic waveform. The GD of the filter represents the average time delay (Δt) across individual frequency bands of the input. This method was compared with two previously described time-domain methods: TT-point using the half-maximum of the curves and TT-wave using cross-correlation. High temporal resolution flow images were studied at multiple downsampling rates to study the impact of differences in temporal resolution. Mean Δts obtained with the three methods were comparable. The TT-GD method was the most robust to reduced temporal resolution. While the TT-GD and the TT-wave produced comparable results for velocity and flow waveforms, the TT-point resulted in significantly shorter Δts when calculated from velocity waveforms (difference: 1.8±2.7 msec; coefficient of variability: 8.7%). The TT-GD method was the most reproducible, with an intraobserver variability of 3.4% and an interobserver variability of 3.7%. Compared to the traditional TT-point and TT-wave methods, the TT-GD approach was more robust to the choice of temporal resolution, waveform type, and observer. Copyright © 2013 Wiley Periodicals, Inc.
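The group-delay principle behind TT-GD can be illustrated numerically: when one waveform is (approximately) a delayed copy of another, the phase of their cross-spectrum falls linearly with frequency, and the negative phase slope recovers the delay. The sketch below is an illustrative reconstruction of that idea only, not the authors' implementation; the Gaussian test pulse, the number of frequency bands, and the naive pure-Python DFT are all assumptions.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (adequate for a short illustration)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def group_delay_transit_time(asc, desc, n_bands=5):
    """Average the cross-spectrum phase slope over the lowest frequency
    bands; for a pure delay each band independently recovers it."""
    n = len(asc)
    x, y = dft(asc), dft(desc)
    estimates = []
    for k in range(1, n_bands + 1):
        cross = y[k] * x[k].conjugate()      # cross-spectrum at band k
        phase = cmath.phase(cross)           # phase lag introduced by the delay
        estimates.append(-phase * n / (2 * math.pi * k))
    return sum(estimates) / len(estimates)

# Synthetic check: the "descending" waveform is the "ascending" one
# circularly delayed by 5 samples.
n_samples, true_delay = 64, 5
asc = [math.exp(-((t - 20) ** 2) / 50.0) for t in range(n_samples)]
desc = [asc[(t - true_delay) % n_samples] for t in range(n_samples)]
print(round(group_delay_transit_time(asc, desc), 3))  # → 5.0
```

Averaging over several bands is what makes the estimate robust to noise in any single frequency bin, which mirrors the robustness the abstract reports against reduced temporal resolution.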
NASA Astrophysics Data System (ADS)
Kilian-Meneghin, Josh; Xiong, Z.; Rudin, S.; Oines, A.; Bednarek, D. R.
2017-03-01
The purpose of this work is to evaluate methods for producing a library of 2D-radiographic images to be correlated to clinical images obtained during a fluoroscopically-guided procedure for automated patient-model localization. The localization algorithm will be used to improve the accuracy of the skin-dose map superimposed on the 3D patient-model of the real-time Dose-Tracking-System (DTS). For the library, 2D images were generated from CT datasets of the SK-150 anthropomorphic phantom using two methods: Schmid's 3D-visualization tool and Plastimatch's digitally-reconstructed-radiograph (DRR) code. Those images, as well as a standard 2D-radiographic image, were correlated to a 2D-fluoroscopic image of a phantom, which represented the clinical-fluoroscopic image, using the Corr2 function in Matlab. The Corr2 function takes two images and outputs the relative correlation between them, which is fed into the localization algorithm. Higher correlation means better alignment of the 3D patient-model with the patient image. In this instance, it was determined that the localization algorithm will succeed when Corr2 returns a correlation of at least 50%. The 3D-visualization tool images returned 55-80% correlation relative to the fluoroscopic-image, which was comparable to the correlation for the radiograph. The DRR images returned 61-90% correlation, again comparable to the radiograph. Both methods prove to be sufficient for the localization algorithm and can be produced quickly; however, the DRR method produces more accurate grey-levels. Using the DRR code, a library at varying angles can be produced for the localization algorithm.
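MATLAB's corr2 computes the Pearson correlation coefficient over all pixels of two equally sized images. For readers without MATLAB, a minimal pure-Python equivalent might look like this (the toy 3×3 images are invented for illustration):

```python
def corr2(a, b):
    """Pearson correlation over all pixels of two equally sized 2-D images,
    the quantity MATLAB's corr2 returns."""
    xs = [v for row in a for v in row]
    ys = [v for row in b for v in row]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

img = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
print(corr2(img, img))  # identical images correlate perfectly → 1.0
# A slightly perturbed copy still correlates strongly, but below 1.
noisy = [[v + (1 if (i + j) % 2 else -1) for j, v in enumerate(row)]
         for i, row in enumerate(img)]
print(round(corr2(img, noisy), 3))
```

In the workflow described above, this scalar is thresholded (at least 50%) to decide whether a candidate library image aligns with the clinical frame.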
Method for Surface Texturing Titanium Products
NASA Technical Reports Server (NTRS)
Banks, Bruce A. (Inventor)
1998-01-01
The present invention teaches a method of producing a textured surface upon an arbitrarily configured titanium or titanium alloy object for the purpose of improving bonding between the object and other materials such as polymer matrix composites and/or human bone for the direct in-growth of orthopaedic implants. The titanium or titanium alloy object is placed in an electrolytic cell having an ultrasonically agitated solution of sodium chloride therein whereby a pattern of uniform "pock mark" like pores or cavities are produced upon the object's surface. The process is very cost effective compared to other methods of producing rough surfaces on titanium and titanium alloy components. The surface textures produced by the present invention are etched directly into the parent metal at discrete sites separated by areas unaffected by the etching process. Bonding materials to such surface textures on titanium or titanium alloy can thus support a shear load even if adhesion of the bonding material is poor.
Lopez-Haro, S. A.; Leija, L.
2016-01-01
Objectives. To present a quantitative comparison of thermal patterns produced by the piston-in-a-baffle approach with those generated by a physiotherapy ultrasonic device and to show the dependency among thermal patterns and acoustic intensity distributions. Methods. The finite element (FE) method was used to model an ideal acoustic field and the produced thermal pattern to be compared with the experimental acoustic and temperature distributions produced by a real ultrasonic applicator. A thermal model using the measured acoustic profile as input is also presented for comparison. Temperature measurements were carried out with thermocouples inserted in muscle phantom. The insertion place of thermocouples was monitored with ultrasound imaging. Results. Modeled and measured thermal profiles were compared within the first 10 cm of depth. The ideal acoustic field did not adequately represent the measured field having different temperature profiles (errors 10% to 20%). Experimental field was concentrated near the transducer producing a region with higher temperatures, while the modeled ideal temperature was linearly distributed along the depth. The error was reduced to 7% when introducing the measured acoustic field as the input variable in the FE temperature modeling. Conclusions. Temperature distributions are strongly related to the acoustic field distributions. PMID:27999801
Comparing extraction rates of fossil fuel producers against global climate goals
NASA Astrophysics Data System (ADS)
Rekker, Saphira A. C.; O'Brien, Katherine R.; Humphrey, Jacquelyn E.; Pascale, Andrew C.
2018-06-01
Meeting global and national climate goals requires action and cooperation from a multitude of actors [1,2]. Current methods to define greenhouse gas emission targets for companies fail to acknowledge the unique influence of fossil fuel producers: combustion of reported fossil fuel reserves has the potential to push global warming above 2 °C by 2050, regardless of other efforts to mitigate climate change [3]. Here, we introduce a method to compare the extraction rates of individual fossil fuel producers against global climate targets, using two different approaches to quantify a burnable fossil fuel allowance (BFFA). BFFAs are calculated and compared with cumulative extraction since 2010 for the world's ten largest investor-owned companies and ten largest state-owned entities (SOEs), for oil and for gas, which together account for the majority of global oil and gas reserves and production. The results are strongly influenced by how BFFAs are quantified; allocating based on reserves favours SOEs over investor-owned companies, while allocating based on production would require most reduction to come from SOEs. Future research could refine the BFFA to account for equity, cost-effectiveness and emissions intensity.
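The proportional-allocation arithmetic behind a burnable fossil fuel allowance (BFFA) can be sketched briefly. Everything below is hypothetical: the producer names, budget, and share numbers are invented, and strict proportionality is an assumed simplification of the paper's two allocation approaches.

```python
def bffa(global_budget, shares):
    """Split a global extraction budget across producers in proportion
    to a chosen share metric (e.g. reserves or recent production)."""
    total = sum(shares.values())
    return {name: global_budget * s / total for name, s in shares.items()}

budget = 300.0  # hypothetical burnable oil budget, Gt
reserves = {"SOE-A": 150.0, "Investor-B": 50.0}   # reserves-based shares
production = {"SOE-A": 6.0, "Investor-B": 6.0}    # production-based shares

# A reserves basis favours the reserve-rich state-owned entity...
print(bffa(budget, reserves))    # → {'SOE-A': 225.0, 'Investor-B': 75.0}
# ...while a production basis splits the same budget evenly here.
print(bffa(budget, production))  # → {'SOE-A': 150.0, 'Investor-B': 150.0}
```

The two printouts mirror the abstract's headline finding: the choice of share metric, not the budget itself, determines which producers must cut most.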
Heinrichs, A; Huang, T D; Berhin, C; Bogaerts, P; Glupczynski, Y
2015-07-01
The purpose of this investigation was to compare several phenotypic methods, including combined disk tests (CDT) containing metallo-β-lactamase (MBL) inhibitors or cloxacillin, and the Carba NP test for the detection of carbapenemase-producing Pseudomonas aeruginosa (CPPA). A new CDT using imipenem (10 μg) ± cloxacillin 4,000 μg and the Carba NP test were evaluated to detect CPPA. In addition, four commercially available combined disks containing a carbapenem and ethylene-diamine-tetra-acetic acid (EDTA) or dipicolinic acid (DPA) as the inhibitor were tested in order to detect MBL-positive P. aeruginosa. All these phenotypic methods were evaluated on 188 imipenem non-susceptible P. aeruginosa (CPPA, n = 75) isolates divided into 26 well-characterized collection strains and 162 non-duplicate clinical isolates referred to the national reference laboratory in 2013. For the total of 188 isolates tested, CDT containing EDTA or DPA displayed high sensitivities (99%) and specificities (95%) for detecting MBL-producing isolates. CDT with cloxacillin showed a sensitivity and specificity of 97%/96% compared to 88%/99% for the Carba NP test in order to detect CPPA. For the 162 clinical isolates, CDT containing EDTA or DPA displayed a high negative predictive value (NPV) (99%) for detecting MBL-producing isolates. CDT with cloxacillin showed an NPV of 98%, compared to 95% for the Carba NP test in order to detect CPPA. In our setting, CDT associating imipenem ± EDTA or ± DPA performed best for the detection of MBL-producing P. aeruginosa. Imipenem/imipenem-cloxacillin test yielded good NPV to exclude the presence of MBL in imipenem non-susceptible isolates.
Comparison of prosthetic models produced by traditional and additive manufacturing methods.
Park, Jin-Young; Kim, Hae-Young; Kim, Ji-Hwan; Kim, Jae-Hong; Kim, Woong-Chul
2015-08-01
The purpose of this study was to verify the clinical feasibility of additive manufacturing by comparing the accuracy of four different manufacturing methods for metal coping: the conventional lost wax technique (CLWT); subtractive methods with wax blank milling (WBM); and two additive methods, multi jet modeling (MJM), and micro-stereolithography (Micro-SLA). Thirty study models were created using an acrylic model with the maxillary upper right canine, first premolar, and first molar teeth. Based on the scan files from a non-contact blue light scanner (Identica; Medit Co. Ltd., Seoul, Korea), thirty cores were produced using the WBM, MJM, and Micro-SLA methods, respectively, and another thirty frameworks were produced using the CLWT method. To measure the marginal and internal gap, the silicone replica method was adopted, and the silicone images obtained were evaluated using a digital microscope (KH-7700; Hirox, Tokyo, Japan) at 140X magnification. Analyses were performed using two-way analysis of variance (ANOVA) and Tukey post hoc test (α=.05). The mean marginal gaps and internal gaps showed significant differences according to tooth type (P<.001 and P<.001, respectively) and manufacturing method (P<.037 and P<.001, respectively). Unlike the WBM and MJM methods, Micro-SLA did not show a significant difference from CLWT in mean marginal gap. The mean values of gaps resulting from the four different manufacturing methods were within a clinically allowable range, and, thus, the clinical use of additive manufacturing methods is acceptable as an alternative to the traditional lost wax technique and subtractive manufacturing.
Delineating baseflow contribution areas for streams - A model and methods comparison.
Chow, Reynold; Frind, Michael E; Frind, Emil O; Jones, Jon P; Sousa, Marcelo R; Rudolph, David L; Molson, John W; Nowak, Wolfgang
2016-12-01
This study addresses the delineation of areas that contribute baseflow to a stream reach, also known as stream capture zones. Such areas can be delineated using standard well capture zone delineation methods, with three important differences: (1) natural gradients are smaller compared to those produced by supply wells and are therefore subject to greater numerical errors, (2) stream discharge varies seasonally, and (3) stream discharge varies spatially. This study focuses on model-related uncertainties due to model characteristics, discretization schemes, delineation methods, and particle tracking algorithms. The methodology is applied to the Alder Creek watershed in southwestern Ontario. Four different model codes are compared: HydroGeoSphere, WATFLOW, MODFLOW, and FEFLOW. In addition, two delineation methods are compared: reverse particle tracking and reverse transport, where the latter considers local-scale parameter uncertainty by using a macrodispersion term to produce a capture probability plume. The results from this study indicate that different models can calibrate acceptably well to the same data and produce very similar distributions of hydraulic head, but can produce different capture zones. The stream capture zone is found to be highly sensitive to the particle tracking algorithm. It was also found that particle tracking by itself, if applied to complex systems such as the Alder Creek watershed, would require considerable subjective judgement in the delineation of stream capture zones. Reverse transport is an alternative and more reliable approach that provides probability intervals for the baseflow contribution areas, taking uncertainty into account. The two approaches can be used together to enhance the confidence in the final outcome. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Relating marten scat contents to prey consumed
William J. Zielinski
1986-01-01
A European ferret, Mustela putorius furo, was fed typical marten food items to discover the relationship between prey weight and number of scats produced per unit weight of prey. A correction factor was derived that was used in the analysis of pine marten, Martes americana, scats to produce a method capable of comparing foods on a...
Method for rapidly producing microporous and mesoporous materials
Coronado, Paul R.; Poco, John F.; Hrubesh, Lawrence W.; Hopper, Robert W.
1997-01-01
An improved, rapid process is provided for making microporous and mesoporous materials, including aerogels and pre-ceramics. A gel or gel precursor is confined in a sealed vessel to prevent structural expansion of the gel during the heating process. This confinement allows the gelation and drying processes to be greatly accelerated, and significantly reduces the time required to produce a dried aerogel compared to conventional methods. Drying may be performed either by subcritical drying with a pressurized fluid to expel the liquid from the gel pores or by supercritical drying. The rates of heating and decompression are significantly higher than for conventional methods.
NASA Astrophysics Data System (ADS)
Ainiwaer, A.; Gurrola, H.
2018-03-01
Common conversion point stacking or migration of receiver functions (RFs) and H-k (H is depth and k is Vp/Vs) stacking of RFs has become a common method to study the crust and upper mantle beneath broad-band three-component seismic stations. However, it can be difficult to interpret Pds RFs due to interference between the Pds, PPds and PSds phases, especially in the mantle portion of the lithosphere. We propose a phase separation method to isolate the prominent phases of the RFs and produce separate Pds, PPds and PSds `phase specific' receiver functions (referred to as PdsRFs, PPdsRFs and PSdsRFs, respectively) by deconvolution of the wavefield rather than single seismograms. One of the most important products of this deconvolution method is to produce Ps receiver functions (PdsRFs) that are free of crustal multiples. This is accomplished by using H-k analysis to identify specific phases in the wavefield from all seismograms recorded at a station which enables development of an iterative deconvolution procedure to produce the above-mentioned phase specific RFs. We refer to this method as wavefield iterative deconvolution (WID). The WID method differentiates and isolates different RF phases by exploiting their differences in moveout curves across the entire wave front. We tested the WID by applying it to synthetic seismograms produced using a modified version of the PREM velocity model. The WID effectively separates phases from each stacked RF in synthetic data. We also applied this technique to produce RFs from seismograms recorded at ARU (a broad-band station in Arti, Russia). The phase specific RFs produced using WID are easier to interpret than traditional RFs. The PdsRFs computed using WID are the most improved, owing to the distinct shape of its moveout curves as compared to the moveout curves for the PPds and PSds phases. The importance of this WID method is most significant in reducing interference between phases for depths of less than 300 km. 
Phases from deeper layers (i.e. P660s as compared to PP220s) are less likely to be misinterpreted because the large amount of moveout causes the appropriate phases to stack coherently if there is sufficient distribution in ray parameter. WID is most effective in producing clean PdsRFs that are relatively free of reverberations whereas PPdsRFs and PSdsRFs retain contamination from reverberations.
USDA-ARS?s Scientific Manuscript database
Shiga toxin-producing E. coli (STEC) O157:H7 and serogroups O26, O45, O103, O111, O121, and O145 are often referred to as the “top 7” STEC, and these have been declared as adulterants in beef by the USDA Food Safety and Inspection Service (FSIS). The aim of this work was to compare the methods des...
Speech-Enabled Interfaces for Travel Information Systems with Large Grammars
NASA Astrophysics Data System (ADS)
Zhao, Baoli; Allen, Tony; Bargiela, Andrzej
This paper introduces three grammar-segmentation methods capable of handling the large grammar issues associated with producing a real-time speech-enabled VXML bus travel application for London. Large grammars tend to produce relatively slow recognition interfaces and this work shows how this limitation can be successfully addressed. Comparative experimental results show that the novel last-word recognition based grammar segmentation method described here achieves an optimal balance between recognition rate, speed of processing and naturalness of interaction.
Manuilov, Anton V; Radziejewski, Czeslaw H
2011-01-01
Comparability studies lie at the heart of assessments that evaluate differences amongst manufacturing processes and stability studies of protein therapeutics. Low resolution chromatographic and electrophoretic methods facilitate quantitation, but do not always yield detailed insight into the effect of the manufacturing change or environmental stress. Conversely, mass spectrometry (MS) can provide high resolution information on the molecule, but conventional methods are not very quantitative. This gap can be reconciled by use of a stable isotope-tagged reference standard (SITRS), a version of the analyte protein that is uniformly labeled with 13C6-arginine and 13C6-lysine. The SITRS serves as an internal control that is trypsin-digested and analyzed by liquid chromatography (LC)-MS with the analyte sample. The ratio of the ion intensities of each unlabeled and labeled peptide pair is then compared to that of other sample(s). A comparison of these ratios provides a readily accessible way to spot even minute differences among samples. In a study of a monoclonal antibody (mAb) spiked with varying amounts of the same antibody bearing point mutations, peptides containing the mutations were readily identified and quantified at concentrations as low as 2% relative to unmodified peptides. The method was robust, reproducible and produced a linear response for every peptide that was monitored. The method was also successfully used to distinguish between two batches of a mAb that were produced in two different cell lines while two batches produced from the same cell line were found to be highly comparable. Finally, the use of the SITRS method in the comparison of two stressed mAb samples enabled the identification of sites susceptible to deamidation and oxidation, as well as their quantitation. 
The experimental results indicate that use of a SITRS in a peptide mapping experiment with MS detection enables sensitive and quantitative comparability studies of proteins at high resolution. PMID:21654206
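The quantitative core of the SITRS approach is simple ratio arithmetic: each peptide's unlabeled/labeled ion-intensity ratio is formed within a sample (cancelling run-to-run variation), and the ratio-of-ratios between samples flags peptides that differ. The sketch below is a hypothetical illustration; the peptide names, intensities, and the 10% tolerance are invented, not values from the study.

```python
def sitrs_compare(sample_a, sample_b, tolerance=0.10):
    """Flag peptides whose unlabeled/labeled intensity ratio differs
    between two samples by more than `tolerance` (relative)."""
    flagged = []
    for peptide in sample_a:
        ra = sample_a[peptide][0] / sample_a[peptide][1]  # unlabeled / labeled
        rb = sample_b[peptide][0] / sample_b[peptide][1]
        if abs(ra / rb - 1.0) > tolerance:
            flagged.append(peptide)
    return flagged

# peptide -> (unlabeled intensity, labeled SITRS intensity), arbitrary units
batch1 = {"PEP1": (1.00e6, 0.95e6), "PEP2": (2.10e6, 2.00e6)}
batch2 = {"PEP1": (0.52e6, 0.50e6), "PEP2": (1.30e6, 2.05e6)}  # PEP2 modified
print(sitrs_compare(batch1, batch2))  # → ['PEP2']
```

Because both batches are measured against the same labeled reference, absolute intensity differences between runs (here roughly two-fold for PEP1) cancel out, and only genuine compositional changes survive the comparison.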
Testing an automated method to estimate ground-water recharge from streamflow records
Rutledge, A.T.; Daniel, C.C.
1994-01-01
The computer program, RORA, allows automated analysis of streamflow hydrographs to estimate ground-water recharge. Output from the program, which is based on the recession-curve-displacement method (often referred to as the Rorabaugh method, after whom the program is named), was compared to estimates of recharge obtained from a manual analysis of 156 years of streamflow record from 15 streamflow-gaging stations in the eastern United States. Statistical tests showed that there was no significant difference between paired estimates of annual recharge by the two methods. Tests of results produced by the four workers who performed the manual method showed that results can differ significantly between workers. Twenty-two percent of the variation between manual and automated estimates could be attributed to having different workers perform the manual method. The program RORA will produce estimates of recharge equivalent to estimates produced manually, greatly increase the speed of analysis, and reduce the subjectivity inherent in manual analysis.
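The recession-curve-displacement relation that RORA automates is commonly written as R = 2(Q2 − Q1)K/2.3026, where Q1 and Q2 are ground-water discharges extrapolated to the critical time from the pre- and post-peak recessions and K is the recession index (time per log cycle of decline). The one-liner below is a sketch of that published relation with invented numbers, not RORA's actual code:

```python
def rorabaugh_recharge(q2, q1, recession_index):
    """Recharge associated with one streamflow peak, from the displacement
    of the recession curve: R = 2 * (q2 - q1) * K / 2.3026."""
    return 2.0 * (q2 - q1) * recession_index / 2.3026

# Hypothetical event: extrapolated recession flow rises from 10 to 16 cfs,
# recession index K = 45 days; the result is in cfs-days (a volume).
print(round(rorabaugh_recharge(16.0, 10.0, 45.0), 1))  # → 234.5
```

Automating this per-peak calculation across a full record is what lets RORA replace the slow, worker-dependent manual hydrograph analysis described above.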
Wan Nor Amilah, W A W; Noor Izani, N J; Ng, W K; Ashraful Haq, J
2012-12-01
Clinical utilization of carbapenems remains under threat with the emergence of acquired carbapenemase-producing bacteria, particularly metallo-β-lactamases (MBL). Rapid detection of MBL-producing Gram-negative bacilli is essential to prevent their widespread dissemination. However, no standardized detection method is available for routine laboratory use. The purpose of the study was to evaluate a chelating-agent based double disk synergic test and disk potentiation test for MBL-producing strain detection and to determine the isolation rate of MBL-producing Pseudomonas aeruginosa and Acinetobacter from clinical samples in our tertiary teaching hospital. A total of 22 and 66 imipenem-resistant P. aeruginosa and Acinetobacter isolates respectively were tested with ceftazidime (CAZ) disk by modified double disk synergic test and disk potentiation test using ethylenediaminetetraacetic acid (EDTA) and 2-mercaptopropionic acid (as chelating agents) to detect MBL production. The tests were compared with EDTA-phenanthroline-imipenem (EPI) microdilution MIC test as gold standard. MBL positive strains were detected in 17 (77.3%) P. aeruginosa and 2 (3.5%) Acinetobacter isolates. The disk potentiation test with 2-mercaptopropionic acid (2-MPA) dilution of 1:12 provided the most acceptable sensitivities and specificities (88.2% sensitivity and 100% specificity in P. aeruginosa; 100% sensitivity and specificity in Acinetobacter) compared to other screening methods used in this study. This study provided useful information on the local prevalence of MBL-producing P. aeruginosa and Acinetobacter in our hospital. Disc potentiation test with CAZ/2-MPA disc appears to be reliable and convenient MBL detection method in the routine clinical laboratory.
Feasibility of zero tolerance for Salmonella on raw poultry
USDA-ARS?s Scientific Manuscript database
Ideally, poultry producing countries around the globe should use internationally standardized sampling methods for Salmonella. It is difficult to compare prevalence data from country-to-country when sample plan, sample type, sample frequency and laboratory media along with methods differ. The Europe...
The Measurement of Magnetic Fields
ERIC Educational Resources Information Center
Berridge, H. J. J.
1973-01-01
Discusses five experimental methods used by senior high school students to provide an accurate calibration curve of magnet current against the magnetic flux density produced by an electromagnet. Compares the relative merits of the five methods, both as measurements and from an educational viewpoint. (JR)
Approaches for Evaluating the Usability of Assistive Technology Product Prototypes
ERIC Educational Resources Information Center
Choi, Young Mi; Sprigle, Stephen H.
2011-01-01
User input is an important component to help guide designers in producing a more usable product. Evaluation of prototypes is one method of obtaining this input, but methods for evaluating assistive technology prototypes during design have not been adequately described or evaluated. This project aimed to compare different methods of evaluating…
NASA Astrophysics Data System (ADS)
Yuliasmi, S.; Pardede, T. R.; Nerdy; Syahputra, H.
2017-03-01
Oil palm midrib is one of the wastes generated by palm plants, containing 34.89% cellulose. Cellulose has the potential to produce microcrystalline cellulose, which can be used as an excipient in tablet formulations by direct compression. Microcrystalline cellulose is the result of a controlled hydrolysis of alpha cellulose, so the alpha cellulose extraction process of oil palm midrib greatly affects the quality of the resulting microcrystalline cellulose. The purpose of this study was to compare the microcrystalline cellulose produced from alpha cellulose extracted from oil palm midrib by two different methods. The first method uses sodium hydroxide for delignification. The second method uses a mixture of nitric acid and sodium nitrite, followed by sodium hydroxide and sodium sulfite. Microcrystalline cellulose obtained by both methods was characterized separately, including organoleptic test, color reagents test, dissolution test, pH test and determination of functional groups by FTIR. The results were compared with microcrystalline cellulose which has been available on the market. The characterization results showed that microcrystalline cellulose obtained by the first method has the most similar characteristics to the microcrystalline cellulose available in the market.
NASA Astrophysics Data System (ADS)
Ono, Ryo; Tokumitsu, Yusuke; Zen, Shungo; Yonemori, Seiya
2014-11-01
We propose a method for producing OH, H, O, O3, and O2(a1Δg) using the vacuum ultraviolet photodissociation of H2O and O2 as a tool for studying the reaction processes of plasma medicine. For photodissociation, an H2O/He or O2/He mixture flowing in a quartz tube is irradiated by a Xe2 or Kr2 excimer lamp. The effluent can be applied to a target. Simulations show that the Xe2 lamp method can produce OH radicals within 0.1-1 ppm in the effluent at 5 mm from a quartz tube nozzle. This is comparable to those produced by a helium atmospheric-pressure plasma jet (He-APPJ) currently used in plasma medicine. The Xe2 lamp method also produces H atoms of, at most, 6 ppm. In contrast, the maximum O densities produced by the Xe2 and Kr2 lamp methods are 0.15 ppm and 2.5 ppm, respectively; these are much lower than those from He-APPJ (several tens of ppm). Both lamp methods can produce ozone at concentrations above 1000 ppm and O2(a1Δg) at tens of ppm. The validity of the simulations is verified by measuring the O3 and OH densities produced by the Xe2 lamp method using ultraviolet absorption and laser-induced fluorescence. The differences between the measured and simulated densities for O3 and OH are 20% and factors of 3-4, respectively.
Method for producing highly reflective metal surfaces
Arnold, J.B.; Steger, P.J.; Wright, R.R.
1982-03-04
The invention is a novel method for producing mirror surfaces which are extremely smooth and which have high optical reflectivity. The method includes depositing, by electrolysis, an amorphous layer of nickel on an article and then diamond-machining the resulting nickel surface to increase its smoothness and reflectivity. The machined nickel surface then is passivated with respect to the formation of bonds with electrodeposited nickel. Nickel then is electrodeposited on the passivated surface to form a layer of electroplated nickel whose inside surface is a replica of the passivated surface. The electroplated nickel layer then is separated from the passivated surface. The mandrel then may be re-passivated and provided with a layer of electrodeposited nickel, which is then recovered from the mandrel, providing a second replica. The mandrel can be re-used in this way to provide many such replicas. As compared with producing each mirror-finished article by plating and diamond-machining, the new method is faster and less expensive.
Method for producing highly reflective metal surfaces
Arnold, Jones B.; Steger, Philip J.; Wright, Ralph R.
1983-01-01
The invention is a novel method for producing mirror surfaces which are extremely smooth and which have high optical reflectivity. The method includes electrolessly depositing an amorphous layer of nickel on an article and then diamond-machining the resulting nickel surface to increase its smoothness and reflectivity. The machined nickel surface then is passivated with respect to the formation of bonds with electrodeposited nickel. Nickel then is electrodeposited on the passivated surface to form a layer of electroplated nickel whose inside surface is a replica of the passivated surface. The electroplated nickel layer then is separated from the passivated surface. The mandrel then may be re-passivated and provided with a layer of electrodeposited nickel, which is then recovered from the mandrel providing a second replica. The mandrel can be so re-used to provide many such replicas. As compared with producing each mirror-finished article by plating and diamond-machining, the new method is faster and less expensive.
Determining Reduced Order Models for Optimal Stochastic Reduced Order Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonney, Matthew S.; Brake, Matthew R.W.
2015-08-01
The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against one another and against the truth model for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses hyper-dual numbers to determine the sensitivities, and a meta-model method that uses the hyper-dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction being the least accurate. The models are also compared on the time required to evaluate each model, where the meta-model requires the least computation time by a significant amount. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use depends on the availability of the high-fidelity model and how many evaluations can be performed. The output distribution is examined using a large Monte Carlo simulation along with reduced simulations using Latin hypercube sampling and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to exhaustive sampling for the majority of methods.
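The hyper-dual approach mentioned above can be illustrated with a minimal sketch (not the report's implementation): a hyper-dual number carries two infinitesimal parts and their product, so a single function evaluation yields exact first and second derivatives, free of finite-difference truncation and cancellation error.

```python
class HyperDual:
    """Minimal hyper-dual number: value + eps1*d1 + eps2*d2 + eps1*eps2*d12.
    Evaluating f(HyperDual(x, 1, 1, 0)) returns f(x) in .real, f'(x) in .d1
    (and .d2), and f''(x) in .d12 -- exactly, with no step-size error."""
    def __init__(self, real, d1=0.0, d2=0.0, d12=0.0):
        self.real, self.d1, self.d2, self.d12 = real, d1, d2, d12

    def __add__(self, other):
        other = other if isinstance(other, HyperDual) else HyperDual(other)
        return HyperDual(self.real + other.real, self.d1 + other.d1,
                         self.d2 + other.d2, self.d12 + other.d12)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, HyperDual) else HyperDual(other)
        return HyperDual(self.real * other.real,
                         self.real * other.d1 + self.d1 * other.real,
                         self.real * other.d2 + self.d2 * other.real,
                         self.real * other.d12 + self.d1 * other.d2
                         + self.d2 * other.d1 + self.d12 * other.real)
    __rmul__ = __mul__

def f(u):            # toy response function standing in for a structural model
    return u * u * u + 2.0 * u

x = HyperDual(2.0, 1.0, 1.0, 0.0)
y = f(x)
print(y.real, y.d1, y.d12)   # f(2)=12, f'(2)=14, f''(2)=12
```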
2009-01-01
Background Increasing reports of carbapenem-resistant Acinetobacter baumannii infections are of serious concern. Reliable susceptibility testing results remain a critical issue for clinical outcomes. Automated systems are increasingly used for species identification and susceptibility testing. This study was organized to evaluate the accuracy of three widely used automated susceptibility testing methods for testing the imipenem susceptibility of A. baumannii isolates, by comparison with validated test methods. Methods One hundred and twelve selected clinical isolates of A. baumannii collected between January 2003 and May 2006 were tested to confirm imipenem susceptibility results. Strains were tested against imipenem by the reference broth microdilution (BMD), disk diffusion (DD), Etest, BD Phoenix, MicroScan WalkAway, and Vitek 2 automated systems. Data were analysed by comparing the results from each test method to those produced by the reference BMD test. Results MicroScan correctly identified all A. baumannii strains, while Vitek 2 failed to identify one strain and Phoenix failed to identify two strains and misidentified two others. Eighty-seven of the strains (78%) were resistant to imipenem by BMD. Etest, Vitek 2 and BD Phoenix produced acceptable error rates when tested against imipenem. Etest showed the best performance, with only two minor errors (1.8%). Vitek 2 produced eight minor errors (7.2%). BD Phoenix produced three major errors (2.8%). DD produced two very major errors (1.8%) (slightly (0.3%) above the acceptable limit) and three major errors (2.7%). MicroScan showed the worst performance in susceptibility testing, with unacceptable error rates: 28 very major errors (25%) and 50 minor errors (44.6%). Conclusion Reporting errors for A. baumannii against imipenem do exist in susceptibility testing systems.
We suggest that clinical laboratories using the MicroScan system for routine testing consider using a second, independent antimicrobial susceptibility testing method to validate imipenem susceptibility. Etest, wherever available, may be used as an easy method to confirm imipenem susceptibility. PMID:19291298
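The very major/major/minor error terminology above follows the usual convention for comparing a test method against the reference BMD result. A sketch of that bookkeeping, with illustrative data rather than the study's isolates:

```python
def error_category(reference, test):
    """Classify one isolate's discrepancy between a test method and the
    reference broth microdilution result, using the usual CLSI-style scheme
    (S = susceptible, I = intermediate, R = resistant)."""
    if reference == test:
        return "agreement"
    if reference == "R" and test == "S":
        return "very major"      # false susceptibility: most dangerous clinically
    if reference == "S" and test == "R":
        return "major"           # false resistance
    return "minor"               # any discrepancy involving an I result

def error_rates(pairs):
    """pairs: list of (reference, test) result pairs; returns percent per category."""
    counts = {}
    for ref, tst in pairs:
        cat = error_category(ref, tst)
        counts[cat] = counts.get(cat, 0) + 1
    return {cat: 100.0 * c / len(pairs) for cat, c in counts.items()}

# toy data: 8 of 10 agree, one very major error, one minor error
results = [("R", "R")] * 5 + [("S", "S")] * 3 + [("R", "S"), ("S", "I")]
print(error_rates(results))  # {'agreement': 80.0, 'very major': 10.0, 'minor': 10.0}
```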
The comparative method of language acquisition research: a Mayan case study.
Pye, Clifton; Pfeiler, Barbara
2014-03-01
This article demonstrates how the Comparative Method can be applied to cross-linguistic research on language acquisition. The Comparative Method provides a systematic procedure for organizing and interpreting acquisition data from different languages. The Comparative Method controls for cross-linguistic differences at all levels of the grammar and is especially useful in drawing attention to variation in contexts of use across languages. This article uses the Comparative Method to analyze the acquisition of verb suffixes in two Mayan languages: K'iche' and Yucatec. Mayan status suffixes simultaneously mark distinctions in verb transitivity, verb class, mood, and clause position. Two-year-old children acquiring K'iche' and Yucatec Maya accurately produce the status suffixes on verbs, in marked distinction to the verbal prefixes for aspect and agreement. We find evidence that the contexts of use for the suffixes differentially promote the children's production of cognate status suffixes in K'iche' and Yucatec.
Cupping - is it reproducible? Experiments about factors determining the vacuum.
Huber, R; Emerich, M; Braeunig, M
2011-04-01
Cupping is a traditional method for treating pain that is now being investigated in clinical studies. Because the methods for producing the vacuum vary considerably, we tested their reproducibility. In a first set of experiments (study 1), four methods for producing the vacuum (lighter flame 2 cm (LF1), lighter flame 4 cm (LF2), alcohol flame (AF) and mechanical suction with a balloon (BA)) were compared in 50 trials each. The cupping glass was prepared with an outlet and stop-cock, and the vacuum was measured with a pressure gauge after the cup was set to a soft rubber pad. In a second series of experiments (study 2), we investigated the stability of pressures in 20 consecutive trials by two experienced cupping practitioners and ten beginners using method AF. In study 1, all four methods yielded consistent pressures. Large differences in magnitude were, however, observed between methods (mean pressures -200±30 hPa with LF1, -310±30 hPa with LF2, -560±30 hPa with AF, and -270±16 hPa with BA). With method BA the standard deviation was reduced by a factor of 2 compared to the flame methods. In study 2, beginners had considerably more difficulty obtaining a stable pressure yield than experienced cupping practitioners, showing a distinct learning curve before reaching expert levels after about 10-20 trials. Cupping is reproducible if the exact method is described in detail. Mechanical suction with a balloon has the best reproducibility. Beginners need at least 10-20 trials to produce stable pressures. Copyright © 2010 Elsevier Ltd. All rights reserved.
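Using only the summary figures reported above (the raw trial data are not reproduced here), a small script can contrast the methods' absolute and relative spread:

```python
# Reported mean vacuum and standard deviation (hPa) for each method in study 1
methods = {
    "LF1 (lighter flame 2 cm)": (-200, 30),
    "LF2 (lighter flame 4 cm)": (-310, 30),
    "AF  (alcohol flame)":      (-560, 30),
    "BA  (balloon suction)":    (-270, 16),
}

def cv_percent(mean, sd):
    """Coefficient of variation: spread relative to the magnitude of the mean."""
    return 100.0 * sd / abs(mean)

for name, (mean, sd) in methods.items():
    print(f"{name}: SD = {sd} hPa, CV = {cv_percent(mean, sd):.1f}%")

# The balloon's absolute SD (16 hPa) is roughly half that of the flame
# methods (30 hPa), which is the reproducibility claim made in the abstract.
best = min(methods, key=lambda k: methods[k][1])
```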
Comparison of salivary collection and processing methods for quantitative HHV-8 detection.
Speicher, D J; Johnson, N W
2014-10-01
Saliva is a proven diagnostic fluid for the qualitative detection of infectious agents, but the accuracy of viral load determinations is unknown. Stabilising fluids impede nucleic acid degradation, compared with collection onto ice and then freezing, and we have shown that the DNA Genotek P-021 prototype kit (P-021) can produce high-quality DNA after 14 months of storage at room temperature. Here we evaluate the quantitative capability of 10 collection/processing methods. Unstimulated whole mouth fluid was spiked with a mixture of HHV-8 cloned constructs, 10-fold serial dilutions were produced, and samples were extracted and then examined with quantitative PCR (qPCR). Calibration curves were compared by linear regression and qPCR dynamics. All methods extracted with commercial spin columns produced linear calibration curves with a large dynamic range and gave accurate viral loads. Ethanol precipitation of the P-021 does not produce a linear standard curve, and virus is lost in the cell pellet. DNA extractions from the P-021 using commercial spin columns produced linear standard curves with wide dynamic range and an excellent limit of detection. When extracted with spin columns, the P-021 enables accurate viral loads down to 23 copies μl(-1) DNA. The quantitative and long-term storage capability of this system makes it ideal for the study of salivary DNA viruses in resource-poor settings. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
Krishnamurthy, T.; Romero, V. J.
2002-01-01
The usefulness of piecewise polynomials with C1 and C2 derivative continuity for response surface construction is examined. A Moving Least Squares (MLS) method is developed and compared with four other interpolation methods, including kriging. First, the selected methods are applied and compared with one another on a two-design-variable problem with a known theoretical response function. Next, the methods are tested on a four-design-variable problem from a reliability-based design application. In general, the piecewise polynomial methods with higher-order derivative continuity produce less error in the response prediction. The MLS method was found to be superior for response surface construction among the methods evaluated.
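As a rough illustration of the MLS idea, the sketch below fits a local weighted polynomial at each query point using a Gaussian weight; the basis degree, bandwidth, and test function are illustrative choices, not the paper's formulation:

```python
import numpy as np

def mls_predict(x_query, x_data, y_data, degree=2, h=0.5):
    """Moving Least Squares sketch: at each query point, fit a local polynomial
    by weighted least squares, with weights decaying with distance from the
    query (Gaussian kernel of bandwidth h)."""
    w = np.exp(-((x_data - x_query) / h) ** 2)        # locality weights
    V = np.vander(x_data, degree + 1)                  # polynomial basis
    W = np.diag(w)
    # solve the weighted normal equations (V^T W V) c = V^T W y
    coeffs = np.linalg.solve(V.T @ W @ V, V.T @ W @ y_data)
    return np.polyval(coeffs, x_query)

x = np.linspace(0, 2 * np.pi, 25)
y = np.sin(x)                                          # known "theoretical response"
xq = np.array([1.0, 2.5, 4.0])
pred = np.array([mls_predict(q, x, y, degree=2, h=0.8) for q in xq])
print(np.max(np.abs(pred - np.sin(xq))))               # small prediction error
```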
Cagliero, Cecilia; Ho, Tien D; Zhang, Cheng; Bicchi, Carlo; Anderson, Jared L
2016-06-03
This study describes a simple and rapid sampling method employing a polymeric ionic liquid (PIL) sorbent coating in direct immersion solid-phase microextraction (SPME) for the trace-level analysis of acrylamide in brewed coffee and coffee powder. The crosslinked PIL sorbent coating demonstrated superior sensitivity in the extraction of acrylamide compared to all commercially available SPME coatings. A spin coating method was developed to evenly distribute the PIL coating on the SPME support and reproducibly produce fibers with a large film thickness. Ninhydrin was employed as a quenching reagent during extraction to inhibit the production of interfering acrylamide. The PIL fiber produced a limit of quantitation for acrylamide of 10μgL(-1) and achieved comparable results to the ISO method in the analysis of six coffee powder samples. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Czarnecki, S.; Williams, S.
2017-12-01
The accuracy of a method for measuring the effective atomic numbers of minerals using bremsstrahlung intensities has been investigated. The method is independent of detector-efficiency and maximum accelerating voltage. In order to test the method, experiments were performed which involved low-energy electrons incident on thick malachite, pyrite, and galena targets. The resultant thick-target bremsstrahlung was compared to bremsstrahlung produced using a standard target, and experimental effective atomic numbers were calculated using data from a previous study (in which the Z-dependence of thick-target bremsstrahlung was studied). Comparisons of the results to theoretical values suggest that the method has potential for implementation in energy-dispersive X-ray spectroscopy systems.
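For orientation, effective atomic numbers are often computed with a power-law (Mayneord-type) formula; the exponent m = 2.94 below is the conventional photoelectric-regime value and may differ from the bremsstrahlung-based definition the study investigates:

```python
def z_eff(elements, m=2.94):
    """Power-law effective atomic number:
    Zeff = (sum_i a_i * Z_i**m) ** (1/m), where a_i is the fraction of all
    electrons contributed by element i.
    elements: list of (Z, atoms_per_formula_unit)."""
    total_e = sum(z * n for z, n in elements)
    return sum((z * n / total_e) * z ** m for z, n in elements) ** (1.0 / m)

water = [(1, 2), (8, 1)]      # H2O sanity check: Zeff ~ 7.4 is the textbook value
pyrite = [(26, 1), (16, 2)]   # FeS2, one of the targets used in the study
print(f"Zeff(water)  = {z_eff(water):.2f}")
print(f"Zeff(pyrite) = {z_eff(pyrite):.2f}")
```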
Mu, Yang; Yang, Hou-Yun; Wang, Ya-Zhou; He, Chuan-Shu; Zhao, Quan-Bao; Wang, Yi; Yu, Han-Qing
2014-01-01
Fermentative hydrogen production from wastes has many advantages compared to chemical methods. A methodology for characterizing the hydrogen-producing activity of anaerobic mixed cultures is essential for monitoring reactor operation in fermentative hydrogen production; however, no such standardized methodology exists. In the present study, a new index, the maximum specific hydrogen-producing activity (SHAm) of anaerobic mixed cultures, was proposed, and a reliable and simple method, named the SHAm test, was developed to determine it. Furthermore, the influences of various parameters on the SHAm value determination of anaerobic mixed cultures were evaluated. Additionally, the SHAm assay was tested with different types of substrates and bacterial inocula. Our results demonstrate that this novel SHAm assay is a rapid, accurate and simple methodology for determining the hydrogen-producing activity of anaerobic mixed cultures. Application of this approach is thus beneficial for establishing a stable anaerobic hydrogen-producing system. PMID:24912488
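The SHAm index is described only at a high level here; as a hedged sketch, a maximum specific activity of this kind is typically taken as the steepest sustained slope of the cumulative gas curve normalized by biomass, analogous to classic specific methanogenic activity tests (the paper's exact protocol may differ):

```python
def sham(times_h, h2_ml, biomass_g_vss):
    """Maximum specific activity = steepest segment slope of the cumulative
    hydrogen curve (mL H2/h), normalized by reactor biomass.
    Returns mL H2 / (g VSS * h)."""
    max_rate = max(
        (h2_ml[i + 1] - h2_ml[i]) / (times_h[i + 1] - times_h[i])
        for i in range(len(times_h) - 1)
    )
    return max_rate / biomass_g_vss

# toy cumulative curve: lag phase, exponential rise, plateau
t  = [0, 2, 4, 6, 8, 10, 12]        # h
h2 = [0, 1, 8, 30, 52, 60, 62]      # mL cumulative H2
print(sham(t, h2, biomass_g_vss=0.5))  # steepest segment (30-8)/2 = 11 mL/h -> 22
```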
Method for producing silicon thin-film transistors with enhanced forward current drive
Weiner, K.H.
1998-06-30
A method is disclosed for fabricating amorphous silicon thin film transistors (TFTs) with a polycrystalline silicon surface channel region for enhanced forward current drive. The method is particularly adapted for producing top-gate silicon TFTs which have the advantages of both amorphous and polycrystalline silicon TFTs, but without the leakage-current problem of polycrystalline silicon TFTs. This is accomplished by selectively crystallizing a selected region of the amorphous silicon, using a pulsed excimer laser, to create a thin polycrystalline silicon layer at the silicon/gate-insulator surface. The polysilicon layer thus created has an increased mobility compared to the amorphous silicon during forward device operation, so that increased drive currents are achieved. In reverse operation the polysilicon layer is relatively thin compared to the amorphous silicon, so that the transistor exhibits the low leakage currents inherent to amorphous silicon. A device made by this method can be used, for example, as a pixel switch in an active-matrix liquid crystal display to improve display refresh rates. 1 fig.
Method for producing silicon thin-film transistors with enhanced forward current drive
Weiner, Kurt H.
1998-01-01
A method for fabricating amorphous silicon thin film transistors (TFTs) with a polycrystalline silicon surface channel region for enhanced forward current drive. The method is particularly adapted for producing top-gate silicon TFTs which have the advantages of both amorphous and polycrystalline silicon TFTs, but without the leakage-current problem of polycrystalline silicon TFTs. This is accomplished by selectively crystallizing a selected region of the amorphous silicon, using a pulsed excimer laser, to create a thin polycrystalline silicon layer at the silicon/gate-insulator surface. The polysilicon layer thus created has an increased mobility compared to the amorphous silicon during forward device operation, so that increased drive currents are achieved. In reverse operation the polysilicon layer is relatively thin compared to the amorphous silicon, so that the transistor exhibits the low leakage currents inherent to amorphous silicon. A device made by this method can be used, for example, as a pixel switch in an active-matrix liquid crystal display to improve display refresh rates.
Lower pressure synthesis of diamond material
Lueking, Angela; Gutierrez, Humberto; Narayanan, Deepa; Burgess Clifford, Caroline E.; Jain, Puja
2010-07-13
Methods of synthesizing a diamond material, particularly nanocrystalline diamond, diamond-like carbon and bucky diamond, are provided. In particular embodiments, a composition including a carbon source, such as coal, is subjected to addition of energy, such as high-energy reactive milling, producing a milling product enriched in hydrogenated tetrahedral amorphous diamond-like carbon compared to the coal. The milling product is treated with heat, acid and/or base to produce nanocrystalline diamond and/or crystalline diamond-like carbon. In particular embodiments, energy is added to the produced crystalline diamond-like carbon to produce bucky diamonds.
Comparative Modeling of Proteins: A Method for Engaging Students' Interest in Bioinformatics Tools
ERIC Educational Resources Information Center
Badotti, Fernanda; Barbosa, Alan Sales; Reis, André Luiz Martins; do Valle, Ítalo Faria; Ambrósio, Lara; Bitar, Mainá
2014-01-01
The huge increase in data being produced in the genomic era has created a need to incorporate computers into the research process. Sequence generation, and its subsequent storage, interpretation, and analysis, are now entirely computer-dependent tasks. Universities from all over the world have been challenged to seek a way of encouraging students to…
Healy, Judith Mary; Tang, Shenglan; Patcharanarumol, Walaiporn; Annear, Peter Leslie
2018-04-01
Drawing on published work from the Asia Pacific Observatory on Health Systems and Policies, this paper presents a framework for undertaking comparative studies of countries' health systems. Organized under seven types of research approach, such as national case-studies using a common format, the framework is illustrated using studies of low- and middle-income countries published by the Asia Pacific Observatory. Such studies are important contributions, since much of the health systems research literature comes from high-income countries. No one research approach, however, can adequately analyse a health system, let alone produce a nuanced comparison of different countries. Multiple comparative studies offer a better understanding, as a health system is a complex entity to describe and analyse. Appreciation of context and culture is crucial: what works in one country may not do so in another. Further, a single research method, such as performance indicators, or a study of a particular health system function or component, produces only a partial picture. Applying a comparative framework of several study approaches helps to inform and explain progress against health system targets, to identify differences among countries, and to assess policies and programmes. Multi-method comparative research produces policy-relevant learning that can assist countries to achieve Sustainable Development Goal 3: ensure healthy lives and promote well-being for all at all ages by 2030.
Creating Tic Suppression: Comparing the Effects of Verbal Instruction to Differential Reinforcement
ERIC Educational Resources Information Center
Woods, Douglas W.; Himle, Michael B.
2004-01-01
The purpose of this study was to compare two methods designed to produce tic reduction in 4 children with Tourette's syndrome. Specifically, a verbal instruction not to engage in tics was compared to a verbal instruction plus differential reinforcement of zero-rate behavior (DRO). Results showed that the DRO-enhanced procedure yielded greater…
Dynamic one-dimensional modeling of secondary settling tanks and system robustness evaluation.
Li, Ben; Stenstrom, M K
2014-01-01
One-dimensional secondary settling tank models are widely used in current engineering practice for design and optimization, and usually can be expressed as a nonlinear hyperbolic or nonlinear strongly degenerate parabolic partial differential equation (PDE). Reliable numerical methods are needed to produce approximate solutions that converge to the exact analytical solutions. In this study, we introduced a reliable numerical technique, the Yee-Roe-Davis (YRD) method, as the governing PDE solver, and compared its reliability with the prevalent Stenstrom-Vitasovic-Takács (SVT) method by assessing their simulation results at various operating conditions. The YRD method also produced solutions similar to those of the previously developed Method G and the Engquist-Osher method. The YRD and SVT methods were also used for a time-to-failure evaluation, and the results show that the choice of numerical method can greatly impact the solution. Reliable numerical methods, such as the YRD method, are strongly recommended.
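The Engquist-Osher flux mentioned above is a standard monotone numerical flux for such scalar conservation laws. A minimal conservative scheme built on it, using Burgers' equation as a stand-in flux function (not the YRD scheme or the settling-flux model itself):

```python
import numpy as np

# First-order conservative scheme with the Engquist-Osher numerical flux for a
# scalar conservation law u_t + f(u)_x = 0, here with Burgers' f(u) = u^2/2.

def f(u):
    return 0.5 * u ** 2

def eo_flux(a, b):
    # Engquist-Osher flux for convex f with f'(0) = 0:
    # F(a, b) = f(max(a, 0)) + f(min(b, 0))
    return f(np.maximum(a, 0.0)) + f(np.minimum(b, 0.0))

n, dx, dt = 200, 1.0 / 200, 0.002           # CFL: dt * max|f'| / dx = 0.4 <= 1
x = (np.arange(n) + 0.5) * dx
u = np.where(x < 0.5, 1.0, 0.0)             # Riemann initial data (shock forms)

mass0 = u.sum() * dx
for _ in range(100):
    F = eo_flux(u, np.roll(u, -1))          # flux at right cell faces (periodic)
    u = u - dt / dx * (F - np.roll(F, 1))   # conservative update
print(abs(u.sum() * dx - mass0))            # conservation holds to round-off
```

The conservative update guarantees that total mass is preserved exactly, which is one reason such schemes are preferred for settling-tank models.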
Tang, P; Brouwers, H J H
2017-04-01
The cold-bonding pelletizing technique is applied in this study as an integrated method to recycle municipal solid waste incineration (MSWI) bottom ash fines (BAF, 0-2 mm) and several other industrial powder wastes. Artificial lightweight aggregates are successfully produced from combinations of these solid wastes, and the properties of these artificial aggregates are investigated and compared with results reported in the literature. Additionally, methods for improving the aggregate properties are suggested, and the corresponding experimental results show that increasing the BAF amount, a higher binder content, and the addition of polypropylene fibres can improve the pellet properties (bulk density, crushing resistance, etc.). The mechanisms behind the improvement of the pellet properties are discussed. Furthermore, the leaching behaviour of contaminants from the produced aggregates is investigated and compared with Dutch environmental legislation. Applications for the produced artificial lightweight aggregates are proposed according to their properties. Copyright © 2017 Elsevier Ltd. All rights reserved.
Lakshmanan, P; Loh, C S; Goh, C J
1995-05-01
A thin-section culture system for rapid regeneration of the monopodial orchid hybrid Aranda Deborah has been developed. Thin sections (0.6-0.7 mm thick), obtained by transverse sectioning of a single shoot tip (6-7 mm), produced an average of 13.6 protocorm-like bodies (PLB) after 45 days when cultured in Vacin and Went medium enriched with coconut water (20% v/v), compared to 2.7 PLB formed by a single 6-7 mm shoot tip under the same culture conditions. Addition of α-naphthaleneacetic acid to the coconut-water-enriched Vacin and Went medium further increased PLB production by thin sections. PLB developed into plantlets on solid Vacin and Went medium containing 10% (v/v) coconut water and 0.5 g l(-1) activated charcoal. With this procedure, more than 80,000 plantlets could be produced in a year from the thin sections derived from a single shoot tip, compared with roughly 11,000 plantlets produced by the conventional shoot tip method.
Evaluation of contents-based image retrieval methods for a database of logos on drug tablets
NASA Astrophysics Data System (ADS)
Geradts, Zeno J.; Hardy, Huub; Poortman, Anneke; Bijhold, Jurrien
2001-02-01
In this research, an evaluation was made of different methods for content-based image retrieval of logos on drug tablets. On a database of 432 illicitly produced tablets (mostly containing MDMA), we compared different retrieval methods. Two of these methods were available from the commercial packages QBIC and Imatch, for which the exact implementation of the content-based image retrieval methods is not known. We compared the results for this database with the MPEG-7 shape comparison methods: the contour-shape, bounding-box, and region-based shape methods. In addition, we tested the log-polar method developed in our own research.
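The log-polar representation referred to above is attractive for logo matching because rotation and uniform scaling of the input become simple shifts along the two output axes. A toy nearest-neighbour version, illustrative rather than the authors' implementation:

```python
import numpy as np

def log_polar(img, n_r=32, n_theta=64):
    """Resample a square grayscale image onto a log-polar grid centred on the
    image centre. Rotation of the input becomes a circular shift along the
    theta axis; uniform scaling becomes a shift along the log-r axis."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    # log-spaced radii from 1 pixel out to the largest inscribed circle
    rho = np.exp(np.linspace(0.0, np.log(r_max), n_r))
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(rho, theta, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]                      # nearest-neighbour sampling

img = np.zeros((65, 65))
img[20:45, 20:45] = 1.0                     # a simple square "logo"
lp = log_polar(img)
print(lp.shape)                             # (32, 64)
```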
Bacterial agents and cell components can be spread as bioaerosols, producing infections and asthmatic problems. This study compares four methods for the detection and enumeration of aerosolized bacteria collected in an AGI-30 impinger. Changes in the total and viable concentratio...
Method for rapidly producing microporous and mesoporous materials
Coronado, P.R.; Poco, J.F.; Hrubesh, L.W.; Hopper, R.W.
1997-11-11
An improved, rapid process is provided for making microporous and mesoporous materials, including aerogels and pre-ceramics. A gel or gel precursor is confined in a sealed vessel to prevent structural expansion of the gel during the heating process. This confinement allows the gelation and drying processes to be greatly accelerated, and significantly reduces the time required to produce a dried aerogel compared to conventional methods. Drying may be performed either by subcritical drying with a pressurized fluid to expel the liquid from the gel pores or by supercritical drying. The rates of heating and decompression are significantly higher than for conventional methods. 3 figs.
Variants of glycerol dehydrogenase having D-lactate dehydrogenase activity and uses thereof
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Qingzhao; Shanmugam, Keelnatham T.; Ingram, Lonnie O'Neal
The present invention provides methods of designing and generating glycerol dehydrogenase (GlyDH) variants that have altered function as compared to a parent polypeptide. The present invention further provides nucleic acids encoding GlyDH polypeptide variants having altered function as compared to the parent polypeptide. Host cells comprising polynucleotides encoding GlyDH variants and methods of producing lactic acid are also provided in various aspects of the invention.
NASA Astrophysics Data System (ADS)
Arnold, Luc
2013-07-01
I compare three methods for transmitting signals over interstellar distances: radio transmitters, lasers and artificial transits. The quantitative comparison is based on physical quantities depending on energy cost and transmitting time L, the last parameter in the Drake equation. With our assumptions, radio transmitters are the most energy-effective, while macro-engineered planetary-sized objects producing artificial transits seem effective in the long term for transmitting an attention-getting signal for a time that might be much longer than the lifetime of the civilization that produced the artefact.
Tigecycline activity against metallo-β-lactamase-producing bacteria.
Kumar, Simit; Bandyopadhyay, Maitreyi; Mondal, Soma; Pal, Nupur; Ghosh, Tapashi; Bandyopadhyay, Manas; Banerjee, Parthajit
2013-10-01
Treatment of serious, life-threatening infections caused by multi-drug-resistant organisms poses a serious problem due to the limited therapeutic options. Tigecycline has recently been marketed as a broad-spectrum antibiotic with activity against both gram-positive and gram-negative bacteria. Even though many studies have demonstrated the activity of tigecycline against ESBL-producing Enterobacteriaceae, its activity is not well-defined against micro-organisms producing metallo-β-lactamases (MBLs), as there are only a few reports and the number of isolates tested is limited. The aim of the present study was to evaluate the activity of tigecycline against MBL-producing bacterial isolates. The isolates were tested for MBL production by (i) combined-disk test, (ii) double disc synergy test (DDST), and (iii) susceptibility to an aztreonam (30 μg) disk. Minimum inhibitory concentration (MIC) to tigecycline was determined by the agar dilution method as per Clinical and Laboratory Standards Institute (CLSI) guidelines. Disc diffusion susceptibility testing was also performed for all these isolates using tigecycline (15 μg) discs. Among the 308 isolates included in the study, 99 were found to be MBL producers. MBL production was observed mostly in isolates from pus samples (40.47%), followed by urine (27.4%) and blood (13.09%). MBL production was observed in E. coli (41.48%), K. pneumoniae (26.67%), Proteus mirabilis (27.78%), Citrobacter spp. (41.67%), Enterobacter spp. (25.08%), and Acinetobacter spp. (27.27%). The results showed that tigecycline activity was unaffected by MBL production: it showed almost 100% activity against all MBL-producing isolates, with isolates exhibiting MICs ranging from 0.25 to 8 μg/ml, the highest MIC (8 μg/ml) being observed for two MBL-producing E. coli isolates.
To conclude, tigecycline was found to be highly effective against MBL-producing Enterobacteriaceae and Acinetobacter isolates, but the presence of resistance among organisms, even before mass usage of the drug, warrants its use as a reserve drug. The study also found that the interpretative criteria for the disc diffusion method recommended by the FDA correlate well with the MIC detection methods, so microbiology laboratories may use the relatively easy disc diffusion method in place of the comparatively tedious MIC determination.
Bradu, Adrian; Kapinchev, Konstantin; Barnes, Frederick; Podoleanu, Adrian
2015-07-01
In a previous report, we demonstrated master-slave optical coherence tomography (MS-OCT), an OCT method that does not need resampling of data and can be used to deliver en face images from several depths simultaneously. In a separate report, we also demonstrated MS-OCT's capability of producing cross-sectional images of a quality similar to those provided by the traditional Fourier domain (FD) OCT technique, but at a much slower rate. Here, we demonstrate that by taking advantage of the parallel processing capabilities offered by the MS-OCT method, cross-sectional OCT images of the human retina can be produced in real time. We analyze the conditions that ensure true real-time B-scan imaging and demonstrate in vivo real-time images of the human fovea and optic nerve, with resolution and sensitivity comparable to those produced using the traditional FD-based method, but without the need for data resampling.
Barreiro, M M; Grana, D R; Kokubu, G A; Luppo, M I; Mintzer, S; Vigna, G
2010-04-01
Titanium powder production by the hydride-dehydride method has been developed as a non-expensive process. In this work, commercially pure grade two Ti specimens were hydrogenated. The hydrided material was milled in a planetary mill. The hydrided titanium powder was dehydrided and then sieved to obtain a particle size between 37 and 125 microm in order to compare it with a commercial powder produced by chemical reduction with a particle size lower than 150 microm. Cylindrical green compacts were obtained by uniaxial pressing of the powders at 343 MPa and sintering in vacuum. The powders and the density of sintered compacts were characterized, the oxygen content was measured and in vivo tests were performed in the tibia bones of Wistar rats in order to evaluate their biocompatibility. No differences were observed between the materials which were produced either with powders obtained by the hydride-dehydride method or with commercial powders produced by chemical reduction regarding modifications in compactation, sintering and biological behaviour.
Shim, Jihyun; Shin, Yonguk; Lee, Imsang; Kim, So Young
L-Methionine has been used in various industrial applications, such as the production of feed and food additives, and as a raw material for medical supplies and drugs. It functions not only as an essential amino acid but also as a physiological effector, for example, by inhibiting fat accumulation and enhancing immune response. Producing methionine by fermentation is beneficial in that microorganisms can produce L-methionine selectively using eco-sustainable processes. Nevertheless, the fermentative method has not been used on an industrial scale because it is not economically competitive with chemical synthesis methods. Presented here are efforts to develop suitable strains, engineered enzymes, and alternative processes for producing L-methionine that overcome the problems of conventional fermentation methods. One such alternative is a two-step process in which an L-methionine precursor is produced by fermentation and then converted to L-methionine by enzymes. Directed efforts toward strain development and enhanced enzyme engineering will advance industrial production of L-methionine based on fermentation.
Conventionally cast and forged copper alloy for high-heat-flux thrust chambers
NASA Technical Reports Server (NTRS)
Kazaroff, John M.; Repas, George A.
1987-01-01
The combustion chamber liner of the space shuttle main engine is made of NARloy-Z, a copper-silver-zirconium alloy. This alloy was produced by vacuum melting and vacuum centrifugal casting, a production method that is not currently available. Using conventional melting, casting, and forging methods, NASA has produced an alloy of the same composition called NASA-Z. This report compares the composition, microstructure, tensile properties, low-cycle fatigue life, and hot-firing life of these two materials. The results show that the materials have similar characteristics.
Szczygiel, Edward J; Harte, Janice B; Strasburg, Gale M; Cho, Sungeun
2017-09-01
Food products produced with bean ingredients are gaining in popularity among consumers due to the reported health benefits. Navy bean (Phaseolus vulgaris) powder produced through extrusion can be considered as a resource-efficient alternative to conventional methods, which often involve high water inputs. Therefore, navy bean powders produced with extrusion and conventional methods were assessed for the impact of processing on consumer liking in end-use products and odor-active compounds. Consumer acceptance results reveal significant differences in flavor, texture and overall acceptance scores of several products produced with navy bean powder. Crackers produced with extruded navy bean powder received higher hedonic flavor ratings than those produced with commercial navy bean powder (P < 0.001). GC-O data showed that the commercial powder produced through conventional processing had much greater contents of several aliphatic aldehydes commonly formed via lipid oxidation, such as hexanal, octanal and nonanal with descriptors of 'grassy', 'nutty', 'fruity', 'dusty', and 'cleaner', compared to the extruded powder. Extrusion processed navy bean powders were preferred over commercial powders for certain navy bean powder applications. This is best explained by substantial differences in aroma profiles of the two powders that may have been caused by lipid oxidation. © 2017 Society of Chemical Industry.
Ling, Xueping; Guo, Jing; Zheng, Chuqiang; Ye, Chiming; Lu, Yinghua; Pan, Xueshan; Chen, Zhengqi; Ng, I-Son
2015-12-01
Polyunsaturated fatty acids (PUFAs) are valuable ingredients in food and pharmaceutical products due to their beneficial influence on human health. Most studies have focused on the production of PUFAs from oleaginous micro-organisms, but seldom on the comparative proteomics of the cells. In this study, three methods (cold shock, acetone precipitation, and ethanol precipitation) for lipid removal from crude protein extracts were applied to different PUFA-producing micro-organisms. Among the selected strains, Schizochytrium was used as an oleaginous strain with a high lipid content of 60.3% (w/w) in biomass. Mortierella alpina and Cunninghamella echinulata were chosen as low-lipid-content strains, with 25.8% (w/w) and 21.8% (w/w) lipid in biomass, respectively. Cold shock proved the most effective method for lipid removal and thus yielded a higher protein amount for Schizochytrium. Moreover, comparative proteomics of the three PUFA-producing strains showed that more significantly up- or down-regulated proteins were identified under cold-shock treatment. Essential proteins (e.g., polyunsaturated fatty acid synthase) and regulatory proteins were thereby observed. In conclusion, this study provides a valuable and practical approach for analyzing high-PUFA-producing strains at the proteomic level, which should further accelerate understanding of the metabolic flux in oleaginous micro-organisms.
Liu, Jien-Wei; Ko, Wen-Chien; Huang, Cheng-Hua; Liao, Chun-Hsing; Lu, Chin-Te; Chuang, Yin-Ching; Tsao, Shih-Ming; Chen, Yao-Shen; Liu, Yung-Ching; Chen, Wei-Yu; Jang, Tsrang-Neng; Lin, Hsiu-Chen; Chen, Chih-Ming; Shi, Zhi-Yuan; Pan, Sung-Ching; Yang, Jia-Ling; Kung, Hsiang-Chi; Liu, Chun-Eng; Cheng, Yu-Jen; Chen, Yen-Hsu; Lu, Po-Liang; Sun, Wu; Wang, Lih-Shinn; Yu, Kwok-Woon; Chiang, Ping-Cherng; Lee, Ming-Hsun; Lee, Chun-Ming; Hsu, Gwo-Jong
2012-01-01
The Tigecycline In Vitro Surveillance in Taiwan (TIST) study, initiated in 2006, is a nationwide surveillance program designed to longitudinally monitor the in vitro activity of tigecycline against commonly encountered drug-resistant bacteria. This study compared the in vitro activity of tigecycline against 3,014 isolates of clinically important drug-resistant bacteria using the standard broth microdilution and disk diffusion methods. Species studied included methicillin-resistant Staphylococcus aureus (MRSA; n = 759), vancomycin-resistant Enterococcus faecium (VRE; n = 191), extended-spectrum β-lactamase (ESBL)-producing Escherichia coli (n = 602), ESBL-producing Klebsiella pneumoniae (n = 736), and Acinetobacter baumannii (n = 726) that had been collected from patients treated between 2008 and 2010 at 20 hospitals in Taiwan. MICs and inhibition zone diameters were interpreted according to the currently recommended U.S. Food and Drug Administration (FDA) criteria and the European Committee on Antimicrobial Susceptibility Testing (EUCAST) criteria. The MIC90 values of tigecycline against MRSA, VRE, ESBL-producing E. coli, ESBL-producing K. pneumoniae, and A. baumannii were 0.5, 0.125, 0.5, 2, and 8 μg/ml, respectively. The total error rates between the two methods using the FDA criteria were high: 38.4% for ESBL-producing K. pneumoniae and 33.8% for A. baumannii. Using the EUCAST criteria, the total error rate was also high (54.6%) for A. baumannii isolates. The total error rates between these two methods were <5% for MRSA, VRE, and ESBL-producing E. coli. For routine susceptibility testing of ESBL-producing K. pneumoniae and A. baumannii against tigecycline, the broth microdilution method should be used because of the poor correlation of results between these two methods. PMID:22155819
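The categorical comparison underlying the reported error rates (each isolate's MIC and zone diameter mapped to S/I/R, then cross-tabulated) can be sketched in a few lines. The breakpoint values used below are placeholders for illustration, not the FDA or EUCAST criteria applied in the study:

```python
def interpret_mic(mic, s_max, r_min):
    """Categorise an MIC (ug/ml) as S/I/R given susceptible (<= s_max)
    and resistant (>= r_min) breakpoints."""
    if mic <= s_max:
        return "S"
    if mic >= r_min:
        return "R"
    return "I"

def interpret_zone(diameter, s_min, r_max):
    """Categorise a disk-diffusion inhibition-zone diameter (mm) as S/I/R;
    large zones mean susceptible, small zones mean resistant."""
    if diameter >= s_min:
        return "S"
    if diameter <= r_max:
        return "R"
    return "I"

def total_error_rate(mic_categories, zone_categories):
    """Fraction of isolates whose disk-diffusion category disagrees with
    the reference broth-microdilution (MIC) category."""
    pairs = list(zip(mic_categories, zone_categories))
    return sum(1 for m, z in pairs if m != z) / len(pairs)
```

With these placeholder breakpoints, an isolate with an MIC of 4 μg/ml against s_max = 2 and r_min = 8 would be intermediate; a species-level discordance such as the study's 38.4% would appear as total_error_rate returning 0.384 over that species' category pairs.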
Denoising in digital speckle pattern interferometry using wave atoms.
Federico, Alejandro; Kaufmann, Guillermo H
2007-05-15
We present an effective method for speckle noise removal in digital speckle pattern interferometry, which is based on a wave-atom thresholding technique. Wave atoms are a variant of 2D wavelet packets with a parabolic scaling relation and improve the sparse representation of fringe patterns when compared with traditional expansions. The performance of the denoising method is analyzed by using computer-simulated fringes, and the results are compared with those produced by wavelet and curvelet thresholding techniques. An application of the proposed method to reduce speckle noise in experimental data is also presented.
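The core of transform-domain denoising of this kind — whether the transform is wave atoms, wavelets, or curvelets — is coefficient shrinkage. Below is a minimal sketch of soft thresholding wrapped around a caller-supplied forward/inverse transform pair; since no wave-atom transform ships with standard Python libraries, identity transforms stand in to keep the sketch self-contained:

```python
import numpy as np

def soft_threshold(coeffs, tau):
    """Shrink real transform coefficients toward zero: small (noise-dominated)
    coefficients are suppressed entirely, large (fringe-dominated) ones are
    reduced in magnitude by tau."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - tau, 0.0)

def denoise(image, forward, inverse, tau):
    """Generic transform-domain denoising: transform, shrink, invert.
    forward/inverse should be a (near-)orthonormal transform pair."""
    return inverse(soft_threshold(forward(image), tau))
```

Plugging in a wave-atom or wavelet-packet forward/inverse pair for `forward` and `inverse` turns this skeleton into the thresholding scheme the abstract describes; the choice of transform determines how sparsely the fringe pattern is represented and hence how cleanly threshold tau separates signal from speckle noise.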
Aggestam, Vivianne; Buick, Jon
2017-08-01
Agricultural industrialisation and globalisation have steadily increased the transportation of food across the world. In efforts to promote sustainability and self-sufficiency, organic milk producers in Sweden are required to produce a higher level of cattle feed on-farm, in the hope that increased self-sufficiency will reduce reliance on external inputs and reduce transport-related greenhouse gas emissions. Using data collected from 20 conventional and 20 organic milk producers in Sweden, this paper aims to assess the global warming impact of farmyard vehicles and the transportation of feed produced 'off-farm' in order to compare the impact of vehicle-related emissions from the different production methods. The findings show organic and conventional production methods have different vehicle-related emission outputs that vary according to reliance on either road transportation or increased farmyard machinery use. Mechanical weeding is more fuel demanding than conventional agrichemical spraying. However, artificial fertilising is one of the highest farmyard vehicle-related emitters. The general findings show that organic milk production emits higher levels of farm vehicle-related emissions that fail to be offset by reduced international transport emissions. This paper does not propose to cover a comprehensive supply-chain carbon footprint for milk production or attempt to determine which method of production has the largest climatic impact. However, it does demonstrate that Sweden's legal requirement for organic producers to produce more feed on-farm to reduce transport emissions has brought emissions back within Sweden's greenhouse gas inventory, and it raises questions about the effectiveness of policies to reduce vehicle-related emissions. Further research is needed into the effectiveness of climate change mitigation in food production policies, in particular looking at the various trade-offs that affect the entire food supply chain.
NASA Astrophysics Data System (ADS)
Tingberg, Anders Martin
Optimisation in diagnostic radiology requires accurate methods for determination of patient absorbed dose and clinical image quality. Simple methods for evaluation of clinical image quality are at present scarce and this project aims at developing such methods. Two methods are used and further developed; fulfillment of image criteria (IC) and visual grading analysis (VGA). Clinical image quality descriptors are defined based on these two methods: image criteria score (ICS) and visual grading analysis score (VGAS), respectively. For both methods the basis is the Image Criteria of the "European Guidelines on Quality Criteria for Diagnostic Radiographic Images". Both methods have proved to be useful for evaluation of clinical image quality. The two methods complement each other: IC is an absolute method, which means that the quality of images of different patients and produced with different radiographic techniques can be compared with each other. The separating power of IC is, however, weaker than that of VGA. VGA is the best method for comparing images produced with different radiographic techniques and has strong separating power, but the results are relative, since the quality of an image is compared to the quality of a reference image. The usefulness of the two methods has been verified by comparing the results from both of them with results from a generally accepted method for evaluation of clinical image quality, receiver operating characteristics (ROC). The results of the comparison between the two methods based on visibility of anatomical structures and the method based on detection of pathological structures (free-response forced error) indicate that the former two methods can be used for evaluation of clinical image quality as efficiently as the method based on ROC. More studies are, however, needed for us to be able to draw a general conclusion, including studies of other organs, using other radiographic techniques, etc.
The results of the experimental evaluation of clinical image quality are compared with physical quantities calculated with a theoretical model based on a voxel phantom, and correlations are found. The results demonstrate that the computer model can be a useful tool in planning further experimental studies.
Carbon monoxide mixing ratio inference from gas filter radiometer data
NASA Technical Reports Server (NTRS)
Wallio, H. A.; Reichle, H. G., Jr.; Casas, J. C.; Saylor, M. S.; Gormsen, B. B.
1983-01-01
A new algorithm has been developed which permits, for the first time, real time data reduction of nadir measurements taken with a gas filter correlation radiometer to determine tropospheric carbon monoxide concentrations. The algorithm significantly reduces the complexity of the equations to be solved while providing accuracy comparable to line-by-line calculations. The method is based on a regression analysis technique using a truncated power series representation of the primary instrument output signals to infer directly a weighted average of trace gas concentration. The results produced by a microcomputer-based implementation of this technique are compared with those produced by the more rigorous line-by-line methods. This algorithm has been used in the reduction of Measurement of Air Pollution from Satellites, Shuttle, and aircraft data.
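The regression step described above — fitting a truncated power series of the instrument output signal to the trace-gas concentration — can be illustrated with ordinary polynomial least squares. The signal/concentration values below are invented for illustration and are not MAPS calibration data:

```python
import numpy as np

# Hypothetical calibration data: instrument output signal vs. known CO
# mixing ratio (ppbv). These numbers are placeholders, not flight data.
signal = np.array([0.10, 0.25, 0.40, 0.55, 0.70])
co_ppbv = np.array([60.0, 95.0, 130.0, 170.0, 215.0])

# Fit a truncated power series (here, degree 2) by least-squares regression;
# the fitted coefficients then let a microcomputer infer concentration
# directly from the signal, without line-by-line radiative transfer.
coeffs = np.polyfit(signal, co_ppbv, deg=2)

def infer_co(s):
    """Evaluate the fitted power series at a new signal value."""
    return np.polyval(coeffs, s)
```

The trade-off is exactly the one the abstract names: evaluating a short polynomial is fast enough for real-time reduction, while the regression against line-by-line results keeps the accuracy comparable.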
Phonological and Motor Errors in Individuals with Acquired Sound Production Impairment
ERIC Educational Resources Information Center
Buchwald, Adam; Miozzo, Michele
2012-01-01
Purpose: This study aimed to compare sound production errors arising due to phonological processing impairment with errors arising due to motor speech impairment. Method: Two speakers with similar clinical profiles who produced similar consonant cluster simplification errors were examined using a repetition task. We compared both overall accuracy…
Student Evaluation of Instruction: Comparison between In-Class and Online Methods
ERIC Educational Resources Information Center
Capa-Aydin, Yesim
2016-01-01
This study compares student evaluations of instruction that were collected in-class with those gathered through an online survey. The two modes of administration were compared with respect to response rate, psychometric characteristics and mean ratings through different statistical analyses. Findings indicated that in-class evaluations produced a…
Note: Making tens of centimeter long uniform microfluidic channels using commercial glass pipette
NASA Astrophysics Data System (ADS)
Ou, Neil; Lee, Huang-Ming; Wu, Jong-Ching
2018-03-01
Producing microchannels with diameters between 10 and 20 μm and with lengths in the tens of centimeters is reported. The method can be modified to obtain diameters as narrow as 350 nm. Length-to-diameter aspect ratios that surpass 10^4 can be produced for a fraction of current production costs. The controllable channel is produced by applying a flame to the narrow end of a commercial pipette that is made from a soda-lime silicate. In combination with a pulling mechanism, applying heat to the composite material lengthens the pipette in a highly uniform way. Given that the materials and methods in this research are cost-effective when compared to femtosecond laser micromachining on 2D silicon-based surfaces, further research into producing microchannels from soda-lime silicates may revolutionize access to 3D controllable microchannels.
Cluster detection methods applied to the Upper Cape Cod cancer data.
Ozonoff, Al; Webster, Thomas; Vieira, Veronica; Weinberg, Janice; Ozonoff, David; Aschengrau, Ann
2005-09-15
A variety of statistical methods have been suggested to assess the degree and/or the location of spatial clustering of disease cases. However, there is relatively little in the literature devoted to comparison and critique of different methods. Most of the available comparative studies rely on simulated data rather than real data sets. We have chosen three methods currently used for examining spatial disease patterns: the M-statistic of Bonetti and Pagano; the Generalized Additive Model (GAM) method as applied by Webster; and Kulldorff's spatial scan statistic. We apply these statistics to analyze breast cancer data from the Upper Cape Cancer Incidence Study using three different latency assumptions. The three different latency assumptions produced three different spatial patterns of cases and controls. For 20 year latency, all three methods generally concur. However, for 15 year latency and no latency assumptions, the methods produce different results when testing for global clustering. The comparative analyses of real data sets by different statistical methods provides insight into directions for further research. We suggest a research program designed around examining real data sets to guide focused investigation of relevant features using simulated data, for the purpose of understanding how to interpret statistical methods applied to epidemiological data with a spatial component.
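As an illustration of one of the three approaches, a naive version of Kulldorff's Bernoulli spatial scan statistic can be sketched as follows. Real implementations (e.g., SaTScan) also vary the window radius and assess significance by Monte Carlo simulation; this sketch fixes the radius and only locates the most likely elevated-rate cluster:

```python
import math

def _ll(k, m):
    """Bernoulli log-likelihood of k cases among m subjects at the MLE
    rate k/m, with the convention 0*log(0) = 0."""
    out = 0.0
    if k > 0:
        out += k * math.log(k / m)
    if k < m:
        out += (m - k) * math.log(1 - k / m)
    return out

def scan_statistic(points, radius):
    """Try each observed point as the centre of a circular window and return
    the centre and log-likelihood ratio of the most likely cluster.
    points: iterable of (x, y, is_case) with is_case in {0, 1}."""
    points = list(points)
    N = len(points)
    C = sum(p[2] for p in points)
    best_centre, best_llr = None, 0.0
    for cx, cy, _ in points:
        inside = [p for p in points
                  if (p[0] - cx) ** 2 + (p[1] - cy) ** 2 <= radius ** 2]
        n = len(inside)
        c = sum(p[2] for p in inside)
        if n == 0 or n == N:
            continue  # the window must split the data
        if c / n <= (C - c) / (N - n):
            continue  # keep only windows with an elevated case rate
        llr = _ll(c, n) + _ll(C - c, N - n) - _ll(C, N)
        if llr > best_llr:
            best_centre, best_llr = (cx, cy), llr
    return best_centre, best_llr
```

The returned log-likelihood ratio is the test statistic; its null distribution, and hence a p-value for global clustering, would be obtained by rerunning the scan on permuted case labels.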
Certification of a weld produced by friction stir welding
Obaditch, Chris; Grant, Glenn J
2013-10-01
Methods, devices, and systems for providing certification of friction stir welds are disclosed. A sensor is used to collect information related to a friction stir weld. Data from the sensor is compared, using a certification engine, to threshold values provided by extrinsic standard-setting organizations. The certification engine subsequently produces a report on the certification status of the weld.
Comparison of Dam Breach Parameter Estimators
2008-01-01
of the methods, when used in the HEC-RAS simulation model, produced comparable results. The methods tested suggest use of ...characteristics of a dam breach, use of those parameters within the unsteady flow routing model HEC-RAS, and the computation and display of the resulting...implementation of these breach parameters in
Pretest probability assessment derived from attribute matching
Kline, Jeffrey A; Johnson, Charles L; Pollack, Charles V; Diercks, Deborah B; Hollander, Judd E; Newgard, Craig D; Garvey, J Lee
2005-01-01
Background Pretest probability (PTP) assessment plays a central role in diagnosis. This report describes a novel attribute-matching method for generating a PTP for acute coronary syndrome (ACS) and compares the new method with a validated logistic regression equation (LRE). Methods Eight clinical variables (attributes) were chosen by classification and regression tree analysis of a prospectively collected reference database of 14,796 emergency department (ED) patients evaluated for possible ACS. For attribute matching, a computer program identifies patients within the database who have the exact profile defined by clinician input of the eight attributes. The novel method was compared with the LRE for its ability to produce a PTP estimate <2% in a validation set of 8,120 patients evaluated for possible ACS who did not have ST-segment elevation on ECG. 1,061 patients were excluded prior to validation analysis because of ST-segment elevation (713), missing data (77), or loss to follow-up (271). Results In the validation set, attribute matching produced 267 unique PTP estimates [median PTP value 6%, 1st–3rd quartile 1–10%] compared with the LRE, which produced 96 unique PTP estimates [median 24%, 1st–3rd quartile 10–30%]. The areas under the receiver operating characteristic curves were 0.74 (95% CI 0.65 to 0.82) for the attribute-matching curve and 0.68 (95% CI 0.62 to 0.77) for the LRE. The attribute-matching system categorized 1,670 (24%, 95% CI = 23–25%) patients as having a PTP < 2.0%; 28 developed ACS (1.7%, 95% CI = 1.1–2.4%). The LRE categorized 244 (4%, 95% CI = 3–4%) with PTP < 2.0%; four developed ACS (1.6%, 95% CI = 0.4–4.1%). Conclusion Attribute matching estimated a very low PTP for ACS in a significantly larger proportion of ED patients compared with a validated LRE. PMID:16095534
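The attribute-matching step itself — finding every reference patient with the exact same profile and taking the outcome rate among them as the PTP — is straightforward to sketch. The attribute names and records below are hypothetical placeholders, not the study's eight variables or its reference database:

```python
def attribute_match_ptp(reference, query, outcome_key="acs"):
    """Pretest probability by exact attribute matching: find all reference
    patients whose attributes equal the query profile and return the
    outcome rate among them (None if no identical profile exists).
    reference: list of dicts; query: dict of attribute -> value."""
    attrs = list(query)
    matches = [r for r in reference if all(r[a] == query[a] for a in attrs)]
    if not matches:
        return None
    return sum(r[outcome_key] for r in matches) / len(matches)
```

Because the estimate is simply an empirical proportion over exact matches, the method can return many distinct PTP values (one per observed profile), which is consistent with the 267 unique estimates reported versus 96 for the regression equation.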
Strain Prioritization for Natural Product Discovery by a High-Throughput Real-Time PCR Method
2015-01-01
Natural products offer unmatched chemical and structural diversity compared to other small-molecule libraries, but traditional natural product discovery programs are not sustainable, demanding too much time, effort, and resources. Here we report a strain prioritization method for natural product discovery. Central to the method is the application of real-time PCR, targeting genes characteristic to the biosynthetic machinery of natural products with distinct scaffolds in a high-throughput format. The practicality and effectiveness of the method were showcased by prioritizing 1911 actinomycete strains for diterpenoid discovery. A total of 488 potential diterpenoid producers were identified, among which six were confirmed as platensimycin and platencin dual producers and one as a viguiepinol and oxaloterpin producer. While the method as described is most appropriate to prioritize strains for discovering specific natural products, variations of this method should be applicable to the discovery of other classes of natural products. Applications of genome sequencing and genome mining to the high-priority strains could essentially eliminate the chance elements from traditional discovery programs and fundamentally change how natural products are discovered. PMID:25238028
You, Qiushi; Li, Qingqing; Zheng, Hailing; Hu, Zhiwen; Zhou, Yang; Wang, Bing
2017-09-06
Recently, much attention has been paid to distinguishing silk produced by Bombyx mori from silk produced by other species and to tracing the beginnings of silk cultivation from wild silk exploitation. In this paper, significant differences between silks from Bombyx mori and other species were found by microscopy and spectroscopy, including differences in morphology, secondary structure, and amino acid composition. For further accurate identification, a diagnostic antibody was designed by comparing the peptide sequences of silks produced by Bombyx mori and other species. The results of a noncompetitive indirect enzyme-linked immunosorbent assay (ELISA) indicated that the antibody, which showed good sensitivity and high specificity, can reliably discern silk produced by Bombyx mori from silk produced by wild species. Thus, the antibody-based immunoassay has the potential to be a powerful tool for tracing the beginnings of silk cultivation. In addition, combining the sensitive, specific, and convenient ELISA technology with other conventional methods can provide more in-depth and accurate information for species identification.
Financing Alternatives Comparison Tool
FACT is a financial analysis tool that helps identify the most cost-effective method to fund a wastewater or drinking water management project. It produces a comprehensive analysis that compares various financing options.
MRL and SuperFine+MRL: new supertree methods
2012-01-01
Background Supertree methods combine trees on subsets of the full taxon set together to produce a tree on the entire set of taxa. Of the many supertree methods, the most popular is MRP (Matrix Representation with Parsimony), a method that operates by first encoding the input set of source trees by a large matrix (the "MRP matrix") over {0,1, ?}, and then running maximum parsimony heuristics on the MRP matrix. Experimental studies evaluating MRP in comparison to other supertree methods have established that for large datasets, MRP generally produces trees of equal or greater accuracy than other methods, and can run on larger datasets. A recent development in supertree methods is SuperFine+MRP, a method that combines MRP with a divide-and-conquer approach, and produces more accurate trees in less time than MRP. In this paper we consider a new approach for supertree estimation, called MRL (Matrix Representation with Likelihood). MRL begins with the same MRP matrix, but then analyzes the MRP matrix using heuristics (such as RAxML) for 2-state Maximum Likelihood. Results We compared MRP and SuperFine+MRP with MRL and SuperFine+MRL on simulated and biological datasets. We examined the MRP and MRL scores of each method on a wide range of datasets, as well as the resulting topological accuracy of the trees. Our experimental results show that MRL, coupled with a very good ML heuristic such as RAxML, produced more accurate trees than MRP, and MRL scores were more strongly correlated with topological accuracy than MRP scores. Conclusions SuperFine+MRP, when based upon a good MP heuristic, such as TNT, produces among the best scores for both MRP and MRL, and is generally faster and more topologically accurate than other supertree methods we tested. PMID:22280525
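The MRP encoding described above can be sketched directly: each clade (one side of an internal edge) of each source tree contributes one column over {0, 1, ?}, with ? for taxa absent from that source tree. This is an illustrative sketch under a simplified tree representation (each source tree given as its taxon set plus a list of clades), not the encoder used by any particular supertree package:

```python
def mrp_matrix(source_trees, taxa):
    """Encode source trees as an MRP matrix over {0, 1, ?}.
    source_trees: list of (tree_taxa, clades) pairs, where tree_taxa is the
    set of taxon names in that tree and clades is a list of taxon sets, one
    per internal edge. taxa: ordered list of all taxa in the full set.
    Returns one row (list of characters) per taxon."""
    columns = []
    for tree_taxa, clades in source_trees:
        for clade in clades:
            # 1 = inside the clade, 0 = in the tree but outside the clade,
            # ? = taxon missing from this source tree entirely.
            col = {t: (("1" if t in clade else "0") if t in tree_taxa else "?")
                   for t in taxa}
            columns.append(col)
    return [[col[t] for col in columns] for t in taxa]
```

The resulting rows are exactly what MRP hands to a maximum parsimony heuristic and what MRL hands to a 2-state maximum likelihood heuristic such as RAxML; the two approaches differ only in how they score this same matrix.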
Predicting photoyellowing behaviour of mechanical pulp containing papers
Umesh P. Agarwal
2005-01-01
It is well known that paper produced from mechanical-pulp-containing fiber furnish yellows upon exposure to light. Although the accelerated light-aging test method has been used to compare papers and predict long term performance, the reliability of the light-aging method has been questioned. Therefore, a method that can correctly predict a paper's light stability is...
A comparison of carbon stock estimates and projections for the northeastern United States
Richard G. MacLean; Mark J. Ducey; Coeli M. Hoover
2014-01-01
We conducted a comparison of carbon stock estimates produced by three different methods using regional data from the USDA Forest Service Forest Inventory and Analysis (FIA). Two methods incorporated by the Forest Vegetation Simulator (FVS) were compared to each other and to the current FIA component ratio method. We also examined the uncalibrated performance of FVS...
Method of Curved Models and Its Application to the Study of Curvilinear Flight of Airships. Part II
NASA Technical Reports Server (NTRS)
Gourjienko, G A
1937-01-01
This report compares the results obtained by the aid of curved models with the results of tests made by the method of damped oscillations, and with flight tests. Consequently we shall be able to judge which method of testing in the tunnel produces results that are in closer agreement with flight test results.
NASA Astrophysics Data System (ADS)
Knížek, Antonín; Dryahina, Ksenyia; Španěl, Patrik; Kubelík, Petr; Kavan, Ladislav; Zukalová, Markéta; Ferus, Martin; Civiš, Svatopluk
2018-06-01
The era of fossil fuels is slowly nearing its inevitable end, and basic research, exploration, and testing of alternative energy sources become ever more important. Storage and alternative production of energy from fuels such as methane represent one of many alternative approaches. Natural gas containing methane is a powerful source of energy, but its use produces large volumes of greenhouse gases; methane can, however, also be produced in closed, CO2-neutral cycles. In our study, we compare the detailed chemical composition of CH4 fuel produced by two different processes: classical production of biogas in a rendering station, an industrial wastewater treatment station, and a landfill gas station, and the novel approach of artificial photosynthesis from CO2 over acidic anatase TiO2 in an experimental apparatus developed in our laboratory. Analysis of the CH4 fuel produced in these processes is important: trace gaseous species can be corrosive or toxic, and low quality of the mixture reduces the efficiency of energy production. In this analysis, we present a combination of methods: high resolution Fourier transform infrared spectroscopy (HR-FTIR), suitable for main-component analysis, and the complementary, extremely sensitive methods of Selected Ion Flow Tube Mass Spectrometry (SIFT-MS) and gas chromatography (GC-MS), which are in turn best suited for trace analysis. The combination of these methods provides more information than any single one of them would and promises a new analytical approach to fuel and gaseous mixture analysis.
NASA Astrophysics Data System (ADS)
Nasir, N. F.; Mirus, M. F.; Ismail, M.
2017-09-01
Crude glycerol produced from the transesterification reaction has limited usage unless it undergoes a purification process, as it contains excess methanol, catalyst, and soap. Conventional purification of crude glycerol involves high cost and complex processes. This study aimed to determine the effects of two purification methods: a direct method (comprising ion exchange and methanol removal steps) and a multistep method (comprising neutralization, filtration, ion exchange, and methanol removal steps). Two crude glycerol samples were investigated: a sample self-produced through the transesterification of palm oil, and a sample obtained from a biodiesel plant. Samples were analysed using Fourier Transform Infrared Spectroscopy, Gas Chromatography, and High Performance Liquid Chromatography. The results for both samples after purification showed that pure glycerol was successfully produced and fatty acid salts were eliminated. The results also indicated the absence of methanol in both samples after purification. In short, the combination of four purification steps contributed to a higher quality of glycerol. The multistep purification method gave a better result than the direct method, as the neutralization and filtration steps helped remove most of the excess salt, fatty acid, and catalyst.
Larouche, Danielle; Cantin-Warren, Laurence; Desgagné, Maxime; Guignard, Rina; Martel, Israël; Ayoub, Akram; Lavoie, Amélie; Gauvin, Robert; Auger, François A.; Moulin, Véronique J.; Germain, Lucie
2016-01-01
Abstract There is a clinical need for skin substitutes to replace full-thickness skin loss. Our group has developed a bilayered skin substitute produced from the patient's own fibroblasts and keratinocytes, referred to as Self-Assembled Skin Substitute (SASS). After cell isolation and expansion, the current time required to produce SASS is 45 days. We aimed to optimize the manufacturing process to standardize the production of SASS and to reduce production time. The new approach consisted of seeding keratinocytes on a fibroblast-derived tissue sheet before its detachment from the culture plate. Four days following keratinocyte seeding, the resulting tissue was stacked on two fibroblast-derived tissue sheets and cultured at the air–liquid interface for 10 days. The resulting total production time was 31 days. An alternative method adapted to more contractile fibroblasts was also developed. It consisted of adding a peripheral frame before seeding fibroblasts in the culture plate. SASSs produced by both new methods shared similar histology, contractile behavior in vitro, and in vivo evolution after grafting onto mice when compared with SASSs produced by the 45-day standard method. In conclusion, the new approach for the production of high-quality human skin substitutes should allow earlier autologous grafting for the treatment of severely burned patients. PMID:27872793
Magnesium stearine production via direct reaction of palm stearine and magnesium hydroxide
NASA Astrophysics Data System (ADS)
Pratiwi, M.; Ylitervo, P.; Pettersson, A.; Prakoso, T.; Soerawidjaja, T. H.
2017-06-01
Fossil oil production cannot keep pace with the growth in consumption, so renewable alternative energy sources are needed to meet fuel demand. One method of producing hydrocarbons is decarboxylation of fatty acids. Vegetable oils and fats are the richest sources of fatty acids and can therefore serve as raw material for biohydrocarbon production. Previous research has shown that heating basic soaps of divalent metals decarboxylates the metal salts and produces hydrocarbons. This study investigates the process and characterization of magnesium soaps prepared from palm stearine by the Blachford method. The metal soaps were synthesized by direct reaction of palm stearine and magnesium hydroxide to produce magnesium stearine and magnesium stearine base soaps at 140-180°C and 6-10 bar for 3-6 hours. The operating conditions that succeeded in yielding metal soaps were 180°C and 10 bar for 3-6 hours. These metal soaps were then compared with commercial magnesium stearate. Thermogravimetric analysis (TGA) showed that the decomposition temperature of all the metal soaps was 250°C. Scanning electron microscopy with energy-dispersive X-ray spectroscopy (SEM-EDX) revealed traces of sodium sulphate in the commercial magnesium stearate and of magnesium hydroxide in both types of magnesium stearine soaps. Microwave plasma atomic emission spectrometry (MP-AES) showed that the magnesium content of magnesium stearine was close to that of commercial magnesium stearate and lower than that of the magnesium stearine base soaps. These experiments suggest that the presented saponification method can produce metal soaps comparable to commercial metal soaps.
Method for detecting coliform organisms
NASA Technical Reports Server (NTRS)
Nishioka, K.; Nibley, D. A.; Jeffers, E. L.; Brooks, R. L. (Inventor)
1983-01-01
A method and apparatus are disclosed for determining the concentration of coliform bacteria in a sample. The sample containing the coliform bacteria is cultured in a liquid growth medium. The cultured bacteria produce hydrogen and the hydrogen is vented to a second cell containing a buffer solution in which the hydrogen dissolves. By measuring the potential change in the buffer solution caused by the hydrogen, as a function of time, the initial concentration of bacteria in the sample is determined. Alternatively, the potential change in the buffer solution can be compared with the potential change in the liquid growth medium to verify that the potential change in the liquid growth medium is produced primarily by the hydrogen gas produced by the coliform bacteria.
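The inversion from potential-change timing to initial concentration can be sketched as a calibration problem: under exponential growth, the time for the hydrogen-driven signal to cross a fixed threshold falls linearly with the log of the inoculum. A minimal illustration, in which the growth rate, threshold, and concentrations are all invented for the sketch rather than taken from the patent:

```python
import numpy as np

def time_to_threshold(n0, rate=0.7, threshold=1e8):
    # N(t) = n0 * exp(rate * t); solve N(t) = threshold for t
    return np.log(threshold / n0) / rate

# Calibration: crossing time is linear in log10 of the initial concentration.
n0s = np.array([1e2, 1e3, 1e4, 1e5])
slope, intercept = np.polyfit(np.log10(n0s), time_to_threshold(n0s), 1)

# Invert an observed crossing time into an initial-concentration estimate.
t_obs = time_to_threshold(5e3)           # pretend this was measured
n0_est = 10 ** ((t_obs - intercept) / slope)
print(round(n0_est))  # → 5000
```

The same linear-in-log calibration underlies the alternative comparison against the growth-medium potential described above.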
A method to determine agro-climatic zones based on correlation and cluster analyses
NASA Astrophysics Data System (ADS)
Borges Valeriano, Taynara Tuany; de Souza Rolim, Glauco; de Oliveira Aparecido, Lucas Eduardo
2017-12-01
Determining agro-climatic zones (ACZs) is traditionally done by cross-comparing meteorological elements such as air temperature, rainfall, and water deficit (DEF). This study proposes a new method based on correlations between monthly DEFs during the crop cycle and annual yield, followed by a multivariate cluster analysis of these correlations. This "correlation method" was applied to all municipalities in the state of São Paulo to determine ACZs for coffee plantations. A traditional ACZ method for coffee, based on temperature and DEF ranges (Evangelista et al.; RBEAA, 6:445-452, 2002), was applied to the study area for comparison against the correlation method. The traditional ACZ classified the "Alta Mogiana," "Média Mogiana," and "Garça and Marília" regions, all traditional coffee regions, as merely suitable or even restricted for coffee plantations. These regions have produced coffee since the 1800s and should not be classified as restricted. The correlation method classified those areas as high-producing regions and expanded them into other areas. The proposed method is innovative because it is more detailed than common ACZ methods: each developmental crop phase is analyzed based on correlations between monthly DEF and yield, strengthening the role of crop physiology in relation to climate.
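The two steps of the correlation method, monthly DEF-yield correlation per municipality followed by multivariate clustering of the correlation profiles, can be sketched as follows. The data are synthetic, and the cluster count and use of k-means are assumptions for illustration (the record above does not specify the clustering algorithm):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)

# Hypothetical data: monthly water deficit (DEF, mm) for 20 municipalities
# over 15 years, and one annual yield figure per municipality-year.
n_muni, n_years, n_months = 20, 15, 12
def_mm = rng.gamma(2.0, 20.0, size=(n_muni, n_years, n_months))
yield_t = (3.0 - 0.01 * def_mm[:, :, 6:9].mean(axis=2)
           + rng.normal(0.0, 0.1, (n_muni, n_years)))

# Step 1: per municipality, correlate each month's DEF with annual yield.
corr = np.empty((n_muni, n_months))
for m in range(n_muni):
    for month in range(n_months):
        corr[m, month] = np.corrcoef(def_mm[m, :, month], yield_t[m])[0, 1]

# Step 2: cluster municipalities on their 12-month correlation profiles;
# each cluster is a candidate agro-climatic zone.
centroids, zones = kmeans2(corr, k=3, seed=0, minit="++")
print(zones)  # zone label per municipality
```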
Method and apparatus for fault tolerance
NASA Technical Reports Server (NTRS)
Masson, Gerald M. (Inventor); Sullivan, Gregory F. (Inventor)
1993-01-01
A method and apparatus for achieving fault tolerance in a computer system having at least a first central processing unit and a second central processing unit. The method comprises the steps of first executing a first algorithm in the first central processing unit on input which produces a first output as well as a certification trail. Next, executing a second algorithm in the second central processing unit on the input and on at least a portion of the certification trail which produces a second output. The second algorithm has a faster execution time than the first algorithm for a given input. Then, comparing the first and second outputs such that an error result is produced if the first and second outputs are not the same. The step of executing the first algorithm and the step of executing the second algorithm preferably take place over essentially the same time period.
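The certification-trail scheme can be illustrated with sorting, a standard worked example for this idea; the function names and the choice of sorting are illustrative, not taken from the patent:

```python
def first_algorithm(data):
    """Full sort; emits the output plus a certification trail
    (the permutation that sorts the input)."""
    trail = sorted(range(len(data)), key=lambda i: data[i])
    return [data[i] for i in trail], trail

def second_algorithm(data, trail):
    """Independent, cheaper check: applies the trail and verifies it,
    instead of re-sorting from scratch."""
    assert sorted(trail) == list(range(len(data))), "trail is not a permutation"
    output = [data[i] for i in trail]
    assert all(a <= b for a, b in zip(output, output[1:])), "trail does not sort"
    return output

data = [5, 1, 4, 2, 3]
out1, trail = first_algorithm(data)
out2 = second_algorithm(data, trail)
# The comparator flags an error if the two outputs disagree.
print("error" if out1 != out2 else "ok")  # → ok
```

A fault in either processor shows up either as a trail that fails verification or as a mismatch between the two outputs.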
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganeev, R. A., E-mail: rashid-ganeev@mail.ru; Physical Department, Voronezh State University, Voronezh 394006
We compare the resonance-induced enhancement of a single harmonic and the quasi-phase-matching-induced enhancement of a group of harmonics during propagation of tunable mid-infrared femtosecond pulses through perforated laser-produced indium plasma. We show that the enhancement of harmonics through the macro-process of quasi-phase-matching is comparable with that achieved through the micro-process of resonant harmonic enhancement. These studies show that joint implementation of the two methods of increasing harmonic yield could be a useful tool for generating strong short-wavelength radiation in different spectral regions. We compare these effects in indium as well as in other plasmas.
Monte Carlo method for calculating the radiation skyshine produced by electron accelerators
NASA Astrophysics Data System (ADS)
Kong, Chaocheng; Li, Quanfeng; Chen, Huaibi; Du, Taibin; Cheng, Cheng; Tang, Chuanxiang; Zhu, Li; Zhang, Hui; Pei, Zhigang; Ming, Shenjin
2005-06-01
Using the MCNP4C Monte Carlo code, the X-ray skyshine produced by 9 MeV, 15 MeV, and 21 MeV electron linear accelerators was calculated with a new two-step method combined with the split-and-roulette variance reduction technique. Results from the Monte Carlo simulation, the empirical formulas used for skyshine calculation, and dose measurements were analyzed and compared. In conclusion, the skyshine dose measurements agreed reasonably with the results computed by the Monte Carlo method, but deviated from the results given by the empirical formulas. The effect on skyshine dose of different accelerator head structures is also discussed in this paper.
Comparison of Four Methods for Teaching Phases of the Moon
NASA Astrophysics Data System (ADS)
Upton, Brianna; Cid, Ximena; Lopez, Ramon
2008-03-01
Previous studies have shown that many students have misconceptions about basic concepts in astronomy. As a consequence, various interactive engagement methods have been developed for introductory astronomy. We will present the results of a study that compares four different teaching methods for the subject of the phases of the Moon, which is well known to produce student difficulties. We compare a fairly traditional didactic approach, the use of manipulatives (moonballs) in lecture, the University of Arizona Lecture Tutorials, and an interactive computer program used in a didactic fashion. We use pre- and post-testing with the Lunar Phase Concept Inventory to determine the relative effectiveness of these methods.
Martin-StPaul, N K; Longepierre, D; Huc, R; Delzon, S; Burlett, R; Joffre, R; Rambal, S; Cochard, H
2014-08-01
Three methods are in widespread use to build vulnerability curves (VCs) to cavitation. The bench drying (BD) method is considered as a reference because embolism and xylem pressure are measured on large branches dehydrating in the air, in conditions similar to what happens in nature. Two other methods of embolism induction have been increasingly used. While the Cavitron (CA) uses centrifugal force to induce embolism, in the air injection (AI) method embolism is induced by forcing pressurized air to enter a stem segment. Recent studies have suggested that the AI and CA methods are inappropriate in long-vesselled species because they produce a very high-threshold xylem pressure for embolism (e.g., P50) compared with what is expected from (i) their ecophysiology in the field (native embolism, water potential and stomatal response to xylem pressure) and (ii) the P50 obtained with the BD method. However, other authors have argued that the CA and AI methods may be valid because they produce VCs similar to the BD method. In order to clarify this issue, we assessed VCs with the three above-mentioned methods on the long-vesselled Quercus ilex L. We showed that the BD VC yielded threshold xylem pressure for embolism consistent with in situ measurements of native embolism, minimal water potential and stomatal conductance. We therefore concluded that the BD method provides a reliable estimate of the VC for this species. The CA method produced a very high P50 (i.e., less negative) compared with the BD method, which is consistent with an artifact related to the vessel length. The VCs obtained with the AI method were highly variable, producing P50 ranging from -2 to -8.2 MPa. This wide variability was more related to differences in base diameter among samples than to differences in the length of samples. We concluded that this method is probably subject to an artifact linked to the distribution of vessel lengths within the sample. 
Overall, our results indicate that the CA and the AI should be used with extreme caution on long-vesselled species. Our results also highlight that several criteria may be helpful to assess the validity of a VC. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
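For context, a threshold parameter like the P50 discussed above is typically estimated by fitting a sigmoidal vulnerability curve to percent loss of conductivity (PLC) data. A sketch with synthetic data, assuming the common exponential-sigmoid VC form (not necessarily the exact model used by the authors):

```python
import numpy as np
from scipy.optimize import curve_fit

def vc(psi, slope, p50):
    # percent loss of conductivity as a function of xylem pressure (MPa);
    # PLC = 50% exactly at psi = p50
    return 100.0 / (1.0 + np.exp(slope * (psi - p50)))

psi = np.linspace(0.0, -10.0, 21)                 # xylem pressures (MPa)
rng = np.random.default_rng(1)
plc_obs = vc(psi, 1.2, -5.0) + rng.normal(0.0, 2.0, psi.size)  # synthetic data

(slope_hat, p50_hat), _ = curve_fit(vc, psi, plc_obs, p0=[1.0, -4.0])
print(f"estimated P50 = {p50_hat:.2f} MPa")
```

Comparing P50 values fitted this way across the BD, CA, and AI datasets is how the method discrepancies above would be quantified.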
Oxygen production on the Lunar materials processing frontier
NASA Technical Reports Server (NTRS)
Altenberg, Barbara H.
1992-01-01
During the pre-conceptual design phase of an initial lunar oxygen processing facility, it is essential to identify and compare the available processes and evaluate them in order to ensure the success of such an endeavor. The focus of this paper is to provide an overview of materials processing to produce lunar oxygen as one part of a given scenario of a developing lunar occupation. More than twenty-five techniques to produce oxygen from lunar materials have been identified. While it is important to continue research on any feasible method, not all methods can be implemented at the initial lunar facility. Hence, it is necessary during the pre-conceptual design phase to evaluate all methods and determine the leading processes for initial focus. Researchers have developed techniques for evaluating the numerous proposed methods in order to suggest which processes would be best to go to the Moon first. As one section in this paper, the recent evaluation procedures that have been presented in the literature are compared and contrasted. In general, the production methods for lunar oxygen fall into four categories: thermochemical, reactive solvent, pyrolytic, and electrochemical. Examples from two of the four categories are described, operating characteristics are contrasted, and terrestrial analogs are presented when possible. In addition to producing oxygen for use as a propellant and for life support, valuable co-products can be derived from some of the processes. This information is also highlighted in the description of a given process.
Chen, How-Ji; Chang, Sheng-Nan; Tang, Chao-Wei
2017-01-01
This study aimed to apply the Taguchi optimization technique to determine the process conditions for producing synthetic lightweight aggregate (LWA) by incorporating tile grinding sludge powder with reservoir sediments. An orthogonal array L16(45) was adopted, which consisted of five controllable four-level factors (i.e., sludge content, preheat temperature, preheat time, sintering temperature, and sintering time). Moreover, the analysis of variance method was used to explore the effects of the experimental factors on the particle density, water absorption, bloating ratio, and loss on ignition of the produced LWA. Overall, the produced aggregates had particle densities ranging from 0.43 to 2.1 g/cm3 and water absorption ranging from 0.6% to 13.4%. These values are comparable to the requirements for ordinary and high-performance LWAs. The results indicated that it is considerably feasible to produce high-performance LWA by incorporating tile grinding sludge with reservoir sediments. PMID:29125576
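A Taguchi-style main-effects analysis of the kind described can be sketched as follows. The design-matrix stand-in and response values are synthetic; a real study would use the published L16(4⁵) table and a full ANOVA rather than this range-of-means ranking:

```python
import numpy as np

rng = np.random.default_rng(2)
levels, n_runs = 4, 16

# Stand-in for an L16(4^5) orthogonal array: 5 balanced four-level columns
# (each level appears 4 times per column).
design = np.array([[i // 4,
                    i % 4,
                    (i % 4 + i // 4) % 4,
                    (i % 4 + 2 * (i // 4)) % 4,
                    (i % 4 + 3 * (i // 4)) % 4] for i in range(n_runs)])
response = rng.normal(1.2, 0.3, n_runs)   # e.g., particle density (g/cm^3)

# Main effect of each factor: mean response at each of its four levels.
main_effects = np.array([[response[design[:, f] == lv].mean()
                          for lv in range(levels)]
                         for f in range(design.shape[1])])

# Larger spread of level means -> more influential factor.
influence = np.ptp(main_effects, axis=1)
print("factors ranked by influence:", np.argsort(influence)[::-1])
```

Because the array is balanced, the mean of each factor's level means equals the grand mean, which makes the level means directly comparable across factors.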
A heuristic statistical stopping rule for iterative reconstruction in emission tomography.
Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D
2013-01-01
We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidean distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform at different count levels. The numerical experiments showed that, compared with the classical method, our technique yielded a significant improvement in the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest, as it produces images with similar or better quality than classical post-filtered iterative reconstruction within a controlled computation time.
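The MLEM iteration that such a stopping rule targets can be sketched as below. The chi-square-style data-consistency check shown is a generic stand-in for "stop when residuals reach the noise level", not the authors' heuristic criterion:

```python
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_det = 32, 48
A = rng.random((n_det, n_pix))                 # toy system matrix
x_true = rng.gamma(2.0, 10.0, n_pix)
y = rng.poisson(A @ x_true).astype(float)      # noisy projection data

x = np.ones(n_pix)                             # uniform initial image
sens = A.sum(axis=0)                           # sensitivity image

def chi2(x):
    proj = np.maximum(A @ x, 1e-12)
    return float(np.sum((y - proj) ** 2 / proj))

chi2_start = chi2(x)
for it in range(200):
    proj = np.maximum(A @ x, 1e-12)
    x *= (A.T @ (y / proj)) / sens             # MLEM multiplicative update
    if chi2(x) <= n_det:                       # residuals at Poisson noise level
        break
print(it, round(chi2(x), 1))
```

Iterating past the stopping point drives the fit into the noise, which is exactly the degradation that post-filtering or an early stop is meant to avoid.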
Synthesis of Graphite Oxide with Different Surface Oxygen Contents Assisted Microwave Radiation
Ibarra-Hernández, Adriana
2018-01-01
Graphite oxide is synthesized via an oxidation reaction using oxidizing compounds, yielding a material with lattice defects arising from the incorporation of various functional groups. Herein, we report the synthesis of graphite oxide with diverse surface oxygen contents through three different microwave-assisted modified versions of the Hummers method (B, C, D), compared with a conventional graphite oxide sample obtained by the Hummers method (A). These methods allow the production not only of graphite oxide but also of reduced graphene oxide, without chemical, thermal, or mechanical reduction steps. The C/O ratios obtained were ~2, ~3.4, and ~8.5 for methodologies C, B, and D, respectively, indicating the presence of graphite oxide and reduced graphene oxide according to X-ray photoelectron spectroscopy. Raman spectroscopy showed that method D produced the fewest structural defects compared to the other methodologies. The results suggest that the permanganate ion produces reducing species during graphite oxidation. The generation of these species is attributed to a reversible reaction between the permanganate ion and π electrons, ions, and radicals produced after treatment with microwave radiation. PMID:29438280
Zarei, Omid; Dastmalchi, Siavoush; Hamzeh-Mivehroud, Maryam
2016-01-01
Yeasts, especially Saccharomyces cerevisiae, are among the oldest organisms with a broad spectrum of applications, owing to their unique genetics and physiology. Yeast extract, i.e., the product of yeast cells, is extensively used as a nutritional resource in bacterial culture media. The aim of this study was to develop a simple, rapid, and cost-effective process to produce yeast extract. In this procedure, mechanical methods such as high temperature and pressure were utilized to produce the yeast extract. The growth of bacteria fed with the produced yeast extract was monitored in order to assess the quality of the product. The results showed that the quality of the produced yeast extract was very promising, as concluded from the growth pattern of bacterial cells in media prepared from this product, and was comparable with that of three commercial yeast extracts in terms of bacterial growth properties. One of the main advantages of the current method is that no chemicals or enzymes were used, reducing the production cost. The method is very simple and cost-effective and can be performed in a reasonable time, making it suitable for adoption by research laboratories. Furthermore, it can be scaled up to produce large quantities for industrial applications. PMID:28243289
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tjessum, K.; Stegeman, J.J.
1979-10-15
Addition of primary organic amines, such as n-butylamine, to the mobile phase altered the capacity factors and selectivity of benzo(a)pyrene metabolites obtained with reverse-phase high pressure liquid chromatography on an ODS column. Separation of benzo(a)pyrene phenols in particular was improved, with 8 of the 10 available metabolites resolved, including those known to be biologically produced. The method offers sufficiently improved resolution and convenience that it should prove useful in comparative studies of the metabolism of benzo(a)pyrene and other polynuclear aromatic hydrocarbons. Applying the method to the analysis of benzo(a)pyrene metabolites produced in vitro by hepatic microsomes from the marine fish Stenotomus versicolor indicated that the principal phenolic derivatives produced by this fish were 1-hydroxy-, 3-hydroxy-, 7-hydroxy-, and 9-hydroxybenzo(a)pyrene.
Roberts, Steven; Martin, Michael A
2010-01-01
Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that have been based on a single "best" model arising from a model selection procedure, because such a strategy may ignore the model uncertainty inherent in searching through a set of candidate models to find the best one. Model averaging has been proposed as a method of allowing for model uncertainty in this context. Our objective was to propose an extension (double BOOT) to a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality. We compared double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States were used in a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality with smaller root mean squared error than those produced by BOOT, BMA, and standard AIC. This performance boost resulted from estimates produced by double BOOT having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.
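The single-layer BOOT idea (double BOOT adds a second resampling layer not shown here) can be sketched as: refit candidate models on bootstrap resamples, select the AIC-best model each time, and average the PM coefficient across replicates. All data and model choices below are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
pm = rng.gamma(3.0, 5.0, n)                       # PM-like exposure series
temp = rng.normal(20.0, 5.0, n)                   # potential confounder
log_mort = 0.002 * pm + 0.01 * temp + rng.normal(0.0, 0.1, n)

def fit_ols(X, y):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    aic = len(y) * np.log(resid @ resid / len(y)) + 2 * X.shape[1]
    return beta, aic

# Candidate models; in both, the PM coefficient sits in column 1.
candidates = [
    lambda p, t: np.column_stack([np.ones_like(p), p]),
    lambda p, t: np.column_stack([np.ones_like(p), p, t]),
]

boot_betas = []
for _ in range(200):
    idx = rng.integers(0, n, n)                   # bootstrap resample
    fits = [fit_ols(c(pm[idx], temp[idx]), log_mort[idx]) for c in candidates]
    best_beta, _ = min(fits, key=lambda f: f[1])  # AIC-best model this replicate
    boot_betas.append(best_beta[1])

beta_avg = float(np.mean(boot_betas))
print(f"model-averaged PM effect: {beta_avg:.4f}")
```

Averaging over replicates is what spreads the estimate across models instead of committing to a single "best" one.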
Peña, Antonio; Sánchez, Norma Silvia; Calahorra, Martha
2010-10-01
Different methods to estimate the plasma membrane potential difference (PMP) of yeast cells with fluorescent monitors were compared. The validity of the methods was tested by the fluorescence difference with or without glucose, and by its decrease upon the addition of 10 mM KCl. Low CaCl₂ concentrations avoid binding of the dye to the cell surface, and low CCCP concentrations avoid its accumulation by mitochondria. Lower concentrations of Ba²⁺ produce an effect similar to that of Ca²⁺, without producing the fluorescence changes derived from its transport. Fluorescence changes measured without accounting for binding of the dyes to the cells and accumulation by mitochondria are overshadowed by dye distribution between this organelle and the cytoplasm. Other factors, such as yeast starvation, the dye used, the parameters of the fluorescence changes, and the buffers and incubation times, were also analyzed. An additional approach to measuring actual or relative PMP values, by determining the accumulation of the dye, is presented.
Method and apparatus for filtering visual documents
NASA Technical Reports Server (NTRS)
Rorvig, Mark E. (Inventor); Shelton, Robert O. (Inventor)
1993-01-01
A method and apparatus for producing an abstract or condensed version of a visual document is presented. The frames comprising the visual document are first sampled to reduce the number of frames required for processing. The frames are then subjected to a structural decomposition process that reduces all information in each frame to a set of values. These values are in turn normalized and further combined to produce only one information content value per frame. The information content values of these frames are then compared to a selected distribution cutoff point. This effectively selects those values at the tails of a normal distribution, thus filtering key frames from their surrounding frames. The value for each frame is then compared with the value from the previous frame, and the respective frame is finally stored only if the values are significantly different. The method filters or compresses a visual document with a reduction in digital storage on the ratio of up to 700 to 1 or more, depending on the content of the visual document being filtered.
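The filtering pipeline, one information value per frame, selection of distribution tails, then a difference test against the previously kept frame, can be sketched as follows. The per-frame value (a plain mean) and both thresholds are placeholder assumptions, not the patent's structural decomposition:

```python
import numpy as np

rng = np.random.default_rng(5)
frames = rng.random((120, 16, 16))           # synthetic grayscale video
frames[40] += 5.0                            # two artificially "salient" frames
frames[90] -= 5.0

values = frames.mean(axis=(1, 2))            # stand-in: one value per frame
z = (values - values.mean()) / values.std()  # compare to the distribution

kept, last = [], None
for i, zi in enumerate(z):
    if abs(zi) > 2.0:                        # tails of the normal distribution
        if last is None or abs(zi - last) > 0.5:   # significantly different
            kept.append(i)                   # store this key frame
            last = zi
print(kept)  # → [40, 90]
```

Storing only the kept indices (plus their frames) is what yields the large compression ratios cited above.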
O'Hara, F. Patrick; Suaya, Jose A.; Ray, G. Thomas; Baxter, Roger; Brown, Megan L.; Mera, Robertino M.; Close, Nicole M.; Thomas, Elizabeth
2016-01-01
A number of molecular typing methods have been developed for characterization of Staphylococcus aureus isolates. The utility of these systems depends on the nature of the investigation for which they are used. We compared two commonly used methods of molecular typing, multilocus sequence typing (MLST) (and its clustering algorithm, Based Upon Related Sequence Type [BURST]) with the staphylococcal protein A (spa) typing (and its clustering algorithm, Based Upon Repeat Pattern [BURP]), to assess the utility of these methods for macroepidemiology and evolutionary studies of S. aureus in the United States. We typed a total of 366 clinical isolates of S. aureus by these methods and evaluated indices of diversity and concordance values. Our results show that, when combined with the BURP clustering algorithm to delineate clonal lineages, spa typing produces results that are highly comparable with those produced by MLST/BURST. Therefore, spa typing is appropriate for use in macroepidemiology and evolutionary studies and, given its lower implementation cost, this method appears to be more efficient. The findings are robust and are consistent across different settings, patient ages, and specimen sources. Our results also support a model in which the methicillin-resistant S. aureus (MRSA) population in the United States comprises two major lineages (USA300 and USA100), which each consist of closely related variants. PMID:26669861
Particle simulation of Coulomb collisions: Comparing the methods of Takizuka and Abe and Nanbu
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Chiaming; Lin, Tungyou; Caflisch, Russel
2008-04-20
The interactions of charged particles in a plasma are governed by long-range Coulomb collisions. We compare two widely used Monte Carlo models for Coulomb collisions: one developed by Takizuka and Abe in 1977, the other by Nanbu in 1997. We perform deterministic and statistical error analysis with respect to particle number and time step. The two models produce similar stochastic errors, but Nanbu's model gives smaller time-step errors. Error comparisons between these two methods are presented.
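A single Takizuka-Abe binary collision can be sketched as below: the scattering angle is sampled via tan(θ/2) = δ with δ Gaussian, and the relative velocity is rotated by the standard T-A formulas. The collision-rate physics (density, ln Λ, reduced mass) is folded into one assumed variance parameter, and equal masses are assumed:

```python
import numpy as np

def ta_collide(v1, v2, var_delta, rng):
    """One Takizuka-Abe binary collision between two equal-mass particles."""
    u = v1 - v2                              # relative velocity
    delta = rng.normal(0.0, np.sqrt(var_delta))
    sin_th = 2 * delta / (1 + delta**2)      # sin(theta) from tan(theta/2)=delta
    one_m_cos = 2 * delta**2 / (1 + delta**2)
    phi = rng.uniform(0.0, 2 * np.pi)        # random azimuth
    ux, uy, uz = u
    uperp, umag = np.hypot(ux, uy), np.linalg.norm(u)
    # change in relative velocity (standard T-A rotation formulas)
    du = np.array([
        (ux / uperp) * uz * sin_th * np.cos(phi)
        - (uy / uperp) * umag * sin_th * np.sin(phi) - ux * one_m_cos,
        (uy / uperp) * uz * sin_th * np.cos(phi)
        + (ux / uperp) * umag * sin_th * np.sin(phi) - uy * one_m_cos,
        -uperp * sin_th * np.cos(phi) - uz * one_m_cos,
    ])
    return v1 + 0.5 * du, v2 - 0.5 * du      # momentum-conserving update

rng = np.random.default_rng(6)
v1, v2 = np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])
w1, w2 = ta_collide(v1, v2, 0.01, rng)
# momentum and relative speed are conserved for equal masses
print(w1 + w2, np.linalg.norm(w1 - w2))
```

Nanbu's method replaces the per-step Gaussian angle with a cumulative scattering-angle distribution, which is what reduces the time-step error noted above.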
Green light emitting curcumin dye in organic solvents
NASA Astrophysics Data System (ADS)
Mubeen, Mohammad; Deshmukh, Abhay D.; Dhoble, S. J.
2018-05-01
In the modern world, demand for white light emission has increased because of its wide applications in display and lighting devices, sensors, etc. White light can be produced by mixing red, green, and blue light. The green component can be produced from a plant extract, turmeric: curcumin is the essential element in turmeric that generates the green light. Photoluminescence (PL) emission is observed at 540 nm under 380 nm excitation. This method of generating green light is very simple, cost-effective, and efficient compared to other methods.
Approximate string matching algorithms for limited-vocabulary OCR output correction
NASA Astrophysics Data System (ADS)
Lasko, Thomas A.; Hauser, Susan E.
2000-12-01
Five methods for matching words mistranslated by optical character recognition to their most likely match in a reference dictionary were tested on data from the archives of the National Library of Medicine. The methods, including an adaptation of the cross correlation algorithm, the generic edit distance algorithm, the edit distance algorithm with a probabilistic substitution matrix, Bayesian analysis, and Bayesian analysis on an actively thinned reference dictionary were implemented and their accuracy rates compared. Of the five, the Bayesian algorithm produced the most correct matches (87%), and had the advantage of producing scores that have a useful and practical interpretation.
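Of the methods listed, the edit distance with a probabilistic substitution matrix is the easiest to sketch: assumed common OCR confusions get a reduced substitution cost, so confusable characters match dictionary words cheaply. The confusion pairs and costs below are illustrative, not the paper's trained matrix:

```python
def ocr_edit_distance(a, b,
                      cheap_subs=frozenset({("l", "1"), ("O", "0"), ("i", "1")})):
    """Levenshtein distance where assumed OCR confusions cost 0.25, not 1."""
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = float(i)
    for j in range(n + 1):
        d[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                sub = 0.0
            elif (a[i - 1], b[j - 1]) in cheap_subs or (b[j - 1], a[i - 1]) in cheap_subs:
                sub = 0.25                       # likely OCR confusion
            else:
                sub = 1.0                        # arbitrary substitution
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + sub)
    return d[m][n]

# Correct an OCR word by choosing the lowest-cost dictionary match.
dictionary = ["medicine", "medical", "medium"]
word = "med1cine"
best = min(dictionary, key=lambda w: ocr_edit_distance(word, w))
print(best)  # → medicine
```

The Bayesian variant that scored best in the study goes further, turning such costs into posterior probabilities over dictionary words.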
Methods of pretreating comminuted cellulosic material with carbonate-containing solutions
Francis, Raymond
2012-11-06
Methods of pretreating comminuted cellulosic material with an acidic solution and then a carbonate-containing solution to produce a pretreated cellulosic material are provided. The pretreated material may then be further treated in a pulping process, for example, a soda-anthraquinone pulping process, to produce a cellulose pulp. The pretreatment solutions may be extracted from the pretreated cellulose material and selectively re-used, for example, with acid or alkali addition, for the pretreatment solutions. The resulting cellulose pulp is characterized by having reduced lignin content and increased yield compared to prior art treatment processes.
The effect of sampling techniques used in the multiconfigurational Ehrenfest method
NASA Astrophysics Data System (ADS)
Symonds, C.; Kattirtzi, J. A.; Shalashilin, D. V.
2018-05-01
In this paper, we compare and contrast basis set sampling techniques recently developed for use in the ab initio multiple cloning method, a direct dynamics extension to the multiconfigurational Ehrenfest approach, used recently for the quantum simulation of ultrafast photochemistry. We demonstrate that simultaneous use of basis set cloning and basis function trains can produce results which are converged to the exact quantum result. To demonstrate this, we employ these sampling methods in simulations of quantum dynamics in the spin boson model with a broad range of parameters and compare the results to accurate benchmarks.
Crystallization Methods for Preparation of Nanocrystals for Drug Delivery System.
Gao, Yuan; Wang, Jingkang; Wang, Yongli; Yin, Qiuxiang; Glennon, Brian; Zhong, Jian; Ouyang, Jinbo; Huang, Xin; Hao, Hongxun
2015-01-01
Low water solubility of drug products causes delivery problems such as low bioavailability. The reduced particle size and increased surface area of nanocrystals lead to an increased dissolution rate. The formulation of drug nanocrystals is a robust approach and has been widely applied to drug delivery systems (DDS) owing to the significant development of nanoscience and nanotechnology. It can be used to improve drug efficacy, provide targeted delivery, and minimize side effects. Crystallization is the main and most efficient unit operation used to produce nanocrystals. Both traditional crystallization methods, such as reactive crystallization and anti-solvent crystallization, and newer methods, such as supercritical fluid crystallization and high-gravity controlled precipitation, can be used to produce nanocrystals. This mini-review outlines the main crystallization methods addressed in the literature. The advantages and disadvantages of each method are summarized and compared.
Finding Dantzig Selectors with a Proximity Operator based Fixed-point Algorithm
2014-11-01
experiments showed that this method usually outperforms the method in [2] in terms of CPU time while producing solutions of comparable quality. The... method proposed in [19]. To alleviate the difficulty caused by the subproblem without a closed-form solution, a linearized ADM was proposed for the...a closed-form solution, but the β-related subproblem does not and is solved approximately by using the nonmonotone gradient method in [18]. The
Winter bird population studies and project prairie birds for surveying grassland birds
Twedt, D.J.; Hamel, P.B.; Woodrey, M.S.
2008-01-01
We compared 2 survey methods for assessing winter bird communities in temperate grasslands: Winter Bird Population Study surveys are area-searches that have long been used in a variety of habitats whereas Project Prairie Bird surveys employ active-flushing techniques on strip-transects and are intended for use in grasslands. We used both methods to survey birds on 14 herbaceous reforested sites and 9 coastal pine savannas during winter and compared resultant estimates of species richness and relative abundance. These techniques did not yield similar estimates of avian populations. We found Winter Bird Population Studies consistently produced higher estimates of species richness, whereas Project Prairie Birds produced higher estimates of avian abundance for some species. When it is important to identify all species within the winter bird community, Winter Bird Population Studies should be the survey method of choice. If estimates of the abundance of relatively secretive grassland bird species are desired, the use of Project Prairie Birds protocols is warranted. However, we suggest that both survey techniques, as currently employed, are deficient and recommend distance- based survey methods that provide species-specific estimates of detection probabilities be incorporated into these survey methods.
Lewis, Mary E; Gowland, Rebecca
2007-09-01
This study compares the infant mortality profiles of 128 infants from two urban and two rural cemetery sites in medieval England. The aim of this paper is to assess the impact of urbanization and industrialization in terms of endogenous or exogenous causes of death. In order to undertake this analysis, two different methods of estimating gestational age from long bone lengths were used: a traditional regression method and a Bayesian method. The regression method tended to produce more marked peaks at 38 weeks, while the Bayesian method produced a broader range of ages that was more comparable with the expected "natural" mortality profiles. At all the sites, neonatal mortality (28-40 weeks) outweighed post-neonatal mortality (41-48 weeks), with rural Raunds Furnells in Northamptonshire showing the highest number of neonatal deaths and post-medieval Spitalfields, London, showing a greater proportion of deaths due to exogenous or environmental factors. Of the four sites under study, Wharram Percy in Yorkshire showed the most convincing "natural" infant mortality profile, suggesting the inclusion of all births at the site (i.e., stillbirths and unbaptised infants). (c) 2007 Wiley-Liss, Inc.
A robust, efficient equidistribution 2D grid generation method
NASA Astrophysics Data System (ADS)
Chacon, Luis; Delzanno, Gian Luca; Finn, John; Chung, Jeojin; Lapenta, Giovanni
2007-11-01
We present a new cell-area equidistribution method for two-dimensional grid adaptation [1]. The method is able to satisfy the equidistribution constraint to arbitrary precision while optimizing desired grid properties (such as isotropy and smoothness). The method is based on the minimization of the grid smoothness integral, constrained to producing a given positive-definite cell volume distribution. The procedure gives rise to a single, non-linear scalar equation with no free parameters. We solve this equation numerically with the Newton-Krylov technique. The ellipticity property of the linearized scalar equation allows multigrid preconditioning techniques to be effectively used. We demonstrate a solution exists and is unique. Therefore, once the solution is found, the adapted grid cannot be folded due to the positivity of the constraint on the cell volumes. We present several challenging tests to show that our new method produces optimal grids in which the constraint is satisfied numerically to arbitrary precision. We also compare the new method to the deformation method [2] and show that our new method produces better quality grids. [1] G.L. Delzanno, L. Chacón, J.M. Finn, Y. Chung, G. Lapenta, A new, robust equidistribution method for two-dimensional grid generation, in preparation. [2] G. Liao and D. Anderson, A new approach to grid generation, Appl. Anal. 44, 285-297 (1992).
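The equidistribution constraint described above can be illustrated in one dimension, where it reduces to inverting a cumulative integral of a weight function. The sketch below is a simplified hypothetical 1D analogue (the paper's 2D method instead solves a constrained minimization via Newton-Krylov); the weight function and grid size are illustrative assumptions.

```python
import math

def equidistribute(w, n_cells, n_samples=20000):
    """Place grid points on [0, 1] so every cell carries the same
    integral of the weight function w (1D equidistribution)."""
    xs = [i / n_samples for i in range(n_samples + 1)]
    # Cumulative integral of w via the trapezoid rule.
    cum = [0.0]
    for i in range(n_samples):
        cum.append(cum[-1] + 0.5 * (w(xs[i]) + w(xs[i + 1])) / n_samples)
    total = cum[-1]
    grid = [0.0]
    j = 0
    for k in range(1, n_cells):
        target = total * k / n_cells
        while cum[j + 1] < target:
            j += 1
        # Linear interpolation inside the bracketing sample interval.
        frac = (target - cum[j]) / (cum[j + 1] - cum[j])
        grid.append((j + frac) / n_samples)
    grid.append(1.0)
    return grid

# A weight peaked at x = 0.5 concentrates cells there (illustrative choice).
grid = equidistribute(lambda x: 1.0 + 50.0 * math.exp(-200.0 * (x - 0.5) ** 2), 16)
```

Because the weight is largest near the center, the cells there are much narrower than the edge cells, while each cell holds the same integral of the weight.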
Effects of Synthesis Method on Electrical Properties of Graphene
NASA Astrophysics Data System (ADS)
Fuad, M. F. I. Ahmad; Jarni, H. H.; Shariffudin, W. N.; Othman, N. H.; Rahim, A. N. Che Abdul
2018-05-01
The aim of this study was to achieve the highest reduction capability and the most complete removal of oxygen from graphene oxide (GO) by using different chemical methods. A modified Hummer's method was used to produce GO, and hydrazine hydrate was utilized to reduce the GO to graphene. Two chemical methods were used to synthesize graphene: 1) Sina's method and 2) Sasha's method. Both GO and graphene were then characterized using X-Ray Powder Diffraction (XRD) and Fourier Transform Infrared Spectrometry (FT-IR). The XRD patterns showed that the values for graphene and GO were within their reliable ranges, and FT-IR was used to compare the functional groups of GO and graphene. Reduction of the GO was verified by the absence of oxygen-containing functional groups in the graphene spectra. Electrochemical impedance spectrometry (EIS) was then conducted to test the electrical conductivity of two batches (each weighing 1.6 g) of graphene synthesized using the two different methods (Sina's method and Sasha's method). Graphene produced by Sasha's method had a lower conductivity than that produced by Sina's method (6.2E+02 S/m versus 8.1E+02 S/m, respectively). These values show that both methods produced good graphene; however, the graphene produced by Sina's method had better electrical properties.
Lu, Fletcher; Lemonde, Manon
2013-12-01
The objective of this study was to assess whether online teaching delivery produces student test performance comparable to the traditional face-to-face approach, irrespective of academic aptitude. This study involves a quasi-experimental comparison of student performance in an undergraduate health science statistics course partitioned in two ways. The first partition involved one group of students taught with a traditional face-to-face classroom approach and the other through a completely online instructional approach. The second partition categorized the students by academic aptitude into higher- and lower-performing groups based on their assignment grades during the course. Controls placed on the study to reduce the possibility of confounding variables were: the same instructor taught both groups, covering the same subject information, using the same assessment methods, delivered over the same period of time. The results of this study indicate that online teaching delivery is as effective as a traditional face-to-face approach in terms of student test performance, but only for academically higher-performing students. For academically lower-performing students, the online delivery method produced significantly poorer test results compared to those of lower-performing students taught in a traditional face-to-face environment.
Vermicompost derived from different feedstocks as a plant growth medium.
Warman, P R; Anglopez, M J
2010-06-01
This study determined feedstock effects on earthworm populations and the quality of resulting vermicomposts produced from different types of feedstocks using different vermicomposting durations. Feedstock combinations (Kitchen Paper Waste (KPW), Kitchen Yard Waste (KYW), Cattle Manure Yard Waste (CMY)), three durations of vermicomposting (45, 68 or 90 days), and two seed germination methods (with two concentrations of vermicompost) for radish, marigold and upland cress, served as the independent variables. The worms (Eisenia fetida) doubled their weight by day 68 in KPW and CMY vermicomposts and day 90 KPW vermicompost produced the greatest weight of worms. The direct seed germination method (seeding into soil or vermicompost-soil mixtures) indicated that KPW and KYW feedstocks decreased germination compared to the control, even in mature vermicompost. Seed germination was greater in the water extract method; however, most of the vermicompost extracts suppressed germination of the three seed species compared to the water controls. Vermicomposts from all three feedstocks increased leaf area and biomass compared to the control, especially in the 10% vermicompost:soil mix. Thus, seed germination and leaf area or plant biomass for these three species are contrasting vermicompost quality indicators. (c) 2010 Elsevier Ltd. All rights reserved.
Mind-to-paper is an effective method for scientific writing.
Rosenberg, Jacob; Burcharth, Jakob; Pommergaard, Hans Christian; Danielsen, Anne Kjærgaard
2013-03-01
The problem of initiating the writing process is a well-known phenomenon, especially for young and inexperienced scientists. The purpose of this paper is to present an effective method to overcome this problem and increase writing efficiency among inexperienced scientists. Twelve young scientists within the medical/surgical fields were introduced to the mind-to-paper concept. The first and last article drafts produced by each of the scientists were scored for language complexity (LIX number, Flesch Reading Ease Scale and Gunning Fog), flow, structure, length and use of references; and the results were compared. All participants produced one full article draft during each of the three dictation days. When comparing the first and last article draft regarding time used, no significant difference was detected. In general, the manuscripts were of high quality on all evaluated parameters, but language complexity had increased in the final manuscript. Mind-to-paper dictation for scientific writing is an effective method for production of scientific papers of good initial quality, even when used for the first time by inexperienced scientists. We conclude that practicing this concept produces papers of an adequate language complexity, and that dictation as a writing tool allows for fast transfer of ideas and thoughts to written text.
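The LIX score mentioned in the abstract has a simple closed form: average sentence length plus the percentage of words longer than six letters. A minimal sketch is below; the tokenization rules are simplified assumptions, not the scoring procedure used in the study.

```python
def lix(text):
    """LIX readability: words per sentence plus the percentage of
    long words (more than six letters). Higher means harder to read."""
    words = [w.strip('.,;:!?()"\'') for w in text.split()]
    words = [w for w in words if w]
    sentences = max(1, sum(text.count(c) for c in '.!?'))
    long_words = sum(1 for w in words if len(w) > 6)
    return len(words) / sentences + 100.0 * long_words / len(words)
```

For example, `lix("The cat sat. The dog ran.")` gives 3.0 (three words per sentence, no long words), while a sentence of multisyllabic jargon scores far higher.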
NASA Technical Reports Server (NTRS)
Pfouts, W. R.; Shamblen, C. E.; Mosier, J. S.; Peebles, R. E.; Gorsler, R. W.
1979-01-01
An attempt was made to improve methods for producing powder metallurgy aircraft gas turbine engine parts from the nickel base superalloy known as Rene 95. The parts produced were the high pressure turbine aft shaft for the CF6-50 engine and the stages 5 through 9 compressor disk forgings for the CFM56/F101 engines. A 50% cost reduction was achieved as compared to conventional cast and wrought processing practices. An integrated effort involving several powder producers and a major forging source were included.
Surface code implementation of block code state distillation.
Fowler, Austin G; Devitt, Simon J; Jones, Cody
2013-01-01
State distillation is the process of taking a number of imperfect copies of a particular quantum state and producing fewer better copies. Until recently, the lowest overhead method of distilling states produced a single improved |A〉 state given 15 input copies. New block code state distillation methods can produce k improved |A〉 states given 3k + 8 input copies, potentially significantly reducing the overhead associated with state distillation. We construct an explicit surface code implementation of block code state distillation and quantitatively compare the overhead of this approach to the old. We find that, using the best available techniques, for parameters of practical interest, block code state distillation does not always lead to lower overhead, and, when it does, the overhead reduction is typically less than a factor of three.
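The input-count comparison in the abstract is easy to make concrete: the old protocol consumes 15 input copies per improved state, while one block-code round consumes (3k + 8)/k, approaching 3 as the block grows. The calculation below counts raw copies only; it deliberately ignores the surface-code resource overheads the paper actually quantifies.

```python
def copies_per_output(k):
    """Raw input copies consumed per improved state in one block-code
    distillation round: 3k + 8 inputs yield k outputs."""
    return (3 * k + 8) / k

OLD_PROTOCOL = 15  # the 15-to-1 protocol: 15 inputs -> 1 improved state

# The per-output cost falls toward 3 as the block size k grows.
ratios = {k: copies_per_output(k) for k in (1, 2, 8, 64)}
```

This is consistent with the abstract's observation that any overhead reduction is typically less than a factor of three: 15 / ((3k + 8)/k) < 5 in copy count, and real surface-code costs erode the gap further.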
NASA Astrophysics Data System (ADS)
Moey, Siah Watt; Abdullah, Aminah; Ahmad, Ishak
2014-09-01
A new patent-pending process is proposed in this study to produce edible film directly from seaweed (Kappaphycus alvarezii). Seaweed, together with other ingredients, was used to produce the film through a casting technique. Physical and mechanical tests were performed on the edible film to examine thickness, colour, transparency, solubility, tensile strength, elongation at break, water permeability rate, oxygen permeability rate and surface morphology. The produced film was transparent, stretchable and sealable, and has the basic properties required for applications in the food, pharmaceutical, cosmetic, toiletries and agricultural industries. Edible film was successfully developed directly from dry seaweed instead of from alginate and carrageenan. The edible film processing method developed in this research is easier and cheaper compared with methods using alginate and carrageenan.
Large-scale production of lentiviral vector in a closed system hollow fiber bioreactor
Sheu, Jonathan; Beltzer, Jim; Fury, Brian; Wilczek, Katarzyna; Tobin, Steve; Falconer, Danny; Nolta, Jan; Bauer, Gerhard
2015-01-01
Lentiviral vectors are widely used in the field of gene therapy as an effective method for permanent gene delivery. While current methods of producing small scale vector batches for research purposes depend largely on culture flasks, the emergence and popularity of lentiviral vectors in translational, preclinical and clinical research has demanded their production on a much larger scale, a task that can be difficult to manage with the numbers of producer cell culture flasks required for large volumes of vector. To generate a large scale, partially closed system method for the manufacturing of clinical grade lentiviral vector suitable for the generation of induced pluripotent stem cells (iPSCs), we developed a method employing a hollow fiber bioreactor traditionally used for cell expansion. We have demonstrated the growth, transfection, and vector-producing capability of 293T producer cells in this system. Vector particle RNA titers after subsequent vector concentration yielded values comparable to lentiviral iPSC induction vector batches produced using traditional culture methods in 225 cm2 flasks (T225s) and in 10-layer cell factories (CF10s), while yielding a volume nearly 145 times larger than the yield from a T225 flask and nearly three times larger than the yield from a CF10. Employing a closed system hollow fiber bioreactor for vector production offers the possibility of manufacturing large quantities of gene therapy vector while minimizing reagent usage, equipment footprint, and open system manipulation. PMID:26151065
Graphene oxide and H2 production from bioelectrochemical graphite oxidation.
Lu, Lu; Zeng, Cuiping; Wang, Luda; Yin, Xiaobo; Jin, Song; Lu, Anhuai; Jason Ren, Zhiyong
2015-11-17
Graphene oxide (GO) is an emerging material for energy and environmental applications, but it has been primarily produced using chemical processes involving high energy consumption and hazardous chemicals. In this study, we report a new bioelectrochemical method to produce GO from graphite under ambient conditions without chemical amendments; value-added organic compounds and H2 at a high rate were also produced. Compared with an abiotic electrochemical electrolysis control, the microbially assisted graphite oxidation produced graphite oxide and graphene oxide (BEGO) sheets, CO2, and current at a higher rate and a lower applied voltage. The resultant electrons are transferred to a biocathode, where H2 and organic compounds are produced by microbial reduction of protons and CO2, respectively, a process known as microbial electrosynthesis (MES). Pseudomonas is the dominant population on the anode, while the abundant anaerobic solvent-producing bacterium Clostridium carboxidivorans is likely responsible for electrosynthesis on the cathode. Oxygen production through water electrolysis was not detected on the anode due to the presence of facultative and aerobic bacteria acting as O2 sinks. This new method provides a sustainable route for producing graphene materials and renewable H2 at low cost, and it may stimulate a new area of research in MES.
A Rapid and Efficient Screening Method for Antibacterial Compound-Producing Bacteria.
Hettiarachchi, Sachithra; Lee, Su-Jin; Lee, Youngdeuk; Kwon, Young-Kyung; De Zoysa, Mahanama; Moon, Song; Jo, Eunyoung; Kim, Taeho; Kang, Do-Hyung; Heo, Soo-Jin; Oh, Chulhong
2017-08-28
Antibacterial compounds are widely used in the treatment of human and animal diseases. The overuse of antibiotics has led to a rapid rise in the prevalence of drug-resistant bacteria, making the development of new antibacterial compounds essential. This study focused on developing a fast and easy method for identifying marine bacteria that produce antibiotic compounds. Eight randomly selected marine target bacterial species (Agrococcus terreus, Bacillus algicola, Mesoflavibacter zeaxanthinifaciens, Pseudoalteromonas flavipulchra, P. peptidolytica, P. piscicida, P. rubra, and Zunongwangia atlantica) were tested for production of antibacterial compounds against four strains of test bacteria (B. cereus, B. subtilis, Halomonas smyrnensis, and Vibrio alginolyticus). Colony picking was used as the primary screening method. Clear zones were observed around colonies of P. flavipulchra, P. peptidolytica, P. piscicida, and P. rubra tested against B. cereus, B. subtilis, and H. smyrnensis. The efficiency of colony scraping and broth culture methods for antimicrobial compound extraction was also compared using a disk diffusion assay. P. peptidolytica, P. piscicida, and P. rubra showed antagonistic activity against H. smyrnensis, B. cereus, and B. subtilis, respectively, only in the colony scraping method. Our results show that colony picking and colony scraping are effective, quick, and easy methods of screening for antibacterial compound-producing bacteria.
GaAs thin films and methods of making and using the same
Boettcher, Shannon; Ritenour, Andrew; Boucher, Jason; Greenaway, Ann
2016-06-14
Disclosed herein are embodiments of methods for making GaAs thin films, such as photovoltaic GaAs thin films. The methods disclosed herein utilize sources, precursors, and reagents that do not produce (or require) toxic gas and that are readily available and relatively low in cost. In some embodiments, the methods are readily scalable for industrial applications and can provide GaAs thin films having properties that are at least comparable to or potentially superior to GaAs films obtained from conventional methods.
Fayyazi, E; Ghobadian, B; Najafi, G; Hosseinzadeh, B; Mamat, R; Hosseinzadeh, J
2015-09-01
Biodiesel is a green (clean), renewable energy source and an alternative to diesel fuel. Biodiesel can be produced from vegetable oil, animal fat and waste cooking oil or fat. Fats and oils react with alcohol to produce methyl ester, which is generally known as biodiesel. Because vegetable oil and animal fat wastes are cheaper, the tendency to produce biodiesel from these materials is increasing. In this research, the effects of the alcohol-to-oil molar ratio (4:1, 6:1, 8:1), the catalyst concentration (0.75%, 1% and 1.25% w/w) and the ultrasonication time for the transesterification reaction (3, 6 and 9 min) on the fatty acid-to-methyl ester (biodiesel) conversion percentage were studied. In biodiesel production from chicken fat, as the catalyst concentration increased up to 1%, the oil-to-biodiesel conversion percentage first increased and then decreased. Upon increasing the molar ratio from 4:1 to 6:1 and then to 8:1, the oil-to-biodiesel conversion percentage increased by 21.9% and 22.8%, respectively. The optimal point was determined by response surface methodology (RSM) and genetic algorithms (GAs). The biodiesel conversion from chicken fat by ultrasonic waves with a 1% w/w catalyst concentration, a 7:1 alcohol-to-oil molar ratio and a 9 min reaction time was 94.8%. For biodiesel produced by ultrasonic waves at a similar conversion percentage, the reaction time was approximately 87.5% shorter than with the conventional method. This reduction in reaction time makes the ultrasonic method superior. Copyright © 2015. Published by Elsevier B.V.
A Comparative Study of Two Acoustic Measures of Hypernasality
ERIC Educational Resources Information Center
Vogel, Adam P.; Ibrahim, Hasherah M.; Reilly, Sheena; Kilpatrick, Nicky
2009-01-01
Purpose: This study aimed to compare 2 quantitative acoustic measures of nasality in children with cleft lip and palate (CLP) and healthy controls using formalized perceptual assessment as a guide. Method: Fifty participants (23 children with CLP and 27 age- and gender-matched healthy controls) aged between 4 and 12 years produced a variety of…
Comparing fire spread algorithms using equivalence testing and neutral landscape models
Brian R. Miranda; Brian R. Sturtevant; Jian Yang; Eric J. Gustafson
2009-01-01
We demonstrate a method to evaluate the degree to which a meta-model approximates spatial disturbance processes represented by a more detailed model across a range of landscape conditions, using neutral landscapes and equivalence testing. We illustrate this approach by comparing burn patterns produced by a relatively simple fire spread algorithm with those generated by...
A Comparative Analysis of Numbers and Biology Content Domains between Turkey and the USA
ERIC Educational Resources Information Center
Incikabi, Lutfi; Ozgelen, Sinan; Tjoe, Hartono
2012-01-01
This study aimed to compare Mathematics and Science programs focusing on TIMSS content domains of Numbers and Biology that produced the largest achievement gap among students from Turkey and the USA. Specifically, it utilized the content analysis method within Turkish and New York State (NYS) frameworks. The procedures of study included matching…
DendroBLAST: approximate phylogenetic trees in the absence of multiple sequence alignments.
Kelly, Steven; Maini, Philip K
2013-01-01
The rapidly growing availability of genome information has created considerable demand for both fast and accurate phylogenetic inference algorithms. We present a novel method called DendroBLAST for reconstructing phylogenetic dendrograms/trees from protein sequences using BLAST. This method differs from other methods by incorporating a simple model of sequence evolution to test the effect of introducing sequence changes on the reliability of the bipartitions in the inferred tree. Using realistic simulated sequence data we demonstrate that this method produces phylogenetic trees that are more accurate than other commonly used distance-based methods, though not as accurate as maximum likelihood methods applied to good quality multiple sequence alignments. In addition to tests on simulated data, we use DendroBLAST to generate input trees for a supertree reconstruction of the phylogeny of the Archaea. This independent analysis produces an approximate phylogeny of the Archaea that has both high precision and recall when compared to a previously published analysis of the same dataset using conventional methods. Taken together these results demonstrate that approximate phylogenetic trees can be produced in the absence of multiple sequence alignments, and we propose that these trees will provide a platform for improving and informing downstream bioinformatic analysis. A web implementation of the DendroBLAST method is freely available for use at http://www.dendroblast.com/.
An Evaluation of a New Method of IRT Scaling
ERIC Educational Resources Information Center
Ragland, Shelley
2010-01-01
In order to be able to fairly compare scores derived from different forms of the same test within the Item Response Theory framework, all individual item parameters must be on the same scale. A new approach, the RPA method, which is based on transformations of predicted score distributions was evaluated here and was shown to produce results…
Maritime Search and Rescue via Multiple Coordinated UAS
2017-06-12
performed by a set of UAS. Our investigation covers the detection of multiple mobile objects by a heterogeneous collection of UAS. Three methods (two...account for contingencies such as airspace deconfliction. Results are produced using simulation to verify the capability of the proposed method and to...compare the various partitioning methods. Results from this simulation show that great gains in search efficiency can be made when the search space is
RobOKoD: microbial strain design for (over)production of target compounds.
Stanford, Natalie J; Millard, Pierre; Swainston, Neil
2015-01-01
Sustainable production of target compounds such as biofuels and high-value chemicals for pharmaceutical, agrochemical, and chemical industries is becoming an increasing priority given their current dependency upon diminishing petrochemical resources. Designing these strains is difficult, with current methods focusing primarily on knocking out genes, dismissing other vital steps of strain design including the overexpression and dampening of genes. The design predictions from current methods also do not translate well into successful strains in the laboratory. Here, we introduce RobOKoD (Robust, Overexpression, Knockout and Dampening), a method for predicting strain designs for overproduction of targets. The method uses flux variability analysis to profile each reaction within the system under differing production percentages of target-compound and biomass. Using these profiles, reactions are identified as potential knockout, overexpression, or dampening targets. The identified reactions are ranked according to their suitability, providing flexibility in strain design for users. The software was tested by designing a butanol-producing Escherichia coli strain, and was compared against the popular OptKnock and RobustKnock methods. RobOKoD shows favorable design predictions when predictions from these methods are compared to a successful butanol-producing experimentally validated strain. Overall RobOKoD provides users with rankings of predicted beneficial genetic interventions with which to support optimized strain design.
A two dimensional power spectral estimate for some nonstationary processes. M.S. Thesis
NASA Technical Reports Server (NTRS)
Smith, Gregory L.
1989-01-01
A two dimensional estimate for the power spectral density of a nonstationary process is being developed. The estimate will be applied to helicopter noise data which is clearly nonstationary. The acoustic pressure from the isolated main rotor and isolated tail rotor is known to be periodically correlated (PC) and the combined noise from the main and tail rotors is assumed to be correlation autoregressive (CAR). The results of this nonstationary analysis will be compared with the current method of assuming that the data is stationary and analyzing it as such. Another method of analysis is to introduce a random phase shift into the data as shown by Papoulis to produce a time history which can then be accurately modeled as stationary. This method will also be investigated for the helicopter data. A method used to determine the period of a PC process when the period is not known is discussed. The period of a PC process must be known in order to produce an accurate spectral representation for the process. The spectral estimate is developed. The bias and variability of the estimate are also discussed. Finally, the current method for analyzing nonstationary data is compared to that of using a two dimensional spectral representation. In addition, the method of phase shifting the data is examined.
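The random-phase device attributed to Papoulis in the abstract amounts to shifting a periodically correlated record by an offset drawn uniformly over one period. A hypothetical sketch follows; the amplitude-modulated white noise is a toy stand-in for rotor noise, not the thesis' data model.

```python
import math
import random

def pc_sample(n, period=20):
    """A toy periodically correlated (PC) process: white noise whose
    standard deviation oscillates with the given period."""
    return [(1.0 + 0.8 * math.sin(2 * math.pi * t / period)) * random.gauss(0.0, 1.0)
            for t in range(n)]

def randomize_phase(x, period=20):
    """Circularly shift the record by a uniform random offset over one
    period; the resulting ensemble is wide-sense stationary."""
    s = random.randrange(period)
    return x[s:] + x[:s]

x = pc_sample(200)
y = randomize_phase(x)
```

Averaging over the random shift smears the periodic variance structure uniformly across time, which is why the shifted process can be analyzed with ordinary stationary spectral estimators.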
Peng, Linda X; Wallace, Morgan; Andaloro, Bridget; Fallon, Dawn; Fleck, Lois; Delduco, Dan; Tice, George
2011-01-01
The BAX System PCR assay for Salmonella detection in foods was previously validated as AOAC Research Institute (RI) Performance Tested Method (PTM) 100201. New studies were conducted on beef and produce using the same media and protocol currently approved for the BAX System PCR assay for E. coli O157:H7 multiplex (MP). Additionally, soy protein isolate was tested for matrix extension using the U.S. Food and Drug Administration-Bacteriological Analytical Manual (FDA-BAM) enrichment protocols. The studies compared the BAX System method to the U.S. Department of Agriculture culture method for detecting Salmonella in beef and the FDA-BAM culture method for detecting Salmonella in produce and soy protein isolate. Method comparison studies on low-level inoculates showed that the BAX System assay for Salmonella performed as well as or better than the reference method for detecting Salmonella in beef and produce in 8-24 h enrichment when the BAX System E. coli O157:H7 MP media was used, and soy protein isolate in 20 h enrichment with lactose broth followed by 3 h regrowth in brain heart infusion broth. An inclusivity panel of 104 Salmonella strains with diverse serotypes was tested by the BAX System using the proprietary BAX System media and returned all positive results. Ruggedness factors involved in the enrichment phase were also evaluated by testing outside the specified parameters, and none of the factors examined affected the performance of the assay.
Nonlinear PET parametric image reconstruction with MRI information using kernel method
NASA Astrophysics Data System (ADS)
Gong, Kuang; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi
2017-03-01
Positron Emission Tomography (PET) is a functional imaging modality widely used in oncology, cardiology, and neurology. It is highly sensitive, but suffers from relatively poor spatial resolution compared with anatomical imaging modalities, such as magnetic resonance imaging (MRI). With the recent development of combined PET/MR systems, we can improve the PET image quality by incorporating MR information. Previously we have used kernel learning to embed MR information in static PET reconstruction and direct Patlak reconstruction. Here we extend this method to direct reconstruction of nonlinear parameters in a compartment model by using the alternating direction method of multipliers (ADMM) algorithm. Simulation studies show that the proposed method can produce superior parametric images compared with existing methods.
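A minimal sketch of the kernel construction this family of methods relies on: a Gaussian kernel over MRI-derived feature vectors, sparsified to nearest neighbours and row-normalized, so the PET image is represented as x = K @ alpha. The feature choice, sigma, and neighbour count here are assumptions for illustration, and the ADMM solver itself is omitted:

```python
import numpy as np

def mri_kernel(features, sigma=1.0, knn=5):
    """Row-normalized Gaussian kernel over per-voxel MRI feature vectors."""
    d2 = ((features[:, None, :] - features[None, :, :])**2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma**2))
    for i in range(len(K)):                 # keep only the knn largest entries per row
        K[i, np.argsort(K[i])[:-knn]] = 0.0
    return K / K.sum(axis=1, keepdims=True)

# toy example: 30 "voxels" with 4 hypothetical MRI features each
rng = np.random.default_rng(0)
K = mri_kernel(rng.standard_normal((30, 4)))
```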
Automated Tumor Volumetry Using Computer-Aided Image Segmentation
Bilello, Michel; Sadaghiani, Mohammed Salehi; Akbari, Hamed; Atthiah, Mark A.; Ali, Zarina S.; Da, Xiao; Zhan, Yiqang; O'Rourke, Donald; Grady, Sean M.; Davatzikos, Christos
2015-01-01
Rationale and Objectives Accurate segmentation of brain tumors, and quantification of tumor volume, is important for diagnosis, monitoring, and planning therapeutic intervention. Manual segmentation is not widely used because of time constraints. Previous efforts have mainly produced methods that are tailored to a particular type of tumor or acquisition protocol and have mostly failed to produce a method that functions on different tumor types and is robust to changes in scanning parameters, resolution, and image quality, thereby limiting their clinical value. Herein, we present a semiautomatic method for tumor segmentation that is fast, accurate, and robust to a wide variation in image quality and resolution. Materials and Methods A semiautomatic segmentation method based on the geodesic distance transform was developed and validated by using it to segment 54 brain tumors. Glioblastomas, meningiomas, and brain metastases were segmented. Qualitative validation was based on physician ratings provided by three clinical experts. Quantitative validation was based on comparing semiautomatic and manual segmentations. Results Tumor segmentations obtained using manual and automatic methods were compared quantitatively using the Dice measure of overlap. Subjective evaluation was performed by having human experts rate the computerized segmentations on a 0–5 rating scale where 5 indicated perfect segmentation. Conclusions The proposed method addresses a significant, unmet need in the field of neuro-oncology. Specifically, this method enables clinicians to obtain accurate and reproducible tumor volumes without the need for manual segmentation. PMID:25770633
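The Dice measure used for the quantitative validation is straightforward to compute from two binary masks; a minimal sketch:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice overlap 2|A∩B| / (|A| + |B|); 1.0 means perfect agreement."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# two toy 1-D "segmentations", each 6 voxels, overlapping on 4 voxels
manual = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0])
auto = np.array([0, 0, 1, 1, 1, 1, 1, 1, 0, 0])
score = dice(manual, auto)   # 2*4 / (6+6) = 0.666...
```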
Schulze, Katja; Lang, Imke; Enke, Heike; Grohme, Diana; Frohme, Marcus
2015-04-17
Ethanol production via genetically engineered cyanobacteria is a promising solution for the production of biofuels. Through the introduction of a pyruvate decarboxylase and alcohol dehydrogenase direct ethanol production becomes possible within the cells. However, during cultivation genetic instability can lead to mutations and thus loss of ethanol production. Cells then revert back to the wild type phenotype. A method for a rapid and simple detection of these non-producing revertant cells in an ethanol producing cell population is an important quality control measure in order to predict genetic stability and the longevity of a producing culture. Several comparable cultivation experiments revealed a difference in the pigmentation for non-producing and producing cells: the accessory pigment phycocyanin (PC) is reduced in case of the ethanol producer, resulting in a yellowish appearance of the culture. Microarray and western blot studies of Synechocystis sp. PCC6803 and Synechococcus sp. PCC7002 confirmed this PC reduction on the level of RNA and protein. Based on these findings we developed a method for fluorescence microscopy in order to distinguish producing and non-producing cells with respect to their pigmentation phenotype. By applying a specific filter set the emitted fluorescence of a producer cell with a reduced PC content appeared orange. The emitted fluorescence of a non-producing cell with a wt pigmentation phenotype was detected in red, and dead cells in green. In an automated process multiple images of each sample were taken and analyzed with a plugin for the image analysis software ImageJ to identify dead (green), non-producing (red) and producing (orange) cells. The results of the presented validation experiments revealed a good identification with 98 % red cells in the wt sample and 90 % orange cells in the producer sample. 
The red cells (wt pigmentation phenotype) detected in the producer sample either were not yet fully induced (in 48 h induced cultures) or had already reverted to non-producing cells (in long-term photobioreactor cultivations), emphasizing the sensitivity and resolution of the method. The fluorescence microscopy method is thus a useful technique for the rapid detection of non-producing single cells in an ethanol-producing cell population.
Sommerlot, Andrew R; Pouyan Nejadhashemi, A; Woznicki, Sean A; Prohaska, Michael D
2013-10-15
Non-point source pollution from agricultural lands is a significant contributor of sediment pollution in United States lakes and streams. Therefore, quantifying the impact of individual field management strategies at the watershed scale provides valuable information to watershed managers and conservation agencies to enhance decision-making. In this study, four methods employing some of the most cited models in field- and watershed-scale analysis were compared to find a practical yet accurate method for evaluating field management strategies at the watershed outlet. The models used in this study included a field-scale model (the Revised Universal Soil Loss Equation 2 - RUSLE2), a spatially explicit overland sediment delivery model (SEDMOD), and a watershed-scale model (the Soil and Water Assessment Tool - SWAT). These models were used to develop four modeling strategies (methods) for the River Raisin watershed: Method 1) predefined field-scale subbasin and reach layers were used in the SWAT model; Method 2) a subbasin-scale sediment delivery ratio was employed; Method 3) results obtained from the field-scale RUSLE2 model were incorporated as point source inputs to the SWAT watershed model; and Method 4) a hybrid solution combining analyses from the RUSLE2, SEDMOD, and SWAT models. Method 4 was selected as the most accurate among the studied methods. In addition, the effectiveness of six best management practices (BMPs) in terms of water quality improvement and associated cost was assessed. Economic analysis was performed using Method 4, and producer-requested prices for BMPs were compared with prices defined by the Environmental Quality Incentives Program (EQIP). On a per unit area basis, producers requested higher prices than EQIP in four out of six BMP categories. Meanwhile, the true cost of sediment reduction at the field and watershed scales was greater than EQIP in five of six BMP categories according to producer-requested prices. Copyright © 2013 Elsevier Ltd.
All rights reserved.
Cetinkaya, Zafer; Altindiş, Mustafa; Aktepe, Orhan Cem; Karabiçak, Nilgün
2003-10-01
The aim of this study was to compare different methods for the identification of Candida strains isolated from clinical specimens. Germ tube examination, examination of chlamydospores formed on rice Tween-80 (RT-80) agar, and evaluation of colony morphologies on two chromogenic agars (CHROMagar Candida, Albicans ID) were compared with a reference automated system based on carbohydrate assimilation, API 20C AUX (bioMerieux, France), for the identification of a total of 255 Candida isolates. Of these, 173 (67.8%) were identified as C. albicans, 37 (14.5%) as C. glabrata, 23 (9%) as C. krusei, 9 (3.5%) as C. tropicalis, 9 (3.5%) as C. kefyr, 2 (0.8%) as C. guilliermondii and 2 (0.8%) as C. parapsilosis by the API 20C AUX system. In view of these results, 146 (84.4%) of the C. albicans strains were correctly identified by germ tube examination, and 161 (93.1%) of the C. albicans strains and 208 (81.5%) of all strains by chlamydospore examination. 169 (97.7%) of the C. albicans strains and 231 (90.6%) of all strains were correctly identified by the CHROMagar Candida method, and 168 (97.1%) of the C. albicans strains by the Albicans ID method. On CHROMagar Candida medium, 169 C. albicans isolates produced bright green colonies, whereas 33 (89.2%) isolates that produced dark pink/purple colonies were identified as C. glabrata, 7 (77.8%) isolates that produced metallic blue colonies were identified as C. tropicalis, and 22 (95.6%) isolates that produced pale pink colonies were identified as C. krusei. On Albicans ID medium, four of the 172 isolates initially evaluated as C. albicans by producing blue colonies were identified as C. tropicalis by the API 20C AUX system.
The sensitivities and specificities of the germ tube examination, RT-80, CHROMagar Candida and Albicans ID methods were, respectively: 84.4% and 100%, 93.1% and 100%, 97.7% and 100%, and 99.4% and 95.3%. In conclusion, CHROMagar Candida medium seems to be the most favorable rapid and practical method, with high sensitivity and specificity, for the identification of Candida species, but its cost-effectiveness should be kept in mind.
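The sensitivity and specificity figures follow the usual definitions; for example, the germ tube numbers can be reconstructed from the counts in the abstract (146 of 173 C. albicans detected, and, assuming no false positives among the 82 non-albicans isolates, 100% specificity):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# germ tube examination vs the API 20C AUX reference (82 = 255 - 173 non-albicans)
sens, spec = sensitivity_specificity(tp=146, fn=173 - 146, tn=82, fp=0)
# sens ~ 0.844 (84.4%), spec = 1.0 (100%), matching the reported values
```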
Lim, Hyeong Jun; Lee, Kunsil; Cho, Young Shik; Kim, Yern Seung; Kim, Taehoon; Park, Chong Rae
2014-09-07
The Hansen solubility parameters (HSPs) of as-produced multi-walled carbon nanotubes (APMWCNTs) were determined by means of the inverse gas chromatography (IGC) technique. Due to the non-homogeneous surfaces of the APMWCNTs arising from defects and impurities, it was necessary to establish adequate working conditions for determining the HSPs of the CNTs. We then obtained the HSPs of the APMWCNTs and compared these results with earlier reports as determined using sedimentation and molecular dynamics simulation methods. It was found that determining the HSPs of the CNTs by IGC can give an enhanced determination range based on the adsorption thermodynamic parameters, compared to the HSPs determined using sedimentation methods. The HSPs of the APMWCNTs determined here provided good guidelines for the selection of feasible solvents that can improve the dispersion of the APMWCNTs.
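Once HSPs are in hand, solvent screening uses the standard Hansen distance Ra, where a smaller Ra predicts better dispersion. A sketch with hypothetical CNT parameters (the solvent values are typical literature figures; the CNT triple is invented for illustration and is not the paper's result):

```python
import math

def hansen_distance(hsp1, hsp2):
    """Ra, with the conventional factor of 4 on the dispersion term:
    Ra^2 = 4*(dD1-dD2)^2 + (dP1-dP2)^2 + (dH1-dH2)^2  (units: MPa^0.5)."""
    (d1, p1, h1), (d2, p2, h2) = hsp1, hsp2
    return math.sqrt(4*(d1 - d2)**2 + (p1 - p2)**2 + (h1 - h2)**2)

cnt = (18.0, 8.0, 6.0)       # hypothetical HSPs for the as-produced CNTs
nmp = (18.0, 12.3, 7.2)      # N-methyl-2-pyrrolidone
hexane = (14.9, 0.0, 0.0)
# NMP is the better-matched (lower-Ra) candidate solvent of the two
```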
Testing prediction methods: Earthquake clustering versus the Poisson model
Michael, A.J.
1997-01-01
Testing earthquake prediction methods requires statistical techniques that compare observed success to random chance. One technique is to produce simulated earthquake catalogs and measure the relative success of predicting real and simulated earthquakes. The accuracy of these tests depends on the validity of the statistical model used to simulate the earthquakes. This study tests the effect of clustering in the statistical earthquake model on the results. Three simulation models were used to produce significance levels for a VLF earthquake prediction method. As the degree of simulated clustering increases, the statistical significance drops. Hence, the use of a seismicity model with insufficient clustering can lead to overly optimistic results. A successful method must pass the statistical tests with a model that fully replicates the observed clustering. However, a method can be rejected based on tests with a model that contains insufficient clustering. U.S. copyright. Published in 1997 by the American Geophysical Union.
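The significance test examined here can be sketched as follows: score the real catalog against the alarm windows, then ask how often random catalogs drawn from a simulation model do as well. Using a Poisson (uniform-in-time) model, as below, is exactly the under-clustered baseline the study warns about; note how the clustered pair of events inflates the apparent skill. All numbers are invented:

```python
import numpy as np

def prediction_significance(quake_times, alarm_windows, t_max, n_sim=2000, seed=0):
    """Fraction of simulated Poisson catalogs scoring >= the observed catalog."""
    rng = np.random.default_rng(seed)
    def hits(times):
        return sum(any(a <= t <= b for a, b in alarm_windows) for t in times)
    observed = hits(quake_times)
    sims = (hits(rng.uniform(0.0, t_max, len(quake_times))) for _ in range(n_sim))
    return sum(s >= observed for s in sims) / n_sim

# two of the three "predicted" quakes are a clustered pair at t = 100, 105
p = prediction_significance([100.0, 105.0, 500.0],
                            [(95.0, 110.0), (490.0, 510.0)], t_max=1000.0)
```

Under the Poisson model p comes out near zero; a simulation model with realistic clustering would often place the pair at t = 100, 105 inside the same window together, yielding a larger (more conservative) p.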
NASA Astrophysics Data System (ADS)
Ihwah, A.; Deoranto, P.; Wijana, S.; Dewi, I. A.
2018-03-01
The economically valuable part of the Areca palm (Areca catechu) is the seed. It is commercially available in dried, cured and fresh forms, while the fibre is usually thrown away. Cellulose fibres from agricultural waste can be utilized as raw material for handicraft paper. Laboratory research showed that Areca palm fibre contained 70.2% cellulose, 10.92% water, and 6.02% ash, indicating that Areca palm fibre has great potential to be processed into handicraft paper. Handicraft paper is made of wastepaper or cellulose-containing plants to produce rough-textured paper. In order to obtain the preferred sensory qualities of handicraft paper, such as color, fiber appearance and texture, as well as good physical qualities such as tensile strength, tear resistance and grammage, the addition of wastepaper (to provide secondary fibre) and sometimes adhesive is needed. Handicraft paper making is one alternative for treating this solid waste and reducing the use of wood fiber as a paper raw material. The aim of this study was to compare two well-known methods for calculating the number of replications, the Federer and Gomez methods. This study is preliminary research, conducted before the main experiments to determine the best treatment for producing handicraft paper. The Gomez method calculates fewer replications than the Federer method. Based on data simulation, the error generated using 3 replicates was 0.0876, while that using 2 replicates was 0.1032.
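The Federer calculation can be sketched with its commonly cited rule of thumb that the error degrees of freedom (t-1)(r-1) should be at least 15; the Gomez variant is not reproduced here since the abstract does not give its formula:

```python
import math

def federer_min_replicates(t):
    """Smallest r such that (t-1)*(r-1) >= 15, for t treatments (Federer's rule)."""
    return math.ceil(15 / (t - 1)) + 1

r = federer_min_replicates(6)   # 6 treatments -> r = 4, since 5*3 = 15
```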
Tags, wireless communication systems, tag communication methods, and wireless communications methods
Scott, Jeff W.; Pratt, Richard M. [Richland, WA]
2006-09-12
Tags, wireless communication systems, tag communication methods, and wireless communications methods are described. In one aspect, a tag includes a plurality of antennas configured to receive a plurality of first wireless communication signals comprising data from a reader, a plurality of rectifying circuits coupled with respective individual ones of the antennas and configured to provide rectified signals corresponding to the first wireless communication signals, wherein the rectified signals are combined to produce a composite signal, an adaptive reference circuit configured to vary a reference signal responsive to the composite signal, a comparator coupled with the adaptive reference circuit and the rectifying circuits and configured to compare the composite signal with respect to the reference signal and to output the data responsive to the comparison, and processing circuitry configured to receive the data from the comparator and to process the data.
Quantitative Technique for Comparing Simulant Materials through Figures of Merit
NASA Technical Reports Server (NTRS)
Rickman, Doug; Hoelzer, Hans; Fourroux, Kathy; Owens, Charles; McLemore, Carole; Fikes, John
2007-01-01
The 1989 workshop report entitled Workshop on Production and Uses of Simulated Lunar Materials and the NASA Technical Publication Lunar Regolith Simulant Materials: Recommendations for Standardization, Production, and Usage both identified and reinforced a need for a set of standards and requirements for the production and usage of Lunar simulant materials. As NASA prepares to return to the Moon, and to set out for Mars, a set of early requirements has been developed for simulant materials, and the initial methods to produce and measure those simulants have been defined. Addressed in the requirements document are: 1) a method for evaluating the quality of any simulant of a regolith, 2) the minimum characteristics for simulants of Lunar regolith, and 3) a method to produce simulants needed for NASA's Exploration mission. As an extension of the requirements document, a method to evaluate new and current simulants has been rigorously defined through the mathematics of Figures of Merit (FoM). Requirements and techniques have been developed that allow a simulant provider to compare their product to a standard reference material through Figures of Merit. Standard reference material may be physical material, such as the Apollo core samples, or material properties predicted for any landing site. The simulant provider is not restricted to providing a single "high fidelity" simulant, which may be costly to produce. The provider can now develop "lower fidelity" simulants for engineering applications such as drilling and mobility applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Yunfeng, E-mail: yfcai@math.pku.edu.cn; Department of Computer Science, University of California, Davis 95616; Bai, Zhaojun, E-mail: bai@cs.ucdavis.edu
2013-12-15
The iterative diagonalization of a sequence of large ill-conditioned generalized eigenvalue problems is a computational bottleneck in quantum mechanical methods employing a nonorthogonal basis for ab initio electronic structure calculations. We propose a hybrid preconditioning scheme to effectively combine global and locally accelerated preconditioners for rapid iterative diagonalization of such eigenvalue problems. In partition-of-unity finite-element (PUFE) pseudopotential density-functional calculations, employing a nonorthogonal basis, we show that the hybrid preconditioned block steepest descent method is a cost-effective eigensolver, outperforming current state-of-the-art global preconditioning schemes, and comparably efficient for the ill-conditioned generalized eigenvalue problems produced by PUFE as the locally optimal block preconditioned conjugate-gradient method is for the well-conditioned standard eigenvalue problems produced by planewave methods.
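The setting can be illustrated with SciPy's LOBPCG solver, which implements the "locally optimal block preconditioned conjugate-gradient method" referred to above, applied here to a small synthetic generalized problem Ax = λBx with a diagonal (Jacobi) preconditioner standing in for a global preconditioner. The matrices are invented for illustration, not taken from PUFE:

```python
import numpy as np
from scipy.linalg import eigh
from scipy.sparse.linalg import lobpcg

n = 100
main = np.linspace(1.0, 1e3, n)                     # widely spread diagonal (ill-conditioned)
A = np.diag(main) + np.diag(-0.1 * np.ones(n - 1), 1) + np.diag(-0.1 * np.ones(n - 1), -1)
B = np.eye(n) + 0.2 * (np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))  # SPD "overlap"
M = np.diag(1.0 / main)                             # Jacobi preconditioner, M ~ A^-1

rng = np.random.default_rng(0)
X = rng.standard_normal((n, 4))                     # block of 4 starting vectors
vals, vecs = lobpcg(A, X, B=B, M=M, tol=1e-8, maxiter=2000, largest=False)
ref = eigh(A, B, eigvals_only=True)[:4]             # dense reference for validation
```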
Robust power spectral estimation for EEG data
Melman, Tamar; Victor, Jonathan D.
2016-01-01
Background Typical electroencephalogram (EEG) recordings often contain substantial artifact. These artifacts, often large and intermittent, can interfere with quantification of the EEG via its power spectrum. To reduce the impact of artifact, EEG records are typically cleaned by a preprocessing stage that removes individual segments or components of the recording. However, such preprocessing can introduce bias, discard available signal, and be labor-intensive. With this motivation, we present a method that uses robust statistics to reduce dependence on preprocessing by minimizing the effect of large intermittent outliers on the spectral estimates. New method Using the multitaper method[1] as a starting point, we replaced the final step of the standard power spectrum calculation with a quantile-based estimator, and the Jackknife approach to confidence intervals with a Bayesian approach. The method is implemented in provided MATLAB modules, which extend the widely used Chronux toolbox. Results Using both simulated and human data, we show that in the presence of large intermittent outliers, the robust method produces improved estimates of the power spectrum, and that the Bayesian confidence intervals yield close-to-veridical coverage factors. Comparison to existing method The robust method, as compared to the standard method, is less affected by artifact: inclusion of outliers produces fewer changes in the shape of the power spectrum as well as in the coverage factor. Conclusion In the presence of large intermittent outliers, the robust method can reduce dependence on data preprocessing as compared to standard methods of spectral estimation. PMID:27102041
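The idea of replacing the final averaging step with a quantile can be sketched in a Welch-style estimator: compute per-segment periodograms, then take the median across segments instead of the mean, so a large artifact confined to one segment cannot drag the estimate. This is a simplified stand-in for the paper's multitaper and Bayesian machinery, with all parameters chosen arbitrarily:

```python
import numpy as np

def median_psd(x, nperseg=256):
    """Welch-style PSD with a median (rather than mean) across segments."""
    segs = [x[i:i + nperseg] for i in range(0, len(x) - nperseg + 1, nperseg)]
    w = np.hanning(nperseg)
    spectra = [np.abs(np.fft.rfft(w * s))**2 for s in segs]
    return np.fft.rfftfreq(nperseg), np.median(spectra, axis=0)

rng = np.random.default_rng(0)
t = np.arange(1024)
x = np.sin(2 * np.pi * 0.1 * t) + 0.5 * rng.standard_normal(1024)
x[100:105] += 50.0                      # large intermittent artifact in one segment
freqs, psd = median_psd(x)
peak_freq = freqs[np.argmax(psd)]       # still ~0.1 despite the outlier
```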
Genetic diversity of Bacillus sp producers of amylase isolated from the soil.
Xavier, A R E O; Lima, E R; Oliveira, A M E; Cardoso, L; Santos, J; Cangussu, C H C; Leite, L N; Quirino, M C L; Júnior, I G C; Oliveira, D A; Xavier, M A S
2017-09-27
Microorganisms are the best source of extracellular enzymes since they allow an economical technology with low resource consumption compared to animals and plants. Amylases are among the most important enzymes, with the genus Bacillus one of the most investigated due to its ability to produce this enzyme. The objective of this study was to isolate amylase-producing bacteria of the genus Bacillus from soil and analyze their genetic diversity. To this end, soil samples were collected and subjected to a condition of extreme temperature. A serial dilution procedure followed by seeding on solid medium containing starch was used for the isolation of amylase-producing strains. The isolated microorganisms were subjected to standard morphological methods for presumptive identification of the genus Bacillus. A PCR assay with the universal genetic marker 16S rDNA was used for confirmation of the bacterial strains. All 10 isolates presumptively identified as Bacillus amplified a 370 bp fragment corresponding to the 16S rDNA gene. Enzymatic activity was expressed as an enzymatic index (EI) after 24 h of incubation. All amylase-producing isolates exhibited EI ≥ 2.0. The genetic profile and the clonal relationship among the isolates were determined by ERIC-PCR polymorphism analysis. The Bacillus spp isolates were divided into 2 groups (I and II). The discriminatory capacity of this polymorphism analysis was verified by its ability to differentiate amylase-producing strains from non-producers.
Critical Evaluation of Soil Pore Water Extraction Methods on a Natural Soil
NASA Astrophysics Data System (ADS)
Orlowski, Natalie; Pratt, Dyan; Breuer, Lutz; McDonnell, Jeffrey
2017-04-01
Soil pore water extraction is an important component in ecohydrological studies for the measurement of δ2H and δ18O. The effect of pore water extraction technique on the resultant isotopic signature is poorly understood. Here we present results of an intercomparison of commonly applied lab-based soil water extraction techniques on a natural soil: high pressure mechanical squeezing, centrifugation, direct vapor equilibration, microwave extraction, and two types of cryogenic extraction systems. We applied these extraction methods to a natural summer-dry (gravimetric water contents ranging from 8% to 15%), glacio-lacustrine, moderately fine textured clayey soil, excavated in 10 cm sampling increments to a depth of 1 meter. Isotope results were analyzed via OA-ICOS and compared for each extraction technique that produced liquid water. From our previous intercomparison study among the same extraction techniques, but with standard soils, we discovered that the extraction methods are not comparable. We therefore tested the null hypothesis that all extraction techniques would replicate, in a comparable manner, the natural evaporation front occurring in a summer-dry soil. Our results showed that the extraction technique utilized had a significant effect on the soil water isotopic composition. High pressure mechanical squeezing and vapor equilibration produced similar results, with similarly sloped evaporation lines. Due to the soil properties and dryness, centrifugation was unsuccessful in obtaining pore water for isotopic analysis. The two cryogenic extraction systems produced results similar to each other, on a similarly sloped evaporation line, but dissimilar to the other techniques across depth.
Brennan, Scott F; Cresswell, Andrew G; Farris, Dominic J; Lichtwark, Glen A
2017-11-07
Ultrasonography is a useful technique to study muscle contractions in vivo; however, larger muscles such as vastus lateralis may be difficult to visualise with smaller, commonly used transducers. Fascicle length is often estimated using linear trigonometry to extrapolate fascicle length into regions where the fascicle is not visible. However, this approach has not been compared to measurements made with a larger field of view for dynamic muscle contractions. Here we compared two different single-transducer extrapolation methods for measuring VL muscle fascicle length against a direct measurement made using two synchronised, in-series transducers. The first method used pennation angle and muscle thickness to extrapolate fascicle length outside the image (extrapolate method). The second method determined fascicle length based on the extrapolated intercept between a fascicle and the aponeurosis (intercept method). Nine participants performed maximal effort, isometric, knee extension contractions on a dynamometer at 10° increments from 50 to 100° of knee flexion. Fascicle length and torque were simultaneously recorded for offline analysis. The dual-transducer method showed similar patterns of fascicle length change (overall mean coefficient of multiple correlation was 0.76 and 0.71 compared to the extrapolate and intercept methods, respectively), but reached different absolute lengths during the contractions. This had the effect of producing force-length curves of the same shape, but each curve was shifted in terms of absolute length. We concluded that dual transducers are beneficial for studies that examine absolute fascicle lengths, whereas either of the single-transducer methods may produce similar results for normalised length changes and repeated-measures experimental designs. Copyright © 2017 Elsevier Ltd. All rights reserved.
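The extrapolate method reduces to simple trigonometry: the missing portion of the fascicle is assumed to run in a straight line through the remaining muscle thickness at the measured pennation angle. A sketch (the function name and example numbers are illustrative, not taken from the paper):

```python
import math

def extrapolated_fascicle_length(visible_len_cm, missing_thickness_cm, pennation_deg):
    """Visible fascicle length plus a straight-line extrapolation through the
    muscle thickness not captured in the ultrasound image."""
    return visible_len_cm + missing_thickness_cm / math.sin(math.radians(pennation_deg))

# e.g. 6 cm visible, 1 cm of thickness off-screen, 30 deg pennation -> 8 cm
length = extrapolated_fascicle_length(6.0, 1.0, 30.0)
```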
An analytical study for the design of advanced rotor airfoils
NASA Technical Reports Server (NTRS)
Kemp, L. D.
1973-01-01
A theoretical study has been conducted to design and evaluate two airfoils for helicopter rotors. The best basic shape, designed with a transonic hodograph design method, was modified to meet subsonic criteria. One airfoil had an additional constraint for low pitching-moment at the transonic design point. Airfoil characteristics were predicted. Results of a comparative analysis of helicopter performance indicate that the new airfoils will produce reduced rotor power requirements compared to the NACA 0012. The hodograph design method, written in CDC Algol, is listed and described.
Particle Simulation of Coulomb Collisions: Comparing the Methods of Takizuka & Abe and Nanbu
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, C; Lin, T; Caflisch, R
2007-05-22
The interactions of charged particles in a plasma are governed by long-range Coulomb collisions. We compare two widely used Monte Carlo models for Coulomb collisions. One was developed by Takizuka and Abe in 1977; the other was developed by Nanbu in 1997. We perform deterministic and stochastic error analysis with respect to particle number and time step. The two models produce similar stochastic errors, but Nanbu's model gives smaller time step errors. Error comparisons between these two methods are presented.
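The Takizuka-Abe model scatters random particle pairs through a small angle θ, with tan(θ/2) drawn from a zero-mean Gaussian whose variance is proportional to the time step. The sketch below lumps that variance into a single assumed parameter rather than computing it from plasma parameters, and handles only equal-mass pairs; by construction the update conserves momentum and, for equal masses, kinetic energy:

```python
import numpy as np

def ta_collide(v1, v2, var_delta, rng):
    """One Takizuka-Abe binary collision for an equal-mass pair.
    var_delta lumps (n q^4 ln(Lambda) / (8 pi eps0^2 m^2 u^3)) * dt into one number."""
    u = v1 - v2
    umag = np.linalg.norm(u)
    uperp = np.hypot(u[0], u[1])
    delta = rng.normal(0.0, np.sqrt(var_delta))       # delta = tan(theta/2)
    sin_t = 2.0 * delta / (1.0 + delta**2)            # sin(theta)
    omc = 2.0 * delta**2 / (1.0 + delta**2)           # 1 - cos(theta)
    phi = rng.uniform(0.0, 2.0 * np.pi)               # random azimuth
    if uperp > 0.0:
        du = np.array([
            (u[0]/uperp)*u[2]*sin_t*np.cos(phi) - (u[1]/uperp)*umag*sin_t*np.sin(phi) - u[0]*omc,
            (u[1]/uperp)*u[2]*sin_t*np.cos(phi) + (u[0]/uperp)*umag*sin_t*np.sin(phi) - u[1]*omc,
            -uperp*sin_t*np.cos(phi) - u[2]*omc,
        ])
    else:                                             # u along z: any perpendicular direction works
        du = np.array([umag*sin_t*np.cos(phi), umag*sin_t*np.sin(phi), -u[2]*omc])
    return v1 + 0.5*du, v2 - 0.5*du

rng = np.random.default_rng(0)
v1, v2 = np.array([1.0, 2.0, 3.0]), np.array([-1.0, 0.5, 2.0])
w1, w2 = ta_collide(v1, v2, 0.05, rng)
```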
Ogg, T W; Jennings, R A; Morrison, C G
1983-11-01
An investigation was undertaken to assess the use of a total intravenous anaesthetic technique of fentanyl and methohexitone for outpatient vaginal termination of pregnancy. When compared with a technique of fentanyl, methohexitone, nitrous oxide and trichloroethylene the total intravenous method caused swifter recovery, minimal side-effects and no cardiovascular depression. However, both anaesthetic techniques produced significant postoperative reduction of memory for new facts when compared with a control group receiving no general anaesthesia. There is a need to continue the search for anaesthetic methods appropriate for day cases.
Capturing User Reading Behaviors for Personalized Document Summarization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Songhua; Jiang, Hao; Lau, Francis
2011-01-01
We propose a new personalized document summarization method that observes a user's personal reading preferences. These preferences are inferred from the user's reading behaviors, including facial expressions, gaze positions, and reading durations that were captured during the user's past reading activities. We compare the performance of our algorithm with that of a few peer algorithms and software packages. The results of our comparative study show that our algorithm can produce superior personalized document summaries compared to all the other methods, in that the summaries generated by our algorithm can better satisfy a user's personal preferences.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Almansouri, Hani; Venkatakrishnan, Singanallur V.; Clayton, Dwight A.
One-sided non-destructive evaluation (NDE) is widely used to inspect materials, such as concrete structures in nuclear power plants (NPP). A widely used method for one-sided NDE is the synthetic aperture focusing technique (SAFT). The SAFT algorithm produces reasonable results when inspecting simple structures. However, for complex structures, such as heavily reinforced thick concrete structures, SAFT results in artifacts and hence there is a need for a more sophisticated inversion technique. Model-based iterative reconstruction (MBIR) algorithms, which are typically equivalent to regularized inversion techniques, offer a powerful framework to incorporate complex models for the physics, detector miscalibrations and the materials being imaged to obtain high quality reconstructions. Previously, we have proposed an ultrasonic MBIR method that significantly improves reconstruction quality compared to SAFT. However, the method made some simplifying assumptions on the propagation model and did not discuss ways to handle data that is obtained by raster scanning a system over a surface to inspect large regions. In this paper, we propose a novel MBIR algorithm that incorporates an anisotropic forward model and allows for the joint processing of data obtained from a system that raster scans a large surface. We demonstrate that the new MBIR method can produce dramatic improvements in reconstruction quality compared to SAFT and suppresses artifacts compared to the previously presented MBIR approach.
The Size of Gelatin Sponge Particles: Differences with Preparation Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katsumori, Tetsuya, E-mail: katsumo@eurus.dti.ne.jp; Kasahara, Toshiyuki
2006-12-15
Purpose. To assess whether the size distribution of gelatin sponge particles differed according to the method used to make them and the type of original sheet. Methods. Gelatin sponge particles of approximately 1-1.5 x 1-1.5 x 2 mm were made from either Spongel or Gelfoam sheets by cutting with a scalpel and scissors. Particles were also made of either Spongel or Gelfoam sheets by pumping with two syringes and a three-way stopcock. The size distribution of the particles in saline was compared among the groups. Results. (1) Cutting versus pumping: When Spongel was used, cutting produced lower rates of smaller particles ≤500 μm and larger particles >2000 μm compared with pumping back and forth 30 times (1.1% vs 37.6%, p < 0.0001; 2.2% vs 14.4%, p = 0.008). When Gelfoam was used, cutting produced lower rates of smaller and larger particles compared with pumping (8.5% vs 20.4%, p = 0.1809; 0% vs 48.1%, p < 0.0001). (2) Spongel versus Gelfoam: There was no significant difference in the size distribution of the particles between Spongel and Gelfoam (p = 0.2002) when cutting was used. Conclusion. The size distribution of gelatin sponge particles differed according to the method used to make them. More uniform particle sizes can be achieved by cutting than by pumping.
NASA Astrophysics Data System (ADS)
Almansouri, Hani; Venkatakrishnan, Singanallur; Clayton, Dwight; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector
2018-04-01
One-sided non-destructive evaluation (NDE) is widely used to inspect materials, such as concrete structures in nuclear power plants (NPP). A widely used method for one-sided NDE is the synthetic aperture focusing technique (SAFT). The SAFT algorithm produces reasonable results when inspecting simple structures. However, for complex structures, such as heavily reinforced thick concrete structures, SAFT results in artifacts and hence there is a need for a more sophisticated inversion technique. Model-based iterative reconstruction (MBIR) algorithms, which are typically equivalent to regularized inversion techniques, offer a powerful framework to incorporate complex models for the physics, detector miscalibrations and the materials being imaged to obtain high quality reconstructions. Previously, we have proposed an ultrasonic MBIR method that significantly improves reconstruction quality compared to SAFT. However, the method made some simplifying assumptions on the propagation model and did not discuss ways to handle data that is obtained by raster scanning a system over a surface to inspect large regions. In this paper, we propose a novel MBIR algorithm that incorporates an anisotropic forward model and allows for the joint processing of data obtained from a system that raster scans a large surface. We demonstrate that the new MBIR method can produce dramatic improvements in reconstruction quality compared to SAFT and suppresses artifacts compared to the previously presented MBIR approach.
NASA Astrophysics Data System (ADS)
Bradu, Adrian; Kapinchev, Konstantin; Barnes, Frederick; Podoleanu, Adrian
2016-03-01
In our previous reports we demonstrated a novel Fourier domain optical coherence tomography method, Master Slave optical coherence tomography (MS-OCT), that does not require resampling of data and can deliver en-face images from several depths simultaneously. While ideally suited for delivering information from a selected depth, MS-OCT has so far been inferior to conventional FFT-based OCT in terms of the time needed to produce cross-sectional images. Here, we demonstrate that by taking advantage of the parallel processing capabilities offered by the MS-OCT method, cross-sectional OCT images of the human retina can be produced in real time by assembling several T-scans from different depths. We analyze the conditions that ensure real-time B-scan imaging operation, and demonstrate in-vivo real-time images of the human fovea and the optic nerve, of comparable resolution and sensitivity to those produced using the traditional Fourier domain based method.
ERIC Educational Resources Information Center
Sundara, Megha; Demuth, Katherine; Kuhl, Patricia K.
2011-01-01
Purpose: Two-year-olds produce third person singular "-s" more accurately on verbs in sentence-final position as compared with verbs in sentence-medial position. This study was designed to determine whether these sentence-position effects can be explained by perceptual factors. Method: For this purpose, the authors compared 22- and 27-month-olds'…
An Introduction to Photomicrography.
ERIC Educational Resources Information Center
Judson, Peter
1979-01-01
Described are various methods for producing black and white photographs of microscope slides using single lens reflex, fixed lens, and plate cameras. Procedures for illumination, film processing, mounting, and projection are also discussed. A table of comparative film speeds is included. (CS)
Automated tumor volumetry using computer-aided image segmentation.
Gaonkar, Bilwaj; Macyszyn, Luke; Bilello, Michel; Sadaghiani, Mohammed Salehi; Akbari, Hamed; Atthiah, Mark A; Ali, Zarina S; Da, Xiao; Zhan, Yiqang; O'Rourke, Donald; Grady, Sean M; Davatzikos, Christos
2015-05-01
Accurate segmentation of brain tumors, and quantification of tumor volume, is important for diagnosis, monitoring, and planning therapeutic intervention. Manual segmentation is not widely used because of time constraints. Previous efforts have mainly produced methods that are tailored to a particular type of tumor or acquisition protocol and have mostly failed to produce a method that functions on different tumor types and is robust to changes in scanning parameters, resolution, and image quality, thereby limiting their clinical value. Herein, we present a semiautomatic method for tumor segmentation that is fast, accurate, and robust to a wide variation in image quality and resolution. A semiautomatic segmentation method based on the geodesic distance transform was developed and validated by using it to segment 54 brain tumors. Glioblastomas, meningiomas, and brain metastases were segmented. Qualitative validation was based on physician ratings provided by three clinical experts. Quantitative validation was based on comparing semiautomatic and manual segmentations. Tumor segmentations obtained using manual and automatic methods were compared quantitatively using the Dice measure of overlap. Subjective evaluation was performed by having human experts rate the computerized segmentations on a 0-5 rating scale where 5 indicated perfect segmentation. The proposed method addresses a significant, unmet need in the field of neuro-oncology. Specifically, this method enables clinicians to obtain accurate and reproducible tumor volumes without the need for manual segmentation. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
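The Dice measure of overlap used above to compare semiautomatic and manual segmentations can be sketched as follows. This is a minimal illustration with hypothetical toy masks, not the study's pipeline; it assumes binary segmentation masks stored as NumPy arrays:

```python
import numpy as np

def dice(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|). 1.0 means perfect overlap, 0.0 none."""
    a = seg_a.astype(bool)
    b = seg_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Hypothetical 1-D example: manual vs. semiautomatic labels over 8 voxels
manual = np.array([0, 1, 1, 1, 1, 0, 0, 0])
auto   = np.array([0, 0, 1, 1, 1, 1, 0, 0])
print(round(dice(manual, auto), 3))  # 0.75
```

The same formula applies unchanged to 3-D tumor masks; only the array shapes differ.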
Testing and Validation of Computational Methods for Mass Spectrometry.
Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas
2016-03-04
High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets ( http://compms.org/RefData ) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods.
USSR and Eastern Europe Scientific Abstracts, Materials Science and Metallurgy, Number 45
1977-05-11
constants VQ and q. The values of the critical stress intensity factor produced by the authors by their indirect method are compared with… and TEREKHOV, A. N., Moscow Institute of Steel and Alloys [Russian abstract provided by the source] [Text] The method of high-temperature… their melting point. References 9; all Russian. USSR UDC 539 IMPROVING THE PRECISION OF THE ACOUSTIC METHOD OF STRESS DETERMINATION Kiev
Methods of Functionalization of Carbon Nanotubes by Photooxidation
NASA Technical Reports Server (NTRS)
Lebron-Colon, Marisabel (Inventor); Meador, Michael A. (Inventor)
2016-01-01
A method of photooxidizing carbon nanotubes, such as single-walled and multi-walled carbon nanotubes. The nanotubes are purified and dispersed in a solvent, such as n-methyl pyrrolidinone or dimethylformamide. A singlet oxygen sensitizer like Rose Bengal is added to the solution. Oxygen gas is continuously supplied while irradiating the solution with ultraviolet light to produce singlet oxygen to oxidize the single-walled carbon nanotubes. Advantageously, the method significantly increases the level of oxidation compared with prior art methods.
Numerical solution of second order ODE directly by two point block backward differentiation formula
NASA Astrophysics Data System (ADS)
Zainuddin, Nooraini; Ibrahim, Zarina Bibi; Othman, Khairil Iskandar; Suleiman, Mohamed; Jamaludin, Noraini
2015-12-01
A direct two point block backward differentiation formula (BBDF2) for solving second order ordinary differential equations (ODEs) is presented in this paper. The method is derived by differentiating the interpolating polynomial using three back values. In BBDF2, two approximate solutions are produced simultaneously at each step of integration. The derived method is implemented using a fixed step size, and the numerical results demonstrate the advantage of the direct method as compared to the reduction method.
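For context, the "reduction method" that direct block methods are compared against rewrites a second order ODE as a first-order system and integrates it step by step. The sketch below is not the BBDF2 scheme itself (whose block coefficients are derived in the paper); it is a minimal fixed-step illustration of the reduction approach, using classical RK4 on the hypothetical test problem y'' = -y:

```python
import math

def solve_reduction(f, x0, y0, dy0, h, n_steps):
    """Reduction approach: rewrite y'' = f(x, y, y') as the first-order
    system u' = v, v' = f(x, u, v), then advance with classical RK4
    using a fixed step size h. Returns the approximation to y at the end."""
    x, u, v = x0, y0, dy0
    def rhs(x, u, v):
        return v, f(x, u, v)
    for _ in range(n_steps):
        k1u, k1v = rhs(x, u, v)
        k2u, k2v = rhs(x + h/2, u + h/2*k1u, v + h/2*k1v)
        k3u, k3v = rhs(x + h/2, u + h/2*k2u, v + h/2*k2v)
        k4u, k4v = rhs(x + h, u + h*k3u, v + h*k3v)
        u += h/6 * (k1u + 2*k2u + 2*k3u + k4u)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        x += h
    return u

# Test problem y'' = -y, y(0) = 0, y'(0) = 1; exact solution y = sin(x)
approx = solve_reduction(lambda x, y, dy: -y, 0.0, 0.0, 1.0, 0.01, 100)
print(abs(approx - math.sin(1.0)) < 1e-6)  # True
```

A direct block method instead steps the second order equation without this doubling of the system dimension, producing two solution values per block step.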
Method for reduction of selected ion intensities in confined ion beams
Eiden, Gregory C.; Barinaga, Charles J.; Koppenaal, David W.
1998-01-01
A method for producing an ion beam having an increased proportion of analyte ions compared to carrier gas ions is disclosed. Specifically, the method has the step of addition of a charge transfer gas to the carrier analyte combination that accepts charge from the carrier gas ions yet minimally accepts charge from the analyte ions thereby selectively neutralizing the carrier gas ions. Also disclosed is the method as employed in various analytical instruments including an inductively coupled plasma mass spectrometer.
Method for reduction of selected ion intensities in confined ion beams
Eiden, G.C.; Barinaga, C.J.; Koppenaal, D.W.
1998-06-16
A method for producing an ion beam having an increased proportion of analyte ions compared to carrier gas ions is disclosed. Specifically, the method has the step of addition of a charge transfer gas to the carrier analyte combination that accepts charge from the carrier gas ions yet minimally accepts charge from the analyte ions thereby selectively neutralizing the carrier gas ions. Also disclosed is the method as employed in various analytical instruments including an inductively coupled plasma mass spectrometer. 7 figs.
Specific Method for the Determination of Ozone in the Atmosphere.
ERIC Educational Resources Information Center
Sachdev, Sham L.; And Others
A description is given of work undertaken to develop a simple, specific, and reliable method for ozone. Reactions of ozone with several 1-alkenes were studied at room temperature (25C). Eugenol (4-allyl-2-methoxy phenol), when reacted with ozone, was found to produce relatively large amounts of formaldehyde as compared to other 1-alkenes tested.…
Elzanfaly, Eman S; Hegazy, Maha A; Saad, Samah S; Salem, Maissa Y; Abd El Fattah, Laila E
2015-03-01
The introduction of sustainable development concepts to analytical laboratories has recently gained interest, however, most conventional high-performance liquid chromatography methods do not consider either the effect of the used chemicals or the amount of produced waste on the environment. The aim of this work was to prove that conventional methods can be replaced by greener ones with the same analytical parameters. The suggested methods were designed so that they neither use nor produce harmful chemicals and produce minimum waste to be used in routine analysis without harming the environment. This was achieved by using green mobile phases and short run times. Four mixtures were chosen as models for this study; clidinium bromide/chlordiazepoxide hydrochloride, phenobarbitone/pipenzolate bromide, mebeverine hydrochloride/sulpiride, and chlorphenoxamine hydrochloride/caffeine/8-chlorotheophylline either in their bulk powder or in their dosage forms. The methods were validated with respect to linearity, precision, accuracy, system suitability, and robustness. The developed methods were compared to the reported conventional high-performance liquid chromatography methods regarding their greenness profile. The suggested methods were found to be greener and more time- and solvent-saving than the reported ones; hence they can be used for routine analysis of the studied mixtures without harming the environment. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Method and apparatus for detecting concealed weapons
Kotter, Dale K.; Fluck, Frederick D.
2006-03-14
Apparatus for classifying a ferromagnetic object within a sensing area may include a magnetic field sensor that produces magnetic field data. A signal processing system operatively associated with the magnetic field sensor includes a neural network. The neural network compares the magnetic field data with magnetic field data produced by known ferromagnetic objects to make a probabilistic determination as to the classification of the ferromagnetic object within the sensing area. A user interface operatively associated with the signal processing system produces a user-discernable output indicative of the probabilistic determination of the classification of the ferromagnetic object within a sensing area.
Producing recombinant human milk proteins in the milk of livestock species.
Bösze, Zsuzsanna; Baranyi, Mária; Whitelaw, C Bruce A
2008-01-01
Recombinant human proteins produced by the mammary glands of genetically modified transgenic livestock mammals represent a special aspect of milk bioactive components. For therapeutic applications, the often complex posttranslational modifications of human proteins should be recapitulated in the recombinant products. Compared to alternative production methods, mammary gland production is a viable option, underlined by a number of transgenic livestock animal models producing abundant biologically active foreign proteins in their milk. Recombinant proteins isolated from milk have reached different phases of clinical trials, with the first marketing approval for human therapeutic applications from the EMEA achieved in 2006.
In situ heat treatment of a tar sands formation after drive process treatment
Vinegar, Harold J.; Stanecki, John
2010-09-21
A method for treating a tar sands formation includes providing a drive fluid to a hydrocarbon containing layer of the tar sands formation to mobilize at least some hydrocarbons in the layer. At least some first hydrocarbons from the layer are produced. Heat is provided to the layer from one or more heaters located in the formation. At least some second hydrocarbons are produced from the layer of the formation. The second hydrocarbons include at least some hydrocarbons that are upgraded compared to the first hydrocarbons produced by using the drive fluid.
de Fabritus, Lauriane; Nougairède, Antoine; Aubry, Fabien; Gould, Ernest A; de Lamballerie, Xavier
2016-01-01
Large-scale codon re-encoding is a new method of attenuating RNA viruses. However, the use of infectious clones to generate attenuated viruses has inherent technical problems. We previously developed a bacterium-free reverse genetics protocol, designated ISA, and now combined it with a large-scale random codon re-encoding method to produce attenuated tick-borne encephalitis virus (TBEV), a pathogenic flavivirus which causes febrile illness and encephalitis in humans. We produced wild-type (WT) and two re-encoded TBEVs, containing 273 or 273+284 synonymous mutations in the NS5 and NS5+NS3 coding regions respectively. Both re-encoded viruses were attenuated when compared with WT virus using a laboratory mouse model and the relative level of attenuation increased with the degree of re-encoding. Moreover, all infected animals produced neutralizing antibodies. This novel, rapid and efficient approach to engineering attenuated viruses could potentially expedite the development of safe and effective new-generation live attenuated vaccines.
Jonasson, P; Bagge, U; Wieslander, A; Braide, M
1996-01-01
Data from cell culture experiments indicate that heat sterilization of peritoneal dialysis (PD) fluids produces cytotoxic glucose degradation products. The present vital microscopic study investigated the effects of different sterilization methods on the biocompatibility of PD fluids. Thus, heat-sterilized (commercially obtained and experimentally produced) and filter-sterilized PD fluids (pH = 5.30-5.40; 1.5% glucose) were compared with Tyrode buffer, with respect to the effects on microvascular blood flow velocity and leukocyte adhesion in the rat mesentery. Exteriorization of the mesentery produced a mild inflammation, known from the literature and characterized by the adhesive rolling of leukocytes along venular walls. Superfusion of the mesentery with filter-sterilized PD fluid had no significant effects on leukocyte rolling or flow velocity in venules 25-40 microns in diameter compared with buffer superfusion. Heat-sterilized PD fluid decreased the concentration of rolling leukocytes and increased flow velocity significantly, as compared with buffer and filter-sterilized PD fluid. The results indicate that heat sterilization of PD fluids produces substances that interact with microvascular tone and leukocyte-endothelial adhesion, which hypothetically could impair the acute, granulocyte-mediated defense against bacterial infections.
Comparison of Methods for Estimating Evapotranspiration using Remote Sensing Data
NASA Astrophysics Data System (ADS)
Beamer, J. P.; Morton, C.; Huntington, J. L.; Pohll, G.
2010-12-01
Estimating the annual evapotranspiration (ET) in arid and semi-arid environments is important for managing water resources. In this study we use remote sensing methods to estimate ET from different areas located in western and eastern Nevada. Surface energy balance (SEB) and vegetation indices (VI) are two common methods for estimating ET using satellite data. The purpose of this study is to compare these methods for estimating annual ET and highlight strengths and weaknesses in both methods. The SEB approach used is based on the Mapping Evapotranspiration at high Resolution with Internalized Calibration (METRIC) model, which estimates ET as a residual of the energy balance. METRIC has been shown to produce accurate results in agricultural and riparian settings. The VI approach used is based on statistical relationships between annual ET and various VIs. The VI approaches have also been shown to produce fairly accurate estimates of ET for various vegetation types; however, consideration of spatial variations in potential ET and precipitation amount is generally ignored, leading to restrictions in their application. In this work we develop a VI approach that considers the study area potential ET and precipitation amount and compare this approach to METRIC and flux tower estimates of annual ET for several arid phreatophyte shrub and irrigated agriculture settings.
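The VI approach described above amounts to regressing annual ET on a vegetation index, with the study's modification adding precipitation (and potential ET) as predictors. The sketch below illustrates that idea with entirely hypothetical site data (the NDVI, precipitation, and ET values are invented for illustration, not taken from the study):

```python
import numpy as np

# Hypothetical annual data for six sites: NDVI, precipitation (mm), flux-tower ET (mm)
ndvi   = np.array([0.15, 0.22, 0.30, 0.45, 0.60, 0.72])
precip = np.array([180., 210., 250., 400., 520., 600.])
et_obs = np.array([160., 200., 260., 420., 560., 650.])

# Plain VI approach: annual ET regressed on the vegetation index alone
A_vi = np.column_stack([ndvi, np.ones_like(ndvi)])
coef_vi, *_ = np.linalg.lstsq(A_vi, et_obs, rcond=None)

# Modified approach: also account for precipitation amount, as in the study design
A_mod = np.column_stack([ndvi, precip, np.ones_like(ndvi)])
coef_mod, *_ = np.linalg.lstsq(A_mod, et_obs, rcond=None)

rmse = lambda A, c: float(np.sqrt(np.mean((A @ c - et_obs) ** 2)))
# A nested model with an extra predictor can never fit the training data worse
print(rmse(A_vi, coef_vi) >= rmse(A_mod, coef_mod))  # True
```

Whether the extra predictors improve *predictive* skill (against held-out flux-tower ET) is exactly the comparison the study performs.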
Analysis Resistant Cipher Method and Apparatus
NASA Technical Reports Server (NTRS)
Oakley, Ernest C. (Inventor)
2009-01-01
A system for encoding and decoding data words including an anti-analysis encoder unit for receiving an original plaintext and producing a recoded data, a data compression unit for receiving the recoded data and producing a compressed recoded data, and an encryption unit for receiving the compressed recoded data and producing an encrypted data. The recoded data has an increased non-correlatable data redundancy compared with the original plaintext in order to mask the statistical distribution of characters in the plaintext data. The system of the present invention further includes a decryption unit for receiving the encrypted data and producing a decrypted data, a data decompression unit for receiving the decrypted data and producing an uncompressed recoded data, and an anti-analysis decoder unit for receiving the uncompressed recoded data and producing a recovered plaintext that corresponds with the original plaintext.
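The pipeline above (recode to mask character statistics, then compress, then encrypt, with the reverse chain for recovery) can be sketched with toy stand-ins. Everything here is hypothetical: the byte-pairing recoder and the SHA-256 keystream are illustrative placeholders, not the patented encoder or a production cipher:

```python
import os
import zlib
import hashlib

def recode(plaintext: bytes) -> bytes:
    """Toy anti-analysis recoder: pair each byte with a random mask byte.
    Each output byte is uniformly distributed, hiding character frequencies,
    at the cost of 2x redundancy (a stand-in for the patent's encoder)."""
    out = bytearray()
    for b in plaintext:
        r = os.urandom(1)[0]
        out += bytes([b ^ r, r])
    return bytes(out)

def decode(recoded: bytes) -> bytes:
    """Invert recode(): XOR each (masked, mask) pair back together."""
    return bytes(recoded[i] ^ recoded[i + 1] for i in range(0, len(recoded), 2))

def xor_stream(data: bytes, key: bytes) -> bytes:
    """Placeholder symmetric cipher: SHA-256-based keystream XOR
    (illustration only; not suitable for real use)."""
    stream, counter = bytearray(), 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ k for d, k in zip(data, stream))

key = b"demo-key"
msg = b"attack at dawn, attack at dawn"
wire = xor_stream(zlib.compress(recode(msg)), key)      # recode -> compress -> encrypt
back = decode(zlib.decompress(xor_stream(wire, key)))   # decrypt -> decompress -> decode
print(back == msg)  # True
```

Note the ordering matters: recoding before compression masks plaintext statistics, but the randomized output is itself incompressible, which is part of the trade-off such a design accepts.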
Boser, Quinn A; Valevicius, Aïda M; Lavoie, Ewen B; Chapman, Craig S; Pilarski, Patrick M; Hebert, Jacqueline S; Vette, Albert H
2018-04-27
Quantifying angular joint kinematics of the upper body is a useful method for assessing upper limb function. Joint angles are commonly obtained via motion capture, tracking markers placed on anatomical landmarks. This method is associated with limitations including administrative burden, soft tissue artifacts, and intra- and inter-tester variability. An alternative method involves the tracking of rigid marker clusters affixed to body segments, calibrated relative to anatomical landmarks or known joint angles. The accuracy and reliability of applying this cluster method to the upper body has, however, not been comprehensively explored. Our objective was to compare three different upper body cluster models with an anatomical model, with respect to joint angles and reliability. Non-disabled participants performed two standardized functional upper limb tasks with anatomical and cluster markers applied concurrently. Joint angle curves obtained via the marker clusters with three different calibration methods were compared to those from an anatomical model, and between-session reliability was assessed for all models. The cluster models produced joint angle curves which were comparable to and highly correlated with those from the anatomical model, but exhibited notable offsets and differences in sensitivity for some degrees of freedom. Between-session reliability was comparable between all models, and good for most degrees of freedom. Overall, the cluster models produced reliable joint angles that, however, cannot be used interchangeably with anatomical model outputs to calculate kinematic metrics. Cluster models appear to be an adequate, and possibly advantageous alternative to anatomical models when the objective is to assess trends in movement behavior. Copyright © 2018 Elsevier Ltd. All rights reserved.
Proposed hybrid-classifier ensemble algorithm to map snow cover area
NASA Astrophysics Data System (ADS)
Nijhawan, Rahul; Raman, Balasubramanian; Das, Josodhir
2018-01-01
Metaclassification ensemble approach is known to improve the prediction performance of snow-covered area. The methodology adopted in this case is based on a neural network along with four state-of-the-art machine learning algorithms (support vector machine, artificial neural networks, spectral angle mapper, K-means clustering) and a snow index (normalized difference snow index). An AdaBoost ensemble algorithm related to decision tree for snow-cover mapping is also proposed. According to available literature, these methods have been rarely used for snow-cover mapping. Employing the above techniques, a study was conducted for Raktavarn and Chaturangi Bamak glaciers, Uttarakhand, Himalaya using multispectral Landsat 7 ETM+ (enhanced thematic mapper) image. The study also compares the results with those obtained from statistical combination methods (majority rule and belief functions) and accuracies of individual classifiers. Accuracy assessment is performed by computing the quantity and allocation disagreement, analyzing statistic measures (accuracy, precision, specificity, AUC, and sensitivity) and receiver operating characteristic curves. A total of 225 combinations of parameters for individual classifiers were trained and tested on the dataset and results were compared with the proposed approach. It was observed that the proposed methodology produced the highest classification accuracy (95.21%), close to the 94.01% produced by the proposed AdaBoost ensemble algorithm. From the sets of observations, it was concluded that the ensemble of classifiers produced better results compared to individual classifiers.
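One of the statistical combination baselines mentioned above, the majority rule, is straightforward to sketch. The per-pixel labels below are hypothetical, as are the classifier names, chosen only to mirror the study's candidates:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-pixel labels from several classifiers by majority rule.
    `predictions` is a list of equal-length label sequences, one per classifier."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]

# Hypothetical labels ('S' = snow, 'N' = no snow) from three classifiers over 5 pixels
svm = ['S', 'N', 'S', 'S', 'N']
ann = ['S', 'S', 'S', 'N', 'N']
sam = ['N', 'S', 'S', 'S', 'N']
print(majority_vote([svm, ann, sam]))  # ['S', 'S', 'S', 'S', 'N']
```

A metaclassifier generalizes this by learning how much to trust each base classifier instead of weighting all votes equally, which is the source of the accuracy gain the study reports.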
Demas, Vasiliki; Bernhardt, Anthony; Malba, Vince; Adams, Kristl L; Evans, Lee; Harvey, Christopher; Maxwell, Robert S; Herberg, Julie L
2009-09-01
Nuclear magnetic resonance (NMR) offers a non-destructive, powerful, structure-specific analytical method for the identification of chemical and biological systems. The use of radio frequency (RF) microcoils has been shown to increase the sensitivity in mass-limited samples. Recent advances in micro-receiver technology have further demonstrated a substantial increase in mass sensitivity [D.L. Olson, T.L. Peck, A.G. Webb, R.L. Magin, J.V. Sweedler, High-resolution microcoil H-1-NMR for mass-limited, nanoliter-volume samples, Science 270 (5244) (1995) 1967-1970]. Lithographic methods for producing solenoid microcoils possess a level of flexibility and reproducibility that exceeds previous production methods, such as hand winding microcoils. This paper presents electrical characterizations of RF microcoils produced by a unique laser lithography system that can pattern three dimensional surfaces and compares calculated and experimental results to those for wire wound RF microcoils. We show that existing optimization conditions for RF coil design still hold true for RF microcoils produced by lithography. Current lithographic microcoils show somewhat inferior performance to wire wound RF microcoils due to limitations in the existing electroplating technique. In principle, however, when the pitch of the RF microcoil is less than 100 μm, lithographic coils should show comparable performance to wire wound coils. In the cases of larger pitch, wire cross sections can be significantly larger and resistances lower than microfabricated conductors.
Evaluation of direct saponification method for determination of cholesterol in meats.
Adams, M L; Sullivan, D M; Smith, R L; Richter, E F
1986-01-01
A gas chromatographic (GC) method has been developed for determination of cholesterol in meats. The method involves ethanolic KOH saponification of the sample material, homogeneous-phase toluene extraction of the unsaponifiables, derivatization of cholesterol to its trimethylsilyl ether, and quantitation by GC-flame ionization detection using 5-alpha-cholestane as internal standard. This direct saponification method is compared with the current AOAC official method for determination of cholesterol in 20 different meat products. The direct saponification method eliminates the need for initial lipid extraction, thus offering a 30% savings in labor, and requires fewer solvents than the AOAC method. It produced comparable or slightly higher cholesterol results than the AOAC method in all meat samples examined. Precision, determined by assaying a turkey meat sample 16 times over 4 days, was excellent (CV = 1.74%). Average recovery of cholesterol added to meat samples was 99.8%.
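The internal-standard quantitation step behind a method like this is simple arithmetic: the analyte amount is computed from the ratio of analyte to internal-standard peak areas, scaled by a relative response factor (RRF) from a calibration run. All peak areas and masses below are hypothetical numbers for illustration:

```python
def internal_standard_quant(area_analyte, area_is, mass_is_mg, rrf):
    """Internal-standard quantitation:
    analyte mass = (A_analyte / A_IS) * m_IS / RRF,
    where RRF is the relative response factor from a calibration standard."""
    return (area_analyte / area_is) * mass_is_mg / rrf

# RRF from a hypothetical standard with 1.0 mg cholesterol and 1.0 mg 5-alpha-cholestane:
# RRF = (A_analyte / A_IS) * (m_IS / m_analyte)
rrf = (52000 / 50000) * (1.0 / 1.0)  # = 1.04

# Hypothetical sample run: 2.0 mg internal standard spiked into the saponified extract
mg_cholesterol = internal_standard_quant(39000, 50000, 2.0, rrf)
print(round(mg_cholesterol, 2))  # 1.5
```

Because the internal standard experiences the same extraction and injection losses as the analyte, the area ratio cancels much of that variability, which is what makes the reported precision (CV = 1.74%) achievable.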
Solving a real-world problem using an evolving heuristically driven schedule builder.
Hart, E; Ross, P; Nelson, J
1998-01-01
This work addresses the real-life scheduling problem of a Scottish company that must produce daily schedules for the catching and transportation of large numbers of live chickens. The problem is complex and highly constrained. We show that it can be successfully solved by division into two subproblems and solving each using a separate genetic algorithm (GA). We address the problem of whether this produces locally optimal solutions and how to overcome this. We extend the traditional approach of evolving a "permutation + schedule builder" by concentrating on evolving the schedule builder itself. This results in a unique schedule builder being built for each daily scheduling problem, each individually tailored to deal with the particular features of that problem. This results in a robust, fast, and flexible system that can cope with most of the circumstances imaginable at the factory. We also compare the performance of a GA approach to several other evolutionary methods and show that population-based methods are superior to both hill-climbing and simulated annealing in the quality of solutions produced. Population-based methods also have the distinct advantage of producing multiple, equally fit solutions, which is of particular importance when considering the practical aspects of the problem.
Production of High-Purity Anhydrous Nickel(II) Perrhenate for Tungsten-Based Sintered Heavy Alloys
Leszczyńska-Sejda, Katarzyna; Benke, Grzegorz; Kopyto, Dorota; Majewski, Tomasz; Drzazga, Michał
2017-01-01
This paper presents a method for the production of high-purity anhydrous nickel(II) perrhenate. The method comprises sorption of nickel(II) ions from aqueous nickel(II) nitrate solutions, using strongly acidic C160 cation exchange resin, and subsequent elution of sorbed nickel(II) ions using concentrated perrhenic acid solutions. After the neutralization of the resulting rhenium-nickel solutions, hydrated nickel(II) perrhenate is separated and then dried at 160 °C to obtain the anhydrous form. The resulting compound is reduced in an atmosphere of dissociated ammonia in order to produce a Re-Ni alloy powder. This study provides information on the selected properties of the resulting Re-Ni powder. This powder was used as a starting material for the production of 77W-20Re-3Ni heavy alloys. Microstructure examination results and selected properties of the produced sintered heavy alloys were compared to sintered alloys produced using elemental W, Re, and Ni powders. This study showed that the application of anhydrous nickel(II) perrhenate in the production of 77W-20Re-3Ni results in better properties of the sintered alloys compared to those made from elemental powders. PMID:28772808
Global estimates of country health indicators: useful, unnecessary, inevitable?
AbouZahr, Carla; Boerma, Ties; Hogan, Daniel
2017-01-01
ABSTRACT Background: The MDG era relied on global health estimates to fill data gaps and ensure temporal and cross-country comparability in reporting progress. Monitoring the Sustainable Development Goals will present new challenges, requiring enhanced capacities to generate, analyse, interpret and use country produced data. Objective: To summarize the development of global health estimates and discuss their utility and limitations from global and country perspectives. Design: Descriptive paper based on findings of intercountry workshops, reviews of literature, and synthesis of experiences. Results: Producers of global health estimates focus on the technical soundness of estimation methods and comparability of the results across countries and over time. By contrast, country users are more concerned about the extent of their involvement in the estimation process and hesitate to buy into estimates derived using methods their technical staff cannot explain and that differ from national data sources. Quantitative summaries of uncertainty may be of limited practical use in policy discussions where decisions need to be made about what to do next. Conclusions: Greater transparency and involvement of country partners in the development of global estimates will help improve ownership, strengthen country capacities for data production and use, and reduce reliance on externally produced estimates. PMID:28532307
Infrared coagulation: a new treatment for hemorrhoids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leicester, R.J.; Nicholls, R.J.; Mann, C.V.
Many methods, which have effectively reduced the number of patients requiring hospital admission, have been described for the outpatient treatment of hemorrhoids. However, complications have been reported, and the methods are often associated with unpleasant side effects. In 1977 Neiger et al. described a new method that used infrared coagulation, which produced minimal side effects. The authors have conducted a prospective, randomized trial to evaluate infrared coagulation compared with more traditional methods of treatment. The authors' results show that it may be more effective than injection sclerotherapy in treating non-prolapsing hemorrhoids and that it compares favorably with rubber band ligation in most prolapsing hemorrhoids. No complications occurred, and significantly fewer patients experienced pain after infrared coagulation (P < 0.001).
Lin, Andrew; Nguyen, Lam; Clotilde, Laurie M; Kase, Julie A; Son, Insook; Lauzon, Carol R
2012-11-01
The ability to detect and isolate Shiga toxin-producing Escherichia coli (STEC) remains a major challenge for food microbiologists. Although methods based on nucleic acids and antibodies have improved detection of STECs in foods, isolation of these bacteria remains arduous. STEC isolation is necessary for matching food, environmental, and clinical isolates during outbreak investigations and for distinguishing between pathogenic and nonpathogenic organisms. STEC heart infusion washed blood agar with mitomycin-C (SHIBAM) is a modification of washed sheep blood agar prepared by adding mitomycin-C and optimizing both the washed blood and base agar to better isolate STECs. Most STEC isolates produce a zone of hemolysis on SHIBAM plates and are easily distinguishable from background microbiota. Here, we present data supporting the use of SHIBAM to isolate STECs from fresh produce. SHIBAM was tested for accuracy in identifying STECs (365 of 410 STEC strains were hemolytic, and 63 of 73 E. coli strains that did not produce Shiga toxin were not hemolytic) and for recovery from artificially inoculated fresh produce (11 of 24 romaine lettuce samples and 6 of 24 tomato samples). STEC recovery with SHIBAM agar was greatly improved when compared with recovery on Levine's eosin-methylene blue agar as a reference method.
Comparing four methods to estimate usual intake distributions.
Souverein, O W; Dekkers, A L; Geelen, A; Haubrock, J; de Vries, J H; Ocké, M C; Harttig, U; Boeing, H; van 't Veer, P
2011-07-01
The aim of this paper was to compare methods to estimate usual intake distributions of nutrients and foods. As 'true' usual intake distributions are not known in practice, the comparison was carried out through a simulation study, as well as empirically, by application to data from the European Food Consumption Validation (EFCOVAL) Study in which two 24-h dietary recalls (24-HDRs) and food frequency data were collected. The methods being compared were the Iowa State University Method (ISU), National Cancer Institute Method (NCI), Multiple Source Method (MSM) and Statistical Program for Age-adjusted Dietary Assessment (SPADE). Simulation data were constructed with varying numbers of subjects (n), different values for the Box-Cox transformation parameter (λ(BC)) and different values for the ratio of the within- and between-person variance (r(var)). All data were analyzed with the four different methods and the estimated usual mean intake and selected percentiles were obtained. Moreover, the 2-day within-person mean was estimated as an additional 'method'. These five methods were compared in terms of the mean bias, which was calculated as the mean of the differences between the estimated value and the known true value. The application of data from the EFCOVAL Project included calculations of nutrients (that is, protein, potassium, protein density) and foods (that is, vegetables, fruit and fish). Overall, the mean bias of the ISU, NCI, MSM and SPADE Methods was small. However, for all methods, the mean bias and the variation of the bias increased with smaller sample size, higher variance ratios and with more pronounced departures from normality. Serious mean bias (especially in the 95th percentile) was seen using the NCI Method when r(var) = 9, λ(BC) = 0 and n = 1000. The ISU Method and MSM showed a somewhat higher s.d. of the bias compared with NCI and SPADE Methods, indicating a larger method uncertainty. 
Furthermore, whereas the ISU, NCI and SPADE Methods produced unimodal density functions by definition, MSM produced distributions with 'peaks', when sample size was small, because of the fact that the population's usual intake distribution was based on estimated individual usual intakes. The application to the EFCOVAL data showed that all estimates of the percentiles and mean were within 5% of each other for the three nutrients analyzed. For vegetables, fruit and fish, the differences were larger than that for nutrients, but overall the sample mean was estimated reasonably. The four methods that were compared seem to provide good estimates of the usual intake distribution of nutrients. Nevertheless, care needs to be taken when a nutrient has a high within-person variation or has a highly skewed distribution, and when the sample size is small. As the methods offer different features, practical reasons may exist to prefer one method over the other.
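The shrinkage idea shared by these methods can be illustrated with a minimal sketch (not any of the four actual implementations, which additionally handle skewness transformations, covariates and episodically consumed foods): split the observed variance of repeated 24-HDRs into within- and between-person components, then shrink each person's mean toward the grand mean before reading off percentiles of the usual-intake distribution.

```python
import numpy as np

def usual_intake_percentiles(recalls, probs=(5, 50, 95)):
    """Estimate usual-intake percentiles from repeated 24-h recalls.

    recalls: (n_persons, n_days) array of daily intakes.
    A one-way random-effects decomposition separates day-to-day
    (within-person) noise from between-person variance; person means
    are then shrunk toward the grand mean so that the spread of the
    resulting values matches the between-person variance only.
    """
    n, k = recalls.shape
    person_means = recalls.mean(axis=1)
    grand_mean = person_means.mean()
    s2_within = recalls.var(axis=1, ddof=1).mean()    # day-to-day noise
    s2_means = person_means.var(ddof=1)               # var of k-day means
    s2_between = max(s2_means - s2_within / k, 0.0)   # usual-intake variance
    shrink = np.sqrt(s2_between / s2_means) if s2_means > 0 else 0.0
    usual = grand_mean + shrink * (person_means - grand_mean)
    return np.percentile(usual, probs)
```

Using raw 2-day means instead (the additional 'method' above) leaves the within-person noise in place and inflates the upper percentiles, which is exactly the effect the shrinkage corrects.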
Object-based change detection method using refined Markov random field
NASA Astrophysics Data System (ADS)
Peng, Daifeng; Zhang, Yongjun
2017-01-01
In order to fully consider the local spatial constraints between neighboring objects in object-based change detection (OBCD), an OBCD approach is presented by introducing a refined Markov random field (MRF). First, two periods of images are stacked and segmented to produce image objects. Second, object spectral and textural histogram features are extracted and the G-statistic is implemented to measure the distance among different histogram distributions. Meanwhile, object heterogeneity is calculated by combining spectral and textural histogram distances using adaptive weights. Third, an expectation-maximization algorithm is applied for determining the change category of each object and the initial change map is then generated. Finally, a refined change map is produced by employing the proposed refined object-based MRF method. Three experiments were conducted and compared with some state-of-the-art unsupervised OBCD methods to evaluate the effectiveness of the proposed method. Experimental results demonstrate that the proposed method obtains the highest accuracy among the methods used in this paper, which confirms its validity and effectiveness in OBCD.
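The G-statistic used as the histogram distance can be sketched as follows (the paper's full pipeline, with adaptive spectral/textural weighting, EM labeling and the MRF refinement, is not reproduced): treat the two object histograms as a 2 x nbins contingency table and compute the log-likelihood-ratio statistic.

```python
import numpy as np

def g_statistic(h1, h2):
    """Log-likelihood-ratio (G) distance between two count histograms.

    Builds a 2 x nbins contingency table and returns
    G = 2 * sum O * ln(O / E), where E are the expected counts under
    independence; larger G means more dissimilar distributions.
    """
    obs = np.array([h1, h2], dtype=float)
    row = obs.sum(axis=1, keepdims=True)
    col = obs.sum(axis=0, keepdims=True)
    exp = row * col / obs.sum()
    mask = obs > 0                      # 0 * ln(0) is taken as 0
    return 2.0 * np.sum(obs[mask] * np.log(obs[mask] / exp[mask]))
```

Identical histograms give G = 0, and G grows as the two distributions diverge, which is what makes it usable as an object-heterogeneity measure.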
Durbin, Gregory W; Salter, Robert
2006-01-01
The Ecolite High Volume Juice (HVJ) presence-absence method for a 10-ml juice sample was compared with the U.S. Food and Drug Administration Bacteriological Analytical Manual most-probable-number (MPN) method for analysis of artificially contaminated orange juices. Samples were added to Ecolite-HVJ medium and incubated at 35 degrees C for 24 to 48 h. Fluorescent blue results were positive for glucuronidase- and galactosidase-producing microorganisms, specifically indicative of about 94% of Escherichia coli strains. Four strains of E. coli were added to juices at concentrations of 0.21 to 6.8 CFU/ml. Mixtures of enteric bacteria (Enterobacter plus Klebsiella, Citrobacter plus Proteus, or Hafnia plus Citrobacter plus Enterobacter) were added to simulate background flora. Three orange juice types were evaluated (n = 10) with and without the addition of the E. coli strains. Ecolite-HVJ produced 90 of 90 (10 of 10 samples of three juice types, each inoculated with three different E. coli strains) positive (blue-fluorescent) results for juices artificially contaminated with E. coli at MPN concentrations of <0.3 to 9.3 CFU/ml. Ten of 30 E. coli ATCC 11229 samples with MPN concentrations of <0.3 CFU/ml were identified as positive with Ecolite-HVJ. Isolated colonies recovered from positive Ecolite-HVJ samples were confirmed biochemically as E. coli. Thirty (10 samples each of three juice types) negative (not fluorescent) results were obtained for samples contaminated with only enteric bacteria and for uninoculated control samples. A juice manufacturer evaluated citrus juice production with both the Ecolite-HVJ and Colicomplete methods and recorded identical negative results for 95 20-ml samples and identical positive results for 5 20-ml samples artificially contaminated with E. coli. The Ecolite-HVJ method requires no preenrichment and subsequent transfer steps, which makes it a simple and easy method for use by juice producers.
NASA Astrophysics Data System (ADS)
Putri, D. K. Y.; Kusuma, H. S.; Syahputra, M. E.; Parasandi, D.; Mahfud, M.
2017-12-01
Patchouli (Pogostemon cablin Benth) is an important essential-oil-producing plant, contributing more than 50% of Indonesia's total essential-oil exports. However, patchouli oil extraction in Indonesia still generally relies on conventional methods that require large amounts of energy, high solvent usage, and long extraction times. Therefore, in this study, patchouli oil was extracted using the microwave hydrodistillation and solvent-free microwave extraction methods. The results show that microwave hydrodistillation, despite a longer extraction time (240 min), produced a patchouli oil yield only 1.2 times greater than that of solvent-free microwave extraction, which required half the time (120 min). Moreover, analyses of electricity consumption and environmental impact showed lower values for solvent-free microwave extraction than for microwave hydrodistillation. It is concluded that solvent-free microwave extraction is a suitable new green technique for patchouli oil extraction.
Zheng, Lu; Gao, Naiyun; Deng, Yang
2012-01-01
It is difficult to isolate DNA from biological activated carbon (BAC) samples used in water treatment plants, owing to the scarcity of microorganisms in BAC samples. The aim of this study was to identify DNA extraction methods suitable for a long-term, comprehensive ecological analysis of BAC microbial communities. To identify a procedure that can produce high molecular weight DNA, maximizes detectable diversity and is relatively free from contaminants, the microwave extraction method, the cetyltrimethylammonium bromide (CTAB) extraction method, a commercial DNA extraction kit, and the ultrasonic extraction method were used for the extraction of DNA from BAC samples. Spectrophotometry, agarose gel electrophoresis and polymerase chain reaction (PCR)-restriction fragment length polymorphism (RFLP) analysis were conducted to compare the yield and quality of DNA obtained using these methods. The results showed that the CTAB method produced the highest yield and genetic diversity of DNA from BAC samples, but DNA purity was slightly lower than that obtained with the DNA extraction-kit method. This study provides a theoretical basis for establishing and selecting DNA extraction methods for BAC samples.
Methods for Scaling Icing Test Conditions
NASA Technical Reports Server (NTRS)
Anderson, David N.
1995-01-01
This report presents the results of tests at NASA Lewis to evaluate several methods to establish suitable alternative test conditions when the test facility limits the model size or operating conditions. The first method was proposed by Olsen. It can be applied when full-size models are tested and all the desired test conditions except liquid-water content can be obtained in the facility. The other two methods discussed are a modification of the French scaling law and the AEDC scaling method. Icing tests were made with cylinders at both reference and scaled conditions representing mixed and glaze ice in the NASA Lewis Icing Research Tunnel. Reference and scale ice shapes were compared to evaluate each method. The Olsen method was tested with liquid-water content varying from 1.3 to 0.8 g/m³. Over this range, ice shapes produced using the Olsen method were unchanged. The modified French and AEDC methods produced scaled ice shapes which approximated the reference shapes when model size was reduced to half the reference size for the glaze-ice cases tested.
Polymer-phyllosilicate nanocomposites and their preparation
Chaiko, David J.
2007-01-09
Polymer-phyllosilicate nanocomposites that exhibit superior properties compared to the polymer alone, and methods for producing these polymer-phyllosilicate nanocomposites, are provided. Polymeric surfactant compatibilizers are adsorbed onto the surface of hydrophilic or natural phyllosilicates to facilitate the dispersal and exfoliation of the phyllosilicate in a polymer matrix. Utilizing polymeric glycol based surfactants, polymeric dicarboxylic acids, polymeric diammonium surfactants, and polymeric diamine surfactants as compatibilizers facilitates natural phyllosilicate and hydrophilic organoclay dispersal in a polymer matrix to produce nanocomposites.
Rummer, Jodie L.; Binning, Sandra A.; Roche, Dominique G.; Johansen, Jacob L.
2016-01-01
Respirometry is frequently used to estimate metabolic rates and examine organismal responses to environmental change. Although a range of methodologies exists, it remains unclear whether differences in chamber design and exercise (type and duration) produce comparable results within individuals and whether the most appropriate method differs across taxa. We used a repeated-measures design to compare estimates of maximal and standard metabolic rates (MMR and SMR) in four coral reef fish species using the following three methods: (i) prolonged swimming in a traditional swimming respirometer; (ii) short-duration exhaustive chase with air exposure followed by resting respirometry; and (iii) short-duration exhaustive swimming in a circular chamber. We chose species that are steady/prolonged swimmers, using either a body–caudal fin or a median–paired fin swimming mode during routine swimming. Individual MMR estimates differed significantly depending on the method used. Swimming respirometry consistently provided the best (i.e. highest) estimate of MMR in all four species irrespective of swimming mode. Both short-duration protocols (exhaustive chase and swimming in a circular chamber) produced similar MMR estimates, which were up to 38% lower than those obtained during prolonged swimming. Furthermore, underestimates were not consistent across swimming modes or species, indicating that a general correction factor cannot be used. However, SMR estimates (upon recovery from both of the exhausting swimming methods) were consistent across both short-duration methods. Given the increasing use of metabolic data to assess organismal responses to environmental stressors, we recommend carefully considering respirometry protocols before experimentation. Specifically, results should not readily be compared across methods; discrepancies could result in misinterpretation of MMR and aerobic scope. PMID:27382471
Natarajan, A; Molnar, P; Sieverdes, K; Jamshidi, A; Hickman, J J
2006-04-01
The threat of environmental pollution, biological warfare agent dissemination and new diseases in recent decades has increased research into cell-based biosensors. The creation of this class of sensors could specifically aid the detection of toxic chemicals and their effects in the environment, such as pyrethroid pesticides. Pyrethroids are synthetic pesticides that have been used increasingly over the last decade to replace other pesticides like DDT. In this study we used a high-throughput method to detect pyrethroids by using multielectrode extracellular recordings from cardiac cells. The data from this cell-electrode hybrid system were compared to published results obtained with patch-clamp electrophysiology and also used as an alternative method to further understand pyrethroid effects. Our biosensor consisted of a confluent monolayer of cardiac myocytes cultured on microelectrode arrays (MEA) composed of 60 substrate-integrated electrodes. Spontaneous activity of these beating cells produced extracellular field potentials in the range of 100 microV to nearly 1200 microV with a beating frequency of 0.5-4 Hz. All of the tested pyrethroids (alpha-cypermethrin, tetramethrin and tefluthrin) produced similar changes in the electrophysiological properties of the cardiac myocytes, namely reduced beating frequency and amplitude. The sensitivity of our toxin detection method was comparable to that of earlier patch-clamp studies, which indicates that, in specific applications, high-throughput extracellular methods can replace single-cell studies. Moreover, the similar effect of all three pyrethroids on the measured parameters suggests that not only detection but also classification of these toxins might be possible with this method. Overall, our results support the idea that whole-cell biosensors might be viable alternatives to current toxin detection methods.
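The two readouts reported here, beating frequency and spike amplitude, can be extracted from a raw MEA trace with a simple threshold-crossing detector. This is an illustrative NumPy sketch, not the authors' analysis code; the fixed threshold and the sampling rate are assumptions.

```python
import numpy as np

def beat_stats(trace_uv, fs, thresh_uv=100.0):
    """Estimate beating frequency (Hz) and mean spike amplitude (uV)
    from one channel of extracellular field potentials by detecting
    upward crossings of a fixed amplitude threshold."""
    trace_uv = np.asarray(trace_uv, dtype=float)
    above = trace_uv > thresh_uv
    # an onset is a sample above threshold whose predecessor was below
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    if len(onsets) < 2:
        return 0.0, 0.0
    freq = fs / np.mean(np.diff(onsets))
    return float(freq), float(np.mean(trace_uv[onsets]))
```

A pyrethroid effect of the kind described would show up as a drop in both returned values relative to a pre-exposure baseline recording.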
Robustness of S1 statistic with Hodges-Lehmann for skewed distributions
NASA Astrophysics Data System (ADS)
Ahad, Nor Aishah; Yahaya, Sharipah Soaad Syed; Yin, Lee Ping
2016-10-01
Analysis of variance (ANOVA) is a commonly used parametric method to test differences in means across more than two groups when the populations are normally distributed. ANOVA is highly inefficient under non-normal and heteroscedastic settings. When the assumptions are violated, researchers look for alternatives such as the nonparametric Kruskal-Wallis test or robust methods. This study focused on a flexible method, the S1 statistic, for comparing groups using the median as the location estimator. The S1 statistic was modified by substituting the median with the Hodges-Lehmann estimator and the default scale estimator with the variance of Hodges-Lehmann and MADn, to produce two different test statistics for comparing groups. The bootstrap method was used for testing the hypotheses, since the sampling distributions of these modified S1 statistics are unknown. The performance of the proposed statistics in terms of Type I error was measured and compared against the original S1 statistic, ANOVA and Kruskal-Wallis. The proposed procedures showed improvement over the original statistic, especially under extremely skewed distributions.
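The Hodges-Lehmann estimator substituted into S1 is straightforward to compute; a one-sample sketch (the median of all pairwise Walsh averages) is shown below. The full modified S1 statistic and its bootstrap null distribution are not reproduced here.

```python
import numpy as np

def hodges_lehmann(x):
    """One-sample Hodges-Lehmann location estimator: the median of
    all pairwise Walsh averages (x_i + x_j) / 2 with i <= j."""
    x = np.asarray(x, dtype=float)
    i, j = np.triu_indices(len(x))      # all pairs, including i == j
    return float(np.median((x[i] + x[j]) / 2.0))
```

The estimator's robustness is easy to see: for the sample [1, 2, 3, 4, 100] the mean is 22, while the Hodges-Lehmann estimate stays at 3, which is why it is attractive as a location estimator under extreme skewness.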
Tran, Thi Ha; Nguyen, Viet Tuyen
2014-01-01
Cupric oxide (CuO), having a narrow bandgap of 1.2 eV and a variety of chemophysical properties, has recently attracted interest in many fields such as energy conversion, optoelectronic devices, and catalysis. Compared with the bulk material, the advanced properties of CuO nanostructures have been demonstrated; however, the fact that these materials cannot yet be produced at large scale is an obstacle to realizing their potential applications. In this respect, chemical methods seem to be efficient synthesis processes which yield not only large quantities but also high quality and advanced material properties. In this paper, the effect of some general factors on the morphology and properties of CuO nanomaterials prepared by solution methods is reviewed. In terms of advanced nanostructure synthesis, the microwave method, in which copper hydroxide nanostructures are produced in the precursor solution and sequentially transformed by microwave into CuO, may be considered a promising method to explore in the near future. This method produces not only large quantities of nanoproducts in a short reaction time of several minutes, but also high quality materials with advanced properties. A brief review of some unique properties and applications of CuO nanostructures is also presented. PMID:27437488
Hansen, Bjoern Oest; Meyer, Etienne H; Ferrari, Camilla; Vaid, Neha; Movahedi, Sara; Vandepoele, Klaas; Nikoloski, Zoran; Mutwil, Marek
2018-03-01
Recent advances in gene function prediction rely on ensemble approaches that integrate results from multiple inference methods to produce superior predictions. Yet, these developments remain largely unexplored in plants. We have explored and compared two methods to integrate 10 gene co-function networks for Arabidopsis thaliana and demonstrate how the integration of these networks produces more accurate gene function predictions for a larger fraction of genes with unknown function. These predictions were used to identify genes involved in mitochondrial complex I formation, and for five of them, we confirmed the predictions experimentally. The ensemble predictions are provided as a user-friendly online database, EnsembleNet. The methods presented here demonstrate that ensemble gene function prediction is a powerful method to boost prediction performance, whereas the EnsembleNet database provides a cutting-edge community tool to guide experimentalists. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.
Sengüven, Burcu; Baris, Emre; Oygur, Tulin; Berktas, Mehmet
2014-01-01
We discuss a protocol involving xylene-ethanol deparaffinization on slides followed by a kit-based extraction that allows for the extraction of high quality DNA from FFPE tissues. DNA was extracted from the FFPE tissues of 16 randomly selected blocks. Methods involving deparaffinization on slides or in tubes, enzyme digestion overnight or for 72 hours, and isolation using the phenol-chloroform method or a silica-based commercial kit were compared in terms of yield, concentration and amplifiability. The highest yield of DNA was produced from the samples that were deparaffinized on slides, digested for 72 hours and isolated with a commercial kit. Samples isolated with the phenol-chloroform method produced DNA of lower purity than the samples that were purified with the kit. The samples isolated with the commercial kit resulted in better PCR amplification. A silica-based commercial kit combined with deparaffinization on slides should be considered for DNA extraction from FFPE tissues.
NASA Technical Reports Server (NTRS)
Lee, Jonathan A.
2005-01-01
High-throughput measurement techniques are reviewed for solid phase transformation in materials produced by combinatorial methods, which are highly efficient concepts to fabricate a large variety of material libraries with different compositional gradients on a single wafer. Combinatorial methods hold high potential for reducing the time and costs associated with the development of new materials, as compared to time-consuming and labor-intensive conventional methods that test large batches of material, one composition at a time. These high-throughput techniques can be automated to rapidly capture and analyze data, using the entire material library on a single wafer, thereby accelerating the pace of materials discovery and knowledge generation for solid phase transformations. The review covers experimental techniques that are applicable to inorganic materials such as shape memory alloys, graded materials, metal hydrides, ferric materials, semiconductors and industrial alloys.
Robust power spectral estimation for EEG data.
Melman, Tamar; Victor, Jonathan D
2016-08-01
Typical electroencephalogram (EEG) recordings often contain substantial artifact. These artifacts, often large and intermittent, can interfere with quantification of the EEG via its power spectrum. To reduce the impact of artifact, EEG records are typically cleaned by a preprocessing stage that removes individual segments or components of the recording. However, such preprocessing can introduce bias, discard available signal, and be labor-intensive. With this motivation, we present a method that uses robust statistics to reduce dependence on preprocessing by minimizing the effect of large intermittent outliers on the spectral estimates. Using the multitaper method (Thomson, 1982) as a starting point, we replaced the final step of the standard power spectrum calculation with a quantile-based estimator, and the Jackknife approach to confidence intervals with a Bayesian approach. The method is implemented in provided MATLAB modules, which extend the widely used Chronux toolbox. Using both simulated and human data, we show that in the presence of large intermittent outliers, the robust method produces improved estimates of the power spectrum, and that the Bayesian confidence intervals yield close-to-veridical coverage factors. The robust method, as compared to the standard method, is less affected by artifact: inclusion of outliers produces fewer changes in the shape of the power spectrum as well as in the coverage factor. In the presence of large intermittent outliers, the robust method can reduce dependence on data preprocessing as compared to standard methods of spectral estimation. Copyright © 2016 Elsevier B.V. All rights reserved.
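The core idea, replacing the mean across tapered eigenspectra with a quantile, can be sketched as follows. For self-containment this uses simple sine tapers rather than the Slepian sequences of the Thomson method, and a standard exponential-quantile debiasing in place of the paper's Bayesian machinery, so it is an illustration of the principle rather than the published implementation.

```python
import numpy as np

def robust_multitaper_psd(x, fs=1.0, n_tapers=8, q=0.5):
    """Multitaper PSD in which the average across tapered eigenspectra
    is replaced by a quantile (default: the median), so that a large
    intermittent artifact cannot dominate the estimate."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    k = np.arange(1, n_tapers + 1)[:, None]
    n = np.arange(1, N + 1)[None, :]
    # orthonormal sine tapers (a simple stand-in for Slepian sequences)
    tapers = np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * k * n / (N + 1))
    eigspec = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2 / fs
    psd = np.quantile(eigspec, q, axis=0)
    # each eigenspectrum is ~exponentially distributed about the true
    # PSD, whose q-quantile is -ln(1-q) times its mean; undo that bias
    psd /= -np.log(1.0 - q)
    freqs = np.fft.rfftfreq(N, d=1.0 / fs)
    return freqs, psd
```

On clean white noise the median- and mean-based estimates agree; the difference appears when a few tapered segments are corrupted, where the quantile estimate moves far less than the mean.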
NASA Astrophysics Data System (ADS)
Huttunen, Jani; Kokkola, Harri; Mielonen, Tero; Esa Juhani Mononen, Mika; Lipponen, Antti; Reunanen, Juha; Vilhelm Lindfors, Anders; Mikkonen, Santtu; Erkki Juhani Lehtinen, Kari; Kouremeti, Natalia; Bais, Alkiviadis; Niska, Harri; Arola, Antti
2016-07-01
In order to have a good estimate of the current forcing by anthropogenic aerosols, knowledge on past aerosol levels is needed. Aerosol optical depth (AOD) is a good measure for aerosol loading. However, dedicated measurements of AOD are only available from the 1990s onward. One option to lengthen the AOD time series beyond the 1990s is to retrieve AOD from surface solar radiation (SSR) measurements taken with pyranometers. In this work, we have evaluated several inversion methods designed for this task. We compared a look-up table method based on radiative transfer modelling, a non-linear regression method and four machine learning methods (Gaussian process, neural network, random forest and support vector machine) with AOD observations carried out with a sun photometer at an Aerosol Robotic Network (AERONET) site in Thessaloniki, Greece. Our results show that most of the machine learning methods produce AOD estimates comparable to the look-up table and non-linear regression methods. All of the applied methods produced AOD values that corresponded well to the AERONET observations with the lowest correlation coefficient value being 0.87 for the random forest method. While many of the methods tended to slightly overestimate low AODs and underestimate high AODs, neural network and support vector machine showed overall better correspondence for the whole AOD range. The differences in producing both ends of the AOD range seem to be caused by differences in the aerosol composition. High AODs were in most cases those with high water vapour content which might affect the aerosol single scattering albedo (SSA) through uptake of water into aerosols. Our study indicates that machine learning methods benefit from the fact that they do not constrain the aerosol SSA in the retrieval, whereas the LUT method assumes a constant value for it. 
This would also mean that machine learning methods could have potential in reproducing AOD from SSR even though SSA would have changed during the observation period.
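The look-up table approach can be illustrated with a toy model in which clear-sky SSR depends on AOD only through Beer-Lambert direct transmittance (no Rayleigh scattering, gas absorption or diffuse irradiance, and a fixed SSA, so the numbers are purely illustrative); the monotone LUT is then inverted by interpolation.

```python
import numpy as np

def build_lut(aod_grid, sza_deg, s0=1000.0):
    """Toy look-up table: surface solar radiation (W/m^2) versus AOD,
    using Beer-Lambert direct transmittance only."""
    mu = np.cos(np.radians(sza_deg))
    airmass = 1.0 / mu
    return s0 * mu * np.exp(-np.asarray(aod_grid) * airmass)

def retrieve_aod(ssr_obs, aod_grid, sza_deg):
    """Invert SSR -> AOD by interpolating the monotone LUT.
    np.interp needs ascending x; SSR decreases with AOD, so flip."""
    lut = build_lut(aod_grid, sza_deg)
    return np.interp(ssr_obs, lut[::-1], np.asarray(aod_grid)[::-1])
```

A machine learning retrieval replaces the fixed forward model with a regressor trained on (SSR, geometry) pairs, which is what lets it absorb variations in SSA that this LUT assumes away.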
Integrated control/structure optimization by multilevel decomposition
NASA Technical Reports Server (NTRS)
Zeiler, Thomas A.; Gilbert, Michael G.
1990-01-01
A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The system is fully decomposed into structural and control subsystem designs and an improved design is produced. Theory, implementation, and results for the method are presented and compared with the benchmark example.
An investigation of new methods for estimating parameter sensitivities
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1989-01-01
The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity.
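The differencing idea can be sketched for a one-dimensional parameterized problem: re-solve the optimization at p ± h and take a central difference of the optimal solutions. A golden-section search stands in for the RQP solver here; the paper's actual estimator reuses quantities already computed by RQP rather than re-solving from scratch, so this is only a conceptual illustration.

```python
import numpy as np

def minimize_1d(f, lo=-10.0, hi=10.0, tol=1e-10):
    """Golden-section search for the minimizer of f on [lo, hi]."""
    g = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2.0

def optimum_sensitivity(f, p, h=1e-4):
    """Central-difference estimate of dx*/dp: re-solve the problem
    min_x f(x, p) at p + h and p - h and difference the solutions."""
    xp = minimize_1d(lambda x: f(x, p + h))
    xm = minimize_1d(lambda x: f(x, p - h))
    return (xp - xm) / (2.0 * h)
```

For f(x, p) = (x - p^2)^2 the minimizer is x*(p) = p^2, so the exact sensitivity at p = 1.5 is 2p = 3, which the finite difference recovers.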
Method for removing RFI from SAR images
Doerry, Armin W.
2003-08-19
A method of removing RFI from a SAR by comparing two SAR images on a pixel by pixel basis and selecting the pixel with the lower magnitude to form a composite image. One SAR image is the conventional image produced by the SAR. The other image is created from phase-history data which has been filtered to have the frequency bands containing the RFI removed.
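The pixel-by-pixel selection step can be sketched directly in NumPy; the two co-registered inputs (the conventional image and the image formed from RFI-filtered phase-history data) are assumed to have been produced already.

```python
import numpy as np

def rfi_min_composite(img_conventional, img_filtered):
    """Pixel-by-pixel minimum-magnitude composite of two complex SAR
    images: for each pixel keep the sample with the lower magnitude,
    so RFI energy present in only one of the images is rejected."""
    pick_filtered = np.abs(img_filtered) < np.abs(img_conventional)
    return np.where(pick_filtered, img_filtered, img_conventional)
```

Because the filtered image has the RFI bands notched out, an RFI-corrupted pixel is bright only in the conventional image and the composite takes the filtered value there, while clean pixels keep whichever rendering has less loss.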
Galea, R; Wells, R G; Ross, C K; Lockwood, J; Moore, K; Harvey, J T; Isensee, G H
2013-05-07
Recent shortages of molybdenum-99 ((99)Mo) have led to an examination of alternate production methods that could contribute to a more robust supply. An electron accelerator and the photoneutron reaction were used to produce (99)Mo, from which technetium-99m ((99m)Tc) is extracted. SPECT images of rat anatomy obtained using the accelerator-produced (99m)Tc were compared with those obtained using (99m)Tc from a commercial generator. Disks of (100)Mo were irradiated with x-rays produced by a 35 MeV electron beam to generate about 1110 MBq (30 mCi) of (99)Mo per disk. After target dissolution, a NorthStar ARSII unit was used to separate the (99m)Tc, which was subsequently used to tag pharmaceuticals suitable for cardiac and bone imaging. SPECT images were acquired for three rats and compared to images for the same three rats obtained using (99m)Tc from a standard reactor (99)Mo generator. The efficiency of (99)Mo-(99m)Tc separation was typically greater than 90%. This study demonstrated delivery of (99m)Tc from the end of beam to the end user in approximately 30 h. Images obtained using the heart and bone scanning agents with reactor- and linac-produced (99m)Tc were comparable. High-power electron accelerators are an attractive option for producing (99)Mo on a national scale.
A Candida guilliermondii lysine hyperproducer capable of elevated citric acid production.
West, Thomas P
2016-05-01
A mutant of the yeast Candida guilliermondii ATCC 9058 exhibiting elevated citric acid production was isolated based upon its ability to overproduce lysine. This method involved the use of a solid medium containing a combination of lysine analogues to identify a mutant that produced a several-fold higher lysine level compared to its parent strain using glucose or glycerol as a carbon source. The mutant strain was also capable of producing more than a fivefold higher citric acid level on glycerol as a carbon source compared to its parent strain. It was concluded that the screening of yeast lysine hyperproducer strains could provide a rapid approach to isolate yeast citric acid hyperproducer strains.
Computing and Applying Atomic Regulons to Understand Gene Expression and Regulation
Faria, José P.; Davis, James J.; Edirisinghe, Janaka N.; ...
2016-11-24
Understanding gene function and regulation is essential for the interpretation, prediction, and ultimate design of cell responses to changes in the environment. A multitude of technologies, abstractions, and interpretive frameworks have emerged to answer the challenges presented by genome function and regulatory network inference. Here, we propose a new approach for producing biologically meaningful clusters of coexpressed genes, called Atomic Regulons (ARs), based on expression data, gene context, and functional relationships. We demonstrate this new approach by computing ARs for Escherichia coli, which we compare with the coexpressed gene clusters predicted by two prevalent existing methods: hierarchical clustering and k-means clustering. We test the consistency of ARs predicted by all methods against expected interactions predicted by the Context Likelihood of Relatedness (CLR) mutual information based method, finding that the ARs produced by our approach show better agreement with CLR interactions. We then apply our method to compute ARs for four other genomes: Shewanella oneidensis, Pseudomonas aeruginosa, Thermus thermophilus, and Staphylococcus aureus. We compare the AR clusters from all genomes to study the similarity of coexpression among a phylogenetically diverse set of species, identifying subsystems that show remarkable similarity over wide phylogenetic distances. We also study the sensitivity of our method for computing ARs to the expression data used in the computation, showing that our new approach requires less data than competing approaches to converge to a near final configuration of ARs. We go on to use our sensitivity analysis to identify the specific experiments that lead most rapidly to the final set of ARs for E. coli. As a result, this analysis produces insights into improving the design of gene expression experiments.
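As a point of reference for the comparison, the k-means baseline itself is easy to sketch on a gene-by-condition expression matrix. This is plain Lloyd's algorithm, not the AR method, and the initialization is a simple random draw rather than anything tuned.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Cluster the rows of X (genes x conditions) into k groups by
    alternating nearest-center assignment and center recomputation."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        new = np.array([X[labels == j].mean(0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels
```

k-means forces every gene into one of exactly k clusters regardless of functional context, which is precisely the limitation the gene-context and functional-relationship evidence in the AR approach is meant to address.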
HOLA: Human-like Orthogonal Network Layout.
Kieffer, Steve; Dwyer, Tim; Marriott, Kim; Wybrow, Michael
2016-01-01
Over the last 50 years a wide variety of automatic network layout algorithms have been developed. Some are fast heuristic techniques suitable for networks with hundreds of thousands of nodes while others are multi-stage frameworks for higher-quality layout of smaller networks. However, despite decades of research currently no algorithm produces layout of comparable quality to that of a human. We give a new "human-centred" methodology for automatic network layout algorithm design that is intended to overcome this deficiency. User studies are first used to identify the aesthetic criteria algorithms should encode, then an algorithm is developed that is informed by these criteria and finally, a follow-up study evaluates the algorithm output. We have used this new methodology to develop an automatic orthogonal network layout method, HOLA, that achieves measurably better (by user study) layout than the best available orthogonal layout algorithm and which produces layouts of comparable quality to those produced by hand.
Laser evaporation of the prostate: preliminary findings in canines
NASA Astrophysics Data System (ADS)
Kuntzman, R. S.; Malek, Reza S.; Barrett, David M.; Bostwick, David G.
1996-05-01
Purpose: We evaluated the ability of KTP laser to evaporate prostatic tissue in vivo and compared the results with historical Nd:YAG treated controls. Methods: Five dogs underwent anterograde transurethral evaporation of the prostate (TUEP) with KTP laser at 38 watts and were sacrificed 48 hours after surgery. Results: All procedures were hemostatic and without complications. Laser evaporation produced cavities within the prostate ranging from 2.5 to 3.2 cm in diameter (average equals 2.9 cm) that were free of necrotic tissue. Conclusions: Preliminary findings in this initial canine study of laser evaporation of the prostate show that KTP laser produces large spherical cavities within the prostate in a hemostatic fashion. These cavities are free of necrotic tissue. In addition, these cavities are comparable in size to those that have been observed 4 to 8 weeks following Nd:YAG VLAP and are significantly larger than the acute cavities produced by Nd:YAG TUEP.
NASA Technical Reports Server (NTRS)
Petersen, Gene R.; Baresi, Larry
1990-01-01
This report provides an overview of options for converting lignocellulosics into fermentable sugars in CELSS. A requirement for pretreatment is shown. Physical-chemical and enzymatic hydrolysis processes for producing fermentable sugars are discussed. At present, physical-chemical methods are the simplest and best characterized options, but enzymatic processes will be the likely method of choice in the future. The use of pentose sugars by microorganisms to produce edibles is possible. The use of mycelial food production on pretreated but not hydrolyzed lignocellulosics is also possible. Simple trade-off analyses to regenerate waste lignocellulosics for two pathways are made, one of which is compared to complete oxidation.
Comparing multiple statistical methods for inverse prediction in nuclear forensics applications
Lewis, John R.; Zhang, Adah; Anderson-Cook, Christine Michaela
2017-10-29
Forensic science seeks to predict source characteristics using measured observables. Statistically, this objective can be thought of as an inverse problem where interest is in the unknown source characteristics or factors (X) of some underlying causal model producing the observables or responses (Y = g(X) + error). Here, this paper reviews several statistical methods for use in inverse problems and demonstrates that comparing results from multiple methods can be used to assess predictive capability. Motivation for assessing inverse predictions comes from the desired application to historical and future experiments involving nuclear material production for forensics research in which inverse predictions, along with an assessment of predictive capability, are desired.
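For the scalar linear case Y = g(X) + error, two classic inverse-prediction approaches can be compared exactly as the abstract suggests, by checking whether their predictions agree. The forward model g(x) = 2x + 1 and the noise level below are invented for illustration; the paper's methods are more general.

```python
import random

# Toy inverse problem with forward model y = 2x + 1 + noise. Two standard
# approaches, sketched for the scalar case:
#  (a) classical calibration: fit the forward model, then solve for x,
#  (b) reverse regression: regress x directly on y.
rng = random.Random(1)
xs = [i / 10 for i in range(50)]
ys = [2 * x + 1 + rng.gauss(0, 0.05) for x in xs]

def ols(u, v):
    """Slope and intercept of the least-squares line v ~ a*u + b."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    a = (sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v))
         / sum((ui - mu) ** 2 for ui in u))
    return a, mv - a * mu

a, b = ols(xs, ys)            # forward fit: y ~ a*x + b
x_classical = (9.0 - b) / a   # invert at an observed response y = 9.0
c, d = ols(ys, xs)            # reverse fit: x ~ c*y + d
x_reverse = c * 9.0 + d
# Agreement between the two predictions is one informal check of
# predictive capability, in the spirit of the multi-method comparison.
```

Here the true inverse value is x = (9 - 1)/2 = 4, and both estimators recover it closely because the noise is small.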
Wan Ismail, W Z; Sim, K S; Tso, C P; Ting, H Y
2011-01-01
To reduce undesirable charging effects in scanning electron microscope images, Rayleigh contrast stretching is developed and employed. First, re-scaling is performed on the input image histograms with Rayleigh algorithm. Then, contrast stretching or contrast adjustment is implemented to improve the images while reducing the contrast charging artifacts. This technique has been compared to some existing histogram equalization (HE) extension techniques: recursive sub-image HE, contrast stretching dynamic HE, multipeak HE and recursive mean separate HE. Other post processing methods, such as wavelet approach, spatial filtering, and exponential contrast stretching, are compared as well. Overall, the proposed method produces better image compensation in reducing charging artifacts. Copyright © 2011 Wiley Periodicals, Inc.
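The two stages described (Rayleigh re-scaling of the histogram, then contrast stretching) can be sketched as a lookup-table transform. This is a hedged reading of the abstract, not the authors' exact algorithm: the mapping below pushes grey levels through the inverse Rayleigh CDF and then linearly stretches the result to the 8-bit range, with the sigma parameter assumed.

```python
import math

def rayleigh_stretch(pixels, sigma=0.4):
    """Sketch of Rayleigh re-scaling followed by linear contrast stretching
    for a flat list of 8-bit intensities. The published method's exact
    mapping may differ; sigma is an assumed shape parameter."""
    n = len(pixels)
    # empirical CDF of the input histogram
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    cdf, run = [0.0] * 256, 0
    for g in range(256):
        run += hist[g]
        cdf[g] = run / n
    # map each grey level through the inverse Rayleigh CDF
    eps = 1e-6
    mapped = [sigma * math.sqrt(-2.0 * math.log(max(eps, 1.0 - cdf[g])))
              for g in range(256)]
    # linear contrast stretch of the mapped levels back to 0..255
    lo, hi = min(mapped), max(mapped)
    lut = [round(255 * (m - lo) / (hi - lo)) for m in mapped]
    return [lut[p] for p in pixels]

# three flat grey patches, as a stand-in for a charging-affected SEM image
out = rayleigh_stretch([10] * 50 + [100] * 50 + [120] * 50)
```

The transform preserves the ordering of grey levels while redistributing them, which is what lets it tame bright charging artifacts without inverting contrast.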
Comparison of GEOS-5 AGCM planetary boundary layer depths computed with various definitions
NASA Astrophysics Data System (ADS)
McGrath-Spangler, E. L.; Molod, A.
2014-07-01
Accurate models of planetary boundary layer (PBL) processes are important for forecasting weather and climate. The present study compares seven methods of calculating PBL depth in the GEOS-5 atmospheric general circulation model (AGCM) over land. These methods depend on the eddy diffusion coefficients, bulk and local Richardson numbers, and the turbulent kinetic energy. The computed PBL depths are aggregated to the Köppen-Geiger climate classes, and some limited comparisons are made using radiosonde profiles. Most methods produce similar midday PBL depths, although in the warm, moist climate classes the bulk Richardson number method gives midday results that are lower than those given by the eddy diffusion coefficient methods. Additional analysis revealed that methods sensitive to turbulence driven by radiative cooling produce greater PBL depths, this effect being most significant during the evening transition. Nocturnal PBLs based on Richardson number methods are generally shallower than eddy diffusion coefficient based estimates. The bulk Richardson number estimate is recommended as the PBL height to inform the choice of the turbulent length scale, based on the similarity to other methods during the day, and the improved nighttime behavior.
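One of the definitions compared, the bulk Richardson number method, can be sketched directly. The critical value of 0.25 and the idealized sounding below are assumptions for illustration, not taken from the GEOS-5 implementation.

```python
# Sketch of a bulk Richardson number PBL-height estimate: scan upward from
# the surface and report the first level where Ri_b crosses a critical value.
G = 9.81          # gravity, m s^-2
RI_CRIT = 0.25    # assumed critical bulk Richardson number

def pbl_height_bulk_ri(z, theta_v, wind_speed):
    """Lowest height where the bulk Richardson number, computed from the
    surface upward, first reaches RI_CRIT (linear interpolation between
    levels). z in m, theta_v (virtual potential temperature) in K,
    wind_speed in m s^-1; all lists ordered surface upward."""
    ri_prev, z_prev = 0.0, z[0]
    for zi, th, u in zip(z[1:], theta_v[1:], wind_speed[1:]):
        ri = G * (zi - z[0]) * (th - theta_v[0]) / (theta_v[0] * max(u * u, 1e-6))
        if ri >= RI_CRIT:
            # interpolate the crossing height between the two levels
            return z_prev + (zi - z_prev) * (RI_CRIT - ri_prev) / (ri - ri_prev)
        ri_prev, z_prev = ri, zi
    return z[-1]

# idealized midday profile: well-mixed below ~1000 m, stable above
levels = [10, 250, 500, 750, 1000, 1250, 1500]
theta = [300, 300, 300, 300, 300.5, 303, 306]
wind = [2, 5, 6, 6, 6, 7, 7]
h = pbl_height_bulk_ri(levels, theta, wind)
```

For this sounding the crossing falls between the top of the mixed layer and the first stable level, which is the qualitative behavior the comparison in the abstract relies on.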
Steiger, D B Meyer; Ritchie, S A; Laurance, S G W
2014-01-01
Emerging infectious diseases are on the rise, with future outbreaks predicted to occur in frontier regions of tropical countries. Disease surveillance in these hotspots is challenging because sampling techniques often rely on vector attractants that are either unavailable in remote localities or difficult to transport. We examined whether a novel method for producing CO2 from yeast and sugar yields mosquito species captures similar to those obtained with a standard attractant such as dry ice. Across three different vegetation communities, we found traps baited with dry ice frequently captured more mosquitoes than yeast-baited traps; however, there was little effect on mosquito community composition. Based on our preliminary experiments, we find that this method of producing CO2 is a realistic alternative to dry ice and would be highly suitable for remote field work.
Development of a terminally sterilised decellularised dermis.
Hogg, P; Rooney, P; Leow-Dyke, S; Brown, C; Ingham, E; Kearney, J N
2015-09-01
Many of the decellularised dermis products on the market at present are aseptically produced. NHS Blood and Transplant Tissue Services have developed a method of producing a dCELL human dermis which has been terminally sterilised by gamma irradiation. The terminally sterilised decellularised dermis was compared with cellular tissue and examined for histology, residual DNA content, biomechanical and biochemical properties, in vitro cytotoxicity and in vivo implantation in a mouse model. No alterations in morphology as viewed by light microscopy were observed and DNA removal was 99%. There were no significant changes in ultimate tensile stress or evidence for collagen denaturation or cytotoxicity. The in vivo studies did not indicate any adverse tissue reactions in the mouse model and demonstrated incorporation of dCELL human dermis into the host. Decellularisation, followed by terminal sterilisation with gamma irradiation, is an appropriate method to produce a human dermis allograft material suitable for transplantation.
Smelting Magnesium Metal using a Microwave Pidgeon Method
Wada, Yuji; Fujii, Satoshi; Suzuki, Eiichi; Maitani, Masato M.; Tsubaki, Shuntaro; Chonan, Satoshi; Fukui, Miho; Inazu, Naomi
2017-01-01
Magnesium (Mg) is a lightweight metal with applications in transportation and sustainable battery technologies, but its current production through ore reduction using the conventional Pidgeon process emits large amounts of CO2 and particulate matter (PM2.5). In this work, a novel Pidgeon process driven by microwaves has been developed to produce Mg metal with less energy consumption and no direct CO2 emission. An antenna structure consisting of dolomite as the Mg source and a ferrosilicon antenna as the reducing material was used to confine microwave energy emitted from a magnetron installed in a microwave oven to produce a practical amount of pure Mg metal. This microwave Pidgeon process with an antenna configuration made it possible to produce Mg with an energy consumption of 58.6 GJ/t, corresponding to a 68.6% reduction when compared to the conventional method. PMID:28401910
Cardiac-gated parametric images from 82Rb PET from dynamic frames and direct 4D reconstruction.
Germino, Mary; Carson, Richard E
2018-02-01
Cardiac perfusion PET data can be reconstructed as a dynamic sequence and kinetic modeling performed to quantify myocardial blood flow, or reconstructed as static gated images to quantify function. Parametric images from dynamic PET are conventionally not gated, to allow use of all events with lower noise. An alternative method for dynamic PET is to incorporate the kinetic model into the reconstruction algorithm itself, bypassing the generation of a time series of emission images and directly producing parametric images. So-called "direct reconstruction" can produce parametric images with lower noise than the conventional method because the noise distribution is more easily modeled in projection space than in image space. In this work, we develop direct reconstruction of cardiac-gated parametric images for 82Rb PET with an extension of the Parametric Motion compensation OSEM List mode Algorithm for Resolution-recovery reconstruction for the one tissue model (PMOLAR-1T). PMOLAR-1T was extended to accommodate model terms to account for spillover from the left and right ventricles into the myocardium. The algorithm was evaluated on a 4D simulated 82Rb dataset, including a perfusion defect, as well as a human 82Rb list mode acquisition. The simulated list mode was subsampled into replicates, each with counts comparable to one gate of a gated acquisition. Parametric images were produced by the indirect (separate reconstructions and modeling) and direct methods for each of eight low-count and eight normal-count replicates of the simulated data, and each of eight cardiac gates for the human data. For the direct method, two initialization schemes were tested: uniform initialization, and initialization with the filtered iteration 1 result of the indirect method. For the human dataset, event-by-event respiratory motion compensation was included.
The indirect and direct methods were compared for the simulated dataset in terms of bias and coefficient of variation as a function of iteration. Convergence of direct reconstruction was slow with uniform initialization; lower bias was achieved in fewer iterations by initializing with the filtered indirect iteration 1 images. For most parameters and regions evaluated, the direct method achieved the same or lower absolute bias at matched iteration as the indirect method, with 23%-65% lower noise. Additionally, the direct method gave better contrast between the perfusion defect and surrounding normal tissue than the indirect method. Gated parametric images from the human dataset had comparable relative performance of indirect and direct, in terms of mean parameter values per iteration. Changes in myocardial wall thickness and blood pool size across gates were readily visible in the gated parametric images, with higher contrast between myocardium and left ventricle blood pool in parametric images than gated SUV images. Direct reconstruction can produce parametric images with less noise than the indirect method, opening the potential utility of gated parametric imaging for perfusion PET. © 2017 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Salleh, S. A.; Rahman, A. S. A. Abd; Othman, A. N.; Mohd, W. M. N. Wan
2018-02-01
As different approaches produce different results, it is crucial to determine which methods are accurate in order to perform analysis of the event. This research aims to compare the Rank Reciprocal (MCDM) and Artificial Neural Network (ANN) analysis techniques in determining susceptible zones of landslide hazard. The study is based on data obtained from various sources such as the local authority, Dewan Bandaraya Kuala Lumpur (DBKL), Jabatan Kerja Raya (JKR) and other agencies. The data were analysed and processed using ArcGIS. The results were compared by quantifying the risk ranking and area differential. They were also compared with the zonation map classified by DBKL. The results suggested that the ANN method gives better accuracy than MCDM, with an accuracy assessment 18.18% higher than that of the MCDM approach. This indicates that ANN provides more reliable results, probably owing to its ability to learn from the environment, thus portraying realistic and accurate results.
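The rank reciprocal weighting used in the MCDM branch of this comparison has a standard closed form: the criterion ranked r gets weight (1/r) normalized over all criteria. A minimal sketch, with the five landslide factors and their ranks assumed for illustration:

```python
def rank_reciprocal_weights(n_criteria):
    """Rank-reciprocal MCDM weights: criterion ranked r (1 = most important)
    gets weight (1/r) / sum(1/k for k = 1..n)."""
    inv = [1.0 / r for r in range((1), n_criteria + 1)]
    s = sum(inv)
    return [w / s for w in inv]

# hypothetical factors ranked 1..5 by importance, e.g. slope, lithology,
# land use, rainfall, drainage; cell scores are normalized factor values
weights = rank_reciprocal_weights(5)
score = sum(w * x for w, x in zip(weights, [0.9, 0.4, 0.7, 0.2, 0.1]))
```

The weights always sum to 1 and decay with rank, so the top-ranked factor dominates each cell's susceptibility score.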
Low-power triggered data acquisition system and method
NASA Technical Reports Server (NTRS)
Champaigne, Kevin (Inventor); Sumners, Jonathan (Inventor)
2012-01-01
A low-power triggered data acquisition system and method utilizes low-powered circuitry, comparators, and digital logic incorporated into a miniaturized device interfaced with self-generating transducer sensor inputs to detect, identify and assess impact and damage to surfaces and structures. Upon the occurrence of a triggering event that produces a signal greater than a set threshold, the comparator output changes and causes the system to acquire and store digital data representative of the incoming waveform on at least one triggered channel. The sensors may be disposed in an array to provide triangulation and location of the impact.
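The comparator-trigger behavior can be sketched in software: keep a small pre-trigger ring buffer, and when a sample exceeds the threshold, capture the waveform segment around the event. The buffer sizes and waveform below are hypothetical; the patented device implements this in hardware.

```python
from collections import deque

def triggered_capture(samples, threshold, pre=4, post=8):
    """Software sketch of a comparator trigger: a pre-trigger ring buffer
    plus post-trigger capture. Returns the stored segment around the first
    sample whose magnitude exceeds the threshold, or None if none does."""
    ring = deque(maxlen=pre)
    it = iter(samples)
    for s in it:
        if abs(s) > threshold:
            segment = list(ring) + [s]
            for _ in range(post):
                try:
                    segment.append(next(it))
                except StopIteration:
                    break
            return segment
        ring.append(s)
    return None  # no trigger event occurred

# hypothetical transducer waveform with an impact spike at 3.5
wave = [0.0, 0.1, -0.1, 0.0, 0.2, 3.5, 2.0, -1.0, 0.5, 0.1, 0.0, 0.0, 0.0]
seg = triggered_capture(wave, threshold=1.0)
```

Keeping the ring buffer small is what makes this low-power: nothing is digitized at full rate until the comparator fires.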
Yoshimura, Tomoaki; Kuribara, Hideo; Kodama, Takashi; Yamata, Seiko; Futo, Satoshi; Watanabe, Satoshi; Aoki, Nobutaro; Iizuka, Tayoshi; Akiyama, Hiroshi; Maitani, Tamio; Naito, Shigehiro; Hino, Akihiro
2005-03-23
Seven types of processed foods, namely, cornstarch, cornmeal, corn puffs, corn chips, tofu, soy milk, and boiled beans, were trial produced from 1 and 5% (w/w) genetically modified (GM) mixed raw materials. In this report, insect resistant maize (MON810) and herbicide tolerant soy (Roundup Ready soy, 40-3-2) were used as representatives of GM maize and soy, respectively. Deoxyribonucleic acid (DNA) was extracted from the raw materials and the trial-produced processed food using two types of methods, i.e., the silica membrane method and the anion exchange method. The GM% values of these samples were quantified, and the significant differences between the raw materials and the trial-produced processed foods were statistically confirmed. There were some significant differences in the comparisons of all processed foods. However, our quantitative methods could be applied as a screening assay to tofu and soy milk because the differences in GM% between the trial-produced processed foods and their raw materials were lower than 13 and 23%, respectively. In addition, when quantitating with two primer pairs (SSIIb 3, 114 bp; SSIIb 4, 83 bp for maize and Le1n02, 118 bp; Le1n03, 89 bp for soy), which were targeted within the same taxon specific DNA sequence with different amplicon sizes, the ratios of the copy numbers of the two primer pairs (SSIIb 3/4 and Le1n02/03) decreased with time in a heat-treated processing model using an autoclave. In this report, we suggest that the degradation level of DNA in processed foods could be estimated from these ratios, and the probability of GM quantification could be experimentally predicted from the results of the trial producing.
Balance Contrast Enhancement using piecewise linear stretching
NASA Astrophysics Data System (ADS)
Rahavan, R. V.; Govil, R. C.
1993-04-01
Balance Contrast Enhancement is one of the techniques employed to produce color composites with increased color contrast. It equalizes the three images used for color composition in range and mean. This results in a color composite with large variation in hue. Here, it is shown that piecewise linear stretching can be used for performing the Balance Contrast Enhancement. In comparison with the Balance Contrast Enhancement Technique using parabolic segment as transfer function (BCETP), the method presented here is algorithmically simple, constraint-free and produces comparable results.
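The piecewise linear alternative to the parabolic BCET transfer can be sketched as a two-segment map that sends each band's (min, mean, max) to common targets, approximately equalizing range and mean across the three bands. The target values and sample bands below are assumptions for illustration.

```python
def balance_stretch(band, out_lo=0.0, out_hi=255.0, out_mean=127.5):
    """Two-segment piecewise linear stretch sending (min, mean, max) of a
    band to (out_lo, out_mean, out_hi). A sketch of the constraint-free
    alternative to the parabolic BCET transfer function; the output mean
    matches the target only approximately."""
    lo, hi = min(band), max(band)
    mean = sum(band) / len(band)
    def f(v):
        if v <= mean:
            return out_lo + (out_mean - out_lo) * (v - lo) / (mean - lo)
        return out_mean + (out_hi - out_mean) * (v - mean) / (hi - mean)
    return [f(v) for v in band]

red = balance_stretch([10, 20, 30, 100])     # dark, skewed hypothetical band
nir = balance_stretch([120, 180, 200, 220])  # bright hypothetical band
```

After the transform, all three bands share the same output range and the same anchor point at the mean, which is what produces the large hue variation in the color composite.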
Hippeläinen, Eero; Mäkelä, Teemu; Kaasalainen, Touko; Kaleva, Erna
2017-12-01
Developments in single photon emission tomography instrumentation and reconstruction methods present a potential for decreasing acquisition times. One such recent option for myocardial perfusion imaging (MPI) is IQ-SPECT. This study was motivated by the inconsistency in the reported ejection fraction (EF) and left ventricular (LV) volume results between IQ-SPECT and more conventional low-energy high-resolution (LEHR) collimation protocols. IQ-SPECT and LEHR quantitative results were compared while the equivalent number of iterations (EI) was varied. The end-diastolic (EDV) and end-systolic volumes (ESV) and the derived EF values were investigated. A dynamic heart phantom was used to produce repeatable ESVs, EDVs and EFs. Phantom performance was verified by comparing the set EF values to those measured from a gated multi-slice X-ray computed tomography (CT) scan (EFTrue). The phantom with an EF setting of 45, 55, 65 and 70% was imaged with both IQ-SPECT and LEHR protocols. The data were reconstructed with different EI, and two commonly used clinical myocardium delineation software packages were used to evaluate the LV volumes. The CT verification showed that the phantom EF settings were repeatable and accurate, with the EFTrue being within 1% point of the manufacturer's nominal value. Depending on EI, both MPI protocols can be made to produce correct EF estimates, but the IQ-SPECT protocol produced on average 41 and 42% smaller EDV and ESV when compared to the phantom's volumes, while the LEHR protocol underestimated volumes by 24 and 21%, respectively. The volume results were largely similar between the delineation methods used. The reconstruction parameters can greatly affect the volume estimates obtained from perfusion studies. IQ-SPECT produces systematically smaller LV volumes than the conventional LEHR MPI protocol. The volume estimates are also software dependent.
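The seeming paradox in this abstract, correct EF alongside badly underestimated volumes, follows directly from the EF formula: a uniform relative bias in both volumes cancels in the ratio. The volumes below are invented to illustrate the arithmetic.

```python
def ejection_fraction(edv, esv):
    """Left-ventricular ejection fraction (%) from end-diastolic (EDV)
    and end-systolic (ESV) volumes: EF = 100 * (EDV - ESV) / EDV."""
    return 100.0 * (edv - esv) / edv

# hypothetical true volumes giving EF = 55%
true_ef = ejection_fraction(120.0, 54.0)

# If both volumes are underestimated by the same relative factor (roughly
# what the abstract reports for IQ-SPECT: 41% and 42%), the factor cancels
# and EF is preserved:
biased_ef = ejection_fraction(120.0 * 0.59, 54.0 * 0.59)
```

This is why EF estimates can be tuned to be correct for both protocols even though the volume estimates themselves differ systematically.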
Simulating changes to emergency care resources to compare system effectiveness.
Branas, Charles C; Wolff, Catherine S; Williams, Justin; Margolis, Gregg; Carr, Brendan G
2013-08-01
To apply systems optimization methods to simulate and compare the most effective locations for emergency care resources as measured by access to care. This study was an optimization analysis of the locations of trauma centers (TCs), helicopter depots (HDs), and severely injured patients in need of time-critical care in select US states. Access was defined as the percentage of injured patients who could reach a level I/II TC within 45 or 60 minutes. Optimal locations were determined by a search algorithm that considered all candidate sites within a set of existing hospitals and airports in finding the best solutions that maximized access. Across a dozen states, existing access to TCs within 60 minutes ranged from 31.1% to 95.6%, with a mean of 71.5%. Access increased from 0.8% to 35.0% after optimal addition of one or two TCs. Access increased from 1.0% to 15.3% after optimal addition of one or two HDs. Relocation of TCs and HDs (optimal removal followed by optimal addition) produced similar results. Optimal changes to TCs produced greater increases in access to care than optimal changes to HDs although these results varied across states. Systems optimization methods can be used to compare the impacts of different resource configurations and their possible effects on access to care. These methods to determine optimal resource allocation can be applied to many domains, including comparative effectiveness and patient-centered outcomes research. Copyright © 2013 Elsevier Inc. All rights reserved.
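The "optimal addition" step described here is an instance of the maximal covering location problem. A standard greedy heuristic for it is sketched below; the paper's search algorithm may differ, and the candidate sites and patient coverage sets are hypothetical.

```python
def greedy_add(candidates, covered_by, n_add):
    """Greedy sketch of the optimal-addition step: repeatedly add the
    candidate site covering the most not-yet-covered patients. A standard
    heuristic for maximal covering, not necessarily the paper's algorithm."""
    covered, chosen = set(), []
    for _ in range(n_add):
        best = max(candidates, key=lambda s: len(covered_by[s] - covered))
        chosen.append(best)
        covered |= covered_by[best]
        candidates = [s for s in candidates if s != best]
    return chosen, covered

# hypothetical patients 0..9 reachable within 60 min of each candidate site
covered_by = {"A": {0, 1, 2, 3}, "B": {3, 4, 5}, "C": {6, 7, 8, 9}, "D": {0, 1}}
chosen, covered = greedy_add(["A", "B", "C", "D"], covered_by, 2)
```

Here the greedy choice of two sites covers 8 of 10 patients (80% access); relocation can be modeled as a removal followed by the same addition step.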
Quintero-Fong, L; Toledo, J; Ruiz, L; Rendón, P; Orozco-Dávila, D; Cruz, L; Liedo, P
2016-10-01
The sexual performance of Anastrepha ludens males of the Tapachula-7 genetic sexing strain, produced via selection based on mating success, was compared with that of males produced without selection in competition with wild males. Mating competition, development time, survival, mass-rearing quality parameters and pheromone production were compared. The results showed that selection based on mating competitiveness significantly improved the sexual performance of offspring. Development time, survival of larvae, pupae and adults, and weights of larvae and pupae increased with each selection cycle. Differences in the relative quantity of the pheromone compounds (Z)-3-nonenol and anastrephin were observed when comparing the parental males with the F4 and wild males. The implications of this colony management method on the sterile insect technique are discussed.
[Acrylamide in potato crisps and snack foods produced in the Autonomous Community of Valencia, Spain].
Zubeldia Lauzurica, Lourdes; Gomar Fayos, Josefa
2007-01-01
To evaluate acrylamide content in potato crisps and snack foods produced in the Valencian Community and to compare the results with those published by the main food safety organizations. Twenty-four samples of potato crisps and 15 samples of snack foods were analyzed. The results were compared with those from the Food and Drug Administration and the European Food Safety Authority. The mean (SD) acrylamide level in the Valencian Community was 916 (656) microg/kg in potato crisps and 262 (346) microg/kg in snack foods. Significant differences were found in the 3 populations compared. Acrylamide levels in potato crisps in the Valencian Community were the highest. There was wide variation in acrylamide content for the same type of product. Further investigation into methods of sampling and analysis and steps to reduce acrylamide levels are required.
Batchwise dyeing of bamboo cellulose fabric with reactive dye using ultrasonic energy.
Larik, Safdar Ali; Khatri, Awais; Ali, Shamshad; Kim, Seong Hun
2015-05-01
Bamboo is a regenerated cellulose fiber usually dyed with reactive dyes. This paper presents results of the batchwise dyeing of bamboo fabric with reactive dyes by ultrasonic (US) and conventional (CN) dyeing methods. The study focused on comparing the two methods for dyeing results, chemicals, temperature and time, and effluent quality. Two widely used dyes, CI Reactive Black 5 (bis-sulphatoethylsulphone) and CI Reactive Red 147 (difluorochloropyrimidine), were used in the study. The US dyeing method produced around 5-6% higher color yield (K/S) in comparison to the CN dyeing method. A significant savings in terms of fixation temperature (10°C) and time (15 min), and amounts of salt (10 g/L) and alkali (0.5-1% on mass of fiber) was realized. Moreover, the dyeing effluent showed considerable reductions in the total dissolved solids content (minimum around 29%) and in the chemical oxygen demand (minimum around 13%) for the US dyebath in comparison to the CN dyebath. The analysis of colorfastness tests demonstrated similar results by US and CN dyeing methods. A microscopic examination on the field emission scanning electron microscope revealed that the US energy did not alter the surface morphology of the bamboo fibers. It was concluded that the US dyeing of bamboo fabric produces better dyeing results and is a more economical and environmentally sustainable method as compared to the CN dyeing method. Copyright © 2014 Elsevier B.V. All rights reserved.
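The color yield K/S reported here is conventionally computed from reflectance via the Kubelka-Munk function. A minimal sketch; the reflectance values are hypothetical, chosen only to show that a small drop in R raises K/S.

```python
def color_yield(reflectance):
    """Kubelka-Munk K/S from reflectance R (0 < R <= 1) measured at the
    wavelength of maximum absorption; higher K/S means deeper dyeing."""
    return (1.0 - reflectance) ** 2 / (2.0 * reflectance)

ks_cn = color_yield(0.20)   # conventional dyeing, hypothetical R
ks_us = color_yield(0.19)   # ultrasonic dyeing, slightly lower hypothetical R
gain_pct = 100.0 * (ks_us - ks_cn) / ks_cn
```

Because K/S grows steeply as R falls, even a modest reduction in reflectance from better dye uptake translates into a measurable percentage gain in color yield.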
A laboratory comparison of evacuation devices on aerosol reduction.
Jacks, Mary E
2002-01-01
Aerosols are defined as airborne particles that range in size from 0.5 to 10 microns (micron). They are produced during ultrasonic instrumentation, but they can be reduced. Irrigant solutions, which produce the therapeutic effects of lavage, also combine with blood, saliva, and bacteria to produce potentially harmful airborne particulates. The American Dental Association (ADA) and the Centers for Disease Control and Prevention (CDC) recommend utilization of high volume evacuation, rubber dam, and patient positioning for aerosol control. But for the non-assisted dental hygienist, these recommendations are difficult to implement. This study was designed to compare the concentration of airborne particulates from ultrasonic scaling, utilizing three different methods of evacuation. In a laboratory setting, ultrasonic airborne particulates were generated utilizing a 25,000 cps magnetostrictive ultrasonic scaling instrument. Three evacuation devices were compared for effectiveness: a standard saliva ejector intraorally positioned; and two extraorally positioned, hands-free high-volume evacuation (HFHVE) techniques. One of these devices had a standard attachment, and the other had a funnel-shaped attachment. Measurement of airborne particles was performed with a DataRAM Real-Time Aerosol Monitor. This study (N = 21) found a significant reduction in the number of airborne particulates with either form of extraoral HFHVE attachment in place. Standard attachments and funnel-shaped attachments to HFHVE resulted in reduction of particulates by 90.8% and 89.7%, respectively, when compared to the intraorally positioned standard saliva ejector. Utilizing either form of HFHVE during ultrasonic instrumentation significantly reduced the number of aerosolized particulates that reached the breathing space of the client and clinician. This lends support for the ADA and CDC recommendation that HVE be used during aerosol producing procedures.
Currently, no preventive measure is 100% effective; therefore, clinicians are encouraged to use additional methods to minimize the number of airborne particulates produced during intraoral instrumentation.
Accuracy evaluation of dental models manufactured by CAD/CAM milling method and 3D printing method.
Jeong, Yoo-Geum; Lee, Wan-Sun; Lee, Kyu-Bok
2018-06-01
To evaluate the accuracy of a model made using the computer-aided design/computer-aided manufacture (CAD/CAM) milling method and 3D printing method and to confirm its applicability as a work model for dental prosthesis production. First, a natural tooth model (ANA-4, Frasaco, Germany) was scanned using an oral scanner. The obtained scan data were then used as a CAD reference model (CRM), to produce a total of 10 models each, either using the milling method or the 3D printing method. The 20 models were then scanned using a desktop scanner and the CAD test model was formed. The accuracy of the two groups was compared using dedicated software to calculate the root mean square (RMS) value after superimposing CRM and CAD test model (CTM). The RMS value (152±52 µm) of the model manufactured by the milling method was significantly higher than the RMS value (52±9 µm) of the model produced by the 3D printing method. The accuracy of the 3D printing method is superior to that of the milling method, but at present, both methods are limited in their application as a work model for prosthesis manufacture.
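The RMS accuracy metric used to compare the milled and 3D-printed models has a simple definition once the CRM and CTM are superimposed: the root mean square of point-to-point deviations. The point sets below are invented (deviations of 0.05 mm, i.e. 50 µm, on three points); real dental-scan comparisons use dense surface meshes.

```python
import math

def rms_deviation(ref_pts, test_pts):
    """RMS of corresponding point-to-point deviations between an aligned
    reference model (CRM) and test model (CTM). Superimposition/alignment
    is assumed to have been done already, as by the comparison software."""
    sq = [sum((a - b) ** 2 for a, b in zip(p, q))
          for p, q in zip(ref_pts, test_pts)]
    return math.sqrt(sum(sq) / len(sq))

# tiny hypothetical example in mm: every CTM point is off by 0.05 mm in z
crm = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
ctm = [(0, 0, 0.05), (1, 0, -0.05), (0, 1, 0.05)]
rms_mm = rms_deviation(crm, ctm)   # 0.05 mm = 50 um
```

A lower RMS means the test model sits closer to the reference everywhere, which is why the 3D-printed models' 52±9 µm beats the milled models' 152±52 µm.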
Sengüven, Burcu; Baris, Emre; Oygur, Tulin; Berktas, Mehmet
2014-01-01
Aim: To describe a protocol involving xylene-ethanol deparaffinization on slides followed by a kit-based extraction that allows for the extraction of high quality DNA from FFPE tissues. Methods: DNA was extracted from the FFPE tissues of 16 randomly selected blocks. Methods involving deparaffinization on slides or in tubes, enzyme digestion overnight or for 72 hours, and isolation using the phenol chloroform method or a silica-based commercial kit were compared in terms of yield, concentration and amplifiability. Results: The highest yield of DNA was produced from the samples that were deparaffinized on slides, digested for 72 hours and isolated with a commercial kit. Samples isolated with the phenol-chloroform method produced DNA of lower purity than the samples that were purified with the kit. The samples isolated with the commercial kit resulted in better PCR amplification. Conclusion: Silica-based commercial kits and deparaffinization on slides should be considered for DNA extraction from FFPE. PMID:24688314
de la Torre, Xavier; Colamonici, Cristiana; Curcio, Davide; Molaioni, Francesco; Pizzardi, Marta; Botrè, Francesco
2011-04-01
Nandrolone and/or its precursors are included in the World Anti-Doping Agency (WADA) list of prohibited substances and methods, and as such their use is banned in sport. 19-Norandrosterone (19-NA), the main metabolite of these compounds, can also be produced endogenously. The need to establish the origin of 19-NA in human urine samples obliges antidoping laboratories to use isotope ratio mass spectrometry coupled to gas chromatography (GC/C/IRMS). In this work a simple liquid chromatographic method without any additional derivatization step is proposed, which drastically simplifies the urine pretreatment procedure and leads to interference-free extracts that permit precise and accurate IRMS analysis. The purity of the extracts was verified in parallel by gas chromatography coupled to mass spectrometry, with GC conditions identical to those of the GC/C/IRMS assay. The method has been validated according to ISO 17025 requirements (within-assay precision of ±0.3‰ and between-assay precision of ±0.4‰). The method was tested with samples obtained after the administration of synthetic 19-norandrostenediol and with samples collected during pregnancy, when 19-NA is known to be produced endogenously. Twelve drugs and synthetic standards able to produce 19-NA through metabolism showed quite homogeneous δ(13)C values of around -29‰ (-28.8±1.5; mean±standard deviation), whereas endogenously produced 19-NA showed values comparable to other endogenous steroids, in the range -21 to -24‰, as already reported. The efficacy of the method was tested on real samples from routine antidoping analyses. Copyright © 2011 Elsevier Inc. All rights reserved.
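The δ(13)C values reported above come from the standard delta notation used in IRMS. A small sketch of that formula, using the conventional VPDB reference ratio (the sample ratio below is constructed for illustration):

```python
R_VPDB = 0.0112372  # 13C/12C ratio of the VPDB reference standard

def delta13C_permil(r_sample, r_standard=R_VPDB):
    """Delta notation used in IRMS: per-mil deviation of the sample's
    13C/12C ratio from the reference standard."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A ratio corresponding to -29 permil, typical of the synthetic
# 19-NA precursors described above (illustrative, back-constructed).
r = R_VPDB * (1 - 29.0 / 1000.0)
delta = delta13C_permil(r)
```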
Measuring signal-to-noise ratio in partially parallel imaging MRI
Goerner, Frank L.; Clarke, Geoffrey D.
2011-01-01
Purpose: To assess five different methods of signal-to-noise ratio (SNR) measurement for partially parallel imaging (PPI) acquisitions. Methods: Measurements were performed on a spherical phantom and three volunteers using a multichannel head coil on a clinical 3T MRI system to produce echo planar, fast spin echo, gradient echo, and balanced steady state free precession image acquisitions. Two different PPI acquisitions, the generalized autocalibrating partially parallel acquisition algorithm and modified sensitivity encoding, with acceleration factors (R) of 2–4, were evaluated and compared to nonaccelerated acquisitions. Five standard SNR measurement techniques were investigated, and Bland–Altman analysis was used to determine agreement between the various SNR methods. The g-factor values estimated with each combination of SNR calculation and PPI reconstruction method were also assessed, considering the effects on SNR of reconstruction method, phase-encoding direction, and R-value. Results: Only two SNR measurement methods produced g-factors in agreement with theoretical expectations (g ≥ 1). Bland–Altman tests demonstrated that these two methods also gave the most similar results relative to the other three measurements. Of the three factors considered, only the R-value significantly influenced SNR changes. Conclusions: Non-signal methods used in SNR evaluation do not produce results consistent with expectations in the investigated PPI protocols. Two of the methods studied provided the most accurate and useful results. Of these two, the image subtraction method is recommended for SNR calculations when evaluating PPI protocols, due to its relative accuracy and ease of implementation. PMID:21978049
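The image subtraction (difference) method recommended above has a simple common form: signal from the mean of two averaged replicate images, noise from the standard deviation of their difference divided by √2. A minimal sketch on synthetic replicate images rather than real PPI data (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal, sigma = 100.0, 5.0            # illustrative ROI statistics
img1 = true_signal + rng.normal(0, sigma, (64, 64))
img2 = true_signal + rng.normal(0, sigma, (64, 64))

# Subtraction method: the difference image cancels the (identical)
# signal, leaving noise inflated by sqrt(2) relative to one image.
signal = 0.5 * (img1 + img2).mean()
noise = (img1 - img2).std(ddof=1) / np.sqrt(2.0)
snr = signal / noise
```

With PPI, the noise is spatially varying (the g-factor), which is why the ROI placement and reconstruction method matter in the comparison above.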
Kuan, Chee-Hao; Rukayadi, Yaya; Ahmad, Siti H.; Wan Mohamed Radzi, Che W. J.; Thung, Tze-Young; Premarathne, Jayasekara M. K. J. K.; Chang, Wei-San; Loo, Yuet-Ying; Tan, Chia-Wanq; Ramzi, Othman B.; Mohd Fadzil, Siti N.; Kuan, Chee-Sian; Yeo, Siok-Koon; Nishibuchi, Mitsuaki; Radu, Son
2017-01-01
Given the remarkable increase in public interest in organic food products, it is critical to evaluate the microbiological risk associated with consumption of fresh organic produce. Organic farming practices, including the use of animal manures, may increase the risk of microbiological contamination, as manure can act as a vehicle for transmission of foodborne pathogens. This study aimed to determine and compare the microbiological status of organic and conventional fresh produce at the retail level in Malaysia. A total of 152 organic and conventional vegetables were purchased at retail markets in Malaysia. Samples were analyzed for mesophilic aerobic bacteria, yeasts and molds, and total coliforms using conventional microbiological methods. A combined most probable number-multiplex polymerase chain reaction (MPN-mPCR) method was used to detect and quantify foodborne pathogens, including Escherichia coli O157:H7, Shiga toxin-producing E. coli (STEC), Listeria monocytogenes, Salmonella Typhimurium, and Salmonella Enteritidis. Results indicated that most types of organic and conventional vegetables had similar microbial counts (P > 0.05) of mesophilic aerobic bacteria, yeasts and molds, and total coliforms. E. coli O157:H7 and S. Typhimurium were not detected in any sample analyzed in this study. Among the 152 samples tested, only a conventional lettuce and an organic carrot tested positive for STEC and S. Enteritidis, respectively. L. monocytogenes was detected more frequently in both organic (9.1%) and conventional vegetables (2.7%) than E. coli O157:H7, S. Typhimurium, and S. Enteritidis. Overall, no trend indicated that either organically or conventionally grown vegetables posed greater microbiological risks. These findings indicate that the type of farming practice alone does not determine the microbiological profile of fresh produce.
Therefore, regardless of farming methods, all vegetables should be subjected to appropriate post-harvest handling practices from farm to fork to ensure the quality and safety of the fresh produce. PMID:28824567
A comparative study of diffraction of shallow-water waves by high-level IGN and GN equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, B.B.; Ertekin, R.C.; College of Shipbuilding Engineering, Harbin Engineering University, 150001 Harbin
2015-02-15
This work is on the nonlinear diffraction analysis of shallow-water waves, impinging on submerged obstacles, by two related theories, namely the classical Green–Naghdi (GN) equations and the Irrotational Green–Naghdi (IGN) equations, both sets of equations being at high levels and derived for incompressible and inviscid flows. Recently, the high-level Green–Naghdi equations have been applied to some wave transformation problems. The high-level IGN equations have also been used in the last decade to study certain wave propagation problems. However, past works on these theories used different numerical methods to solve these nonlinear and unsteady sets of differential equations and at different levels. Moreover, different physical problems have been solved in the past. Therefore, it has not been possible to understand the differences produced by these two sets of theories and their range of applicability so far. We are thus motivated to make a direct comparison of the results produced by these theories by use of the same numerical method to solve physically the same wave diffraction problems. We focus on comparing these two theories by using similar codes; only the equations used are different but other parts of the codes, such as the wave-maker, damping zone, discretization method, matrix solver, etc., are exactly the same. This way, we eliminate many potential sources of differences that could be produced by the solution of different equations. The physical problems include the presence of various submerged obstacles that can be used for example as breakwaters or to represent the continental shelf. A numerical wave tank is created by placing a wavemaker on one end and a wave absorbing beach on the other. The nonlinear and unsteady sets of differential equations are solved by the finite-difference method. The results are compared with different equations as well as with the available experimental data.
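The numerical wave tank described above (wavemaker, submerged obstacle, damping zone, finite differences) can be illustrated with a toy model. The sketch below solves the linearized 1-D shallow-water equations, not the GN or IGN equations, and every parameter is illustrative:

```python
import numpy as np

# Toy 1-D linearized shallow-water wave tank on a staggered grid:
# eta_t = -(h u)_x (continuity), u_t = -g eta_x (momentum).
g, nx, dx, dt, nsteps = 9.81, 400, 0.5, 0.02, 2000
x = (np.arange(nx) + 0.5) * dx
h = np.where((x > 100) & (x < 120), 0.4, 1.0)     # submerged bar (obstacle)
hf = np.empty(nx + 1)                              # depth at cell faces
hf[1:-1] = 0.5 * (h[:-1] + h[1:]); hf[0] = h[0]; hf[-1] = h[-1]
eta = np.zeros(nx)       # surface elevation at cell centres
u = np.zeros(nx + 1)     # velocity at cell faces (walls at both ends)
sponge = np.ones(nx)
sponge[-60:] = np.linspace(1.0, 0.95, 60)          # wave-absorbing "beach"

for n in range(nsteps):
    u[1:-1] -= g * dt / dx * np.diff(eta)          # momentum update
    eta -= dt / dx * np.diff(hf * u)               # continuity update
    eta[0] = 0.05 * np.sin(2 * np.pi * 0.25 * (n + 1) * dt)  # wavemaker
    eta *= sponge                                  # damping zone

max_eta = float(np.abs(eta).max())
```

The time step satisfies the CFL condition (c·dt/dx ≈ 0.13 with c = √(g·h)), so the explicit scheme stays stable; shoaling over the bar modestly amplifies the incident wave.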
ERIC Educational Resources Information Center
Russell, Richard K.; Lent, Robert W.
1982-01-01
Compared the efficacy of two behavioral anxiety reduction techniques against "subconscious reconditioning," an empirically derived placebo method. Examination of within-group changes showed systematic desensitization produced significant reductions in test and trait anxiety, and remaining treatments and the placebo demonstrated…
TEMPORAL VARIABILITY OF ENTEROCOCCI SPECIES IN STREAMS IMPACTED BY CATTLE FECAL CONTAMINATION
Temporal variability in the gastrointestinal flora of animals impacting water resources with fecal material can be one of the factors producing low source identification rates when applying microbial source tracking (MST) methods. Our objective is to identify and compare the temp...
Development of methods for assessing exposure and effects of produced waters from energy and mineral resource extraction operations on stream invertebrate species is important in order to elucidate environmentally relevant information. Centroptilum triangulifer is a parthenogene...
Methods for pretreating biomass
Balan, Venkatesh; Dale, Bruce E; Chundawat, Shishir; Sousa, Leonardo
2017-05-09
A method for pretreating biomass is provided, which includes, in a reactor, allowing gaseous ammonia to condense on the biomass and react with water present in the biomass to produce pretreated biomass, wherein reactivity of polysaccharides in the biomass is increased during subsequent biological conversion as compared to the reactivity of polysaccharides in biomass which has not been pretreated. A method for pretreating biomass with a liquid ammonia and recovering the liquid ammonia is also provided. Related systems which include a biochemical or biofuel production facility are also disclosed.
Integrated control/structure optimization by multilevel decomposition
NASA Technical Reports Server (NTRS)
Zeiler, Thomas A.; Gilbert, Michael G.
1990-01-01
A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The present paper fully decomposes the system into structural and control subsystem designs and produces an improved design. Theory, implementation, and results for the method are presented and compared with the benchmark example.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Qishi; Berry, M. L.; Grieme, M.
We propose a localization-based radiation source detection (RSD) algorithm using the Ratio of Squared Distances (ROSD) method. Compared with the triangulation-based method, the ROSD method has several advantages: i) source location estimates based on four detectors are more accurate, ii) ROSD provides closed-form source location estimates and thus eliminates the imaginary-roots issue, and iii) ROSD produces a unique source location estimate, as opposed to the two real roots (if any) in triangulation, and obviates the need to identify phantom roots during clustering.
NASA Astrophysics Data System (ADS)
Shulga, A. V.
2013-03-01
The ring tensile test method was optimized and successfully used to obtain precise data for specimens of cladding tubes of AISI type 316 austenitic stainless steel and a ferritic-martensitic stainless steel. Improvements in the tensile properties of stainless steel cladding tubes fabricated by powder metallurgy and hot isostatic pressing of melt-atomized powders (PM HIP), compared with cladding tubes produced by traditional technology, were found. Presently, PM HIP is also used in the fabrication of oxide dispersion strengthened (ODS) ferritic-martensitic steels. The high degree of homogeneity of the distribution of carbon and boron, as well as the high dispersity of the phase-structure elements in the specimens manufactured via PM HIP, were determined by direct autoradiography methods. These results correlate well with the increase in the tensile properties of the specimens produced by PM HIP technology.
Memon, Abdul Hakeem; Hamil, Mohammad Shahrul Ridzuan; Laghari, Madeeha; Rithwan, Fahim; Zhari, Salman; Saeed, Mohammed Ali Ahmed; Ismail, Zhari; Majid, Amin Malik Shah Abdul
2016-09-01
Syzygium campanulatum Korth is a plant rich in secondary metabolites (especially flavanones, chalcone, and triterpenoids). In the present study, three conventional solvent extraction (CSE) techniques and a supercritical fluid extraction (SFE) technique were performed to achieve maximum recovery of two flavanones, a chalcone, and two triterpenoids from S. campanulatum leaves. Furthermore, a Box-Behnken design was constructed for the SFE technique using pressure, temperature, and particle size as independent variables, and yields of crude extract and of individual and total secondary metabolites as the dependent variables. In the CSE procedure, twenty extracts were produced using ten different solvents and three techniques (maceration, soxhletion, and reflux). An extract enriched in the five secondary metabolites was obtained by n-hexane:methanol (1:1) soxhletion. Using food-grade ethanol as a modifier, the SFE methods produced a higher recovery (25.5%‒84.9%) of the selected secondary metabolites than the CSE techniques (0.92%‒66.00%).
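The Box-Behnken design mentioned above has a simple combinatorial construction: ±1 settings on each pair of factors with the remaining factors at their centre level, plus replicated centre runs. A small sketch (the three-factor case matches a pressure/temperature/particle-size design in coded units; run counts follow the textbook construction):

```python
from itertools import combinations

def box_behnken(k, centre_points=3):
    """Box-Behnken design in coded units (-1, 0, +1) for k factors:
    all +/-1 combinations on every pair of factors, other factors at 0,
    plus replicated centre runs."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * k
                row[i], row[j] = a, b
                runs.append(row)
    runs += [[0] * k for _ in range(centre_points)]
    return runs

# 3 factors -> 3 pairs x 4 corner settings + 3 centre runs = 15 runs
design = box_behnken(3)
```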
Nicol, Scott; Thompson, Shirley
2007-06-01
Today, over-consumption, pollution and resource depletion threaten sustainability. Waste management policies frequently fail to reduce consumption, prevent pollution, conserve resources and foster sustainable products. However, waste policies are changing to focus on lifecycle impacts of products from the cradle to the grave by extending the responsibilities of stakeholders to post-consumer management. Product stewardship and extended producer responsibility are two policies in use, with radically different results when compared for one consumer product, refrigerators. North America has enacted product stewardship policies that fail to require producers to take physical or financial responsibility for recycling or for environmentally sound disposal, so that releases of ozone-depleting substances routinely occur, which contribute to expanding the ozone hole. Conversely, Europe's Waste Electrical and Electronic Equipment (WEEE) Directive requires extended producer responsibility, whereby producers collect and manage their own post-consumer waste products. WEEE has resulted in high recycling rates of greater than 85%, reduced emissions of ozone-depleting substances and other toxins, greener production methods, such as replacing greenhouse gas refrigerants with environmentally friendly hydrocarbons, and more reuse of refrigerators in the EU in comparison with North America.
Methanogenic activity tests by Infrared Tunable Diode Laser Absorption Spectroscopy.
Martinez-Cruz, Karla; Sepulveda-Jauregui, Armando; Escobar-Orozco, Nayeli; Thalasso, Frederic
2012-10-01
Methanogenic activity (MA) tests are commonly carried out to estimate the capability of anaerobic biomass to treat effluents, to evaluate anaerobic activity in bioreactors or natural ecosystems, or to quantify inhibitory effects on methanogenic activity. These activity tests are usually based on measurement of the volume of biogas produced, by volumetric, pressure-increase or gas chromatography (GC) methods. In this study, we present an alternative method for non-invasive measurement of the methane produced during activity tests in closed vials, based on Infrared Tunable Diode Laser Absorption Spectroscopy (MA-TDLAS). This new method was tested during model acetoclastic and hydrogenotrophic methanogenic activity tests and was compared to a more traditional method based on gas chromatography. From the results obtained, the CH(4) detection limit of the method was estimated at 60 ppm and the minimum measurable methane production rate at 1.09×10(-3) mg l(-1) h(-1), which is below the CH(4) production rates usually reported in both anaerobic reactors and natural ecosystems. In addition to sensitivity, the method has several potential advantages over more traditional methods, among them short measurement times allowing a large number of MA test vials to be measured, non-invasive measurement avoiding leakage or external interference, and a cost similar to that of GC-based methods. It is concluded that MA-TDLAS is a promising method that could be of interest not only in the field of anaerobic digestion but also in the field of environmental ecology, where CH(4) production rates are usually very low. Copyright © 2012 Elsevier B.V. All rights reserved.
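Production rates of the kind quoted above are obtained by following the headspace CH4 concentration over time. A hedged sketch of the usual slope-plus-unit-conversion arithmetic (the data points are illustrative, and the ideal-gas molar volume at 25 °C is an assumption, not the study's calibration):

```python
import numpy as np

M_CH4 = 16.04    # g/mol
VM = 24.465      # L/mol, molar volume of an ideal gas at 25 C, 1 atm

# Illustrative headspace readings: CH4 mixing ratio (ppm) vs time (h).
hours = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
ppm = np.array([60.0, 150.0, 245.0, 330.0, 425.0])

# Least-squares slope in ppm per hour, then convert ppm -> mg CH4 per
# litre of headspace: (ppm/1e6) * (M/VM g/L) * 1000 mg/g.
slope_ppm_h = np.polyfit(hours, ppm, 1)[0]
rate_mg_l_h = slope_ppm_h * 1e-6 * (M_CH4 / VM) * 1000.0
```

Multiplying by the headspace volume and dividing by the biomass in the vial would give the specific activity normally reported.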
Comparison of GEOS-5 AGCM Planetary Boundary Layer Depths Computed with Various Definitions
NASA Technical Reports Server (NTRS)
Mcgrath-Spangler, E. L.; Molod, A.
2014-01-01
Accurate models of planetary boundary layer (PBL) processes are important for forecasting weather and climate. The present study compares seven methods of calculating PBL depth in the GEOS-5 atmospheric general circulation model (AGCM) over land. These methods depend on the eddy diffusion coefficients, bulk and local Richardson numbers, and the turbulent kinetic energy. The computed PBL depths are aggregated to the Köppen climate classes, and some limited comparisons are made using radiosonde profiles. Most methods produce similar midday PBL depths, although in the warm, moist climate classes, the bulk Richardson number method gives midday results that are lower than those given by the eddy diffusion coefficient methods. Additional analysis revealed that methods sensitive to turbulence driven by radiative cooling produce greater PBL depths, this effect being most significant during the evening transition. Nocturnal PBLs based on Richardson number are generally shallower than eddy diffusion coefficient based estimates. The bulk Richardson number estimate is recommended as the PBL height to inform the choice of the turbulent length scale, based on the similarity to other methods during the day, and the improved nighttime behavior.
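The bulk Richardson number definition recommended above can be sketched directly: the PBL top is taken as the first height where the bulk Richardson number, computed relative to the surface, exceeds a critical value. A hedged sketch on an idealized profile (the critical value 0.25 is one conventional choice, not necessarily GEOS-5's exact formulation):

```python
import numpy as np

def pbl_height_bulk_ri(z, theta_v, wind_speed, ri_crit=0.25):
    """First height where the bulk Richardson number (relative to the
    surface) reaches ri_crit, with linear interpolation between levels."""
    g = 9.81
    rib = g * (theta_v - theta_v[0]) * z / (theta_v[0] * wind_speed ** 2)
    rib[0] = 0.0
    above = np.nonzero(rib >= ri_crit)[0]
    if above.size == 0:
        return float(z[-1])          # criterion never met in the profile
    k = above[0]
    f = (ri_crit - rib[k - 1]) / (rib[k] - rib[k - 1])
    return float(z[k - 1] + f * (z[k] - z[k - 1]))

# Idealized sounding: neutral mixed layer to 1000 m, capped by an inversion.
z = np.arange(0.0, 2001.0, 100.0)
theta_v = 300.0 + np.where(z > 1000.0, 0.005 * (z - 1000.0), 0.0)
wind = np.full_like(z, 5.0)
h = pbl_height_bulk_ri(z, theta_v, wind)
```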
Coggins, Christopher R E; Merski, Jerome A; Oldham, Michael J
2013-01-01
Recent technological advances allow ventilation holes in (or adjacent to) cigarette filters to be produced using lasers instead of using the mechanical procedures of earlier techniques. Analytical chemistry can be used to compare the composition of mainstream smoke from experimental cigarettes having filters with mechanically produced ventilation holes to that of cigarettes with ventilation holes that were produced using laser technology. Established procedures were used to analyze the smoke composition of 38 constituents of mainstream smoke generated using standard conditions. There were no differences between the smoke composition of cigarettes with filter ventilation holes that were produced mechanically or through use of laser technology. The two methods for producing ventilation holes in cigarette filters are equivalent in terms of resulting mainstream smoke chemistry, at two quite different filter ventilation percentages.
The application of quadratic optimal cooperative control synthesis to a CH-47 helicopter
NASA Technical Reports Server (NTRS)
Townsend, Barbara K.
1987-01-01
A control-system design method, quadratic optimal cooperative control synthesis (CCS), is applied to the design of a stability and control augmentation system (SCAS). The CCS design method is different from other design methods in that it does not require detailed a priori design criteria, but instead relies on an explicit optimal pilot-model to create desired performance. The design method, which was developed previously for fixed-wing aircraft, is simplified and modified for application to a Boeing CH-47 helicopter. Two SCAS designs are developed using the CCS design methodology. The resulting CCS designs are then compared with designs obtained using classical/frequency-domain methods and linear quadratic regulator (LQR) theory in a piloted fixed-base simulation. Results indicate that the CCS method, with slight modifications, can be used to produce controller designs which compare favorably with the frequency-domain approach.
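The LQR baseline used for comparison above has a compact numerical form: iterate the discrete-time Riccati equation to convergence and form the state-feedback gain. A hedged sketch on toy system matrices (this is the generic LQR comparison method, not the CCS synthesis itself; the dynamics are illustrative, not a CH-47 model):

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy double-integrator dynamics
B = np.array([[0.005], [0.1]])
Q = np.eye(2)                             # state weighting
R = np.array([[0.1]])                     # control weighting

# Fixed-point iteration of the discrete-time algebraic Riccati equation:
# P <- Q + A'PA - A'PB (R + B'PB)^-1 B'PA
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # feedback gain
    P = Q + A.T @ P @ A - A.T @ P @ B @ K

eigs = np.linalg.eigvals(A - B @ K)       # closed-loop poles
```

For a stabilizable pair (A, B) this iteration converges and the closed-loop poles lie inside the unit circle, which is the stability guarantee LQR brings to such comparisons.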
Determination and discrimination of biodiesel fuels by gas chromatographic and chemometric methods
NASA Astrophysics Data System (ADS)
Milina, R.; Mustafa, Z.; Bojilov, D.; Dagnon, S.; Moskovkina, M.
2016-03-01
Pattern recognition methods (PRM) were applied to gas chromatographic (GC) data on the fatty acid methyl ester (FAME) composition of commercial and laboratory-synthesized biodiesel fuels from vegetable oils, including sunflower, rapeseed, corn and palm oils. Two GC quantitative methods for calculating individual FAMEs were compared: area % and internal standard. Both methods were applied to the analysis of two certified reference materials. The statistical processing of the obtained results demonstrates the accuracy and precision of the two methods and allows them to be compared. Either method can be used for further chemometric investigations of biodiesel fuels by their FAME profiles. PRM results on the FAME profiles of samples from different vegetable oils show successful recognition of biodiesels according to feedstock. The information obtained can be used for selecting feedstock to produce biodiesels with certain properties, for assessing their interchangeability, and for fuel spillage and remedial actions in the environment.
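The two quantification schemes compared above differ only in their arithmetic: area % normalizes each peak by the total area, while the internal-standard method scales each peak by the known mass of an added standard. A minimal sketch (peak areas, the internal-standard mass, and the unit response factor are illustrative):

```python
def area_percent(areas):
    """Area-% quantification: each FAME as a share of total peak area."""
    total = sum(areas.values())
    return {name: 100.0 * a / total for name, a in areas.items()}

def internal_standard_mg(areas, a_is, m_is_mg, response_factor=1.0):
    """Internal-standard quantification: mass_i = RF * (A_i / A_IS) * m_IS."""
    return {n: response_factor * a / a_is * m_is_mg for n, a in areas.items()}

peaks = {"C16:0": 120.0, "C18:1": 480.0, "C18:2": 200.0}  # illustrative areas
pct = area_percent(peaks)
mg = internal_standard_mg(peaks, a_is=100.0, m_is_mg=5.0)
```

Area % needs no added standard but assumes uniform response; the internal-standard method corrects for injection and recovery variation, which is what the certified-reference-material comparison probes.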
Eye-motion-corrected optical coherence tomography angiography using Lissajous scanning.
Chen, Yiwei; Hong, Young-Joo; Makita, Shuichi; Yasuno, Yoshiaki
2018-03-01
To correct eye motion artifacts in en face optical coherence tomography angiography (OCT-A) images, a Lissajous scanning method with subsequent software-based motion correction is proposed. The standard Lissajous scanning pattern is modified to be compatible with OCT-A and a corresponding motion correction algorithm is designed. The effectiveness of our method was demonstrated by comparing en face OCT-A images with and without motion correction. The method was further validated by comparing motion-corrected images with scanning laser ophthalmoscopy images, and the repeatability of the method was evaluated using a checkerboard image. A motion-corrected en face OCT-A image from a blinking case is presented to demonstrate the ability of the method to deal with eye blinking. Results show that the method can produce accurate motion-free en face OCT-A images of the posterior segment of the eye in vivo.
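A Lissajous trajectory is just two orthogonal sinusoids at different frequencies; because the pattern revisits each region of the field at different times, motion between visits can be estimated and corrected in software. A minimal sketch of the scan pattern itself (the frequency pair is illustrative, not the paper's modified pattern):

```python
import numpy as np

fx, fy, n = 11, 13, 5000        # illustrative frequency pair and sample count
t = np.linspace(0.0, 1.0, n, endpoint=False)   # one full pattern period
x = np.sin(2 * np.pi * fx * t)  # fast-axis beam position (normalized)
y = np.sin(2 * np.pi * fy * t)  # slow-axis beam position (normalized)
```

With integer, coprime frequencies the trajectory closes after one period and densely covers the square field, which is the property the motion-correction algorithm exploits.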
Forecasting electricity usage using univariate time series models
NASA Astrophysics Data System (ADS)
Hock-Eam, Lim; Chee-Yin, Yip
2014-12-01
Electricity is one of the important energy sources. A sufficient supply of electricity is vital to support a country's development and growth. Due to changing socio-economic characteristics, increasing competition and deregulation of the electricity supply industry, electricity demand forecasting is even more important than before. It is imperative to evaluate and compare the predictive performance of various forecasting methods, as this provides further insight into the weaknesses and strengths of each method. In the literature, the evidence on the best forecasting method for electricity demand is mixed. This paper compares the predictive performance of univariate time series models for forecasting electricity demand using monthly data on maximum electricity load in Malaysia from January 2003 to December 2013. Results reveal that the Box-Jenkins method produces the best out-of-sample predictive performance. On the other hand, the Holt-Winters exponential smoothing method is a good forecasting method for in-sample predictive performance.
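The Holt-Winters method evaluated above can be written in a few lines. A hedged sketch of the additive variant on synthetic monthly data (the smoothing constants and the series are illustrative; the paper's Malaysian load data are not reproduced here):

```python
import math

def holt_winters_additive(y, m, alpha=0.3, beta=0.05, gamma=0.2, horizon=12):
    """Minimal additive Holt-Winters: level + trend + m-period seasonality.
    Returns forecasts for the next `horizon` (<= m) periods."""
    level = sum(y[:m]) / m
    trend = (sum(y[m:2 * m]) - sum(y[:m])) / (m * m)
    season = [y[t] - level for t in range(m)]
    for t in range(m, len(y)):
        s = season[t - m]
        new_level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        season.append(gamma * (y[t] - new_level) + (1 - gamma) * s)
        level = new_level
    return [level + (h + 1) * trend + season[len(y) + h - m]
            for h in range(horizon)]

# Synthetic "monthly load": linear trend plus annual seasonality, 5 years.
series = [10.0 + 0.5 * t + 5.0 * math.sin(2 * math.pi * t / 12)
          for t in range(60)]
forecast = holt_winters_additive(series, m=12)
truth = [10.0 + 0.5 * t + 5.0 * math.sin(2 * math.pi * t / 12)
         for t in range(60, 72)]
mae = sum(abs(f - v) for f, v in zip(forecast, truth)) / 12
```

A Box-Jenkins (ARIMA/SARIMA) comparison would instead be fit by maximum likelihood, which is why out-of-sample and in-sample rankings can differ as the abstract reports.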
A new method for testing the scale-factor performance of fiber optical gyroscope
NASA Astrophysics Data System (ADS)
Zhao, Zhengxin; Yu, Haicheng; Li, Jing; Li, Chao; Shi, Haiyang; Zhang, Bingxin
2015-10-01
Fiber optic gyroscope (FOG) is a kind of solid-state optical gyroscope with good environmental adaptability, which has been widely used in national defense, aviation, aerospace and other civilian areas. In some applications, a FOG will experience environmental conditions such as vacuum, radiation and vibration, and scale-factor performance is an important accuracy indicator. However, the scale-factor performance of a FOG under these environmental conditions is difficult to test using conventional methods, as a turntable cannot work under such conditions. Based on the fact that the physical effect produced in a FOG by a sawtooth voltage signal under static conditions is consistent with that produced by a turntable in uniform rotation, a new method for scale-factor performance testing of a FOG without a turntable is proposed in this paper. In this method, the test system is constituted by an external operational amplifier circuit and a FOG in which the modulation signal and the Y-waveguide are disconnected. The external operational amplifier circuit is used to superimpose the externally generated sawtooth voltage signal and the modulation signal of the FOG, and to apply the superimposed signal to the Y-waveguide of the FOG. The test system can produce different equivalent angular velocities by changing the period of the sawtooth signal during the scale-factor performance test. In this paper, the system model of a FOG superimposed with an externally generated sawtooth signal is analyzed, and it is concluded that the equivalent input angular velocity produced by the sawtooth voltage signal has the same effect as an input angular velocity produced by the turntable.
The relationship between the equivalent angular velocity and parameters such as the sawtooth period is presented, together with a correction method for the equivalent angular velocity derived by analyzing the influence of each parameter error. A comparative experiment between the proposed method and turntable calibration showed that the scale-factor performance test results for the same FOG were consistent across the two methods. Because the input angular velocity is the equivalent effect of a sawtooth voltage signal rather than mechanical rotation, the method can be used to test FOG performance under ambient conditions in which a turntable cannot operate.
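Why a sawtooth can mimic rotation can be sketched from standard FOG relations (a back-of-envelope derivation of our own, not the paper's model; L is the coil length, D the coil diameter, n the fiber refractive index, λ the wavelength, c the speed of light, T the sawtooth period; the paper's correction method suggests additional parameter-dependent terms in practice):

```latex
% Sagnac phase produced by a true rotation rate \Omega:
\[ \phi_s = \frac{2\pi L D}{\lambda c}\,\Omega \]
% A phase ramp applied at one end of the coil reaches the two
% counter-propagating waves separated by the transit time \tau = nL/c,
% so a sawtooth of amplitude 2\pi and period T (slope 2\pi/T) yields the
% nonreciprocal phase
\[ \Delta\phi = \frac{\mathrm{d}\phi}{\mathrm{d}t}\,\tau
             = \frac{2\pi}{T}\cdot\frac{nL}{c} \]
% Equating \Delta\phi with \phi_s gives the equivalent rotation rate
\[ \Omega_{\mathrm{eq}} = \frac{n\,\lambda}{D\,T} \]
```

Varying the sawtooth period T thus sweeps the equivalent input rate without any mechanical rotation.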
Methodological comparison of alpine meadow evapotranspiration on the Tibetan Plateau, China.
Chang, Yaping; Wang, Jie; Qin, Dahe; Ding, Yongjian; Zhao, Qiudong; Liu, Fengjing; Zhang, Shiqiang
2017-01-01
Estimation of evapotranspiration (ET) in alpine meadow areas of the Tibetan Plateau (TP) is essential for water resource management, yet observational data are limited by the region's extreme climate and complex terrain. To address this, four representative methods, the Penman-Monteith (PM), Priestley-Taylor (PT), Hargreaves-Samani (HS), and Mahringer (MG) methods, were used to estimate ET and compared with ET measured by eddy covariance (EC) at five alpine meadow sites, each observed for one growing season between 2010 and 2014. The PT method performed best at all sites, with a coefficient of determination (R2) ranging from 0.76 to 0.94 and root mean square error (RMSE) from 0.41 to 0.62 mm d-1. The PM method outperformed the HS and MG methods, and the HS method produced relatively acceptable results (R2 = 0.46, RMSE = 0.89 mm d-1) compared to the MG method (R2 = 0.16, RMSE = 1.62 mm d-1), which underestimated ET at all alpine meadow sites. The PT method, being simpler and less data-demanding, is therefore recommended for estimating ET in alpine meadow areas of the Tibetan Plateau. The PM method produced reliable results when sufficient data were available, and the HS method is a useful complement when input variables are limited; the MG method consistently underestimated ET and is not suitable for alpine meadows. These results provide a basis for estimating ET on the Tibetan Plateau in future data collection, analysis, and research.
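The Priestley-Taylor estimate in the comparison above is straightforward to compute. A minimal sketch, using the common FAO-56 forms for the saturation-vapour-pressure slope Δ and a fixed psychrometric constant γ (constants and inputs are assumptions for illustration, not values from the paper):

```python
import math

def priestley_taylor_et(rn, g, t_air, alpha=1.26):
    """Daily ET (mm/day) from net radiation rn and soil heat flux g
    (both MJ m-2 day-1) and air temperature t_air (deg C)."""
    # FAO-56 saturation vapour pressure (kPa) and its slope (kPa/degC)
    es = 0.6108 * math.exp(17.27 * t_air / (t_air + 237.3))
    delta = 4098.0 * es / (t_air + 237.3) ** 2
    gamma = 0.066   # psychrometric constant, kPa/degC (near sea level)
    lam = 2.45      # latent heat of vaporization, MJ/kg
    return alpha * delta / (delta + gamma) * (rn - g) / lam
```

At high altitude γ is smaller than the sea-level value used here, which is one reason the full PM method needs more input data than PT.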
A new approach to characterize very-low-level radioactive waste produced at hadron accelerators.
Zaffora, Biagio; Magistris, Matteo; Chevalier, Jean-Pierre; Luccioni, Catherine; Saporta, Gilbert; Ulrici, Luisa
2017-04-01
Radioactive waste is produced as a consequence of preventive and corrective maintenance during the operation of high-energy particle accelerators, and of associated dismantling campaigns. Its radiological characterization must be performed to ensure appropriate disposal in disposal facilities. Radiological characterization includes establishing the list of produced radionuclides, called the "radionuclide inventory", and estimating their activity. The present paper describes the process adopted at CERN to characterize very-low-level radioactive waste, with a focus on activated metals. The characterization method consists of measuring and estimating the activity of produced radionuclides either by experimental methods or by statistical and numerical approaches. We adapted the so-called Scaling Factor (SF) and Correlation Factor (CF) techniques to the needs of hadron accelerators and applied them to very-low-level metallic waste produced at CERN. For each type of metal we calculated the radionuclide inventory and identified the radionuclides that contribute most to hazard factors. The proposed methodology is of general validity: it can be extended to other activated materials and used to characterize waste produced in particle accelerators and research centres where the activation mechanisms are comparable to those occurring at CERN. Copyright © 2017 Elsevier Ltd. All rights reserved.
Takikawa, Satoshi; Bauer, Thomas W; Kambic, Helen; Togawa, Daisuke
2003-04-01
In the United States, demineralized bone matrix (DBM) is considered a transplantable tissue and therefore is regulated primarily by the American Association of Tissue Banks. Even though DBM is not subjected to the same regulations relative to performance claims as medical devices are, one would expect different processing methods might yield DBM preparations of different osteoinductive potential. The purpose of this study was to use an established athymic rat model to compare the osteoinductive properties of two commercially available human DBMs prepared using different methods but having essentially identical product claims. Sixteen female athymic rats were used to test equivalent volumes of two lots each of Grafton Putty (Osteotech, Inc., Eatontown, NJ), Osteofil (Regeneration Technologies, Inc., Alachua, FL), and rat DBM. At 28 days after implantation, qualitative and semiquantitative microscopy showed no significant differences in bone formation between the two lots from each source, but rat DBM produced significantly more bone than Grafton, which produced significantly more bone than Osteofil. Our results suggest that methods of graft processing may represent a greater source of variability than do differences among individual donors. Whether these differences relate to methods of demineralization, carrier, dose of DBM per volume, or to some other factor remains to be determined. Copyright 2003 Wiley Periodicals, Inc.
Nonparametric Subgroup Identification by PRIM and CART: A Simulation and Application Study.
Ott, Armin; Hapfelmeier, Alexander
2017-01-01
Two nonparametric methods for the identification of subgroups with outstanding outcome values are described and compared to each other in a simulation study and an application to clinical data. The Patient Rule Induction Method (PRIM) searches for box-shaped areas in the given data which exceed a minimal size and average outcome. This is achieved via a combination of iterative peeling and pasting steps, where small fractions of the data are removed or added to the current box. As an alternative, Classification and Regression Trees (CART) prediction models perform sequential binary splits of the data to produce subsets which can be interpreted as subgroups of heterogeneous outcome. PRIM and CART were compared in a simulation study to investigate their strengths and weaknesses under various data settings, taking different performance measures into account. PRIM was shown to be superior in rather complex settings such as those with few observations, a smaller signal-to-noise ratio, and more than one subgroup. CART showed the best performance in simpler situations. A practical application of the two methods was illustrated using a clinical data set. For this application, both methods produced similar results but the higher amount of user involvement of PRIM became apparent. PRIM can be flexibly tuned by the user, whereas CART, although simpler to implement, is rather static.
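The peeling step at the heart of PRIM can be sketched as follows. This is a simplified illustration (greedy peeling only, with no pasting step or cross-validated trajectory; parameter names are ours):

```python
import numpy as np

def prim_peel(X, y, alpha=0.10, min_support=0.05):
    """Greedy PRIM peeling: repeatedly trim an alpha-fraction off one face
    of the box so as to maximize the mean outcome of the points kept."""
    box = np.column_stack([X.min(axis=0), X.max(axis=0)]).astype(float)
    inside = np.ones(len(y), dtype=bool)
    while inside.mean() > min_support:
        best = None
        for j in range(X.shape[1]):
            xj = X[inside, j]
            for side, cut in ((0, np.quantile(xj, alpha)),
                              (1, np.quantile(xj, 1 - alpha))):
                keep = X[:, j] >= cut if side == 0 else X[:, j] <= cut
                cand = inside & keep
                if 0 < cand.sum() < inside.sum() and \
                        (best is None or y[cand].mean() > best[0]):
                    best = (y[cand].mean(), j, side, cut, cand)
        if best is None or best[0] <= y[inside].mean():
            break  # no peel improves the box mean
        _, j, side, cut, cand = best
        box[j, side] = cut
        inside = cand
    return box, inside
```

A CART counterpart would instead grow binary splits on the full data; the box returned here is directly interpretable as a rectangular subgroup.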
Robust rotational-velocity-Verlet integration methods.
Rozmanov, Dmitri; Kusalik, Peter G
2010-05-01
Two rotational integration algorithms for rigid-body dynamics are proposed in velocity-Verlet formulation. The first method uses quaternion dynamics and was derived from the original rotational leap-frog method by Svanberg [Mol. Phys. 92, 1085 (1997)]; it produces time-consistent positions and momenta. The second method is also formulated in terms of quaternions but is not quaternion-specific and can easily be adapted to any other orientational representation. Both methods were tested extensively and compared to existing rotational integrators, and performed at least at the level of previously reported rotational algorithms. The choice of simulation parameters is also discussed.
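The abstract does not give the integrator's equations, but the quaternion kinematics underlying any such rotational update can be illustrated with the standard exponential-map step for a constant body-frame angular velocity (a generic sketch, not the authors' algorithm):

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions a = (w, x, y, z) and b."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def rotate_step(q, omega_body, dt):
    """Advance the orientation quaternion q over dt assuming a constant
    body-frame angular velocity, via the exact exponential-map update."""
    speed = np.linalg.norm(omega_body)
    theta = speed * dt            # total rotation angle over the step
    if theta < 1e-12:
        return q
    axis = omega_body / speed
    dq = np.concatenate([[np.cos(theta / 2)], np.sin(theta / 2) * axis])
    return quat_mul(q, dq)        # unit norm is preserved exactly
```

A velocity-Verlet scheme interleaves such orientation updates with half-step angular-momentum updates; the exponential-map step keeps the quaternion on the unit sphere without explicit renormalization.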
Parametrization of an Orbital-Based Linear-Scaling Quantum Force Field for Noncovalent Interactions
2015-01-01
We parametrize a linear-scaling quantum mechanical force field called mDC for the accurate reproduction of nonbonded interactions. We provide a new benchmark database of accurate ab initio interactions between sulfur-containing molecules. A variety of nonbonded-interaction databases are used to compare the new mDC method with other semiempirical, molecular mechanical, ab initio, and combined semiempirical quantum mechanical/molecular mechanical methods. The molecular mechanical force field reproduces the benchmark results significantly and consistently more accurately than the semiempirical models, and our mDC model produces errors half as large as the molecular mechanical force field. The comparisons are extended to the docking of drug candidates to the Cyclin-Dependent Kinase 2 protein receptor. We correlate the protein–ligand binding energies with their experimental inhibition constants and find that mDC produces the best correlation. Condensed-phase simulation of mDC water produces O–O radial distribution functions similar to TIP4P-EW. PMID:24803856
Integrated force method versus displacement method for finite element analysis
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Berke, L.; Gallagher, R. H.
1991-01-01
A novel formulation termed the integrated force method (IFM) has been developed in recent years for analyzing structures. In this method all the internal forces are taken as independent variables, and the system equilibrium equations (EEs) are integrated with the global compatibility conditions (CCs) to form the governing set of equations. In IFM the CCs are obtained from the strain formulation of St. Venant, and no choices of redundant load systems have to be made, in contrast to the standard force method (SFM). This property of IFM allows the generation of the governing equations to be automated straightforwardly, as it is in the popular stiffness method (SM). In this report IFM and SM are compared with respect to the structure of their respective equations, their conditioning, required solution methods, overall computational requirements, and convergence properties as these factors influence the accuracy of the results. Overall, this new version of the force method produces more accurate results than the stiffness method for comparable computational cost.
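The contrast between the two formulations can be illustrated on a toy structure: two springs in series between rigid walls with a load at the shared node. In the IFM spirit, the two internal forces are solved from one equilibrium equation plus one compatibility condition, and the result matches the stiffness method (an illustrative example of our own, not from the report):

```python
import numpy as np

k1, k2, P = 100.0, 50.0, 10.0  # spring stiffnesses (N/m), nodal load (N)

# Integrated force method: unknowns are the internal spring forces f1, f2.
# Row 1: nodal equilibrium   f1 - f2 = P
# Row 2: compatibility       f1/k1 + f2/k2 = 0  (net elongation of the
#        fixed-fixed chain must vanish, a St. Venant-style condition)
A = np.array([[1.0,      -1.0],
              [1.0 / k1,  1.0 / k2]])
f_ifm = np.linalg.solve(A, np.array([P, 0.0]))

# Stiffness (displacement) method for the same system: one free node.
u = P / (k1 + k2)                  # nodal displacement
f_sm = np.array([k1 * u, -k2 * u]) # back-computed internal forces

print(f_ifm, f_sm)
```

In the stiffness method the forces are recovered from displacements after the fact; in IFM they are the primary unknowns, which is what the report credits for the improved force accuracy.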
New tools for comparing microscopy images: quantitative analysis of cell types in Bacillus subtilis.
van Gestel, Jordi; Vlamakis, Hera; Kolter, Roberto
2015-02-15
Fluorescence microscopy is a method commonly used to examine individual differences between bacterial cells, yet many studies still lack a quantitative analysis of fluorescence microscopy data. Here we introduce some simple tools that microbiologists can use to analyze and compare their microscopy images. We show how image data can be converted to distribution data. These data can be subjected to a cluster analysis that makes it possible to objectively compare microscopy images. The distribution data can further be analyzed using distribution fitting. We illustrate our methods by scrutinizing two independently acquired data sets, each containing microscopy images of a doubly labeled Bacillus subtilis strain. For the first data set, we examined the expression of srfA and tapA, two genes which are expressed in surfactin-producing and matrix-producing cells, respectively. For the second data set, we examined the expression of eps and tapA; these genes are expressed in matrix-producing cells. We show that srfA is expressed by all cells in the population, a finding which contrasts with a previously reported bimodal distribution of srfA expression. In addition, we show that eps and tapA do not always have the same expression profiles, despite being expressed in the same cell type: both operons are expressed in cell chains, while single cells mainly express eps. These findings exemplify that the quantification and comparison of microscopy data can yield insights that otherwise would go unnoticed. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
A continuous scale-space method for the automated placement of spot heights on maps
NASA Astrophysics Data System (ADS)
Rocca, Luigi; Jenny, Bernhard; Puppo, Enrico
2017-12-01
Spot heights and soundings explicitly indicate terrain elevation on cartographic maps. Cartographers have developed design principles for the manual selection, placement, labeling, and generalization of spot height locations, but these processes are work-intensive and expensive, and finding an algorithmic criterion that matches cartographers' judgment in ranking the significance of terrain features is a difficult endeavor. This article proposes a method for the automated selection of spot height locations representing natural features such as peaks, saddles, and depressions. The lifespan of critical points in a continuous scale-space model is employed as the main measure of feature importance, and an algorithm and a data structure for its computation are described. We also introduce a method for comparing algorithmically computed spot height locations with manually produced reference compilations. The new method is compared with two known techniques from the literature; results show spot height locations that are closer to the reference spot heights produced manually by swisstopo cartographers. The introduced method can be applied to elevation models for the creation of topographic and bathymetric maps. It also ranks the importance of extracted spot height locations, which allows symbol and label sizes to vary with the significance of the represented features; the importance ranking could also be used to adjust spot height density in zoomable maps in real time.
Methods for producing complex films, and films produced thereby
Duty, Chad E.; Bennett, Charlee J. C.; Moon, Ji -Won; Phelps, Tommy J.; Blue, Craig A.; Dai, Quanqin; Hu, Michael Z.; Ivanov, Ilia N.; Jellison, Jr., Gerald E.; Love, Lonnie J.; Ott, Ronald D.; Parish, Chad M.; Walker, Steven
2015-11-24
A method for producing a film, the method comprising melting a layer of precursor particles on a substrate until at least a portion of the melted particles are planarized and merged to produce the film. The invention is also directed to a method for producing a photovoltaic film, the method comprising depositing particles having a photovoltaic or other property onto a substrate, and affixing the particles to the substrate, wherein the particles may or may not be subsequently melted. Also described herein are films produced by these methods, methods for producing a patterned film on a substrate, and methods for producing a multilayer structure.
NASA Astrophysics Data System (ADS)
Norhazariah, S.; Azura, A. R.; Azahari, B.; Sivakumar, R.
2017-12-01
Semi-refined carrageenan (SRC) is a natural polysaccharide that is considerably cheaper and easier to produce, and is used in food and other product applications; its application in latex, however, is limited. The aim of this work is to evaluate SRC produced from low industrial grade seaweed (LIGS) in latex foam applications. FTIR spectra showed that the SRC produced was kappa-type carrageenan with a lower sulfur content than native LIGS. NR latex foam was produced using the Dunlop method with some modifications, and the effect of SRC loading as a secondary gelling agent in NR latex foam was investigated. The density and morphology of NR latex foam with added SRC were analyzed. Foam density increased with SRC loading and peaked at 1.8 phr SRC. Optical micrographs showed that the addition of SRC induced larger cell sizes compared with the control NR latex foam. It can be concluded that SRC from LIGS can act as a secondary gelling agent in NR latex foam.
Weitz, Jochen; Deppe, Herbert; Stopp, Sebastian; Lueth, Tim; Mueller, Steffen; Hohlweg-Majert, Bettina
2011-12-01
The aim of this study was to evaluate the accuracy of surgical template-aided implant placement produced by rapid prototyping using a DICOM dataset from cone beam computed tomography (CBCT). On the basis of CBCT scans (Sirona® Galileos), a total of ten models were produced using a rapid-prototyping three-dimensional printer. Impressions were taken from the same patients to compare the fitting accuracy of the two methods. Templates were produced from the impression models, and their accuracy was compared with and analyzed against the rapid-prototyping models. Whereas templates made by the conventional procedure had excellent accuracy, the fitting accuracy of those produced from DICOM datasets was not sufficient: deviations ranged from 2.0 to 3.5 mm, and from 1.4 to 3.1 mm after modification of the models. These findings suggest that the low-dose Sirona Galileos® DICOM dataset shows deviations too high for accurate surgical transfer, for example in implant surgery.
Abdollahi, Mehdi; Rezaei, Masoud; Jafarpour, Ali; Undeland, Ingrid
2017-08-15
This study aimed to evaluate how blending pH-shift-produced protein isolates from gutted kilka (Clupeonella cultriventris) and silver carp (Hypophthalmichthys molitrix) affected the dynamic rheological and chemical properties of the proteins, as well as the microstructural and physico-mechanical properties of the resulting gels. The variables studied were protein solubilization pH (acid vs. alkaline) and blending step (before or after protein precipitation). Comparisons were made with conventionally washed minces from kilka and silver carp fillets, either alone or after blending. Rheological studies revealed that blending alkali-produced protein isolates before precipitation resulted in a rapid increase of G', reflecting the formation of intermolecular protein-protein interactions at a higher rate. Furthermore, blending alkali-produced protein isolates and washed minces of kilka and silver carp, respectively, improved the physico-mechanical properties of the resultant gels compared to pure kilka proteins. At the same blending ratio, however, the pH-shift method showed higher efficacy than conventional washing in the development of blend surimi. Copyright © 2017 Elsevier Ltd. All rights reserved.
Validation of the ANSR Listeria method for detection of Listeria spp. in environmental samples.
Wendorf, Michael; Feldpausch, Emily; Pinkava, Lisa; Luplow, Karen; Hosking, Edan; Norton, Paul; Biswas, Preetha; Mozola, Mark; Rice, Jennifer
2013-01-01
ANSR Listeria is a new diagnostic assay for detection of Listeria spp. in sponge or swab samples taken from a variety of environmental surfaces. The method is an isothermal nucleic acid amplification assay based on the nicking enzyme amplification reaction technology. Following single-step sample enrichment for 16-24 h, the assay is completed in 40 min, requiring only simple instrumentation. In inclusivity testing, 48 of 51 Listeria strains tested positive, with only the three strains of L. grayi producing negative results. Further investigation showed that L. grayi is reactive in the ANSR assay, but its ability to grow under the selective enrichment conditions used in the method is variable. In exclusivity testing, 32 species of non-Listeria, Gram-positive bacteria all produced negative ANSR assay results. Performance of the ANSR method was compared to that of the U.S. Department of Agriculture-Food Safety and Inspection Service reference culture procedure for detection of Listeria spp. in sponge or swab samples taken from inoculated stainless steel, plastic, ceramic tile, sealed concrete, and rubber surfaces. Data were analyzed using Chi-square and probability of detection models. Only one surface, stainless steel, showed a significant difference in performance between the methods, with the ANSR method producing more positive results. Results of internal trials were supported by findings from independent laboratory testing. The ANSR Listeria method can be used as an accurate, rapid, and simple alternative to standard culture methods for detection of Listeria spp. in environmental samples.
A supertree pipeline for summarizing phylogenetic and taxonomic information for millions of species
Redelings, Benjamin D.
2017-01-01
We present a new supertree method that enables rapid estimation of a summary tree on the scale of millions of leaves. This supertree method summarizes a collection of input phylogenies and an input taxonomy. We introduce formal goals and criteria that such a supertree should satisfy in order to transparently and justifiably represent the input trees. In addition to producing a supertree, our method computes annotations that describe which groupings in the input trees support or conflict with each group in the supertree. We compare our supertree construction method with a previously published one by assessing their performance on the input trees used to construct the Open Tree of Life version 4, and find that our method increases the number of displayed input splits from 35,518 to 39,639 and decreases the number of conflicting input splits from 2,760 to 1,357. The new method also improves on the previous one in that it produces no unsupported branches and avoids unnecessary polytomies. This pipeline, called "propinquity", is currently used by the Open Tree of Life project to produce all versions of the project's "synthetic tree" starting at version 5. It relies heavily on "otcetera", a set of C++ tools that perform most of the steps of the pipeline. All components are free software and are available on GitHub. PMID:28265520
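The support/conflict annotation described above reduces, for rooted trees, to a standard clade-compatibility test. A minimal sketch (our own simplification, not the propinquity implementation):

```python
def conflicts(a, b):
    """Two clades (taxon sets) conflict iff they overlap but neither
    contains the other (standard rooted-clade compatibility test)."""
    a, b = set(a), set(b)
    return bool(a & b) and not (a <= b) and not (b <= a)

def classify_splits(input_clades, supertree_clades):
    """Partition input clades into those displayed by the supertree and
    those conflicting with at least one supertree clade."""
    super_sets = [set(s) for s in supertree_clades]
    supported = [c for c in input_clades if set(c) in super_sets]
    conflicting = [c for c in input_clades
                   if any(conflicts(c, s) for s in super_sets)]
    return supported, conflicting
```

Counting `supported` and `conflicting` over all input trees gives exactly the kind of displayed/conflicting split totals the abstract reports.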
JPEG and wavelet compression of ophthalmic images
NASA Astrophysics Data System (ADS)
Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.
1999-05-01
This study was designed to determine the degree and method of digital image compression that produces ophthalmic images of sufficient quality for transmission and diagnosis. Photographs of 15 subjects, which included eyes with normal, subtle, and distinct pathologies, were digitized to produce 1.54 MB images and compressed to a range of sizes using JPEG and Wavelet methods. Image quality was assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, Wavelet compression produced less RMS error than JPEG compression. Blood vessel branching remained visible to a greater extent after Wavelet compression, which produced better images than JPEG compression for a given image size. Overall, images had to be compressed to below 2.5 percent of original size for JPEG and 1.7 percent for Wavelet before fine detail was lost or image quality became too poor for reliable diagnosis.
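The objective criterion (i) is simply the root-mean-square pixel error between the uncompressed and compressed images, e.g.:

```python
import numpy as np

def rms_error(original, compressed):
    """Root-mean-square pixel error between two same-sized images."""
    o = np.asarray(original, dtype=float)
    c = np.asarray(compressed, dtype=float)
    return float(np.sqrt(np.mean((o - c) ** 2)))
```

Plotting this value against compressed file size for each codec reproduces the kind of rate-distortion comparison the study reports.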
Electrospun ultra-fine cellulose acetate fibrous mats containing tannic acid-Fe+++ complexes
USDA-ARS?s Scientific Manuscript database
Cellulose acetate (CA) fibrous mats with improved mechanical and antioxidant properties were produced by a simple, scalable and cost-effective electrospinning method. Fibers loaded with small amounts of TA-Fe+++ complexes showed an increase in tensile strength of approximately 117% when compared to ...
NASA Astrophysics Data System (ADS)
Chavarrías, C.; Vaquero, J. J.; Sisniega, A.; Rodríguez-Ruano, A.; Soto-Montenegro, M. L.; García-Barreno, P.; Desco, M.
2008-09-01
We propose a retrospective respiratory gating algorithm to generate dynamic CT studies. To this end, we compared three different methods of extracting the respiratory signal from the projections of small-animal cone-beam computed tomography (CBCT) scanners. Given a set of frames acquired from a certain axial angle, subtraction of their average image from each individual frame produces a set of difference images. Pixels in these images have positive or negative values (according to the respiratory phase) in those areas where there is lung movement. The respiratory signals were extracted by analysing the shape of the histogram of these difference images: we calculated the first four central and non-central moments. However, only odd-order moments produced the desired breathing signal, as the even-order moments lacked information about the phase. Each of these curves was compared to a reference signal recorded by means of a pneumatic pillow. Given the similar correlation coefficients yielded by all of them, we selected the mean to implement our retrospective protocol. Respiratory phase bins were separated, reconstructed independently and included in a dynamic sequence, suitable for cine playback. We validated our method in five adult rat studies by comparing profiles drawn across the diaphragm dome, with and without retrospective respiratory gating. Results showed a sharper transition in the gated reconstruction, with an average slope improvement of 60.7%.
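The first (mean) moment of the difference images, the signal the authors ultimately selected, can be sketched on synthetic frames (a toy illustration, not the CBCT pipeline):

```python
import numpy as np

def respiratory_signal(frames):
    """Per-frame first moment (mean) of the difference images
    frame - average(frames); a stand-in for the histogram-moment
    respiratory signal extracted from same-angle projections."""
    frames = np.asarray(frames, dtype=float)
    diffs = frames - frames.mean(axis=0)   # difference images
    return diffs.reshape(len(frames), -1).mean(axis=1)
```

Because the mean is an odd-order (signed) statistic of the difference histogram, it preserves the respiratory phase; even-order moments such as the variance would not, which matches the observation in the abstract.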
The Independent Evolution Method Is Not a Viable Phylogenetic Comparative Method
2015-01-01
Phylogenetic comparative methods (PCMs) use data on species traits and phylogenetic relationships to shed light on evolutionary questions. Recently, Smaers and Vinicius suggested a new PCM, Independent Evolution (IE), which purportedly employs a novel model of evolution based on Felsenstein’s Adaptive Peak Model. The authors found that IE improves upon previous PCMs by producing more accurate estimates of ancestral states, as well as separate estimates of evolutionary rates for each branch of a phylogenetic tree. Here, we document substantial theoretical and computational issues with IE. When data are simulated under a simple Brownian motion model of evolution, IE produces severely biased estimates of ancestral states and changes along individual branches. We show that these branch-specific changes are essentially ancestor-descendant or “directional” contrasts, and draw parallels between IE and previous PCMs such as “minimum evolution”. Additionally, while comparisons of branch-specific changes between variables have been interpreted as reflecting the relative strength of selection on those traits, we demonstrate through simulations that regressing IE estimated branch-specific changes against one another gives a biased estimate of the scaling relationship between these variables, and provides no advantages or insights beyond established PCMs such as phylogenetically independent contrasts. In light of our findings, we discuss the results of previous papers that employed IE. We conclude that Independent Evolution is not a viable PCM, and should not be used in comparative analyses. PMID:26683838
Microparticles Produced by the Hydrogel Template Method for Sustained Drug Delivery
Lu, Ying; Sturek, Michael; Park, Kinam
2014-01-01
Polymeric microparticles have been used widely for sustained drug delivery. Current methods of microparticle production can be improved by making particles homogeneous in size and shape, increasing the drug loading, and controlling the initial burst release. In the current study, the hydrogel template method was used to produce homogeneous poly(lactide-co-glycolide) (PLGA) microparticles and to examine formulation and process-related parameters. Poly(vinyl alcohol) (PVA) was used to make hydrogel templates. The parameters examined include PVA molecular weight, type of PLGA (as characterized by lactide content and inherent viscosity), polymer concentration, drug concentration, and composition of the solvent system. The three model compounds studied were risperidone, methylprednisolone acetate, and paclitaxel. The ability of the hydrogel template method to produce microparticles with good conformity to the template was dependent on the molecular weight of PVA and the viscosity of the PLGA solution. Drug loading and encapsulation efficiency were found to be influenced by PLGA lactide content, polymer concentration, and composition of the solvent system. The drug loading and encapsulation efficiency were 28.7% and 82% for risperidone, 31.5% and 90% for methylprednisolone acetate, and 32.2% and 92% for paclitaxel, respectively. For all three drugs, release was sustained for weeks, and the in vitro release profile of risperidone was comparable to that of microparticles prepared using the conventional emulsion method. The hydrogel template method provides a new approach to manipulating microparticles. PMID:24333903
An algebraic method for constructing stable and consistent autoregressive filters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, University Park, PA 16802; Hong, Hoon, E-mail: hong@ncsu.edu
2015-02-15
In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams–Bashforth methods of order two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set, as opposed to many standard, regression-based parameterization methods; it takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of a stable and consistent AR model and simultaneously produces the parameters for the AR models. In our numerical examples with two chaotic time series with different characteristics of decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictive skill and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces improved short-time predictions relative to the linear regression-based AR models in forecasting a data set that characterizes the variability of the Madden–Julian Oscillation, a dominant tropical atmospheric wave pattern.
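The classical stability condition referred to above can be checked numerically: an AR(p) model is stable when all roots of its characteristic polynomial lie strictly inside the unit circle. The sketch below is a generic illustration of that condition, not the paper's algebraic construction; the function name is an assumption.

```python
import numpy as np

def is_stable_ar(coeffs):
    """Classical stability check for an AR(p) model
        x_t = a1*x_{t-1} + ... + ap*x_{t-p} + noise.
    Stable iff every root of z^p - a1*z^(p-1) - ... - ap
    lies strictly inside the unit circle."""
    a = np.asarray(coeffs, dtype=float)
    poly = np.concatenate(([1.0], -a))          # characteristic polynomial
    return bool(np.all(np.abs(np.roots(poly)) < 1.0))
```

For example, an AR(1) model with coefficient 0.5 is stable, while one with coefficient 1.1 is not.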
Primer and platform effects on 16S rRNA tag sequencing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tremblay, Julien; Singh, Kanwar; Fern, Alison
2015-08-04
Sequencing of 16S rRNA gene tags is a popular method for profiling and comparing microbial communities. The protocols and methods used, however, vary considerably with regard to amplification primers, sequencing primers, and sequencing technologies, as well as quality filtering and clustering. How results are affected by these choices, and whether data produced with different protocols can be meaningfully compared, is often unknown. Here we compare results obtained using three different amplification primer sets (targeting V4, V6–V8, and V7–V8) and two sequencing technologies (454 pyrosequencing and Illumina MiSeq) using DNA from a mock community containing a known number of species as well as complex environmental samples whose PCR-independent profiles were estimated using shotgun sequencing. We find that paired-end MiSeq reads produced higher quality data and enabled the use of more aggressive quality control parameters than 454, resulting in a higher retention rate of high quality reads for downstream data analysis. While primer choice considerably influences quantitative abundance estimations, sequencing platform has relatively minor effects when matched primers are used. In conclusion, beta diversity metrics are surprisingly robust to both primer and sequencing platform biases.
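A common beta-diversity metric in 16S studies of this kind is the Bray–Curtis dissimilarity between two abundance profiles. The sketch below is a generic illustration of that metric, not this study's analysis pipeline:

```python
import numpy as np

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance profiles:
    sum(|x_i - y_i|) / sum(x_i + y_i). 0 = identical composition,
    1 = no shared taxa."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.abs(x - y).sum() / (x + y).sum()
```

For instance, two samples with no taxa in common score 1.0, while identical profiles score 0.0.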
Pettey, W B P; Carter, M E; Toth, D J A; Samore, M H; Gundlapalli, A V
2017-07-01
During the recent Ebola crisis in West Africa, individual person-level details of disease onset, transmissions, and outcomes such as survival or death were reported in online news media. We set out to document disease transmission chains for Ebola, with the goal of generating a timely account that could be used for surveillance, mathematical modeling, and public health decision-making. By accessing public web pages only, such as locally produced newspapers and blogs, we created a transmission chain involving two Ebola clusters in West Africa that compared favorably with other published transmission chains, and derived parameters for a mathematical model of Ebola disease transmission that were not statistically different from those derived from published sources. We present a protocol for responsibly gleaning epidemiological facts, transmission model parameters, and useful details from affected communities using mostly indigenously produced sources. After comparing our transmission parameters to published parameters, we discuss additional benefits of our method, such as gaining practical information about the affected community, its infrastructure, politics, and culture. We also briefly compare our method to similar efforts that used mostly non-indigenous online sources to generate epidemiological information.
de Sisternes, Luis; Jonna, Gowtham; Moss, Jason; Marmor, Michael F.; Leng, Theodore; Rubin, Daniel L.
2017-01-01
This work introduces and evaluates an automated intra-retinal segmentation method for spectral-domain optical coherence tomography (SD-OCT) retinal images. While quantitative assessment of retinal features in SD-OCT data is important, manual segmentation is extremely time-consuming and subjective. We address challenges that have hindered prior automated methods, including poor performance with diseased retinas relative to healthy retinas, and data smoothing that obscures image features such as small retinal drusen. Our novel segmentation approach is based on the iterative adaptation of a weighted median process, wherein a three-dimensional weighting function is defined according to image intensity and gradient properties, and a set of smoothness constraints and pre-defined rules are considered. We compared the segmentation results for 9 segmented outlines associated with intra-retinal boundaries to those drawn by hand by two retinal specialists and to those produced by an independent state-of-the-art automated software tool in a set of 42 clinical images (from 14 patients). These images were obtained with a Zeiss Cirrus SD-OCT system, including healthy, early or intermediate AMD, and advanced AMD eyes. As a qualitative evaluation of accuracy, a highly experienced third independent reader blindly rated the quality of the outlines produced by each method. The accuracy and image detail of our method was superior in healthy and early or intermediate AMD eyes (98.15% and 97.78% of results not needing substantial editing) to the automated method we compared against. While the performance was not as good in advanced AMD (68.89%), it was still better than the manual outlines or the comparison method (which failed in such cases).
We also tested our method's performance on images acquired with a different SD-OCT manufacturer, collected from a large publicly available data set (114 healthy and 255 AMD eyes), and compared the data quantitatively to reference standard markings of the internal limiting membrane and inner boundary of the retinal pigment epithelium, producing a mean unsigned positioning error of 6.04 ± 7.83 µm (mean under 2 pixels). Our automated method should be applicable to data from different OCT manufacturers and offers detailed layer segmentations in healthy and AMD eyes. PMID:28663874
Negrete, Alejandro; Kotin, Robert M.
2007-01-01
The conventional methods for producing recombinant adeno-associated virus (rAAV) rely on transient transfection of adherent mammalian cells. To gain acceptance and achieve current good manufacturing process (cGMP) compliance, a clinical-grade rAAV production process should have the following qualities: simplicity, consistency, cost effectiveness, and scalability. Currently, the only viable method for producing rAAV at large scale, e.g., ≥10¹⁶ particles per production run, utilizes baculovirus expression vectors (BEVs) and insect cell suspension cultures. The previously described rAAV production in 40 L culture using a stirred tank bioreactor requires special conditions for implementation and operation not available in all laboratories. Alternatives to producing rAAV in stirred-tank bioreactors are single-use, disposable bioreactors, e.g., the Wave™. The disposable bags are purchased pre-sterilized, thereby eliminating the need for end-user sterilization and also avoiding cleaning steps between production runs, thus facilitating the production process. In this study, rAAV production in stirred tank and Wave™ bioreactors was compared. The working volumes were 10 L and 40 L for the stirred tank bioreactors and 5 L and 20 L for the Wave™ bioreactors. Comparable yields of rAAV, ~2 × 10¹³ particles per liter of cell culture, were obtained in all volumes and configurations. These results demonstrate that producing rAAV at large scale using BEVs is reproducible, scalable, and independent of the bioreactor configuration. Keywords: adeno-associated vectors; large-scale production; stirred tank bioreactor; wave bioreactor; gene therapy. PMID:17606302
A Comparison of Alternating Current and Direct Current Electrospray Ionization for Mass Spectrometry
Sarver, Scott A.; Gartner, Carlos A.; Chetwani, Nishant; Go, David B.; Dovichi, Norman J.
2014-01-01
A series of studies comparing the performance of alternating current electrospray ionization (AC ESI) mass spectrometry (MS) and direct current electrospray ionization (DC ESI) MS has been conducted, exploring the absolute signal intensity and signal-to-background ratios produced by both methods using caffeine and a model peptide as targets. Because the high-voltage AC signal was more susceptible to generating gas discharges, the operating voltage range of AC ESI was significantly smaller than that for DC ESI, such that the absolute signal intensities produced by DC ESI at peak voltages were 1–2 orders of magnitude greater than those for AC ESI. Using an electronegative nebulizing gas, sulfur hexafluoride (SF6), instead of nitrogen (N2) increased the operating range of AC ESI by ~50%, but did not appreciably improve signal intensities. While DC ESI generated far greater signal intensities, both ionization methods produced comparable signal-to-background ratios, with AC ESI spectra appearing qualitatively cleaner. A quantitative calibration analysis was performed for two analytes, caffeine and the peptide MRFA. AC ESI utilizing SF6 outperforms all other techniques for the detection of MRFA, producing chromatographic limits of detection nearly one order of magnitude lower than that of DC ESI utilizing N2, and one half that of DC ESI utilizing SF6. However, DC ESI outperforms AC ESI for the analysis of caffeine, indicating improvements in spectral quality may benefit certain compounds, or classes of compounds, on an individual basis. PMID:24464359
Cold dark matter. 1: The formation of dark halos
NASA Technical Reports Server (NTRS)
Gelb, James M.; Bertschinger, Edmund
1994-01-01
We use numerical simulations of critically closed cold dark matter (CDM) models to study the effects of numerical resolution on observable quantities. We study simulations with up to 256³ particles using the particle-mesh (PM) method and with up to 144³ particles using the adaptive particle-particle/particle-mesh (P³M) method. Comparisons of galaxy halo distributions are made among the various simulations. We also compare distributions with observations, and we explore methods for identifying halos, including a new algorithm that finds all particles within closed contours of the smoothed density field surrounding a peak. The simulated halos show more substructure than predicted by the Press-Schechter theory. We are able to rule out all Ω = 1 CDM models with linear amplitude σ₈ ≳ 0.5 because the simulations produce too many massive halos compared with the observations. The simulations also produce too many low-mass halos. The distribution of halos characterized by their circular velocities for the P³M simulations is in reasonable agreement with the observations for 150 km/s ≤ V_circ ≤ 350 km/s.
Frømyr, Tomas-Roll; Bourgeaux-Goget, Marie; Hansen, Finn Knut
2015-05-01
A method has been developed to characterize the dispersion of multi-wall carbon nanotubes in water using a disc centrifuge for the detection of individual carbon nanotubes, residual aggregates, and contaminants. Carbon nanotubes produced by arc-discharge have been measured and compared with carbon nanotubes produced by chemical vapour deposition. Studies performed on pristine arc-discharge nanotubes indicate that bundling is rather strong and that high ultrasound intensity is required to achieve complete dispersion of carbon nanotube bundles. The logarithm of the mode of the particle size distribution of the arc-discharge carbon nanotubes was found to be a linear function of the logarithm of the total ultrasonic energy input in the dispersion process.
Consolidation of lunar regolith: Microwave versus direct solar heating
NASA Technical Reports Server (NTRS)
Kunitzer, J.; Strenski, D. G.; Yankee, S. J.; Pletka, B. J.
1991-01-01
The production of construction materials on the lunar surface will require an appropriate fabrication technique. Two processing methods considered suitable for producing dense, consolidated products such as bricks are direct solar heating and microwave heating. An analysis was performed to compare the two processes in terms of the amount of power and time required to fabricate bricks of various sizes. The regolith was considered to be a mare basalt with an overall density of 60% of theoretical. Densification was assumed to take place by vitrification, since this process requires moderate amounts of energy and time while still producing dense products. Microwave heating was shown to be significantly faster than solar furnace heating for rapid production of realistic-size bricks.
A novel nano-Ni/SiO2 catalyst for hydrogen production from steam reforming of ethanol.
Wu, Chunfei; Williams, Paul T
2010-08-01
Catalytic steam reforming of ethanol has been regarded as a promising way to produce hydrogen; however, catalyst deactivation is a key problem in the process. In this paper, a novel nano-Ni/SiO₂ catalyst was prepared by a simple sol-gel method and compared with catalysts prepared by an impregnation method for the ethanol steam reforming process. Good Ni dispersion and high BET surface areas (>700 m² g⁻¹) were obtained for the sol-gel catalysts, whereas a surface area of only 1 m² g⁻¹ was obtained for the impregnated Ni/SiO₂ catalyst. The results of catalytic steam reforming of ethanol showed that roughly twice as much hydrogen was produced with the sol-gel Ni/SiO₂ catalyst (around 0.2 g h⁻¹) as with the impregnated one (around 0.1 g h⁻¹). Analysis of the used catalysts showed that 10Ni/SiO₂-B and 20Ni/SiO₂-B exhibited the highest stability, while the other catalysts fragmented into small pieces during the reforming process, especially those prepared by impregnation. A novel catalyst has thus been produced that has been shown to be effective for hydrogen production from the steam reforming of ethanol.
Hageman, Philip L.; Seal, Robert R.; Diehl, Sharon F.; Piatak, Nadine M.; Lowers, Heather
2015-01-01
A comparison study of selected static leaching and acid–base accounting (ABA) methods using a mineralogically diverse set of 12 modern-style, metal mine waste samples was undertaken to understand the relative performance of the various tests. To complement this study, in-depth mineralogical studies were conducted to elucidate the relationships between sample mineralogy, weathering features, and leachate and ABA characteristics. In part one of the study, splits of the samples were leached using six commonly used leaching tests: paste pH, the U.S. Geological Survey (USGS) Field Leach Test (FLT) (both 5-min and 18-h agitation), the U.S. Environmental Protection Agency (USEPA) Method 1312 SPLP (both leachate pH 4.2 and leachate pH 5.0), and the USEPA Method 1311 TCLP (leachate pH 4.9). Leachate geochemical trends were compared to assess differences, if any, produced by the various leaching procedures. Results showed that the FLT (5-min agitation) was just as effective as the 18-h leaching tests in revealing the leachate geochemical characteristics of the samples. Leaching results also showed that the TCLP leaching test produces inconsistent results when compared to results produced from the other leaching tests. In part two of the study, the ABA was determined on splits of the samples using both well-established traditional static testing methods and a relatively quick, simplified net acid–base accounting (NABA) procedure. Results showed that the traditional methods, while time consuming, provide the most in-depth data on both the acid-generating and acid-neutralizing tendencies of the samples. However, the simplified NABA method provided a relatively fast, effective estimation of the net acid–base account of the samples.
Overall, this study showed that while most of the well-established methods are useful and effective, the simplified leaching test and the NABA acid–base accounting method give investigators fast, quantitative tools that provide reliable information about the leachability of metals and other constituents of concern, and the acid-generating potential of metal mining waste.
Residual gravimetric method to measure nebulizer output.
Vecellio None, Laurent; Grimbert, Daniel; Bordenave, Joelle; Benoit, Guy; Furet, Yves; Fauroux, Brigitte; Boissinot, Eric; De Monte, Michele; Lemarié, Etienne; Diot, Patrice
2004-01-01
The aim of this study was to assess a residual gravimetric method based on weighing dry filters to measure the aerosol output of nebulizers. This residual gravimetric method was compared to assay methods based on spectrophotometric measurement of terbutaline (Bricanyl, Astra Zeneca, France), high-performance liquid chromatography (HPLC) measurement of tobramycin (Tobi, Chiron, U.S.A.), and electrochemical measurement of NaF (as defined by the European standard). Two breath-enhanced jet nebulizers, one standard jet nebulizer, and one ultrasonic nebulizer were tested. Output for the residual gravimetric method was calculated by weighing the filters before and after aerosol collection and filter drying, corrected by the proportion of drug contained in the total solute mass. Output for the electrochemical, spectrophotometric, and HPLC methods was determined by assaying the drug extracted from the filter. The results demonstrated a strong correlation between the residual gravimetric method (x axis) and the assay methods (y axis) in terms of drug mass output (y = 1.00x − 0.02, r² = 0.99, n = 27). We conclude that a residual gravimetric method based on dry filters, when validated for a particular agent, is an accurate way of measuring aerosol output.
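The correction step described above, scaling the filter's mass gain by the drug's share of the total solute, is simple arithmetic. The sketch below illustrates it; the function and parameter names are hypothetical, not from the paper:

```python
def gravimetric_output(filter_dry_before_g, filter_dry_after_g, drug_fraction):
    """Residual gravimetric estimate of nebulizer drug output (grams):
    mass gained by the dried collection filter, scaled by the drug's
    proportion of the total solute mass in the nebulized solution."""
    solute_mass = filter_dry_after_g - filter_dry_before_g
    return solute_mass * drug_fraction
```

For example, a filter gaining 50 mg of dried solute from a solution in which the drug makes up 40% of solutes corresponds to 20 mg of drug output.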
NASA Astrophysics Data System (ADS)
Ling, Sheryn Wong Shue; Latip, Jalifah; Hassan, Nurul Izzaty; Hasbullah, Siti Aishah
2018-04-01
An efficient and green method of synthesizing a phthalide-fused indoline, 3-[(1,3,3-trimethylindolin-2-ylidene)methyl]isobenzofuran-1(3H)-one (3), has been developed by the coupling reaction of 1,3,3-trimethyl-2-methyleneindoline, 1, and phthalaldehydic acid, 2, under solvent-free domestic microwave irradiation. The compound was produced in excellent yield (98%) and in a shorter reaction time (5 min) compared to the conventional method. Compound 3 was fully characterized by analytical and spectral methods. A preliminary binding study of 3 towards different types of metal cations was carried out by "naked-eye" colorimetric detection and UV–vis spectrophotometry. Compound 3 exhibits good selectivity and sensitivity for Sn²⁺ compared to other metal cations.
Computer modeling the fatigue crack growth rate behavior of metals in corrosive environments
NASA Technical Reports Server (NTRS)
Richey, Edward, III; Wilson, Allen W.; Pope, Jonathan M.; Gangloff, Richard P.
1994-01-01
The objective of this task was to develop a method to digitize FCP (fatigue crack propagation) kinetics data, generally presented as extensive da/dN–ΔK pairs, to produce a file for subsequent linear superposition or curve-fitting analysis. The method that was developed is specific to the Numonics 2400 Digitablet and is comparable to commercially available software products such as Digimatic™. Experiments demonstrated that the errors introduced by the photocopying of literature data, and by digitization, are small compared to those inherent in laboratory methods to characterize FCP in benign and aggressive environments. The digitizing procedure was employed to obtain fifteen crack growth rate data sets for several aerospace alloys in aggressive environments.
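A typical curve fit applied to digitized da/dN–ΔK pairs is the Paris law, da/dN = C(ΔK)^m, which becomes a straight line in log-log space. The sketch below is a generic illustration of such a fit, not the report's superposition analysis; names are assumptions:

```python
import numpy as np

def fit_paris_law(delta_k, dadn):
    """Least-squares fit of the Paris law da/dN = C * (dK)^m to
    digitized crack-growth data, via a straight line in log-log space:
    log(da/dN) = log(C) + m*log(dK). Returns (C, m)."""
    m, log_c = np.polyfit(np.log(delta_k), np.log(dadn), 1)
    return np.exp(log_c), m
```

The fitted exponent m and coefficient C can then feed a linear superposition model of environment-assisted crack growth.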
Antioxidant and Antiradical Activity of Coffee
Yashin, Alexander; Yashin, Yakov; Wang, Jing Yuan; Nemzer, Boris
2013-01-01
This review summarizes published information concerning the determination of antioxidant activity (AA) in coffee samples by various methods (ORAC, FRAP, TRAP, TEAC, etc.) in vitro, along with limited data on the antiradical activity of coffee products in vitro and in vivo. The AA of Arabica and Robusta coffees roasted at different temperatures and by different roasting methods (microwave, convection, etc.) is compared, and data on the antiradical activity of coffee are provided. The antioxidant activity of coffee, tea, cocoa, and red wine is compared. At the end of this review, the total antioxidant content (TAC) of coffee samples from 21 coffee-producing countries, as measured by an amperometric method, is provided, and the TAC of green and roasted coffee beans is compared. PMID:26784461
Vegetative propagation of Cecropia obtusifolia (Cecropiaceae).
LaPierre, L M
2001-01-01
Cecropia is a relatively well-known and well-studied genus in the Neotropics. Methods for the successful propagation of C. obtusifolia Bertoloni, 1840 from cuttings and air layering are described, and the results of an experiment to test the effect of two auxins, naphthalene acetic acid (NAA) and indole butyric acid (IBA), on adventitious root production in cuttings are presented. In general, C. obtusifolia cuttings respond well to adventitious root production (58.3% of cuttings survived to root), but air layering was the better method (93% of cuttings survived to root). The concentration of auxins used resulted in an overall significantly lower quality of roots produced compared with cuttings without auxin treatment. Future experiments using Cecropia could benefit from the use of isogenic plants produced by vegetative propagation.
Using stepped anvils to make even insulation layers in laser-heated diamond-anvil cell samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, Zhixue; Gu, Tingting; Dobrosavljevic, Vasilije
2015-09-01
Here, we describe a method to make even insulation layers for high-pressure laser-heated diamond-anvil cell samples using stepped anvils. The method works for both single-sided and double-sided laser heating using solid or fluid insulation. The stepped anvils are used as matched pairs or paired with a flat-culet anvil to make gasket insulation layers and are not actually used at high pressures; thus, their longevity is ensured. We also compare the radial temperature gradients and Soret diffusion of iron between self-insulating samples and samples produced with stepped anvils, and find that less pronounced Soret diffusion occurs in samples with even insulation layers produced by stepped anvils.
Pelletier, David L
2005-05-01
The US Food and Drug Administration's (FDA's) 1992 policy statement was developed in the context of critical gaps in scientific knowledge concerning the compositional effects of genetic transformation and severe limitations in methods for safety testing. FDA acknowledged that pleiotropy and insertional mutagenesis may cause unintended changes, but it was unknown whether this happens to a greater extent in genetic engineering compared with traditional breeding. Moreover, the agency was not able to identify methods by which producers could screen for unintended allergens and toxicants. Despite these uncertainties, FDA granted genetically engineered foods the presumption of GRAS (Generally Recognized As Safe) and recommended that producers use voluntary consultations before marketing them.
Additively Manufactured Metals in Oxygen Systems Project
NASA Technical Reports Server (NTRS)
Tylka, Jonathan
2015-01-01
Metals produced by additive manufacturing (AM) methods, such as powder bed fusion technology, are now mature enough to be considered for qualification in human spaceflight oxygen systems. The mechanical properties of metals produced through AM processes are being systematically studied. However, it is unknown whether AM metals in oxygen applications may present an increased risk of flammability or ignition compared to wrought metals of the same metallurgical composition, due to increased porosity. Per NASA-STD-6001B, the selection of materials for oxygen system applications shall be based on flammability and combustion test data, followed by a flammability assessment. Without systematic flammability and ignition testing in oxygen, there is no credible method for NASA to accurately evaluate the risk of using AM metals in oxygen systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, Albert F., E-mail: wagner@anl.gov; Dawes, Richard; Continetti, Robert E.
The measured H(D)OCO survival fractions of the photoelectron-photofragment coincidence experiments by the Continetti group are qualitatively reproduced by tunneling calculations to H(D) + CO{sub 2} on several recent ab initio potential energy surfaces for the HOCO system. The tunneling calculations involve effective one-dimensional barriers based on steepest descent paths computed on each potential energy surface. The resulting tunneling probabilities are converted into H(D)OCO survival fractions using a model developed by the Continetti group in which every oscillation of the H(D)-OCO stretch provides an opportunity to tunnel. Four different potential energy surfaces are examined with the best qualitative agreement with experimentmore » occurring for the PIP-NN surface based on UCCSD(T)-F12a/AVTZ electronic structure calculations and also a partial surface constructed for this study based on CASPT2/AVDZ electronic structure calculations. These two surfaces differ in barrier height by 1.6 kcal/mol but when matched at the saddle point have an almost identical shape along their reaction paths. The PIP surface is a less accurate fit to a smaller ab initio data set than that used for PIP-NN and its computed survival fractions are somewhat inferior to PIP-NN. The LTSH potential energy surface is the oldest surface examined and is qualitatively incompatible with experiment. This surface also has a small discontinuity that is easily repaired. On each surface, four different approximate tunneling methods are compared but only the small curvature tunneling method and the improved semiclassical transition state method produce useful results on all four surfaces. The results of these two methods are generally comparable and in qualitative agreement with experiment on the PIP-NN and CASPT2 surfaces. The original semiclassical transition state theory method produces qualitatively incorrect tunneling probabilities on all surfaces except the PIP. 
The Eckart tunneling method uses the least amount of information about the reaction path and produces too high a tunneling probability on the PIP-NN surface, leading to survival fractions that peak at half their measured values.
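The survival-fraction model described above, in which every H(D)-OCO stretch oscillation provides one opportunity to tunnel through an effective one-dimensional barrier, can be sketched numerically. The code below is a minimal illustration rather than the authors' implementation: it uses a WKB estimate (a cruder approximation than the small-curvature or semiclassical transition-state methods compared in the paper) on a symmetric Eckart-type barrier, and all masses, units, and barrier parameters are hypothetical.

```python
import math

HBAR = 1.0  # reduced Planck constant in illustrative units

def eckart_barrier(x, v0=1.0, width=1.0):
    """Symmetric Eckart-type barrier V(x) = V0 / cosh^2(x / a)."""
    return v0 / math.cosh(x / width) ** 2

def wkb_tunneling_probability(energy, mass=1.0, v0=1.0, width=1.0, n=2000):
    """WKB estimate P(E) = exp(-(2/hbar) * integral of sqrt(2m(V - E)) dx)
    over the classically forbidden region between the turning points."""
    if energy >= v0:
        return 1.0  # above the barrier: treat as freely transmitted
    # Turning points where V(x) = E for the Eckart form.
    xt = width * math.acosh(math.sqrt(v0 / energy))
    dx = 2.0 * xt / n
    integral = 0.0
    for i in range(n):  # midpoint rule over the forbidden region
        x = -xt + (i + 0.5) * dx
        dv = eckart_barrier(x, v0, width) - energy
        if dv > 0.0:
            integral += math.sqrt(2.0 * mass * dv) * dx
    return math.exp(-2.0 * integral / HBAR)

def survival_fraction(p_tunnel, n_oscillations):
    """Fraction of H(D)OCO remaining after n stretch oscillations, each
    treated as an independent tunneling attempt."""
    return (1.0 - p_tunnel) ** n_oscillations
```

Higher collision energies bring the turning points closer together, shrinking the WKB action integral and raising the tunneling probability, which in this model shortens the H(D)OCO survival time.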
Bondi, Mark W.; Edmonds, Emily C.; Jak, Amy J.; Clark, Lindsay R.; Delano-Wood, Lisa; McDonald, Carrie R.; Nation, Daniel A.; Libon, David J.; Au, Rhoda; Galasko, Douglas; Salmon, David P.
2014-01-01
We compared two methods of diagnosing mild cognitive impairment (MCI): conventional Petersen/Winblad criteria as operationalized by the Alzheimer’s Disease Neuroimaging Initiative (ADNI) and an actuarial neuropsychological method put forward by Jak and Bondi designed to balance sensitivity and reliability. 1,150 ADNI participants were diagnosed at baseline as cognitively normal (CN) or MCI via ADNI criteria (MCI: n = 846; CN: n = 304) or Jak/Bondi criteria (MCI: n = 401; CN: n = 749), and the two MCI samples were submitted to cluster and discriminant function analyses. Resulting cluster groups were then compared and further examined for APOE allelic frequencies, cerebrospinal fluid (CSF) Alzheimer’s disease (AD) biomarker levels, and clinical outcomes. Results revealed that both criteria produced a mildly impaired Amnestic subtype and a more severely impaired Dysexecutive/Mixed subtype. The neuropsychological Jak/Bondi criteria uniquely yielded a third Impaired Language subtype, whereas conventional Petersen/Winblad ADNI criteria produced a third subtype comprising nearly one-third of the sample that performed within normal limits across the cognitive measures, suggesting this method’s susceptibility to false positive diagnoses. MCI participants diagnosed via neuropsychological criteria yielded dissociable cognitive phenotypes, significant CSF AD biomarker associations, more stable diagnoses, and identified greater percentages of participants who progressed to dementia than conventional MCI diagnostic criteria. Importantly, the actuarial neuropsychological method did not produce a subtype that performed within normal limits on the cognitive testing, unlike the conventional diagnostic method. 
Findings support the need for refinement of MCI diagnoses to incorporate more comprehensive neuropsychological methods, with resulting gains in empirical characterization of specific cognitive phenotypes, biomarker associations, stability of diagnoses, and prediction of progression. Refinement of MCI diagnostic methods may also yield gains in biomarker and clinical trial study findings because of improvements in sample compositions of ‘true positive’ cases and removal of ‘false positive’ cases. PMID:24844687
Interactive visual exploration and refinement of cluster assignments.
Kern, Michael; Lex, Alexander; Gehlenborg, Nils; Johnson, Chris R
2017-09-12
With ever-increasing amounts of data produced in biology research, scientists are in need of efficient data analysis methods. Cluster analysis, combined with visualization of the results, is one such method that can be used to make sense of large data volumes. At the same time, cluster analysis is known to be imperfect and depends on the choice of algorithms, parameters, and distance measures. Most clustering algorithms do not properly account for ambiguity in the source data, as records are often assigned to discrete clusters even if an assignment is unclear. While there are metrics and visualization techniques that allow analysts to compare clusterings or to judge cluster quality, there is no comprehensive method that allows analysts to evaluate, compare, and refine cluster assignments based on the source data, derived scores, and contextual data. In this paper, we introduce a method that explicitly visualizes the quality of cluster assignments, allows comparisons of clustering results, and enables analysts to manually curate and refine cluster assignments. Our methods are applicable to matrix data clustered with partitional, hierarchical, and fuzzy clustering algorithms. Furthermore, we enable analysts to explore clustering results in the context of other data, for example, to observe whether a clustering of genomic data results in a meaningful differentiation in phenotypes. Our methods are integrated into Caleydo StratomeX, a popular, web-based, disease subtype analysis tool. We show in a usage scenario that our approach can reveal ambiguities in cluster assignments and produce improved clusterings that better differentiate genotypes and phenotypes.
Reservoir property grids improve with geostatistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vogt, J.
1993-09-01
Visualization software, reservoir simulators, and many other E and P software applications need reservoir property grids as input. Using geostatistics, as compared to other gridding methods, to produce these grids leads to the best output from the software programs. For the purpose stated herein, geostatistics refers simply to two types of gridding methods. Mathematically, these methods are based on minimizing or duplicating certain statistical properties of the input data. One geostatistical method, called kriging, is used when the highest possible point-by-point accuracy is desired. The other method, called conditional simulation, is used when one wants the statistics and texture of the resulting grid to be the same as for the input data. In the following discussion, each method is explained, compared to other gridding methods, and illustrated through example applications. Proper use of geostatistical data in flow simulations, use of geostatistical data for history matching, and situations where geostatistics has no significant advantage over other methods also will be covered.
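Kriging, as described above, estimates a point value as a weighted sum of nearby data values, with weights derived from a covariance (variogram) model so that the estimate honors the data exactly. The following is a minimal ordinary-kriging sketch in pure Python; the Gaussian covariance model and all parameter values are illustrative assumptions, not taken from the article.

```python
import math

def gaussian_cov(h, sill=1.0, rng=2.0):
    """Gaussian covariance model C(h) = sill * exp(-(h / range)^2)."""
    return sill * math.exp(-((h / rng) ** 2))

def solve_linear(a, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def ordinary_kriging(points, values, target):
    """Estimate the value at `target` as a weighted sum of data values;
    a Lagrange multiplier row forces the weights to sum to one."""
    n = len(points)
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    a = [[gaussian_cov(dist(points[i], points[j])) for j in range(n)] + [1.0]
         for i in range(n)]
    a.append([1.0] * n + [0.0])
    b = [gaussian_cov(dist(p, target)) for p in points] + [1.0]
    weights = solve_linear(a, b)[:n]
    return sum(w * v for w, v in zip(weights, values))
```

Because kriging is an exact interpolator, estimating at a data location returns the data value itself, which is the point-by-point accuracy property the article attributes to the method.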
NASA Astrophysics Data System (ADS)
Ariyarit, Atthaphon; Sugiura, Masahiko; Tanabe, Yasutada; Kanazaki, Masahiro
2018-06-01
A multi-fidelity optimization technique by an efficient global optimization process using a hybrid surrogate model is investigated for solving real-world design problems. The model constructs the local deviation using the kriging method and the global model using a radial basis function. The expected improvement is computed to decide additional samples that can improve the model. The approach was first investigated by solving mathematical test problems. The results were compared with optimization results from an ordinary kriging method and a co-kriging method, and the proposed method produced the best solution. The proposed method was also applied to aerodynamic design optimization of helicopter blades to obtain the maximum blade efficiency. The optimal shape obtained by the proposed method achieved performance almost equivalent to that obtained by single-fidelity optimization based on high-fidelity evaluations. Comparing all three methods, the proposed method required the lowest total number of high-fidelity evaluation runs to obtain a converged solution.
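The expected-improvement criterion mentioned above has a standard closed form for minimisation under a Gaussian predictive distribution. The sketch below implements that textbook formula, not the paper's specific hybrid-surrogate code.

```python
import math

def normal_pdf(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def normal_cdf(z):
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, f_best):
    """Expected improvement over the best observed value f_best for a
    minimisation problem, given the surrogate's predictive mean mu and
    standard deviation sigma at a candidate point."""
    if sigma <= 0.0:
        # No predictive uncertainty: improvement is deterministic.
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    return (f_best - mu) * normal_cdf(z) + sigma * normal_pdf(z)
```

The formula balances exploitation (low predicted mean) and exploration (high predictive uncertainty), which is why the candidate maximising it is chosen as the next expensive high-fidelity evaluation.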
Individual versus superensemble forecasts of seasonal influenza outbreaks in the United States.
Yamana, Teresa K; Kandula, Sasikiran; Shaman, Jeffrey
2017-11-01
Recent research has produced a number of methods for forecasting seasonal influenza outbreaks. However, differences among the predicted outcomes of competing forecast methods can limit their use in decision-making. Here, we present a method for reconciling these differences using Bayesian model averaging. We generated retrospective forecasts of peak timing, peak incidence, and total incidence for seasonal influenza outbreaks in 48 states and 95 cities using 21 distinct forecast methods, and combined these individual forecasts to create weighted-average superensemble forecasts. We compared the relative performance of these individual and superensemble forecast methods by geographic location, timing of forecast, and influenza season. We find that, overall, the superensemble forecasts are more accurate than any individual forecast method and less prone to producing a poor forecast. Furthermore, we find that these advantages increase when the superensemble weights are stratified according to the characteristics of the forecast or geographic location. These findings indicate that different competing influenza prediction systems can be combined into a single more accurate forecast product for operational delivery in real time.
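The superensemble idea above, combining individual forecasts into a weighted average with weights reflecting past performance, can be sketched as follows. This is an illustrative simplification of Bayesian model averaging; the inverse-error weighting rule shown is a hypothetical stand-in for the weights the authors estimate from historical forecast accuracy.

```python
def superensemble_forecast(forecasts, weights):
    """Weighted-average ('superensemble') point forecast; weights are
    normalised internally so only their ratios matter."""
    total = sum(weights)
    return sum(w * f for w, f in zip(weights, forecasts)) / total

def inverse_error_weights(past_errors, eps=1e-9):
    """Hypothetical weighting rule: reward methods with small historical
    error, mimicking performance-based model averaging."""
    return [1.0 / (e + eps) for e in past_errors]
```

With unequal weights, the combined forecast is pulled toward the historically more accurate method, which is the mechanism by which the superensemble avoids the poor forecasts an individual method can produce.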
Individual versus superensemble forecasts of seasonal influenza outbreaks in the United States
Kandula, Sasikiran; Shaman, Jeffrey
2017-01-01
Recent research has produced a number of methods for forecasting seasonal influenza outbreaks. However, differences among the predicted outcomes of competing forecast methods can limit their use in decision-making. Here, we present a method for reconciling these differences using Bayesian model averaging. We generated retrospective forecasts of peak timing, peak incidence, and total incidence for seasonal influenza outbreaks in 48 states and 95 cities using 21 distinct forecast methods, and combined these individual forecasts to create weighted-average superensemble forecasts. We compared the relative performance of these individual and superensemble forecast methods by geographic location, timing of forecast, and influenza season. We find that, overall, the superensemble forecasts are more accurate than any individual forecast method and less prone to producing a poor forecast. Furthermore, we find that these advantages increase when the superensemble weights are stratified according to the characteristics of the forecast or geographic location. These findings indicate that different competing influenza prediction systems can be combined into a single more accurate forecast product for operational delivery in real time. PMID:29107987
A life cycle approach to the management of household food waste - A Swedish full-scale case study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernstad, A., E-mail: anna.bernstad@chemeng.lth.se; Cour Jansen, J. la
2011-08-15
Research Highlights: > The comparison of three different methods for management of household food waste shows that anaerobic digestion provides greater environmental benefits in relation to global warming potential, acidification, and ozone depletion compared to incineration and composting of food waste. Use of produced biogas as car fuel provides larger environmental benefits compared to use of biogas for heat and power production. > The use of produced digestate from the anaerobic digestion as a substitute for chemical fertilizer on farmland provides avoidance of environmental burdens in the same ratio as the substitution of fossil fuels with produced biogas. > Sensitivity analyses show that results are highly sensitive to assumptions regarding the environmental burdens connected to the heat and energy assumed to be substituted by the waste treatment. - Abstract: Environmental impacts from incineration, decentralised composting, and centralised anaerobic digestion of solid organic household waste are compared using the EASEWASTE LCA tool. The comparison is based on a full-scale case study in southern Sweden, and the input data used, related to aspects such as source-separation behaviour and transport distances, are site-specific. Results show that biological treatment methods, both anaerobic and aerobic, result in net avoidance of GHG emissions but give a larger contribution to both nutrient enrichment and acidification when compared to incineration. Results are to a high degree dependent on energy substitution and emissions during biological processes. If produced biogas is assumed to substitute electricity based on Danish coal power, this is preferable to use of the biogas as car fuel. Use of biogas for Danish electricity substitution was also determined to be more beneficial than incineration of organic household waste.
This is a result mainly of the use of plastic bags in the incineration alternative (compared to paper bags in the anaerobic alternative) and the use of biofertiliser (digestate) from anaerobic treatment as a substitute for the chemical fertilisers used in the incineration alternative. Net impact related to GWP from the management chain varies from a contribution of 2.6 kg CO2-eq/household and year if incineration is utilised, to an avoidance of 5.6 kg CO2-eq/household and year if choosing anaerobic digestion and using produced biogas as car fuel. Impacts are often dependent on processes allocated far from the control of local decision-makers, indicating the importance of a holistic approach and extended collaboration between agents in the waste management chain.
NASA Astrophysics Data System (ADS)
Icli, Kerem Cagatay; Kocaoglu, Bahadir Can; Ozenbas, Macit
2018-01-01
Fluorine-doped tin dioxide (FTO) thin films were produced via conventional spray pyrolysis and ultrasonic spray pyrolysis (USP) methods using alcohol-based solutions. The prepared films were compared in terms of crystal structure, morphology, surface roughness, visible light transmittance, and electronic properties. Upon investigation of the grain structures and morphologies, the films prepared using the ultrasonic spray method provided relatively larger grains, and owing to this, the carrier mobilities of these films exhibited slightly higher values. Dye-sensitized solar cells and 10×10 cm modules were prepared using commercially available and USP-deposited FTO/glass substrates, and solar performances were compared. No remarkable efficiency difference was observed for either cells or modules; the module efficiency of the USP-deposited FTO glass substrates was 3.06%, compared to 2.85% for the commercial substrate under identical conditions. We demonstrated that USP deposition is a low-cost and versatile method of depositing commercial-quality FTO thin films on large substrates employed in large-area dye-sensitized solar modules or other thin film technologies.
Chuang, Ya-Hui; Zhang, Yingjie; Zhang, Wei; Boyd, Stephen A; Li, Hui
2015-07-24
Land application of biosolids and irrigation with reclaimed water in agricultural production could result in accumulation of pharmaceuticals in vegetable produce. To better assess the potential human health impact from long-term consumption of pharmaceutical-contaminated vegetables, it is important to accurately quantify the amount of pharmaceuticals accumulated in vegetables. In this study, a quick, easy, cheap, effective, rugged and safe (QuEChERS) method was developed and optimized to extract multiple classes of pharmaceuticals from vegetables, which were subsequently quantified by liquid chromatography coupled to tandem mass spectrometry. For the eleven target pharmaceuticals in celery and lettuce, the extraction recovery of the QuEChERS method ranged from 70.1 to 118.6% with relative standard deviation <20%, and the method detection limit was at the level of nanograms of pharmaceuticals per gram of vegetables. The results revealed that the performance of the QuEChERS method was comparable to, or better than, that of the accelerated solvent extraction (ASE) method for extraction of pharmaceuticals from plants. The two optimized extraction methods were applied to quantify the uptake of pharmaceuticals by celery and lettuce grown hydroponically. The results showed that all eleven target pharmaceuticals could be absorbed by the vegetables from water. Compared to the ASE method, the QuEChERS method offers the advantages of shorter sample preparation time, reduced costs, and smaller amounts of organic solvents used. The established QuEChERS method could be used to determine the accumulation of multiple classes of pharmaceutical residues in vegetables and other plants, which is needed to evaluate the quality and safety of agricultural produce consumed by humans. Copyright © 2015 Elsevier B.V. All rights reserved.
Davoren, Jon; Vanek, Daniel; Konjhodzić, Rijad; Crews, John; Huffine, Edwin; Parsons, Thomas J.
2007-01-01
Aim To quantitatively compare a silica extraction method with a commonly used phenol/chloroform extraction method for DNA analysis of specimens exhumed from mass graves. Methods DNA was extracted from twenty randomly chosen femur samples, using the International Commission on Missing Persons (ICMP) silica method, based on the Qiagen Blood Maxi Kit, and compared with the DNA extracted by the standard phenol/chloroform-based method. The efficacy of the extraction methods was compared by real-time polymerase chain reaction (PCR), to measure DNA quantity and the presence of inhibitors, and by amplification with the PowerPlex 16 (PP16) multiplex nuclear short tandem repeat (STR) kit. Results DNA quantification results showed that the silica-based method extracted on average 1.94 ng of DNA per gram of bone (range 0.25-9.58 ng/g), compared with only 0.68 ng/g extracted by the organic method (range 0.0016-4.4880 ng/g). Inhibition tests showed that there were on average significantly lower levels of PCR inhibitors in DNA isolated by the organic method. When amplified with PP16, all samples extracted by the silica-based method produced full 16-locus profiles, while only 75% of the DNA extracts obtained by the organic technique did so. Conclusions The silica-based extraction method showed better results in nuclear STR typing from degraded bone samples than a commonly used phenol/chloroform method. PMID:17696302
The effect of torrefaction on the chemistry of fast-pyrolysis bio-oil.
Meng, Jiajia; Park, Junyeong; Tilotta, David; Park, Sunkyu
2012-05-01
Fast pyrolysis was performed on torrefied loblolly pine and the collected bio-oils were analyzed to compare the effect of the torrefaction treatment on their quality. The results of the analyses show that bio-oils produced from torrefied wood have improved oxygen-to-carbon ratios compared to those from the original wood with the penalty of a decrease in bio-oil yield. The extent of this improvement depends on the torrefaction severity. Based on the GC/MS analysis of the pyrolysis bio-oils, bio-oils produced from torrefied biomass show different compositions compared to that from the original wood. Specifically, the former becomes more concentrated in pyrolytic lignin with less water content than the latter. It was considered that torrefaction could be a potential upgrading method to improve the quality of bio-oil, which might be a useful feedstock for phenolic-based chemicals. Copyright © 2012 Elsevier Ltd. All rights reserved.
The application of quadratic optimal cooperative control synthesis to a CH-47 helicopter
NASA Technical Reports Server (NTRS)
Townsend, Barbara K.
1986-01-01
A control-system design method, Quadratic Optimal Cooperative Control Synthesis (CCS), is applied to the design of a Stability and Control Augmentation System (SCAS). The CCS design method is different from other design methods in that it does not require detailed a priori design criteria, but instead relies on an explicit optimal pilot model to create desired performance. The design model, which was developed previously for fixed-wing aircraft, is simplified and modified for application to a Boeing Vertol CH-47 helicopter. Two SCAS designs are developed using the CCS design methodology. The resulting CCS designs are then compared with designs obtained using classical/frequency-domain methods and Linear Quadratic Regulator (LQR) theory in a piloted fixed-base simulation. Results indicate that the CCS method, with slight modifications, can be used to produce controller designs which compare favorably with the frequency-domain approach.
Laser notching ceramics for reliable fracture toughness testing
Barth, Holly D.; Elmer, John W.; Freeman, Dennis C.; ...
2015-09-19
A new method for notching ceramics was developed using a picosecond laser for fracture toughness testing of alumina samples. The test geometry incorporated a single-edge V-notch that was cut using picosecond laser micromachining. This method has been used in the past for cutting ceramics and is known to remove material with little to no thermal effect on the surrounding material matrix. This study showed that laser-assisted machining for fracture toughness testing of ceramics was reliable, quick, and cost effective. In order to assess the laser-notched single-edge-V-notch beam method, fracture toughness results were compared to results from other, more traditional methods, specifically the surface-crack-in-flexure and chevron-notch bend tests. Lastly, the results showed that picosecond laser notching produced precise notches in post-failure measurements, and that the measured fracture toughness results showed improved consistency compared to traditional fracture toughness methods.
Comparing ensemble learning methods based on decision tree classifiers for protein fold recognition.
Bardsiri, Mahshid Khatibi; Eftekhari, Mahdi
2014-01-01
In this paper, some methods for ensemble learning of protein fold recognition based on a decision tree (DT) are compared and contrasted over three datasets taken from the literature. Following previously reported studies, the features of the datasets are divided into several groups. Then, for each of these groups, three ensemble classifiers, namely random forest, rotation forest and AdaBoost.M1, are employed. Also, some fusion methods are introduced for combining the ensemble classifiers obtained in the previous step. After this step, three classifiers are produced based on the combination of classifiers of types random forest, rotation forest and AdaBoost.M1. Finally, the three different classifiers achieved are combined to make an overall classifier. Experimental results show that the overall classifier obtained by the genetic algorithm (GA) weighting fusion method is the best one in comparison to previously applied methods in terms of classification accuracy.
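The fusion step described above, combining several ensemble classifiers into one overall classifier with learned weights, reduces at prediction time to a weighted majority vote. The sketch below shows only that vote; the GA itself (which would search over the weight vector) is omitted, and all names and data are illustrative.

```python
from collections import defaultdict

def weighted_vote(predictions, weights):
    """Fuse the class labels from several classifiers by weighted majority
    vote; ties break toward the alphabetically first label."""
    scores = defaultdict(float)
    for label, w in zip(predictions, weights):
        scores[label] += w
    return max(sorted(scores), key=lambda lbl: scores[lbl])

def fused_accuracy(classifier_preds, weights, truth):
    """Accuracy of the fused vote; `classifier_preds` holds one list of
    predicted labels per classifier."""
    fused = [weighted_vote([p[i] for p in classifier_preds], weights)
             for i in range(len(truth))]
    return sum(f == t for f, t in zip(fused, truth)) / len(truth)
```

A GA-style fusion would treat `fused_accuracy` as the fitness function and evolve the weight vector to maximise it on held-out data.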
Optimisation of a double-centrifugation method for preparation of canine platelet-rich plasma.
Shin, Hyeok-Soo; Woo, Heung-Myong; Kang, Byung-Jae
2017-06-26
Platelet-rich plasma (PRP) has attracted interest in regenerative medicine because of its growth factors. However, there is considerable variability in the recovery and yield of platelets and the concentration of growth factors in PRP preparations. The aim of this study was to identify the optimal relative centrifugal force and spin time for the preparation of PRP from canine blood using a double-centrifugation tube method. Whole blood samples were collected in citrate blood collection tubes from 12 healthy beagles. For the first centrifugation step, 10 different run conditions were compared to determine which condition produced optimal recovery of platelets. Once the optimal condition was identified, platelet-containing plasma prepared using that condition was subjected to a second centrifugation to pellet platelets. For the second centrifugation, 12 different run conditions were compared to identify the centrifugal force and spin time that produced maximal pellet recovery and concentration increase. Growth factor levels were estimated by using ELISA to measure platelet-derived growth factor-BB (PDGF-BB) concentrations in optimised CaCl2-activated platelet fractions. The highest platelet recovery rate and yield were obtained by first centrifuging whole blood at 1000 g for 5 min and then centrifuging the recovered platelet-enriched plasma at 1500 g for 15 min. This protocol recovered 80% of platelets from whole blood, increased platelet concentration six-fold, and produced the highest concentration of PDGF-BB in activated fractions. We have described an optimised double-centrifugation tube method for the preparation of PRP from canine blood. This optimised method does not require particularly expensive equipment or high technical ability and can readily be carried out in a veterinary clinical setting.
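The recovery and concentration figures reported above follow from simple ratios of platelet counts and volumes. A minimal sketch with hypothetical example numbers (not the study's measurements):

```python
def platelet_recovery_pct(wb_conc, wb_volume, prp_conc, prp_volume):
    """Percentage of whole-blood platelets ending up in the PRP fraction
    (concentrations in platelets per unit volume, volumes in matching units)."""
    return 100.0 * (prp_conc * prp_volume) / (wb_conc * wb_volume)

def concentration_factor(wb_conc, prp_conc):
    """Fold increase of platelet concentration in PRP over whole blood."""
    return prp_conc / wb_conc
```

With illustrative values, concentrating a 10-unit blood volume into a 2-unit PRP fraction at four times the platelet concentration gives 80% recovery, matching the ratio reported in the abstract.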
Using Optimisation Techniques to Granulise Rough Set Partitions
NASA Astrophysics Data System (ADS)
Crossingham, Bodie; Marwala, Tshilidzi
2007-11-01
This paper presents an approach to optimise rough set partition sizes using various optimisation techniques. Three optimisation techniques are implemented to perform the granularisation process, namely genetic algorithm (GA), hill climbing (HC) and simulated annealing (SA). These optimisation methods maximise the classification accuracy of the rough sets. The proposed rough set partition method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. The three techniques are compared in terms of their computational time, accuracy and number of rules produced when applied to the Human Immunodeficiency Virus (HIV) data set. The optimised methods' results are compared to a well-known non-optimised discretisation method, equal-width-bin partitioning (EWB). The accuracies achieved after optimising the partitions using GA, HC and SA are 66.89%, 65.84% and 65.48% respectively, compared to the EWB accuracy of 59.86%. In addition to rough sets providing the plausibilities of the estimated HIV status, they also provide the linguistic rules describing how the demographic parameters drive the risk of HIV.
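Of the three optimisers compared, simulated annealing is the simplest to sketch. The toy example below anneals a single partition boundary to maximise discretisation accuracy on one attribute; the actual method optimises multiple rough-set partition sizes, and all data and parameters here are hypothetical.

```python
import math
import random

def discretise(x, cut):
    """Two-bin discretisation: 0 below the boundary, 1 at or above it."""
    return 0 if x < cut else 1

def accuracy(data, labels, cut):
    """Classification accuracy of the two-bin partition against labels."""
    return sum(discretise(x, cut) == y for x, y in zip(data, labels)) / len(data)

def anneal_cut(data, labels, t0=1.0, cooling=0.95, steps=200, seed=42):
    """Simulated annealing over one partition boundary, maximising
    discretisation accuracy (a toy stand-in for optimising rough-set
    partition sizes)."""
    rng = random.Random(seed)
    cut = sum(data) / len(data)  # start at the attribute mean
    best_cut, best_acc = cut, accuracy(data, labels, cut)
    t = t0
    for _ in range(steps):
        cand = cut + rng.gauss(0.0, 0.5)  # perturb the boundary
        delta = accuracy(data, labels, cand) - accuracy(data, labels, cut)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if delta >= 0 or rng.random() < math.exp(delta / t):
            cut = cand
            if accuracy(data, labels, cut) > best_acc:
                best_cut, best_acc = cut, accuracy(data, labels, cut)
        t *= cooling
    return best_cut, best_acc
```

GA and hill climbing would plug into the same accuracy-as-fitness loop, differing only in how candidate partitions are proposed and accepted.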
The Effect of Initial Knee Angle on Concentric-Only Squat Jump Performance
ERIC Educational Resources Information Center
Mitchell, Lachlan J.; Argus, Christos K.; Taylor, Kristie-Lee; Sheppard, Jeremy M.; Chapman, Dale W.
2017-01-01
Purpose: There is uncertainty as to which knee angle during a squat jump (SJ) produces maximal jump performance. Importantly, understanding this information will aid in determining appropriate ratios for assessment and monitoring of the explosive characteristics of athletes. Method: This study compared SJ performance across different knee…
A Comparison of Three Methods to Measure Percent Body Fat on Mentally Retarded Adults.
ERIC Educational Resources Information Center
Burkett, Lee N.; And Others
1994-01-01
Reports a study that compared three measures for determining percent body fat in mentally retarded adults (multiple skinfolds and circumference measurements, Infrared Interactance, and Bioelectrical Impedance). Results indicated the Bioelectrical Impedance Analyzer and Infrared Interactance Analyzer produced values for percent body fat that were…
A Contextualized, Differential Sequence Mining Method to Derive Students' Learning Behavior Patterns
ERIC Educational Resources Information Center
Kinnebrew, John S.; Loretz, Kirk M.; Biswas, Gautam
2013-01-01
Computer-based learning environments can produce a wealth of data on student learning interactions. This paper presents an exploratory data mining methodology for assessing and comparing students' learning behaviors from these interaction traces. The core algorithm employs a novel combination of sequence mining techniques to identify differentially…
USDA-ARS?s Scientific Manuscript database
A new method to refine existing dietary supplements for improving production of the yellow mealworm, Tenebrio molitor L. (Coleoptera: Tenebrionidae), was tested. Self selected ratios of 6 dietary ingredients by T. molitor larvae were used to produce a dietary supplement. This supplement was compared...
NASA Astrophysics Data System (ADS)
Masi, Maria Gabriella; Peretto, Lorenzo; Rovati, Luigi; Ansari, Rafat R.
2010-02-01
Light flickering at a rate of 4-20 cycles per second can produce unpleasant reactions such as nausea and vertigo. In this paper, the possibility of achieving an objective evaluation/prediction of the physiological effects induced by flicker is investigated using a new imaging method based on pupil size determination. This method is also compared with blood flow analysis in the choroid.
Hydrologic Implications of Dynamical and Statistical Approaches to Downscaling Climate Model Outputs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Andrew W; Leung, Lai R; Sridhar, V
Six approaches for downscaling climate model outputs for use in hydrologic simulation were evaluated, with particular emphasis on each method's ability to produce precipitation and other variables used to drive a macroscale hydrology model applied at much higher spatial resolution than the climate model. Comparisons were made on the basis of a twenty-year retrospective (1975–1995) climate simulation produced by the NCAR-DOE Parallel Climate Model (PCM), and the implications of the comparison for a future (2040–2060) PCM climate scenario were also explored. The six approaches were made up of three relatively simple statistical downscaling methods – linear interpolation (LI), spatial disaggregation (SD), and bias-correction and spatial disaggregation (BCSD) – each applied to both PCM output directly (at T42 spatial resolution) and after dynamical downscaling via a Regional Climate Model (RCM – at ½-degree spatial resolution), for downscaling the climate model outputs to the 1/8-degree spatial resolution of the hydrological model. For the retrospective climate simulation, results were compared to an observed gridded climatology of temperature and precipitation, and gridded hydrologic variables resulting from forcing the hydrologic model with observations. The most significant findings are that the BCSD method was successful in reproducing the main features of the observed hydrometeorology from the retrospective climate simulation, when applied to both PCM and RCM outputs. Linear interpolation produced better results using RCM output than PCM output, but both methods (PCM-LI and RCM-LI) lead to unacceptably biased hydrologic simulations. Spatial disaggregation of the PCM output produced results similar to those achieved with the RCM interpolated output; nonetheless, neither PCM nor RCM output was useful for hydrologic simulation purposes without a bias-correction step.
For the future climate scenario, only the BCSD method (using PCM or RCM) was able to produce hydrologically plausible results. With the BCSD method, the RCM-derived hydrology was more sensitive to climate change than the PCM-derived hydrology.
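The bias-correction step of BCSD is commonly implemented as quantile mapping: a model value is mapped to its quantile in the model climatology, and the observed climatology's value at that quantile is returned. A minimal empirical sketch follows, using nearest-rank quantiles as a simplifying assumption rather than the study's exact procedure.

```python
from bisect import bisect_left

def empirical_cdf(sample, x):
    """Non-exceedance probability of x within a climatological sample."""
    s = sorted(sample)
    return bisect_left(s, x) / len(s)

def empirical_quantile(sample, p):
    """Nearest-rank empirical quantile."""
    s = sorted(sample)
    idx = min(int(p * len(s)), len(s) - 1)
    return s[idx]

def bias_correct(model_value, model_clim, obs_clim):
    """Quantile mapping (the 'BC' of BCSD): locate the model value's
    quantile in the model climatology, then return the observed
    climatology's value at that same quantile."""
    p = empirical_cdf(model_clim, model_value)
    return empirical_quantile(obs_clim, p)
```

For a model climatology uniformly biased +2 units relative to observations, the mapping removes the offset while preserving each value's rank, which is why BCSD-corrected forcings remain hydrologically plausible.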
Bedenić, B; Boras, A
2001-01-01
The plasmid-mediated extended-spectrum beta-lactamases (ESBLs) confer resistance to oxyimino-cephalosporins, such as cefotaxime, ceftazidime, and ceftriaxone, and to monobactams such as aztreonam. It is a well-known fact that ESBL-producing bacteria exhibit a pronounced inoculum effect against broad-spectrum cephalosporins like ceftazidime, cefotaxime, ceftriaxone, and cefoperazone. The aim of this investigation was to determine the effect of inoculum size on the sensitivity and specificity of the double-disk synergy test (DDST), the test most frequently used for detection of ESBLs, in comparison with two other methods (determination of the ceftazidime MIC with and without clavulanate, and the inhibitor-potentiated disk-diffusion test) which are seldom used in clinical laboratories. The experiments were performed on a set of K. pneumoniae strains with previously characterized beta-lactamases, comprising: 10 SHV-5 beta-lactamase-producing K. pneumoniae, 20 SHV-2 and 1 SHV-2a beta-lactamase-producing K. pneumoniae, 7 SHV-12 beta-lactamase-producing K. pneumoniae, 39 putative SHV ESBL-producing K. pneumoniae, and 26 K. pneumoniae isolates highly susceptible to ceftazidime according to the Kirby-Bauer disk-diffusion method and thus considered to be ESBL negative. According to the results of this investigation, an increase in inoculum size affected the sensitivity of the DDST more significantly than that of the other two methods. The sensitivity of the DDST was lower when a higher inoculum size of 10(8) CFU/ml was applied, unlike the other two methods (MIC determination and the inhibitor-potentiated disk-diffusion test), which retained high sensitivity regardless of the density of the bacterial suspension. On the other hand, the DDST displayed higher specificity compared to the other two methods regardless of the inoculum size. This investigation found that the DDST is a reliable method, but it is important to standardize the inoculum size.
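The sensitivity and specificity compared across the three tests are the usual contingency-table ratios; a minimal sketch (the counts in the usage example are hypothetical, not the study's data):

```python
def sensitivity(true_pos, false_neg):
    """Fraction of genuine ESBL producers that the test flags positive."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of ESBL-negative isolates that the test correctly clears."""
    return true_neg / (true_neg + false_pos)
```

A test whose false negatives rise at high inoculum, as reported for the DDST, loses sensitivity while its specificity can remain unchanged, which is exactly the pattern the abstract describes.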
Bio-electricity Generation using Jatropha Oil Seed Cake.
Raheman, Hifjur; Padhee, Debasish
2016-01-01
The review of patents reveals that handling of Jatropha seed cake after extraction of oil is essential, as it contains toxic materials which create environmental pollution. The goal of this work is the complete utilisation of Jatropha seeds. For this purpose, Jatropha oil was used for producing biodiesel and the byproduct Jatropha seed cake was gasified to obtain producer gas. Both biodiesel and producer gas were used to generate electricity. To achieve this, a system comprising a gasifier, briquetting machine, diesel engine and generator was developed. Biodiesel was produced successfully using the method patented for biodiesel production, and briquettes of Jatropha seed cake were made using a vertical extruding machine. Producer gas was obtained by gasifying these briquettes in a downdraft gasifier. A diesel engine was then run in dual-fuel mode with biodiesel and producer gas instead of only diesel. Electricity was generated by coupling it to a generator. The cost of producing a kilowatt-hour of electricity with biodiesel and diesel in dual-fuel mode with producer gas was found to be $0.84 and $0.75, respectively, as compared to $0.69 and $0.50 for the same fuels in single-fuel mode, resulting in up to 48% saving of pilot fuel. Compared to single-fuel mode, there was a 25-32% reduction in system and brake thermal efficiency, along with significantly lower NOx and higher CO and CO2 emissions, when the bio-electricity generating system was operated in dual-fuel mode. Overall, the developed system could produce electricity successfully by completely utilising Jatropha seeds without leaving any seed cake to cause environmental pollution.
Incorporating spatial context into statistical classification of multidimensional image data
NASA Technical Reports Server (NTRS)
Bauer, M. E. (Principal Investigator); Tilton, J. C.; Swain, P. H.
1981-01-01
Compound decision theory is employed to develop a general statistical model for classifying image data using spatial context. The classification algorithm developed from this model exploits the tendency of certain ground-cover classes to occur more frequently in some spatial contexts than in others. A key input to this contextual classifier is a quantitative characterization of this tendency: the context function. Several methods for estimating the context function are explored, and two complementary methods are recommended. The contextual classifier is shown to produce substantial improvements in classification accuracy compared to the accuracy produced by a non-contextual uniform-priors maximum likelihood classifier when these methods of estimating the context function are used. An approximate algorithm, which cuts computational requirements by over one-half, is presented. The search for an optimal implementation is furthered by an exploration of the relative merits of using spectral classes or information classes for classification and/or context function estimation.
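The decision rule this abstract describes can be illustrated with a toy sketch. This is an assumption-laden simplification, not the paper's algorithm: it combines per-pixel class likelihoods with a context function evaluated against a single neighbor's label, whereas the paper treats spatial context far more generally.

```python
import numpy as np

def contextual_classify(likelihoods, neighbor_labels, context):
    """Pick the class maximizing likelihood(pixel | class) * context(class, neighbor).

    likelihoods: (n_pixels, n_classes) spectral class likelihoods
    neighbor_labels: (n_pixels,) label of one neighboring pixel
    context: (n_classes, n_classes) co-occurrence tendency g(c, c_neighbor)
    """
    scores = likelihoods * context[:, neighbor_labels].T
    return np.argmax(scores, axis=1)

# Pixel 0 is spectrally ambiguous (0.5/0.5), but its neighbor is class 1 and
# the context function says classes tend to cluster with themselves.
likelihoods = np.array([[0.5, 0.5],
                        [0.2, 0.8]])
neighbor_labels = np.array([1, 1])
context = np.array([[0.9, 0.1],
                    [0.1, 0.9]])
labels = contextual_classify(likelihoods, neighbor_labels, context)
# labels -> [1, 1]: context resolves the ambiguous pixel
```

Without the context factor, pixel 0 would be a coin flip; the context prior breaks the tie in favor of the spatially coherent label.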
Comparison Through Image Analysis Between Al Foams Produced Using Two Different Methods
NASA Astrophysics Data System (ADS)
Boschetto, A.; Campana, F.; Pilone, D.
2014-02-01
Several methods are available for making metal foams. They make it possible to tailor mechanical, thermal, acoustic, and electrical properties for specific applications by varying the relative density as well as the cell size and morphology. Foams have a very heterogeneous structure, so their properties may show a large scatter. In this paper, an aluminum foam produced by foaming of powder compacts and another prepared via the infiltration process were analyzed and compared. Image analysis was used as a tool to determine the size, morphology, and distribution of cells in both foams and to correlate cell morphology with the manufacturing process considered. The results highlighted that cell size and morphology are strictly dependent upon the manufacturing method. This paper shows how some standard 2D morphological indicators may be usefully adopted to characterize foams whose structure derives from the specific manufacturing process.
Physical and sensory quality of Java Arabica green coffee beans
NASA Astrophysics Data System (ADS)
Sunarharum, W. B.; Yuwono, S. S.; Pangestu, N. B. S. W.; Nadhiroh, H.
2018-03-01
Demand for high-quality coffee for consumption is continually increasing, not only in the consuming countries (importers) but also in the producing countries (exporters). Coffee quality can be affected by several factors from farm to cup, including the post-harvest processing methods. This research aimed to investigate the influence of different post-harvest processing methods on the physical and sensory quality of Java Arabica green coffee beans. The factors evaluated were three different post-harvest processing methods used to produce green coffee beans (natural/dry, semi-washed and fully-washed processing) under sun drying. Physical quality evaluation was based on the Indonesian National Standard (SNI 01-2907-2008), while sensory quality was evaluated by five expert judges. The results show that fewer defects were observed in wet-processed coffee than in dry-processed coffee. Mechanical drying was also shown to yield higher-quality green coffee beans and to minimise losses.
Antioxidant capacity and phenolic acids of virgin coconut oil.
Marina, A M; Man, Y B Che; Nazimah, S A H; Amin, I
2009-01-01
The antioxidant properties of virgin coconut oil produced through chilling and fermentation were investigated and compared with refined, bleached and deodorized coconut oil. Virgin coconut oil showed better antioxidant capacity than refined, bleached and deodorized coconut oil. The virgin coconut oil produced through the fermentation method had the strongest scavenging effect on 1,1-diphenyl-2-picrylhydrazyl and the highest antioxidant activity based on the beta-carotene-linoleate bleaching method. However, virgin coconut oil obtained through the chilling method had the highest reducing power. The major phenolic acids detected were ferulic acid and p-coumaric acid. Very high correlations were found between the total phenolic content and scavenging activity (r=0.91), and between the total phenolic content and reducing power (r=0.96). There was also a high correlation between total phenolic acids and beta-carotene bleaching activity. The study indicated that the contribution of antioxidant capacity in virgin coconut oil could be due to phenolic compounds.
Development of a test protocol for evaluating EVA glove performance
NASA Technical Reports Server (NTRS)
Hinman, Elaine M.
1992-01-01
Testing gloved hand performance involves work from several disciplines. Evaluations performed in the course of reenabling a disabled hand, designing a robotic end effector or master controller, or designing a hard suit have all yielded relevant information and, in most cases, produced performance test methods. These test methods, however, have typically been oriented toward their parent discipline. For space operations, a comparative test that could quantify pressure glove and end effector performance would be useful in dividing tasks between humans and robots. Such a test would have to rely heavily on sensored measurement, as opposed to questionnaires, to produce relevant data; at some point, however, human preference would have to be taken into account. This paper presents a methodology for evaluating gloved hand performance which attempts to respond to these issues. Glove testing of a prototype glove design using this method is described.
Uranium plasma emission at gas-core reaction conditions
NASA Technical Reports Server (NTRS)
Williams, M. D.; Jalufka, N. W.; Hohl, F.; Lee, J. H.
1976-01-01
The results of uranium plasma emission produced by two methods are reported. In the first method, a ruby laser was focused on the surface of a pure U-238 sample to create a plasma plume with a peak plasma density of about 10 to the 20th power/cu cm and a temperature of about 38,600 K. The absolute intensity of the emitted radiation, covering the range from 300 to 7000 A, was measured. In the second method, the uranium plasma was produced in a 20-kilovolt, 25-kilojoule plasma-focus device. The 2.5-MeV neutrons from the D-D reaction in the plasma focus are moderated by polyethylene and induce fissions in the U-235. Spectra of both uranium plasmas were obtained over the range from 30 to 9000 A. Because of the low fission yield, the energy input due to fissions is very small compared to the total energy in the plasma.
Method for nanomachining high aspect ratio structures
Yun, Wenbing; Spence, John; Padmore, Howard A.; MacDowell, Alastair A.; Howells, Malcolm R.
2004-11-09
A nanomachining method for producing high-aspect-ratio precise nanostructures. The method begins by irradiating a wafer with an energetic charged-particle beam. Next, a layer of patterning material is deposited on one side of the wafer, and a layer of etch stop or metal plating base is coated on the other side. A desired pattern is generated in the patterning material on the top surface of the irradiated wafer using conventional electron-beam lithography techniques. Lastly, the wafer is placed in an appropriate chemical solution that produces a directional etch of the wafer only in the areas from which the resist has been removed by the patterning process. The high mechanical strength of the wafer materials compared to the organic resists used in conventional lithography techniques allows the transfer of the precise patterns into structures with aspect ratios much larger than those previously achievable.
Wet formation and structural characterization of quasi-hexagonal monolayers.
Batys, Piotr; Weroński, Paweł; Nosek, Magdalena
2016-01-01
We have presented a simple and efficient method for producing dense particle monolayers with controlled surface coverage. The method is based on particle sedimentation, manipulation of the particle-substrate electrostatic interaction, and gentle mechanical vibration of the system. It allows quasi-hexagonal structures to be obtained under wet conditions. Using this method, we have produced a monolayer of 3 μm silica particles on a glassy carbon substrate. By optical microscopy, we have determined the coordinates of the particles and the surface coverage of the obtained structure, 0.82. We have characterized the monolayer structure by means of the pair-correlation function and the power spectrum. We have also compared the results with those for a 2D hexagonal monolayer and for a monolayer generated by random sequential adsorption at a coverage of 0.50. We have found the surface fractal dimension to be 2.5, independently of the monolayer surface coverage. Copyright © 2015 Elsevier Inc. All rights reserved.
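The pair-correlation analysis mentioned in this abstract is straightforward to prototype. The sketch below is a naive O(N^2) estimator with no boundary correction, written as an illustration rather than a reproduction of the authors' analysis; the bin width and the toy four-particle configuration are arbitrary choices.

```python
import numpy as np

def pair_correlation(points, dr, r_max, box_area):
    """Naive radial distribution function g(r) for 2D particle coordinates."""
    n = len(points)
    rho = n / box_area                                    # number density
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]                        # unique pairs only
    edges = np.arange(0.0, r_max + dr, dr)
    counts, _ = np.histogram(d, bins=edges)
    r = 0.5 * (edges[:-1] + edges[1:])                    # bin centers
    shell = 2.0 * np.pi * r * dr * rho                    # ideal-gas expectation
    g = 2.0 * counts / (n * shell)                        # factor 2: pairs counted once
    return r, g

# Four particles on the corners of a square of side 1.05: four pairs at
# distance 1.05 and two diagonal pairs at about 1.48.
s = 1.05
pts = np.array([[0.0, 0.0], [s, 0.0], [0.0, s], [s, s]])
r, g = pair_correlation(pts, dr=0.2, r_max=2.0, box_area=s * s)
```

For this configuration g(r) shows peaks in the bins containing the side length and the diagonal, and is zero at short range, which is the qualitative signature used to distinguish quasi-hexagonal from random structures.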
Multilaboratory evaluation of methods for detecting enteric viruses in soils.
Hurst, C J; Schaub, S A; Sobsey, M D; Farrah, S R; Gerba, C P; Rose, J B; Goyal, S M; Larkin, E P; Sullivan, R; Tierney, J T
1991-01-01
Two candidate methods for the recovery and detection of viruses in soil were subjected to round robin comparative testing by members of the American Society for Testing and Materials D19:24:04:04 Subcommittee Task Group. Selection of the methods, designated "Berg" and "Goyal," was based on results of an initial screening which indicated that both met basic criteria considered essential by the task group. Both methods utilized beef extract solutions to achieve desorption and recovery of viruses from representative soils: a fine sand soil, an organic muck soil, a sandy loam soil, and a clay loam soil. One of the two methods, Goyal, also used a secondary concentration of resulting soil eluants via low-pH organic flocculation to achieve a smaller final assay volume. Evaluation of the two methods was simultaneously performed in replicate by nine different laboratories. Each of the produced samples was divided into portions, and these were respectively subjected to quantitative viral plaque assay by both the individual, termed independent, laboratory which had done the soil processing and a single common reference laboratory, using a single cell line and passage level. The Berg method seemed to produce slightly higher virus recovery values; however, the differences in virus assay titers for samples produced by the two methods were not statistically significant (P less than or equal to 0.05) for any one of the four soils. Despite this lack of a method effect, there was a statistically significant laboratory effect exhibited by assay titers from the independent versus reference laboratories for two of the soils, sandy loam and clay loam. PMID:1849712
Xia, Meng-lei; Wang, Lan; Yang, Zhi-xia; Chen, Hong-zhang
2016-04-01
This work proposed a new method which applies image processing and a support vector machine (SVM) to the screening of mold strains. Taking Monascus as an example, morphological characteristics of Monascus colonies were quantified by image processing, and the association between these characteristics and pigment production capability was determined by SVM. On this basis, a highly automated screening strategy was achieved. The accuracy of the proposed strategy is 80.6%, comparable to that of the existing methods (81.1% for microplate and 85.4% for flask). Meanwhile, the screening of 500 colonies takes only 20-30 min, the highest throughput among all published results. By applying this automated method, 13 strains with high predicted production were obtained, and the best one produced 2.8-fold the pigment (226 U/mL) and 1.9-fold the lovastatin (51 mg/L) of the parent strain. The current study provides us with an effective and promising method for strain improvement.
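To make the screening pipeline concrete, here is a heavily simplified sketch. The feature names and numbers are hypothetical, and a plain perceptron stands in for the paper's SVM (a real SVM trainer would replace it in practice); it only illustrates how image-derived colony features feed a linear classifier that flags high producers.

```python
import numpy as np

def train_perceptron(X, y, max_epochs=100):
    """Train a linear classifier on colony features; labels y in {-1, +1}."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # fold bias into the weights
    w = np.zeros(Xb.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(Xb, y):
            if yi * (xi @ w) <= 0:              # misclassified: nudge weights
                w += yi * xi
                mistakes += 1
        if mistakes == 0:                       # converged on separable data
            break
    return w

# Hypothetical colony features: [normalized area, red-channel intensity].
# High red intensity is taken here as the proxy for high pigment production.
X = np.array([[1.0, 0.9], [1.2, 0.8], [0.9, 0.2], [1.1, 0.1]])
y = np.array([1, 1, -1, -1])
w = train_perceptron(X, y)
pred = np.sign(np.hstack([X, np.ones((4, 1))]) @ w)
```

The automation win described in the abstract comes from the front end: once colony images are reduced to a feature vector, classification of hundreds of colonies is effectively instantaneous.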
Tugwell, Peter; Pottie, Kevin; Welch, Vivian; Ueffing, Erin; Chambers, Andrea; Feightner, John
2011-01-01
Background: This article describes the evidence review and guideline development method developed for the Clinical Preventive Guidelines for Immigrants and Refugees in Canada by the Canadian Collaboration for Immigrant and Refugee Health Guideline Committee. Methods: The Appraisal of Guidelines for Research and Evaluation (AGREE) best-practice framework was combined with the recently developed Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach to produce evidence-based clinical guidelines for immigrants and refugees in Canada. Results: A systematic approach was designed to produce the evidence reviews and apply the GRADE approach, including building on evidence from previous systematic reviews, searching for and comparing evidence between general and specific immigrant populations, and applying the GRADE criteria for making recommendations. This method was used for priority health conditions that had been selected by practitioners caring for immigrants and refugees in Canada. Interpretation: This article outlines the 14-step method that was defined to standardize the guideline development process for each priority health condition. PMID:20573711
Amin, A S
2001-03-01
A fairly sensitive, simple and rapid spectrophotometric method for the determination of some beta-lactam antibiotics, namely ampicillin (Amp), amoxycillin (Amox), 6-aminopenicillanic acid (6APA), cloxacillin (Clox), dicloxacillin (Diclox) and flucloxacillin sodium (Fluclox), in bulk samples and in pharmaceutical dosage forms is described. The proposed method involves the use of pyrocatechol violet as a chromogenic reagent. These drugs produce a reddish-brown coloured ion pair with absorption maxima at 604, 641, 645, 604, 649 and 641 nm for Amp, Amox, 6APA, Clox, Diclox and Fluclox, respectively. The colours produced obey Beer's law and are suitable for the quantitative determination of the named compounds. The optimization of different experimental conditions is described. The molar ratio of the ion pairs was established and a proposal for the reaction pathway is given. The procedure described was applied successfully to determine the examined drugs in dosage forms, and the results obtained were comparable to those obtained with the official methods.
Aerosol Production from Charbroiled and Wet-Fried Meats
NASA Astrophysics Data System (ADS)
Niedziela, R. F.; Blanc, L. E.
2012-12-01
Previous work in our laboratory focused on the chemical and optical characterization of aerosols produced during the dry-frying of different meat samples. This method yielded a complex ensemble of particles composed of water and long-chain fatty acids, with the latter dominated by oleic, stearic, and palmitic acids. The present study examines how wet-frying and charbroiling cooking methods affect the physical and chemical properties of their derived aerosols. Samples of ground beef, salmon, chicken, and pork were subjected to both cooking methods in the laboratory, with their respective aerosols swept into a laminar flow cell where they were optically analyzed in the mid-infrared and collected through a gas chromatography probe for chemical characterization. This presentation will compare and contrast the nature of the aerosols generated by each cooking method, particularly those produced during charbroiling, which exposes the samples, and their drippings, to significantly higher temperatures. Characterization of such cooking-related aerosols is important because of the potential impact of these particles on air quality, particularly in urban areas.
Construction of the Second Quito Astrolabe Catalogue
NASA Astrophysics Data System (ADS)
Kolesnik, Y. B.
1994-03-01
A method for astrolabe catalogue construction is presented. It is based on classical concepts, but the model of conditional equations for the group reduction is modified, additional parameters being introduced in the stepwise regressions. The chain adjustment is neglected, and the advantages of this approach are discussed. The method has been applied to the data obtained with the astrolabe of the Quito Astronomical Observatory from 1964 to 1983. Various characteristics of the catalogue produced with this method are compared with those obtained by the rigorous classical method. Some improvement in both systematic and random errors is outlined.
Excited-State Effective Masses in Lattice QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
George Fleming, Saul Cohen, Huey-Wen Lin
2009-10-01
We apply black-box methods, i.e. where the performance of the method does not depend upon initial guesses, to extract excited-state energies from Euclidean-time hadron correlation functions. In particular, we extend the widely used effective-mass method to incorporate multiple correlation functions and produce effective mass estimates for multiple excited states. In general, these excited-state effective masses will be determined by finding the roots of some polynomial. We demonstrate the method using sample lattice data to determine excited-state energies of the nucleon and compare the results to other energy-level finding techniques.
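For background, the single-correlator effective mass that this abstract's method extends can be computed in a few lines. The sketch below is illustrative only (synthetic data, not lattice results): for a pure single-exponential correlator the effective mass is an exact plateau at the true mass, which is the baseline the multi-correlator, multi-state extension improves on.

```python
import numpy as np

def effective_mass(corr):
    """Standard effective mass m_eff(t) = log(C(t) / C(t+1))."""
    corr = np.asarray(corr, dtype=float)
    return np.log(corr[:-1] / corr[1:])

# Synthetic single-state correlator C(t) = A * exp(-m t).
m_true, A = 0.7, 3.0
t = np.arange(10)
corr = A * np.exp(-m_true * t)
meff = effective_mass(corr)
# meff is flat: every entry equals m_true = 0.7
```

With real lattice data the plateau appears only at large Euclidean time, once excited-state contamination has decayed; the paper's generalization extracts those excited-state energies as polynomial roots instead of discarding them.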
A new automated NaCl based robust method for routine production of gallium-68 labeled peptides
Schultz, Michael K.; Mueller, Dirk; Baum, Richard P.; Watkins, G. Leonard; Breeman, Wouter A. P.
2017-01-01
A new NaCl-based method for preparation of gallium-68 labeled radiopharmaceuticals has been adapted for use with an automated gallium-68 generator system. The method was evaluated on the basis of 56 preparations of [68Ga]DOTATOC and compared to a similar acetone-based approach. Advantages of the new NaCl approach include reduced preparation time (< 15 min) and removal of organic solvents. The method produces a high peptide-bound fraction (> 97%) and specific activity (> 40 MBq nmole−1 [68Ga]DOTATOC) and is well suited for clinical production of radiopharmaceuticals. PMID:23026223
Development of a method of alignment between various SOLAR MAXIMUM MISSION experiments
NASA Technical Reports Server (NTRS)
1977-01-01
Results of an engineering study of the methods of alignment between various experiments for the solar maximum mission are described. The configuration studied consists of the instruments, mounts and instrument support platform located within the experiment module. Hardware design, fabrication methods and alignment techniques were studied with regard to optimizing the coalignment between the experiments and the fine sun sensor. The proposed hardware design was reviewed with regard to loads, stress, thermal distortion, alignment error budgets, fabrication techniques, alignment techniques and producibility. Methods of achieving comparable alignment accuracies on previous projects were also reviewed.
Binet, Rachel; Deer, Deanne M; Uhlfelder, Samantha J
2014-06-01
Faster detection of contaminated foods can prevent adulterated foods from being consumed and minimize the risk of an outbreak of foodborne illness. A sensitive molecular detection method is especially important for Shigella because ingestion of as few as 10 of these bacterial pathogens can cause disease. The objectives of this study were to compare the ability of four DNA extraction methods to detect Shigella in six types of produce, post-enrichment, and to evaluate a new and rapid conventional multiplex assay that targets the Shigella ipaH, virB and mxiC virulence genes. This assay can detect fewer than two Shigella cells in pure culture, even when the pathogen is mixed with background microflora, and it can also differentiate natural Shigella strains from a control strain and eliminate false-positive results due to accidental laboratory contamination. The four DNA extraction methods (boiling, PrepMan Ultra [Applied Biosystems], InstaGene Matrix [Bio-Rad], DNeasy Tissue kit [Qiagen]) detected 1.6 × 10^3 Shigella CFU/ml post-enrichment, corresponding to ∼18 doublings from one cell in 25 g of produce pre-enrichment. Lower sensitivity was obtained, depending on produce type and extraction method. The InstaGene Matrix was the most consistent and sensitive, and the multiplex assay accurately detected Shigella in less than 90 min, outperforming, to the best of our knowledge, molecular assays currently in place for this pathogen. Published by Elsevier Ltd.
A temperature match based optimization method for daily load prediction considering DLC effect
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Z.
This paper presents a unique optimization method for short-term load forecasting. The new method is based on the optimal template temperature match between future and past temperatures. The optimal error reduction technique is a new concept introduced in this paper. Two case studies show that for hourly load forecasting, this method can yield results as good as the rather complicated Box-Jenkins transfer function method, and better than the Box-Jenkins method; for peak load prediction, this method is comparable in accuracy to the neural network method with back propagation, and can produce more accurate results than the multi-linear regression method. The DLC effect on system load is also considered in this method.
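The template-match idea lends itself to a short sketch. The code below is an illustrative simplification under stated assumptions (synthetic 24-hour profiles, plain least-squares matching); the paper's optimal error reduction step and DLC adjustment are not reproduced.

```python
import numpy as np

def match_forecast(past_temps, past_loads, future_temp):
    """Predict hourly load from the best temperature-matched past day.

    past_temps, past_loads: (n_days, 24); future_temp: (24,) forecast profile.
    """
    err = np.sum((past_temps - future_temp) ** 2, axis=1)  # template distances
    best = int(np.argmin(err))
    return past_loads[best], best

# Three synthetic past days with flat profiles for clarity.
past_temps = np.array([[20.0] * 24, [30.0] * 24, [25.0] * 24])
past_loads = np.array([[1.0] * 24, [2.0] * 24, [1.5] * 24])
load_pred, day = match_forecast(past_temps, past_loads, np.array([29.0] * 24))
# the 30-degree day (index 1) is the closest template
```

The attraction of this family of methods is that the temperature-to-load mapping never has to be modeled explicitly; history supplies it.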
Yang, Gang; Zhao, Yaping; Zhang, Yongtai; Dang, Beilei; Liu, Ying; Feng, Nianping
2015-01-01
The aim of this investigation was to develop a procedure to improve the dissolution and bioavailability of silymarin (SM) by using bile salt-containing liposomes that were prepared by supercritical fluid technology (ie, solution-enhanced dispersion by supercritical fluids [SEDS]). The process for the preparation of SM-loaded liposomes containing a bile salt (SM-Lip-SEDS) was optimized using a central composite design of response surface methodology with the ratio of SM to phospholipids (w/w), flow rate of solution (mL/min), and pressure (MPa) as independent variables. Particle size, entrapment efficiency (EE), and drug loading (DL) were dependent variables for optimization of the process and formulation variables. The particle size, zeta potential, EE, and DL of the optimized SM-Lip-SEDS were 160.5 nm, −62.3 mV, 91.4%, and 4.73%, respectively. Two other methods to produce SM liposomes were compared to the SEDS method. The liposomes obtained by the SEDS method exhibited the highest EE and DL, smallest particle size, and best stability compared to liposomes produced by the thin-film dispersion and reversed-phase evaporation methods. Compared to the SM powder, SM-Lip-SEDS showed increased in vitro drug release. The in vivo AUC0−t of SM-Lip-SEDS was 4.8-fold higher than that of the SM powder. These results illustrate that liposomes containing a bile salt can be used to enhance the oral bioavailability of SM and that supercritical fluid technology is suitable for the preparation of liposomes. PMID:26543366
Multichannel-Hadamard calibration of high-order adaptive optics systems.
Guo, Youming; Rao, Changhui; Bao, Hua; Zhang, Ang; Zhang, Xuejun; Wei, Kai
2014-06-02
We present a novel technique for calibrating the interaction matrix of high-order adaptive optics systems, called the multichannel-Hadamard method. In this method, the deformable mirror actuators are first divided into a series of channels according to their coupling relationship, and the voltage-oriented Hadamard method is then applied to these channels. Taking the 595-element adaptive optics system as an example, the procedure is described in detail. The optimal channel division is discussed and tested by numerical simulation. The proposed method is also compared experimentally with the voltage-oriented Hadamard-only method and the multichannel-only method. Results show that the multichannel-Hadamard method produces significant improvement in interaction matrix measurement.
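As background, the plain (single-channel) Hadamard calibration that the multichannel variant builds on can be simulated in a few lines. This is a hedged sketch with a random synthetic interaction matrix, not the 595-element system: Hadamard patterns are applied instead of single-actuator pokes, and the matrix is recovered exactly from H H^T = N I.

```python
import numpy as np

def sylvester_hadamard(n):
    """Hadamard matrix of order n (n a power of two), Sylvester construction."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(1)
n_act, n_meas = 8, 12
M_true = rng.normal(size=(n_meas, n_act))   # synthetic interaction matrix
H = sylvester_hadamard(n_act)
S = M_true @ H                              # k-th column: response to pattern H[:, k]
M_est = S @ H.T / n_act                     # invert using H @ H.T = n_act * I
```

Pushing ±1 patterns puts signal on every actuator in every measurement, which is the source of the Hadamard approach's noise advantage over poking one actuator at a time.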
Multi-view 3D echocardiography compounding based on feature consistency
NASA Astrophysics Data System (ADS)
Yao, Cheng; Simpson, John M.; Schaeffter, Tobias; Penney, Graeme P.
2011-09-01
Echocardiography (echo) is a widely available method to obtain images of the heart; however, echo can suffer due to the presence of artefacts, high noise and a restricted field of view. One method to overcome these limitations is to use multiple images, using the 'best' parts from each image to produce a higher quality 'compounded' image. This paper describes our compounding algorithm which specifically aims to reduce the effect of echo artefacts as well as improving the signal-to-noise ratio, contrast and extending the field of view. Our method weights image information based on a local feature coherence/consistency between all the overlapping images. Validation has been carried out using phantom, volunteer and patient datasets consisting of up to ten multi-view 3D images. Multiple sets of phantom images were acquired, some directly from the phantom surface, and others by imaging through hard and soft tissue mimicking material to degrade the image quality. Our compounding method is compared to the original, uncompounded echocardiography images, and to two basic statistical compounding methods (mean and maximum). Results show that our method is able to take a set of ten images, degraded by soft and hard tissue artefacts, and produce a compounded image of equivalent quality to images acquired directly from the phantom. Our method on phantom, volunteer and patient data achieves almost the same signal-to-noise improvement as the mean method, while simultaneously almost achieving the same contrast improvement as the maximum method. We show a statistically significant improvement in image quality by using an increased number of images (ten compared to five), and visual inspection studies by three clinicians showed very strong preference for our compounded volumes in terms of overall high image quality, large field of view, high endocardial border definition and low cavity noise.
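The compounding idea in this abstract can be sketched with a toy consistency weighting. This is an assumption, not the authors' feature-coherence weighting: each view's voxel is weighted by its closeness to the per-voxel median, so views that agree dominate and an outlier (e.g. a shadowing artefact in one view) is suppressed.

```python
import numpy as np

def compound(volumes, eps=1e-6):
    """Weighted compounding of co-registered views; volumes: (n_views, ...)."""
    volumes = np.asarray(volumes, dtype=float)
    med = np.median(volumes, axis=0)                # per-voxel consensus
    w = 1.0 / (np.abs(volumes - med) + eps)         # near-consensus views dominate
    return np.sum(w * volumes, axis=0) / np.sum(w, axis=0)

# Three views of three voxels; view 2 has an artefact at voxel 0.
views = np.array([[1.0, 2.0, 5.0],
                  [1.0, 2.0, 5.2],
                  [9.0, 2.0, 4.8]])
out = compound(views)
# out[0] stays close to 1.0 despite the outlying 9.0
```

A plain mean would drag voxel 0 toward the artefact and a maximum would select it outright; consistency weighting is one way to get the noise averaging of the mean without the artefact sensitivity of the maximum.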
Antihypertensive potential of bioactive hydrolysate from edible bird's nest
NASA Astrophysics Data System (ADS)
Ramachandran, Ravisangkar; Babji, Abdul Salam; Sani, Norrakiah Abdullah
2018-04-01
The aim of this study is to determine and compare the proximate composition, degree of hydrolysis (DH) and antihypertensive activity of edible bird's nest (EBN) hydrolysates produced by two different drying methods. Four enzymes (alcalase, bromelain, pancreatin and papain) were used in this study, with different hydrolysis times (30, 60, 90, 120, 180 and 240 min). The highest DH values for EBN hydrolysates were produced with alcalase treatment at 60-90 min (79.48-84.09%), pancreatin treatment at 30-90 min (77.10-80.45%) and papain treatment at 90 min (82.33%). Bromelain-generated hydrolysates showed low DH. EBN hydrolysed using alcalase, pancreatin and papain had significantly higher protein content than raw EBN, and the moisture content of all hydrolysate treatments was significantly lower than that of raw EBN. In the antihypertensive assay, freeze-dried EBN hydrolysates had higher antihypertensive activity than spray-dried hydrolysates. The highest antihypertensive activity for freeze-dried samples was produced by alcalase, bromelain and pancreatin, in the range of 80.22-86.97%. Meanwhile, papain proved to be less effective in producing hydrolysate with antihypertensive ability. In conclusion, EBN hydrolysate prepared with alcalase, bromelain or pancreatin could be classified as a functional food, as it showed significant antihypertensive activity.
A Comparison of Anthropogenic Carbon Dioxide Emissions Datasets: UND and CDIAC
NASA Astrophysics Data System (ADS)
Gregg, J. S.; Andres, R. J.
2005-05-01
Using data from the Department of Energy's Energy Information Administration (EIA), a technique is developed to estimate the monthly consumption of solid, liquid and gaseous fossil fuels for each state in the union. This technique employs monthly sales data to estimate the relative monthly proportions of the total annual carbon dioxide emissions from fossil-fuel use for all states in the union. The University of North Dakota (UND) results are compared to those published by the Carbon Dioxide Information Analysis Center (CDIAC) at Oak Ridge National Laboratory (ORNL). Recently, annual emissions per U.S. state (Blasing, Broniak, Marland, 2004a) as well as monthly CO2 emissions for the United States (Blasing, Broniak, Marland, 2004b) have been added to the CDIAC website. To assess the technique, the individual state results are compared to the annual state totals calculated by CDIAC, and the monthly country totals are compared with those produced by CDIAC. In general, the UND technique produces estimates that are consistent with those available on the CDIAC Trends website. Comparing the results from these two methods permits an improved understanding of the strengths and shortcomings of both estimation techniques. The primary advantages of the UND approach are its ease of implementation, the improved spatial and temporal resolution it can produce, and its universal applicability.
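The apportionment step described in this abstract reduces to simple arithmetic: monthly fuel sales fix each month's share of a state's known annual total. The sketch below uses made-up numbers, not EIA or CDIAC data.

```python
def monthly_emissions(annual_total, monthly_sales):
    """Split an annual CO2 total across months in proportion to fuel sales."""
    total_sales = sum(monthly_sales)
    return [annual_total * s / total_sales for s in monthly_sales]

# Illustrative heating-fuel sales, peaking in winter (Jan..Dec).
sales = [110, 95, 90, 70, 60, 55, 60, 65, 60, 75, 90, 105]
monthly = monthly_emissions(annual_total=1200.0, monthly_sales=sales)
# the monthly shares sum back to the annual total by construction
```

Because the annual total is preserved exactly, the technique adds temporal resolution without introducing any disagreement with the annual inventories it is compared against.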
A modified ion-selective electrode method for measurement of chloride in sweat.
Finley, P R; Dye, J A; Lichti, D A; Byers, J M; Williams, R J
1978-06-01
A modified method for analysis of sweat chloride concentration with an ion-selective electrode is presented. The original method proposed by the Orion Research Corporation (Cambridge, Massachusetts 02139) is inadequate because it produces erratic and misleading results. The modified method was compared with the reference quantitative method of Gibson and Cooke. In the modified method, individual electrode pads are cut and placed in the electrodes rather than using the pads supplied by the company; pilocarpine nitrate (2,000 mg/L) is used in place of pilocarpine HCl (640 mg/L); sodium bicarbonate is used as the weak electrolyte instead of K2SO4; and a 10-minute period for sweat accumulation is employed rather than the zero-time collection of the original Orion method. The modification has been studied for reproducibility within individuals and between the right and left arms, and it has been compared extensively with the quantitative method of Gibson and Cooke, both in normal individuals and in patients with cystic fibrosis. There is excellent agreement between the modified method and the quantitative reference method. There appears to be a slight bias toward higher chloride concentrations from the right arm compared with the left, but this difference is not medically significant.
Security Analysis and Improvements to the PsychoPass Method
2013-01-01
Background In a recent paper, Pietro Cipresso et al proposed the PsychoPass method, a simple way to create strong passwords that are easy to remember. However, the method has some security issues that need to be addressed. Objective To perform a security analysis of the PsychoPass method and outline its limitations and possible improvements. Methods We used brute-force analysis and dictionary-attack analysis of the PsychoPass method to outline its weaknesses. Results The first issue with the PsychoPass method is that it requires password reproduction on the same keyboard layout as was used to generate the password. The second issue is a security weakness: although the produced password is 24 characters long, it is still weak. We elaborate on the weakness and propose a solution that produces strong passwords. The proposed version first requires the use of the SHIFT and ALT-GR keys in combination with other keys, and second, the keys need to be one to two key distances apart. Conclusions The proposed improved PsychoPass method yields passwords that can be broken only in hundreds of years based on current computing power. The proposed PsychoPass method requires 10 keys, as opposed to 20 keys in the original method, for comparable password strength. PMID:23942458
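The brute-force reasoning can be illustrated with a toy keyboard-walk entropy model; every count used here (about 30 start keys, about 4 immediate neighbours per step, about 24 keys within a distance of 1-2, 3 modifier states) is an assumption for illustration, not a figure from the paper.

```python
import math

# Toy keyboard-walk entropy model for the brute-force argument above.  A
# PsychoPass-style password is a walk over the keyboard: one start key, then
# a constrained move per subsequent key.  All counts are assumptions.

def walk_entropy_bits(start_choices, step_choices, length):
    """log2 of the number of distinct keyboard walks of the given length."""
    return math.log2(start_choices) + (length - 1) * math.log2(step_choices)

weak_bits = walk_entropy_bits(30, 4, 24)        # 24 keys, adjacent moves only
strong_bits = walk_entropy_bits(30, 24 * 3, 10) # 10 keys, 1-2 reach + 3 modifiers
```

Under these assumed counts the shorter modifier-augmented walk carries more entropy than the longer adjacent-only walk, mirroring the abstract's claim that 10 constrained keys can match or exceed the original method's strength despite the shorter length.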
Theory and Practice of Lineage Tracing.
Hsu, Ya-Chieh
2015-11-01
Lineage tracing is a method that delineates all progeny produced by a single cell or a group of cells. The possibility of performing lineage tracing initiated the field of Developmental Biology and continues to revolutionize Stem Cell Biology. Here, I introduce the principles behind a successful lineage-tracing experiment. In addition, I summarize and compare different methods for conducting lineage tracing and provide examples of how these strategies can be implemented to answer fundamental questions in development and regeneration. The advantages and limitations of each method are also discussed. © 2015 AlphaMed Press.
Examination of a Rotorcraft Noise Prediction Method and Comparison to Flight Test Data
NASA Technical Reports Server (NTRS)
Boyd, D. Douglas, Jr.; Greenwood, Eric; Watts, Michael E.; Lopes, Leonard V.
2017-01-01
With a view toward including rotorcraft noise in the preliminary design process, a relatively fast noise prediction method is examined in this paper. A comprehensive rotorcraft analysis is combined with a noise prediction method to compute several noise metrics of interest, and the predictions are compared to flight test data. Results show that including only the main rotor noise severely underpredicts the integrated metrics of interest; including the tail rotor frequency content is essential for predicting these integrated noise metrics accurately.
NASA Astrophysics Data System (ADS)
Hicks-Jalali, Shannon; Sica, R. J.; Haefele, Alexander; Martucci, Giovanni
2018-04-01
With only 50% downtime from 2007-2016, the RALMO lidar in Payerne, Switzerland, has one of the largest continuous lidar data sets available. These measurements will be used to produce an extensive lidar water vapour climatology using the Optimal Estimation Method introduced by Sica and Haefele (2016). We will compare our improved technique for external calibration using radiosonde trajectories with the standard external methods, and present the evolution of the lidar constant from 2007 to 2016.
Full-scale aircraft cabin flammability tests of improved fire-resistant materials
NASA Technical Reports Server (NTRS)
Stuckey, R. N.; Surpkis, D. E.; Price, L. J.
1974-01-01
Full-scale aircraft cabin flammability tests to evaluate the effectiveness of new fire-resistant materials by comparing their burning characteristics with those of older aircraft materials are described. Three tests were conducted and are detailed. Test 1, using pre-1968 materials, was run to correlate the procedures and to compare the results with previous tests by other organizations. Test 2 included newer, improved fire-resistant materials. Test 3 was essentially a duplicate of test 2, but a smokeless fuel was used. Test objectives, methods, materials, and results are presented and discussed. Results indicate that the pre-1968 materials ignited easily, allowed the fire to spread, produced large amounts of smoke and toxic combustion products, and resulted in a flash fire and major fire damage. The newer fire-resistant materials did not allow the fire to spread. Furthermore, they produced less smoke, lower concentrations of toxic combustion products, and lower temperatures, and they did not produce a flash fire.
Shivasakthy, M.; Asharaf Ali, Syed
2013-01-01
Statement of Problem: A new material is proposed in dentistry in the form of strips for producing gingival retraction. The clinical efficacy of the material remains untested. Purpose of the Study: This study aimed to determine whether polyvinyl acetate strips can effectively displace the gingival tissues in comparison with the conventional retraction cord. Material and Methods: Complete metal-ceramic preparation with a supragingival margin was performed on fourteen maxillary incisors, and gingival retraction was performed using Merocel strips and conventional retraction cords alternately at a 2-week interval. The amount of displacement was compared using a digital vernier caliper with 0.01 mm accuracy. Results were analyzed statistically using the paired Student's t-test. Results: The statistical analysis of the data revealed that both the conventional retraction cord and the Merocel strip produce significant retraction. Of the two materials, Merocel proved significantly more effective. Conclusion: The Merocel strip produces more gingival displacement than the conventional retraction cord. PMID:24298531
Yarbrough, John. M.; Zhang, Ruoran; Mittal, Ashutosh; ...
2017-03-07
Producing fuels, chemicals, and materials from renewable resources to meet societal demands remains an important step in the transition to a sustainable, clean energy economy. The use of cellulolytic enzymes for the production of nanocellulose enables the coproduction of sugars for biofuels production in a format that is largely compatible with the process design employed by modern lignocellulosic (second generation) biorefineries. However, yields of enzymatically produced nanocellulose are typically much lower than those achieved by mineral acid production methods. In this study, we compare the capacity for coproduction of nanocellulose and fermentable sugars using two vastly different cellulase systems: the classical 'free enzyme' system of the saprophytic fungus, Trichoderma reesei (T. reesei) and the complexed, multifunctional enzymes produced by the hot springs resident, Caldicellulosiruptor bescii (C. bescii). Here, we demonstrate by comparative digestions that the C. bescii system outperforms the fungal enzyme system in terms of total cellulose conversion, sugar production, and nanocellulose production. In addition, we show by multimodal imaging and dynamic light scattering that the nanocellulose produced by the C. bescii cellulase system is substantially more uniform than that produced by the T. reesei system. These disparities in the yields and characteristics of the nanocellulose produced by these disparate systems can be attributed to the dramatic differences in the mechanisms of action of the dominant enzymes in each system.
Producing liquid fuels from biomass
NASA Astrophysics Data System (ADS)
Solantausta, Yrjo; Gust, Steven
The aim of this survey was to compare, on techno-economic criteria, alternatives for producing liquid fuels from indigenous raw materials in Finland. Another aim was to compare methods under development and prepare a proposal for steering research in this field. Process concepts were prepared for a number of alternatives, together with the corresponding balances and assessments of production and investment costs. Carbon dioxide emissions of the alternatives and the price of CO2 reduction were also studied. All the alternatives for producing liquid fuels from indigenous raw materials are highly unprofitable, though there are great differences among them. While the production cost of ethanol is 6 to 9 times higher than the market value of the product, the equivalent ratio for substitute fuel oil produced from peat by pyrolysis is 3 to 4. However, it should be borne in mind that the technical uncertainties of the alternatives are of different magnitudes. Production of ethanol from barley uses commercial technology, while biomass pyrolysis is still under development. If the aim is to reduce carbon dioxide emissions by using liquid biofuels, the most favorable alternative is pyrolysis oil produced from wood. Fuels produced from cultivated biomass are a more expensive way of reducing CO2 emissions, and their potential for reducing CO2 emissions in Finland is insignificant. Integration of liquid fuel production with some other production line is more profitable.
Katharopoulos, Efstathios; Touloupi, Katerina; Touraki, Maria
2016-08-01
The present study describes the development of a simple and efficient screening system that allows identification and quantification of nine bacteriocins produced by Lactococcus lactis. Cell-free L. lactis extracts presented a broad spectrum of antibacterial activity against Gram-negative bacteria, Gram-positive bacteria, and fungi. Characterization of their sensitivity to pH and heat showed that the extracts retained their antibacterial activity at extreme pH values and over a wide temperature range. The loss of antibacterial activity following treatment of the extracts with lipase or protease suggests a lipoproteinaceous nature for the produced antimicrobials. The extracts were subjected to a purification protocol employing two-phase extraction with ammonium sulfate precipitation and organic solvent precipitation, followed by ion-exchange chromatography, solid-phase extraction and HPLC. In the nine fractions that presented antimicrobial activity, bacteriocins were quantified both turbidimetrically, using a standard curve of nisin, and by HPLC, with nisin as the external standard; the two methods produced comparable results. Turbidimetry is well suited to qualitative determination of bacteriocins, but HPLC is the only method that can both separate and quantify them with increased sensitivity, accuracy, and precision. Copyright © 2016 Elsevier B.V. All rights reserved.
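External-standard quantification of the kind described above can be sketched as a linear standard curve fitted to nisin standards and then inverted for an unknown sample; the concentrations and peak areas below are hypothetical, not data from the study.

```python
# Sketch of external-standard quantification: fit a linear standard curve to
# nisin standards (hypothetical HPLC peak areas), then invert it to estimate
# the bacteriocin concentration in a sample.  All values are illustrative.

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Nisin standards: concentration (ug/mL) vs. peak area (arbitrary units).
conc = [0.0, 5.0, 10.0, 20.0, 40.0]
area = [1.0, 51.0, 101.0, 201.0, 401.0]  # ~10 area units per ug/mL + baseline

slope, intercept = fit_line(conc, area)
unknown_area = 151.0
estimated_conc = (unknown_area - intercept) / slope  # about 15 ug/mL here
```

The same inversion applies whether the response is an HPLC peak area or a turbidimetric reading; the abstract's point is that only the chromatographic response separates the individual bacteriocins before quantifying them.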
Comparison of Interaural Electrode Pairing Methods for Bilateral Cochlear Implants
Dietz, Mathias
2015-01-01
In patients with bilateral cochlear implants (CIs), pairing matched interaural electrodes and stimulating them with the same frequency band is expected to facilitate binaural functions such as binaural fusion, localization, and spatial release from masking. Because clinical procedures typically do not include patient-specific interaural electrode pairing, it remains the case that each electrode is allocated to a generic frequency range, based simply on the electrode number. Two psychoacoustic techniques for determining interaurally paired electrodes have been demonstrated in several studies: interaural pitch comparison and interaural time difference (ITD) sensitivity. However, these two methods are rarely, if ever, compared directly. A third, more objective method is to assess the amplitude of the binaural interaction component (BIC) derived from electrically evoked auditory brainstem responses for different electrode pairings; this method has been demonstrated to be a potential candidate for bilateral CI users. Here, we tested all three measures in the same eight CI users. We found good correspondence between the electrode pair producing the largest BIC and the electrode pair producing the maximum ITD sensitivity. The correspondence between the pairs producing the largest BIC and the pitch-matched electrode pairs was considerably weaker, supporting the previously proposed hypothesis that whilst place pitch might adapt over time to accommodate mismatched inputs, sensitivity to ITDs does not adapt to the same degree. PMID:26631108
Reddy, Mageshni; Moodley, Roshila; Jonnalagadda, Sreekanth B
2012-01-01
Interest in vegetable oil extracted from idioblast cells of avocado fruit is growing. In this study, five extraction methods for producing avocado oil were compared: traditional solvent extraction using a Soxhlet or ultrasound, Soxhlet extraction combined with microwave or Ultra-Turrax treatment, and supercritical fluid extraction (SFE). Traditional Soxhlet extraction produced the most reproducible results, 64.76 ± 0.24 g oil/100 g dry weight (DW) and 63.67 ± 0.20 g oil/100 g DW for the Hass and Fuerte varieties, respectively. Microwave extraction gave the highest oil yield (69.94%) from the Hass variety. Oils from microwave extraction had the highest fatty acid content, while oils from SFE had a wider range of fatty acids. Oils from the Fuerte variety had a higher monounsaturated-to-saturated fatty acid ratio (3.45-3.70). SFE and microwave extraction produced the best quality oil, better than traditional Soxhlet extraction, with the least amount of oxidizing metals present. Copyright © Taylor & Francis Group, LLC
Inexpensive 3dB coupler for POF communication by injection-molding production
NASA Astrophysics Data System (ADS)
Haupt, M.; Fischer, U. H. P.
2011-01-01
POFs (polymer optical fibers) are gradually replacing traditional communication media such as copper and glass in short-distance communication systems, primarily because of their cost-effectiveness and easy handling. POFs are used in various fields of optical communication, e.g., the automotive sector and in-house communication. So far, however, only a few key components for a POF communication network are available; even basic components, such as splices and couplers, are fabricated manually. This results in high costs and fluctuations in component performance. Available couplers have high insertion losses due to their manufacturing method, which can only be compensated by higher power budgets. New fabrication methods are therefore indispensable for producing higher-performance couplers. A cheap and effective way to produce couplers for POF communication systems is injection molding. This paper gives an overview of couplers available on the market, compares their performance, and shows a way to produce couplers by means of injection molding.
Chen, Chun-Yen; Yeh, Kuei-Ling; Aisyah, Rifka; Lee, Duu-Jong; Chang, Jo-Shu
2011-01-01
Microalgae have the ability to mitigate CO(2) emission and produce oil with high productivity, and thus have potential applications in producing third-generation biofuels. The key technologies for producing microalgal biofuels include identification of favorable culture conditions for high oil productivity, development of effective and economical microalgae cultivation systems, and separation and harvesting of microalgal biomass and oil. This review presents recent advances in microalgal cultivation, photobioreactor design, and harvesting technologies with a focus on microalgal oil (mainly triglyceride) production. The effects of different microalgal metabolisms (i.e., phototrophic, heterotrophic, mixotrophic, and photoheterotrophic growth), cultivation systems (emphasizing the effect of light sources), and biomass harvesting methods (chemical/physical methods) on microalgal biomass and oil production are compared and critically discussed. This review aims to provide useful information to help future development of efficient and commercially viable technology for microalgae-based biodiesel production. Copyright © 2010 Elsevier Ltd. All rights reserved.
Bruce, James F.; Roberts, James J.; Zuellig, Robert E.
2018-05-24
The U.S. Geological Survey (USGS), in cooperation with Colorado Springs City Engineering and Colorado Springs Utilities, analyzed previously collected invertebrate data to determine the comparability among four sampling methods and two versions (2010 and 2017) of the Colorado Benthic Macroinvertebrate Multimetric Index (MMI). For this study, annual macroinvertebrate samples were collected concurrently (in space and time) at 15 USGS surface-water gaging stations in the Fountain Creek Basin from 2010 to 2012 using four sampling methods. The USGS monitoring project in the basin uses two of the methods and the Colorado Department of Public Health and Environment recommends the other two. These methods belong to two distinct sample types, one that targets single habitats and one that targets multiple habitats. The study results indicate that there are significant differences in MMI values obtained from the single-habitat and multihabitat sample types but methods from each program within each sample type produced comparable values. This study also determined that MMI values calculated by different versions of the Colorado Benthic Macroinvertebrate MMI are indistinguishable. This indicates that the Colorado Department of Public Health and Environment methods are comparable with the USGS monitoring project methods for single-habitat and multihabitat sample types. This report discusses the direct application of the study results to inform the revision of the existing USGS monitoring project in the Fountain Creek Basin.
Natural Language Processing As an Alternative to Manual Reporting of Colonoscopy Quality Metrics
RAJU, GOTTUMUKKALA S.; LUM, PHILLIP J.; SLACK, REBECCA; THIRUMURTHI, SELVI; LYNCH, PATRICK M.; MILLER, ETHAN; WESTON, BRIAN R.; DAVILA, MARTA L.; BHUTANI, MANOOP S.; SHAFI, MEHNAZ A.; BRESALIER, ROBERT S.; DEKOVICH, ALEXANDER A.; LEE, JEFFREY H.; GUHA, SUSHOVAN; PANDE, MALA; BLECHACZ, BORIS; RASHID, ASIF; ROUTBORT, MARK; SHUTTLESWORTH, GLADIS; MISHRA, LOPA; STROEHLEIN, JOHN R.; ROSS, WILLIAM A.
2015-01-01
BACKGROUND & AIMS The adenoma detection rate (ADR) is a quality metric tied to interval colon cancer occurrence. However, manual extraction of data to calculate and track the ADR in clinical practice is labor-intensive. To overcome this difficulty, we developed a natural language processing (NLP) method to identify patients who underwent their first screening colonoscopy and to identify adenomas and sessile serrated adenomas (SSAs). We compared the NLP-generated results with those of manual data extraction to test the accuracy of NLP, and report on colonoscopy quality metrics using NLP. METHODS Identification of screening colonoscopies using NLP was compared with the manual method for 12,748 patients who underwent colonoscopies from July 2010 to February 2013. Identification of adenomas and SSAs using NLP was also compared with the manual method in 2259 matched patient records. Colonoscopy ADRs using these methods were generated for each physician. RESULTS NLP correctly identified 91.3% of the screening examinations, whereas the manual method identified 87.8% of them. Both methods correctly identified examinations of patients with adenomas and SSAs in the matched records almost perfectly. NLP and the manual method produced comparable ADR values for each endoscopist as well as for the group as a whole. CONCLUSIONS NLP can correctly identify screening colonoscopies, accurately identify adenomas and SSAs in a pathology database, and provide real-time quality metrics for colonoscopy. PMID:25910665
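A toy sketch of the task (not the authors' NLP pipeline): simple keyword matching can flag screening examinations and adenoma-positive pathology text, from which each endoscopist's ADR follows. All report strings and endoscopist names are hypothetical.

```python
import re

# Toy sketch, not the authors' NLP method: flag screening colonoscopies and
# adenoma-positive pathology reports by keyword matching, then compute each
# endoscopist's adenoma detection rate (ADR).  All data are hypothetical.

ADENOMA = re.compile(r"\badenoma", re.IGNORECASE)

def adenoma_detection_rates(reports):
    """reports: (endoscopist, indication_text, pathology_text) tuples."""
    screens, hits = {}, {}
    for doc, indication, pathology in reports:
        if "screening" not in indication.lower():
            continue  # count screening examinations only
        screens[doc] = screens.get(doc, 0) + 1
        if ADENOMA.search(pathology):
            hits[doc] = hits.get(doc, 0) + 1
    return {doc: hits.get(doc, 0) / n for doc, n in screens.items()}

reports = [
    ("dr_a", "Screening colonoscopy", "Tubular adenoma, 5 mm."),
    ("dr_a", "Screening colonoscopy", "Hyperplastic polyp."),
    ("dr_a", "Surveillance colonoscopy", "Tubular adenoma."),  # not screening
    ("dr_b", "Screening colonoscopy", "Sessile serrated adenoma."),
]
rates = adenoma_detection_rates(reports)  # {"dr_a": 0.5, "dr_b": 1.0}
```

Real pathology language is far messier (negations, differential diagnoses, multi-specimen reports), which is why a validated NLP pipeline rather than bare keyword matching is needed in practice.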
Alternative methods to evaluate trial level surrogacy.
Abrahantes, Josè Cortiñas; Shkedy, Ziv; Molenberghs, Geert
2008-01-01
The evaluation and validation of surrogate endpoints have been extensively studied in the last decade. Prentice [1] and Freedman, Graubard, and Schatzkin [2] laid the foundations for the evaluation of surrogate endpoints in randomized clinical trials. Later, Buyse et al. [5] proposed a meta-analytic methodology, producing different methods for different settings, which was further studied by Alonso and Molenberghs [9] in their unifying approach based on information theory. In this article, we focus on trial-level surrogacy and propose alternative procedures for evaluating this surrogacy measure that do not pre-specify the type of association. A promising correction based on cross-validation is investigated, as is the construction of confidence intervals for the measure. To avoid making assumptions about the type of relationship between the treatment effects and its distribution, a collection of alternative methods based on regression trees, bagging, random forests, and support vector machines, combined with bootstrap-based confidence intervals and, should one wish, a cross-validation-based correction, is proposed and applied. We apply the various strategies to data from three clinical studies: in ophthalmology, in advanced colorectal cancer, and in schizophrenia. The results for the three case studies are compared; they indicate that random forest and bagging models produce larger estimates of the surrogacy measure, which are in general more stable and have narrower confidence intervals than those from linear regression and support vector regression. For the advanced colorectal cancer studies, the trial-level surrogacy was even found to be considerably different from what has been reported. In general, the alternative methods are more computationally demanding, and the calculation of the confidence intervals in particular requires more computational time than the delta-method counterpart.
First, more flexible modeling techniques can be used, allowing for other types of association. Second, when no cross-validation-based correction is applied, overly optimistic trial-level surrogacy estimates will be found; cross-validation is therefore highly recommended. Third, the use of the delta method to calculate confidence intervals is not recommended, since it makes assumptions valid only in very large samples and may produce range-violating limits. We therefore recommend alternatives, bootstrap methods in general. The information-theoretic approach also produces results comparable to the bagging and random forest approaches when the cross-validation correction is applied. It is also worth observing that, even in cases where a linear model might be a good option, bagging methods perform well and their confidence intervals were narrower.
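The bootstrap-based confidence intervals recommended above can be sketched with a simple percentile bootstrap; the per-trial effect pairs and the squared-correlation statistic below are hypothetical stand-ins for the article's trial-level surrogacy measure.

```python
import random

# Percentile-bootstrap confidence interval for a trial-level association
# measure.  The statistic is a squared Pearson correlation computed on
# hypothetical per-trial (surrogate effect, true-endpoint effect) pairs.

def pearson_r2(pairs):
    """Squared Pearson correlation of (x, y) pairs; 0.0 if degenerate."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    syy = sum((y - my) ** 2 for _, y in pairs)
    if sxx == 0 or syy == 0:
        return 0.0
    return sxy * sxy / (sxx * syy)

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap: resample with replacement, take quantiles."""
    rng = random.Random(seed)
    reps = sorted(stat([rng.choice(data) for _ in data]) for _ in range(n_boot))
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]

trials = [(0.10, 0.15), (0.30, 0.28), (0.50, 0.55), (0.20, 0.18),
          (0.40, 0.35), (0.60, 0.62), (0.25, 0.30), (0.45, 0.40)]
lo, hi = bootstrap_ci(trials, pearson_r2)
```

Unlike delta-method intervals, the percentile interval cannot violate the [0, 1] range of the surrogacy measure, since every bootstrap replicate lies in that range.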
A simple and rapid method to isolate purer M13 phage by isoelectric precipitation.
Dong, Dexian; Sutaria, Sanjana; Hwangbo, Je Yeol; Chen, P
2013-09-01
M13 virus (phage) has been extensively used in phage display technology and nanomaterial templating. Our research aimed to use M13 phage to template sulfur nanoparticles for making lithium ion batteries. Traditional methods for harvesting M13 phage from Escherichia coli employ polyethylene glycol (PEG)-based precipitation, and the yield is usually measured by plaque counting. With this method, PEG residue is present in the M13 phage pellet and is difficult to eliminate. To resolve this issue, a method based on isoelectric precipitation was introduced and tested. The isoelectric method resulted in the production of purer phage with a higher yield, compared to the traditional PEG-based method. There is no significant variation in infectivity of the phage prepared using isoelectric precipitation, and the dynamic light scattering data indirectly prove that the phage structure is not damaged by pH adjustment. To maximize phage production, a dry-weight yield curve of M13 phage for various culture times was produced. The yield curve is proportional to the growth curve of E. coli. On a 200-mL culture scale, 0.2 g L(-1) M13 phage (dry-weight) was produced by the isoelectric precipitation method.
Multiratio fusion change detection with adaptive thresholding
NASA Astrophysics Data System (ADS)
Hytla, Patrick C.; Balster, Eric J.; Vasquez, Juan R.; Neuroth, Robert M.
2017-04-01
A ratio-based change detection method known as multiratio fusion (MRF) is proposed and tested. The MRF framework builds on other change detection components proposed in this work: dual ratio (DR) and multiratio (MR). The DR method involves two ratios coupled with adaptive thresholds to maximize detected changes and minimize false alarms. The use of two ratios is shown to outperform the single ratio case when the means of the image pairs are not equal. MR change detection builds on the DR method by including negative imagery to produce four total ratios with adaptive thresholds. Inclusion of negative imagery is shown to improve detection sensitivity and to boost detection performance in certain target and background cases. MRF further expands this concept by fusing together the ratio outputs using a routine in which detections must be verified by two or more ratios to be classified as a true changed pixel. The proposed method is tested with synthetically generated test imagery and real datasets with results compared to other methods found in the literature. DR is shown to significantly outperform the standard single ratio method. MRF produces excellent change detection results that exhibit up to a 22% performance improvement over other methods from the literature at low false-alarm rates.
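The dual-ratio idea can be sketched in a few lines; the mean-plus-k-standard-deviations threshold is an assumption for illustration, not necessarily the adaptive rule used in the paper, and the "images" are flat lists of pixel intensities.

```python
# Toy sketch of the dual-ratio (DR) idea above: form both ratios of an image
# pair so changes in either direction are caught, with thresholds derived
# adaptively from the ratio statistics.  The mean + k*stddev rule is an
# illustrative assumption, not necessarily the paper's adaptive rule.

def dual_ratio_changes(img_a, img_b, k=2.0, eps=1e-6):
    r1 = [a / (b + eps) for a, b in zip(img_a, img_b)]
    r2 = [b / (a + eps) for a, b in zip(img_a, img_b)]

    def threshold(r):
        mean = sum(r) / len(r)
        std = (sum((x - mean) ** 2 for x in r) / len(r)) ** 0.5
        return mean + k * std

    t1, t2 = threshold(r1), threshold(r2)
    return [x1 > t1 or x2 > t2 for x1, x2 in zip(r1, r2)]

before = [10, 10, 10, 10, 10, 10, 10, 10]
after  = [10, 10, 50, 10, 10, 10,  2, 10]  # one brightening, one darkening
mask = dual_ratio_changes(before, after)   # flags indices 2 and 6
```

A single ratio would only flag one of the two changed pixels here; using both ratios is what lets the detector catch changes in either direction, and the MR/MRF extensions add negative imagery and fusion across ratios on top of this.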
Lin, Chen-Yen; Halabi, Susan
2017-01-01
We propose a minimand perturbation method to derive the confidence regions for the regularized estimators for the Cox’s proportional hazards model. Although the regularized estimation procedure produces a more stable point estimate, it remains challenging to provide an interval estimator or an analytic variance estimator for the associated point estimate. Based on the sandwich formula, the current variance estimator provides a simple approximation, but its finite sample performance is not entirely satisfactory. Besides, the sandwich formula can only provide variance estimates for the non-zero coefficients. In this article, we present a generic description for the perturbation method and then introduce a computation algorithm using the adaptive least absolute shrinkage and selection operator (LASSO) penalty. Through simulation studies, we demonstrate that our method can better approximate the limiting distribution of the adaptive LASSO estimator and produces more accurate inference compared with the sandwich formula. The simulation results also indicate the possibility of extending the applications to the adaptive elastic-net penalty. We further demonstrate our method using data from a phase III clinical trial in prostate cancer. PMID:29326496
A statistical evaluation of formation disturbance produced by well- casing installation methods
Morin, R.H.; LeBlanc, D.R.; Teasdale, W.E.
1988-01-01
Water-resources investigations concerned with contaminant transport through aquifers composed of very loose, unconsolidated sediments have shown that small-scale variations in aquifer characteristics can significantly affect solute transport and dispersion. Commonly, measurement accuracy and resolution have been limited by a borehole environment consisting of an annulus of disturbed sediments produced by the casing-installation method. In an attempt to quantify this disturbance and its impact on the characterization of unconsolidated deposits, three installation methods were examined and compared in a sand-and-gravel outwash at a test site on Cape Cod, Massachusetts: (1) casing installed in a mud-rotary hole; (2) casing installed in an augered hole; and (3) flush-joint steel casing hammer-driven from the land surface. Fifteen wells were logged with epithermal neutron and natural gamma tools. The study concludes that augering is the most disruptive of the three casing-installation methods and that driving casing directly, though typically more time-consuming, transmits the least disturbance into the surrounding formation.
Security analysis and improvements to the PsychoPass method.
Brumen, Bostjan; Heričko, Marjan; Rozman, Ivan; Hölbl, Marko
2013-08-13
In a recent paper, Pietro Cipresso et al. proposed the PsychoPass method, a simple way to create strong passwords that are easy to remember. However, the method has some security issues that need to be addressed. Our objective was to perform a security analysis of the PsychoPass method and outline its limitations and possible improvements. We used brute-force and dictionary-attack analyses of the PsychoPass method to outline its weaknesses. The first issue with the PsychoPass method is that it requires the password to be reproduced on the same keyboard layout as was used to generate it. The second issue is a security weakness: although the produced password is 24 characters long, it is still weak. We elaborate on this weakness and propose a solution that produces strong passwords. The proposed version requires, first, the use of the SHIFT and ALT-GR keys in combination with other keys and, second, that the keys be 1-2 key distances apart. The proposed improved PsychoPass method yields passwords that can be broken only in hundreds of years based on current computing power. The proposed PsychoPass method requires 10 keys, as opposed to 20 keys in the original method, for comparable password strength.
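The brute-force argument reduces to simple arithmetic on the effective alphabet size per key press. A hedged sketch (the guess rate and alphabet sizes are assumptions for illustration, not figures from the paper):

```python
# Worst-case brute-force time for a random password, in years.
def crack_years(alphabet_size, length, guesses_per_second=1e10):
    combinations = alphabet_size ** length
    return combinations / guesses_per_second / (3600 * 24 * 365)

# Assumed figures (not from the paper): 10 keys drawn from lowercase
# letters only, vs. 10 keys spanning the full printable-ASCII range
# reachable with SHIFT/ALT-GR combinations.
weak = crack_years(26, 10)
strong = crack_years(94, 10)
print(f"weak: {weak:.1e} years, strong: {strong:.1e} years")
```

Widening the per-key alphabet, rather than lengthening the password, is what moves the estimate from hours into the hundreds-of-years range.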
[Calculating Pearson residual in logistic regressions: a comparison between SPSS and SAS].
Xu, Hao; Zhang, Tao; Li, Xiao-song; Liu, Yuan-yuan
2015-01-01
To compare the results of Pearson residual calculations in logistic regression models using SPSS and SAS. We reviewed Pearson residual calculation methods and used two sets of data to test logistic models constructed in SPSS and SAS. One model contained a small number of covariates relative to the number of observations; the other contained a number of covariates similar to the number of observations. The two software packages produced similar Pearson residual estimates when the models contained a number of covariates similar to the number of observations, but the results differed when the number of observations was much greater than the number of covariates. The two software packages produce different Pearson residual estimates, especially when the models contain a small number of covariates. Further studies are warranted.
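For an ungrouped binary-response model, the Pearson residual both packages aim to report is (y - p)/sqrt(p(1 - p)); package discrepancies typically enter through how observations sharing a covariate pattern are grouped. A minimal sketch of the per-observation form, with assumed fitted probabilities:

```python
import numpy as np

def pearson_residuals(y, p):
    """Pearson residuals for an ungrouped binary logistic model:
    (observed - fitted) / sqrt(fitted * (1 - fitted))."""
    return (y - p) / np.sqrt(p * (1.0 - p))

# Assumed outcomes and fitted probabilities for illustration
y = np.array([1, 0, 1, 1, 0])
p = np.array([0.8, 0.3, 0.6, 0.9, 0.2])
r = pearson_residuals(y, p)
print(np.round(r, 3))  # large |r| flags poorly fitted observations
```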
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pavlish, John; Thompson, Jeffrey; Dunham, Grant
2014-09-30
Owners of fossil fuel-fired power plants face the challenge of measuring stack emissions of trace metals and acid gases at much lower levels than in the past as a result of increasingly stringent regulations. In the United States, the current reference methods for trace metals and halogens are wet-chemistry methods, U.S. Environmental Protection Agency (EPA) Methods 29 and 26 or 26A, respectively. As a possible alternative to the EPA methods, the Energy & Environmental Research Center (EERC) has developed a novel multielement sorbent trap (MEST) method to be used to sample for trace elements and/or halogens. Sorbent traps offer a potentially advantageous alternative to the existing sampling methods, as they are simpler to use and do not require expensive, breakable glassware or handling and shipping of hazardous reagents. Field tests comparing two sorbent trap applications (MEST-H for hydrochloric acid and MEST-M for trace metals) with the reference methods were conducted at two power plant units fueled by Illinois Basin bituminous coal. For hydrochloric acid, MEST measured concentrations comparable to EPA Method 26A at two power plant units, one with and one without a wet flue gas desulfurization scrubber. MEST-H provided lower detection limits for hydrochloric acid than the reference method. Results from a dry stack unit had better comparability between methods than results from a wet stack unit. This result was attributed to the very low emissions in the latter unit, as well as the difficulty of sampling in a saturated flue gas. Based on these results, the MEST-H sorbent traps appear to be a good candidate to serve as an alternative to Method 26A (or 26). For metals, the MEST trap gave lower detection limits compared to EPA Method 29 and produced comparable data for antimony, arsenic, beryllium, cobalt, manganese, selenium, and mercury for most test runs.
However, the sorbent material produced elevated blanks for cadmium, nickel, lead, and chromium at levels that would interfere with accurate measurement at U.S. hazardous air pollutant emission limits for existing coal-fired power plant units. Longer sampling times employed during this test program did appear to improve comparative results for these metals. Although the sorbent contribution to the sample was reduced through improved trap design, additional research is still needed to explore lower-background materials before the MEST-M application can be considered as a potential alternative method for all of the trace metals. This subtask was funded through the EERC–U.S. Department of Energy Joint Program on Research and Development for Fossil Energy-Related Resources Cooperative Agreement No. DE-FC26-08NT43291. Nonfederal funding was provided by the Electric Power Research Institute, the Illinois Clean Coal Institute, Southern Illinois Power Company, and the Center for Air Toxic Metals Affiliates Program.
Sirsat, Sujata A.; Neal, Jack A.
2013-01-01
Aquaponics is an effective method to practice sustainable agriculture and is gaining popularity in the US; however, the microbial safety of aquaponically grown produce needs to be ascertained. Aquaponics is a unique marriage of fish production and soil-free produce (e.g., leafy greens) production. Fish are raised in fresh water tanks that are connected to water filled beds where fruits and vegetables are grown. The fish bi-products create nutrient-rich water that provides the key elements for the growth of plants and vegetables. The objective of this study was to perform a comparative analysis of the microbial safety and quality of aquaponic lettuce and soil grown lettuce (conventional, bagged, certified organic, and field lettuce). Following this, an intervention study was performed to combat foodborne pathogen surrogates (Salmonella and E. coli), spoilage, and fecal microorganisms using 2.5% acetic acid. The results of the comparative analysis study showed that aquaponically grown lettuce had significantly lower concentration of spoilage and fecal microorganisms compared to in-soil grown lettuce. The intervention study showed that diluted vinegar (2.5% acetic acid) significantly reduced Salmonella, E. coli, coliforms, and spoilage microorganisms on fresh lettuce by 2 to 3 log CFU/g. Irrespective of growing methods (in-soil or soilless), it is crucial to incorporate good agricultural practices to reduce microbial contamination on fresh produce. The intervention employed in this study can be proposed to small farmers and consumers to improve quality and safety of leafy greens. PMID:28239132
Image processing via level set curvature flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malladi, R.; Sethian, J.A.
We present a controlled image smoothing and enhancement method based on a curvature flow interpretation of the geometric heat equation. Compared to existing techniques, the model has several distinct advantages. (i) It contains just one enhancement parameter. (ii) The scheme naturally inherits a stopping criterion from the image; continued application of the scheme produces no further change. (iii) The method is one of the fastest possible schemes based on a curvature-controlled approach.
NASA Technical Reports Server (NTRS)
An, S. H.; Yao, K.
1986-01-01
The lattice algorithm has been employed in numerous adaptive filtering applications such as speech analysis/synthesis, noise canceling, spectral analysis, and channel equalization. In this paper, its application to adaptive-array processing is discussed. Its advantages are a fast convergence rate and computational accuracy independent of the noise and interference conditions. The results produced by this technique are compared to those obtained by the direct matrix inverse method.
Catalysts and methods for converting carbonaceous materials to fuels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hensley, Jesse; Ruddy, Daniel A.; Schaidle, Joshua A.
Catalysts and processes designed to convert DME and/or methanol and hydrogen (H2) to desirable liquid fuels are described. These catalysts produce the fuels efficiently and with a high selectivity and yield, and reduce the formation of aromatic hydrocarbons by incorporating H2 into the products. Also described are process methods to further upgrade these fuels to higher molecular weight liquid fuel mixtures, which have physical properties comparable with current commercially used liquid fuels.
Automated inspection and precision grinding of spiral bevel gears
NASA Technical Reports Server (NTRS)
Frint, Harold
1987-01-01
The results are presented of a four-phase MM&T program to define, develop, and evaluate an improved inspection system for spiral bevel gears. The improved method utilizes a multi-axis coordinate measuring machine which maps the working flank of the tooth and compares it to nominal reference values stored in the machine's computer. A unique feature of the system is that corrective grinding machine settings can be automatically calculated and printed out when necessary to correct an errant tooth profile. This new method eliminates most of the subjective decision making involved in the present method, which compares contact patterns obtained when the gear set is run under light load in a rolling test machine. It produces a higher quality gear with significant inspection time and cost savings.
Enhanced automated spiral bevel gear inspection
NASA Technical Reports Server (NTRS)
Frint, Harold K.; Glasow, Warren
1992-01-01
Presented here are the results of a manufacturing and technology program to define, develop, and evaluate an enhanced inspection system for spiral bevel gears. The method uses a multi-axis coordinate measuring machine which maps the working surface of the tooth and compares it with nominal reference values stored in the machine's computer. The enhanced technique features a means for automatically calculating corrective grinding machine settings, involving both first and second order changes, to control the tooth profile to within specified tolerance limits. This enhanced method eliminates the subjective decision making involved in the tooth patterning method, still in use today, which compares contact patterns obtained when the gear is set to run under light load in a rolling test machine. It produces a higher quality gear with significant inspection time and cost savings.
Minimization of Residual Stress in an Al-Cu Alloy Forged Plate by Different Heat Treatments
NASA Astrophysics Data System (ADS)
Dong, Ya-Bo; Shao, Wen-Zhu; Jiang, Jian-Tang; Zhang, Bao-You; Zhen, Liang
2015-06-01
In order to improve the balance of mechanical properties and residual stress, various quenching and aging treatments were applied to an Al-Cu alloy forged plate. Residual stresses determined by the x-ray diffraction method and the slitting method were compared. The surface residual stress measured by the x-ray diffraction method was consistent with that measured by the slitting method. The residual stress distribution of samples quenched in water at different temperatures (20, 60, 80, and 100 °C) was measured, and the results showed that boiling water quenching produced a 91.4% reduction in residual stress magnitudes compared with cold water quenching (20 °C), but the tensile properties of samples quenched in boiling water were unacceptably low. Quenching in 80 °C water resulted in a 75% reduction in residual stress, with a 12.7% reduction in yield strength. Residual stress and yield strength levels are important considerations for the dimensional stability of aluminum alloys. Quenching samples into 30% polyalkylene glycol quenchant produced a 52.2% reduction in the maximum compressive residual stress, with a 19.7% reduction in yield strength. Moreover, the effects of uphill quenching and thermal-cold cycling on residual stress were also investigated. Uphill quenching and thermal-cold cycling produced approximately 25-40% reductions in residual stress, while their effect on tensile properties was slight.
Cutaneous synergistic analgesia of bupivacaine in combination with dopamine in rats.
Tzeng, Jann-Inn; Wang, Jieh-Neng; Wang, Jhi-Joung; Chen, Yu-Wen; Hung, Ching-Hsia
2016-05-04
The main goal of the study was to investigate the interaction between bupivacaine and dopamine in local analgesia. After blockade of the cutaneous trunci muscle reflex (CTMR) responses following subcutaneous injection of the drugs in rats, the cutaneous analgesic effect of dopamine was compared, in a dose-dependent fashion, to that of bupivacaine. Drug-drug interactions were evaluated by isobolographic methods. We showed dose-dependent effects of dopamine on infiltrative cutaneous analgesia. On the basis of the 50% effective dose (ED50), the rank of drug potency was bupivacaine (1.99 [1.92-2.09] μmol/kg) greater than dopamine (190 [181-203] μmol/kg) (P<0.01). At equianalgesic doses (ED25, ED50, and ED75), dopamine elicited a similar duration of cutaneous analgesia compared with bupivacaine. The addition of dopamine to the bupivacaine solution exhibited a synergistic effect. Our pre-clinical data showed that dopamine produced a dose-dependent effect in producing cutaneous analgesia. Compared with bupivacaine, dopamine showed lower potency but a similar duration of cutaneous analgesia. Dopamine added to the bupivacaine preparation resulted in a synergistic analgesic effect. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Engel, Aaron J; Bashford, Gregory R
2015-08-01
Ultrasound-based shear wave elastography (SWE) is a technique used for non-invasive characterization and imaging of soft tissue mechanical properties. Robust estimation of shear wave propagation speed is essential for imaging of soft tissue mechanical properties. In this study we propose to estimate shear wave speed by inversion of the first-order wave equation following directional filtering. This approach relies on estimation of first-order derivatives, which allows accurate estimates using smaller smoothing filters than are needed when estimating second-order derivatives. The performance was compared to three current methods used to estimate shear wave propagation speed: direct inversion of the wave equation (DIWE), time-to-peak (TTP), and cross-correlation (CC). The shear wave speeds of three homogeneous phantoms of different elastic moduli (gelatin by weight of 5%, 7%, and 9%) were measured with each method. The proposed method was shown to produce shear speed estimates comparable to the conventional methods (standard deviations of measurements being 0.13 m/s, 0.05 m/s, and 0.12 m/s), but with simpler processing and usually less time (by a factor of 1, 13, and 20 for DIWE, CC, and TTP, respectively). The proposed method was able to produce a 2-D speed estimate from a single direction of wave propagation in about four seconds using an off-the-shelf PC, showing the feasibility of performing real-time or near real-time elasticity imaging with dedicated hardware.
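The first-order inversion can be illustrated on synthetic data: a plane wave u(x, t) = f(x - ct) satisfies u_t + c u_x = 0, so c = -u_t/u_x pointwise. A sketch with an assumed Gaussian wave packet (parameters are illustrative, not from the phantom experiments):

```python
import numpy as np

# Synthetic rightward-travelling plane wave u(x, t) = f(x - c*t)
c_true = 2.0                     # shear wave speed in m/s (assumed)
x = np.linspace(0.0, 0.04, 200)  # lateral position, m
t = np.linspace(0.0, 0.01, 200)  # slow time, s
X, T = np.meshgrid(x, t, indexing="ij")
u = np.exp(-((X - c_true * T - 0.005) ** 2) / (2.0 * 0.002 ** 2))

# First-order wave equation u_t + c*u_x = 0 gives c = -u_t / u_x,
# needing only first derivatives (hence smaller smoothing filters than
# the second derivatives required by direct inversion).
ut = np.gradient(u, t, axis=1)
ux = np.gradient(u, x, axis=0)
mask = np.abs(ux) > 0.2 * np.abs(ux).max()  # avoid near-zero gradients
c_est = float(np.median(-ut[mask] / ux[mask]))
print(round(c_est, 3))
```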
Spatial Access to Primary Care Providers in Appalachia
Donohoe, Joseph; Marshall, Vince; Tan, Xi; Camacho, Fabian T.; Anderson, Roger T.; Balkrishnan, Rajesh
2016-01-01
Purpose: The goal of this research was to examine spatial access to primary care physicians in Appalachia using both traditional access measures and the 2-step floating catchment area (2SFCA) method. Spatial access to care was compared between urban and rural regions of Appalachia. Methods: The study region included the Appalachian counties of Pennsylvania, Ohio, Kentucky, and North Carolina. Primary care physicians during 2008 and total census block group populations were geocoded into GIS software. Ratios of county physicians to population, driving time to the nearest primary care physician, and various 2SFCA approaches were compared. Results: Urban areas of the study region had shorter travel times to the closest primary care physician. Provider-to-population ratios produced results that varied widely from one county to another because of strict geographic boundaries. The 2SFCA method produced varied results depending on the distance decay weight and variable catchment size techniques chosen. 2SFCA scores showed greater access to care in urban areas of Pennsylvania, Ohio, and North Carolina. Conclusion: The different parameters of the 2SFCA method, distance decay weights and variable catchment sizes, have a large impact on the resulting spatial access to primary care scores. The findings of this study suggest using a relative 2SFCA approach, the spatial access ratio method, when detailed patient travel data are unavailable. The 2SFCA method shows promise for measuring access to care in Appalachia, but more research on patient travel preferences is needed to inform implementation. PMID:26906524
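The two steps of the 2SFCA method are easy to state concretely. A minimal sketch with a binary catchment and toy data (real applications add the distance-decay weights and variable catchment sizes discussed in the abstract):

```python
import numpy as np

def two_sfca(dist, supply, demand, d0):
    """Basic 2-step floating catchment area score with a binary
    catchment of radius d0 (no distance-decay weighting)."""
    W = (dist <= d0).astype(float)      # providers x population blocks
    # Step 1: each provider's ratio of supply to the population it reaches
    R = supply / (W @ demand)
    # Step 2: each block sums the ratios of all providers it can reach
    return W.T @ R

# Toy example (assumed): 2 provider sites, 3 census block groups,
# travel times in minutes, 30-minute catchment
dist = np.array([[10.0, 25.0, 40.0],
                 [35.0, 15.0, 20.0]])
supply = np.array([2.0, 1.0])                # physicians per site
demand = np.array([1000.0, 2000.0, 1500.0])  # block populations
score = two_sfca(dist, supply, demand, d0=30.0)
print(score)  # accessible physicians per capita for each block
```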
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2008-06-01
This report evaluates alternative processes that could be used to produce Pu-238 fueled General Purpose Heat Sources (GPHS) for radioisotope thermoelectric generators (RTG). The current GPHS fabrication process has remained essentially unchanged since its development in the 1970s. Meanwhile, 30 years of technological advancements have been made in the fields of chemistry, manufacturing, ceramics, and control systems. At the Department of Energy's request, alternate manufacturing methods were compared to current methods to determine if alternative fabrication processes could reduce the hazards, especially the production of respirable fines, while producing an equivalent GPHS product. An expert committee performed the evaluation with input from four national laboratories experienced in Pu-238 handling.
An analytical approach to obtaining JWL parameters from cylinder tests
NASA Astrophysics Data System (ADS)
Sutton, B. D.; Ferguson, J. W.; Hodgson, A. N.
2017-01-01
An analytical method for determining parameters for the JWL Equation of State from cylinder test data is described. This method is applied to four datasets obtained from two 20.3 mm diameter EDC37 cylinder tests. The calculated pressure-relative volume (p-Vr) curves agree with those produced by hydro-code modelling. The average calculated Chapman-Jouguet (CJ) pressure is 38.6 GPa, compared to the model value of 38.3 GPa; the CJ relative volume is 0.729 for both. The analytical pressure-relative volume curves produced agree with the one used in the model out to the commonly reported expansion of 7 relative volumes, as do the predicted energies generated by integrating under the p-Vr curve. The calculated energy is within 1.6% of that predicted by the model.
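The JWL isentrope itself is a three-term expression in relative volume. A sketch using illustrative TNT-like coefficients (textbook values, not the EDC37 parameters derived in the paper), with the expansion energy taken as the area under the p-Vr curve as the abstract describes:

```python
import numpy as np

def jwl_pressure(V, A, B, C, R1, R2, omega):
    """JWL principal isentrope: p(V) = A*exp(-R1*V) + B*exp(-R2*V)
    + C*V**-(1 + omega), with V the relative volume."""
    return A * np.exp(-R1 * V) + B * np.exp(-R2 * V) + C * V ** -(1.0 + omega)

# Illustrative TNT-like coefficients in GPa (assumed textbook values,
# not the EDC37 fit from this work)
params = dict(A=371.2, B=3.231, C=1.045, R1=4.15, R2=0.95, omega=0.30)
V = np.linspace(1.0, 7.0, 601)   # expansion out to 7 relative volumes
p = jwl_pressure(V, **params)

# Energy released over the expansion: trapezoidal area under p-Vr
energy = float(np.sum((p[:-1] + p[1:]) * np.diff(V)) / 2.0)
print(round(float(p[0]), 2), round(energy, 2))
```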
Comparison of Climatological Planetary Boundary Layer Depth Estimates Using the GEOS-5 AGCM
NASA Technical Reports Server (NTRS)
Mcgrath-Spangler, Erica Lynn; Molod, Andrea M.
2014-01-01
Planetary boundary layer (PBL) processes, including those influencing the PBL depth, control many aspects of weather and climate, and accurate models of these processes are important for forecasting future changes. However, evaluation of model estimates of PBL depth is difficult because no consensus on the definition of PBL depth currently exists, and the various methods for estimating this parameter can give results that differ by hundreds of meters or more. In order to facilitate comparisons between the Goddard Earth Observation System (GEOS-5) and other modeling and observational systems, seven PBL depth estimation methods are used to produce PBL depth climatologies, which are evaluated and compared here. All seven methods evaluate the same atmosphere, so all differences are related solely to the definition chosen. These methods depend on the scalar diffusivity, bulk and local Richardson numbers, and the diagnosed horizontal turbulent kinetic energy (TKE). Results are aggregated by climate class in order to allow broad generalizations. The various PBL depth estimates give similar midday results, with some exceptions. One method based on horizontal TKE produces deeper PBL depths in winter, associated with winter storms. In warm, moist conditions, the method based on a bulk Richardson number gives results that are shallower than those given by the methods based on the scalar diffusivity. The impact of turbulence driven by radiative cooling at cloud top is most significant during the evening transition and over several regions across the oceans, and methods sensitive to this cooling produce deeper PBL depths where it is most active. Additionally, Richardson number-based methods collapse better at night than methods that depend on the scalar diffusivity. This feature potentially affects tracer transport.
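The bulk-Richardson-number definition mentioned above can be sketched directly: the PBL top is taken as the lowest level where the bulk Richardson number, computed from the surface, exceeds a critical value. An idealized example (the sounding and the 0.25 threshold are assumptions, not the GEOS-5 configuration):

```python
import numpy as np

G = 9.81         # gravity, m s^-2
RI_CRIT = 0.25   # critical bulk Richardson number (a common choice)

def pbl_depth_bulk_ri(z, theta_v, u, v):
    """PBL depth: lowest level where the bulk Richardson number,
    computed from the surface upward, first exceeds RI_CRIT."""
    ri = G * (theta_v - theta_v[0]) * z / (theta_v[0] * (u**2 + v**2) + 1e-12)
    above = np.nonzero(ri > RI_CRIT)[0]
    return float(z[above[0]]) if above.size else float(z[-1])

# Idealized sounding (assumed): well-mixed below ~1 km, stable above
z = np.arange(50.0, 3000.0, 50.0)                     # heights, m
theta_v = np.where(z < 1000.0, 300.0, 300.0 + 0.005 * (z - 1000.0))
u = np.full_like(z, 5.0)                              # wind, m/s
v = np.zeros_like(z)
depth = pbl_depth_bulk_ri(z, theta_v, u, v)
print(depth)
```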
Abdul Kamal Nazer, Meeran Mohideen; Hameed, Abdul Rahman Shahul; Riyazuddin, Patel
2004-01-01
A simple and rapid potentiometric method for the estimation of ascorbic acid in pharmaceutical dosage forms has been developed. The method is based on treating ascorbic acid with iodine and titrating the iodide produced, equivalent to the ascorbic acid, with silver nitrate using a Copper-Based Mercury Film Electrode (CBMFE) as the indicator electrode. An interference study was carried out to check for possible interference from common excipients and other vitamins. The precision and accuracy of the method were assessed by the application of a lack-of-fit test and other statistical methods. The results of the proposed method and the British Pharmacopoeia method were compared using F- and t-tests of significance.
NASA Astrophysics Data System (ADS)
Mamat, Siti Salwana; Ahmad, Tahir; Awang, Siti Rahmah
2017-08-01
Analytic Hierarchy Process (AHP) is a method for structuring, measuring, and synthesizing criteria, in particular for ranking multiple criteria in decision making problems. The Potential Method, on the other hand, is a ranking procedure that utilizes a preference graph ς(V, A). Two nodes are adjacent if they are compared in a pairwise comparison, with the assigned arc oriented toward the more preferred node. In this paper, the Potential Method is used to solve a catering service selection problem, and its result is compared with that of Extent Analysis. The Potential Method is found to produce the same ranking as Extent Analysis in AHP.
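For context, both ranking procedures start from a reciprocal pairwise comparison matrix. The classic eigenvector prioritization is sketched below (the comparison matrix is assumed, and this is Saaty's standard AHP method, not the Potential Method itself):

```python
import numpy as np

def ahp_priorities(M):
    """Criterion weights from a reciprocal pairwise comparison matrix
    via the principal eigenvector (Saaty's eigenvector method)."""
    vals, vecs = np.linalg.eig(M)
    w = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return w / w.sum()

# Assumed 3-criterion comparison matrix on Saaty's 1-9 scale
M = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   2.0],
              [1/5.0, 1/2.0, 1.0]])
w = ahp_priorities(M)
ranking = np.argsort(-w)   # indices of criteria, best first
print(np.round(w, 3), ranking)
```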
Connors, Bret A.; Evan, Andrew P.; Blomgren, Philip M.; Hsi, Ryan S.; Harper, Jonathan D.; Sorensen, Mathew D.; Wang, Yak-Nam; Simon, Julianna C.; Paun, Marla; Starr, Frank; Cunitz, Bryan W.; Bailey, Michael R.; Lingeman, James E.
2013-01-01
Purpose Focused ultrasonic propulsion is a new non-invasive technique designed to move kidney stones and stone fragments out of the urinary collecting system. However, the extent of tissue injury associated with this technique is not known. As such, we quantitated the amount of tissue injury produced by focused ultrasonic propulsion under simulated clinical treatment conditions, and under conditions of higher power or continuous duty cycles, and compared those results to SWL injury. Materials and Methods A human calcium oxalate monohydrate stone and/or nickel beads were implanted (with ureteroscopy) into 3 kidneys of live pigs (45–55 kg) and repositioned using focused ultrasonic propulsion. Additional pig kidneys were exposed to SWL level pulse intensities or continuous ultrasound exposure of 10 minutes duration (ultrasound probe either transcutaneous or on the kidney). These kidneys were compared to 6 kidneys treated with an unmodified Dornier HM3 Lithotripter (2400 shocks, 120 SWs/min and 24 kV). Histological analysis was performed to assess the volume of hemorrhagic tissue injury created by each technique (% functional renal volume, FRV). Results SWL produced a lesion of 1.56±0.45% FRV. Ultrasonic propulsion produced no detectable lesion with the simulated clinical treatment. A lesion of 0.46±0.37% FRV or 1.15±0.49% FRV could be produced if excessive treatment parameters were used while the ultrasound probe was placed on the kidney. Conclusions Focused ultrasonic propulsion produced no detectable morphological injury to the renal parenchyma when using clinical treatment parameters and produced injury comparable in size to SWL when using excessive treatment parameters. PMID:23917165
Effect of Phosphoric Acid Concentration on the Characteristics of Sugarcane Bagasse Activated Carbon
NASA Astrophysics Data System (ADS)
Adib, M. R. M.; Suraya, W. M. S. W.; Rafidah, H.; Amirza, A. R. M.; Attahirah, M. H. M. N.; Hani, M. S. N. Q.; Adnan, M. S.
2016-07-01
Impregnation is one of the crucial steps in producing activated carbon by chemical activation. The chemicals employed in this step are effective at decomposing the structure of the material and forming micropores that help in the adsorption of contaminants. This paper details the procedures involved in producing sugarcane bagasse activated carbon (SBAC) using 5%, 10%, 20%, and 30% phosphoric acid (H3PO4) during the impregnation step. The concentration of H3PO4 used in producing SBAC was optimized through several tests, including bulk density, ash content, iodine adsorption, and pore size diameter, and the characteristics of the optimum SBAC produced were compared with commercial activated carbon (CAC). A batch study was carried out using the SBAC produced under the optimum condition to investigate its performance in removing turbidity and chemical oxygen demand (COD) from textile wastewater. In the characterization study, SBAC prepared with 30% H3PO4 showed optimum values of bulk density, ash content, iodine adsorption, and pore size diameter of 0.3023 g cm-3, 4.35%, 974.96 mg/g, and 0.21-0.41 µm, respectively. These values are comparable to the characteristics of CAC. The batch study showed that the SBAC has promising potential, removing 75.5% of turbidity and 66.3% of COD, slightly lower than CAC, which removed 82.8% of turbidity and 70% of COD. In conclusion, the SBAC is comparable with CAC in terms of its characteristics and its capability of removing contaminants from textile wastewater, and it has commercial value as a low-cost alternative to CAC.
Bias correction for selecting the minimal-error classifier from many machine learning models.
Ding, Ying; Tang, Shaowu; Liao, Serena G; Jia, Jia; Oesterreich, Steffi; Lin, Yan; Tseng, George C
2014-11-15
Supervised machine learning is commonly applied in genomic research to construct a classifier from training data that is generalizable to predict independent testing data. When test datasets are not available, cross-validation is commonly used to estimate the error rate. Many machine learning methods are available, and it is well known that no universally best method exists in general. It has been common practice to apply many machine learning methods and report the one that produces the smallest cross-validation error rate. Theoretically, such a procedure produces a selection bias. Consequently, many clinical studies with moderate sample sizes (e.g. n = 30-60) risk reporting a falsely small cross-validation error rate that cannot be validated later in independent cohorts. In this article, we illustrate the probabilistic framework of the problem and explore its statistical and asymptotic properties. We propose a new bias correction method based on learning curve fitting by inverse power law (IPL) and compare it with three existing methods: nested cross-validation, weighted mean correction, and the Tibshirani-Tibshirani procedure. All methods were compared on simulated datasets, five moderate-size real datasets, and two large breast cancer datasets. The results showed that IPL outperforms the other methods in bias correction, with smaller variance, and has the additional advantage of extrapolating error estimates to larger sample sizes, a practical feature for recommending whether more samples should be recruited to improve the classifier's accuracy. An R package 'MLbias' and all source files are publicly available at tsenglab.biostat.pitt.edu/software.htm. ctseng@pitt.edu Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
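The learning-curve idea behind the IPL correction can be sketched with a log-log fit (the error values are assumed for illustration, and the asymptotic error c is taken as negligible for simplicity, unlike the full three-parameter fit in the paper):

```python
import numpy as np

# Assumed cross-validation error estimates at increasing training sizes
n = np.array([20, 30, 40, 60, 80, 120], dtype=float)
err = np.array([0.31, 0.26, 0.23, 0.20, 0.18, 0.16])

# Inverse power law e(n) = a * n**-b, fitted as a line in log-log space
# (the asymptotic error c of the full model is taken as ~0 here)
slope, intercept = np.polyfit(np.log(n), np.log(err), 1)
a, b = float(np.exp(intercept)), float(-slope)

def predict(m):
    """Extrapolated error rate at a larger sample size m."""
    return a * m ** -b

e200 = predict(200.0)
print(round(e200, 3))  # extrapolated error for a larger cohort
```

Extrapolating the fitted curve to larger n is what supports the "should we recruit more samples?" recommendation described in the abstract.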
Demons versus Level-Set motion registration for coronary 18F-sodium fluoride PET.
Rubeaux, Mathieu; Joshi, Nikhil; Dweck, Marc R; Fletcher, Alison; Motwani, Manish; Thomson, Louise E; Germano, Guido; Dey, Damini; Berman, Daniel S; Newby, David E; Slomka, Piotr J
2016-02-27
Ruptured coronary atherosclerotic plaques commonly cause acute myocardial infarction. It has been recently shown that active microcalcification in the coronary arteries, one of the features that characterizes vulnerable plaques at risk of rupture, can be imaged using cardiac gated 18F-sodium fluoride (18F-NaF) PET. We have shown in previous work that a motion correction technique applied to cardiac-gated 18F-NaF PET images can enhance image quality and improve uptake estimates. In this study, we further investigated the applicability of different algorithms for registration of the coronary artery PET images. In particular, we aimed to compare demons vs. level-set nonlinear registration techniques applied for the correction of cardiac motion in coronary 18F-NaF PET. To this end, fifteen patients underwent 18F-NaF PET and prospective coronary CT angiography (CCTA). PET data were reconstructed in 10 ECG gated bins; subsequently these gated bins were registered using demons and level-set methods guided by the extracted coronary arteries from CCTA, to eliminate the effect of cardiac motion on PET images. Noise levels, target-to-background ratios (TBR) and global motion were compared to assess image quality. Compared to the reference standard of using only the diastolic PET image (25% of the counts from PET acquisition), cardiac motion registration using either level-set or demons techniques almost halved image noise due to the use of counts from the full PET acquisition and increased the TBR difference between 18F-NaF positive and negative lesions. The demons method produces smoother deformation fields, exhibiting no singularities (which reflects how physically plausible the registration deformation is), as compared to the level-set method, which presents between 4 and 8% of singularities, depending on the coronary artery considered.
In conclusion, the demons method produces smoother motion fields than the level-set method, with motion that is physiologically plausible; the level-set technique will therefore likely require additional post-processing steps. On the other hand, the observed TBR increases were highest for the level-set technique. Further investigation of the optimal registration approach for this novel coronary PET imaging technique is warranted.
Demons versus level-set motion registration for coronary 18F-sodium fluoride PET
NASA Astrophysics Data System (ADS)
Rubeaux, Mathieu; Joshi, Nikhil; Dweck, Marc R.; Fletcher, Alison; Motwani, Manish; Thomson, Louise E.; Germano, Guido; Dey, Damini; Berman, Daniel S.; Newby, David E.; Slomka, Piotr J.
2016-03-01
Ruptured coronary atherosclerotic plaques commonly cause acute myocardial infarction. It has been recently shown that active microcalcification in the coronary arteries, one of the features that characterizes vulnerable plaques at risk of rupture, can be imaged using cardiac gated 18F-sodium fluoride (18F-NaF) PET. We have shown in previous work that a motion correction technique applied to cardiac-gated 18F-NaF PET images can enhance image quality and improve uptake estimates. In this study, we further investigated the applicability of different algorithms for registration of the coronary artery PET images. In particular, we aimed to compare demons vs. level-set nonlinear registration techniques applied for the correction of cardiac motion in coronary 18F-NaF PET. To this end, fifteen patients underwent 18F-NaF PET and prospective coronary CT angiography (CCTA). PET data were reconstructed in 10 ECG gated bins; subsequently these gated bins were registered using demons and level-set methods guided by the extracted coronary arteries from CCTA, to eliminate the effect of cardiac motion on PET images. Noise levels, target-to-background ratios (TBR) and global motion were compared to assess image quality. Compared to the reference standard of using only diastolic PET image (25% of the counts from PET acquisition), cardiac motion registration using either level-set or demons techniques almost halved image noise due to the use of counts from the full PET acquisition and increased TBR difference between 18F-NaF positive and negative lesions. The demons method produces smoother deformation fields, exhibiting no singularities (which reflects how physically plausible the registration deformation is), as compared to the level-set method, which presents between 4 and 8% of singularities, depending on the coronary artery considered. 
In conclusion, the demons method produces smoother motion fields than the level-set method, with motion that is physiologically plausible; the level-set technique will therefore likely require additional post-processing steps. On the other hand, the observed TBR increases were highest for the level-set technique. Further investigation of the optimal registration approach for this novel coronary PET imaging technique is warranted.
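The deformation-field singularities quantified above correspond to voxels where the Jacobian determinant of the transform x → x + u(x) is non-positive, i.e. where the mapping folds onto itself. A minimal NumPy sketch of that check, with array layout and names chosen for illustration rather than taken from the study:

```python
import numpy as np

def singularity_fraction(dvf):
    """Fraction of voxels where the deformation x -> x + u(x) folds,
    i.e. where det(I + grad u) <= 0.

    dvf: displacement field of shape (nx, ny, nz, 3), in voxel units.
    """
    # partial derivatives of each displacement component along each axis
    grads = [np.gradient(dvf[..., c], axis=(0, 1, 2)) for c in range(3)]
    # Jacobian of the transform: J = I + du/dx
    J = np.empty(dvf.shape[:3] + (3, 3))
    for c in range(3):
        for a in range(3):
            J[..., c, a] = grads[c][a]
        J[..., c, c] += 1.0
    det = np.linalg.det(J)
    return np.mean(det <= 0.0)
```

A zero fraction (as reported for the demons fields) means the deformation is diffeomorphic everywhere on the voxel grid, while the 4-8% figure for level-set indicates local folding that post-processing would need to remove.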
Optimized digital filtering techniques for radiation detection with HPGe detectors
NASA Astrophysics Data System (ADS)
Salathe, Marco; Kihm, Thomas
2016-02-01
This paper describes state-of-the-art digital filtering techniques that are part of GEANA, an automatic data analysis software used for the GERDA experiment. The discussed filters include a novel, nonlinear correction method for ballistic deficits, which is combined with one of three shaping filters: a pseudo-Gaussian, a modified trapezoidal, or a modified cusp filter. The performance of the filters is demonstrated with a 762 g Broad Energy Germanium (BEGe) detector, produced by Canberra, that measures γ-ray lines from radioactive sources in an energy range between 59.5 and 2614.5 keV. At 1332.5 keV, together with the ballistic deficit correction method, all filters produce a comparable energy resolution of 1.61 keV FWHM. This value is superior to those measured by the manufacturer and those found in publications with detectors of a similar design and mass. At 59.5 keV, the modified cusp filter without a ballistic deficit correction produced the best result, with an energy resolution of 0.46 keV. It is observed that the loss in resolution by using a constant shaping time over the entire energy range is small when using the ballistic deficit correction method.
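The trapezoidal shaping mentioned above can be realized as the difference of two delayed moving averages of the digitized trace; the sketch below is a generic illustration of that idea, not the GEANA implementation, and its parameter values are hypothetical:

```python
import numpy as np

def trapezoidal_shaper(x, rise=100, flat=20):
    """Trapezoidal shaping of a digitized preamplifier trace.

    A step edge in `x` becomes a trapezoid with a linear rise/fall of
    `rise` samples and a flat top of `flat` samples whose height equals
    the step amplitude.
    """
    # moving average of width `rise`
    ma = np.convolve(x, np.ones(rise) / rise, mode="full")
    # subtract a delayed copy of the moving average from itself
    return ma[rise + flat:] - ma[:-(rise + flat)]
```

The pulse height would normally be sampled on the flat top; ballistic deficit (charge still arriving during the rise) distorts that plateau, which is what the paper's nonlinear correction compensates for before shaping.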
Dynamically Evolving Sectors for Convective Weather Impact
NASA Technical Reports Server (NTRS)
Drew, Michael C.
2010-01-01
A new strategy for altering existing sector boundaries in response to blocking convective weather is presented. This method seeks to improve the reduced capacity of sectors directly affected by weather by moving boundaries in a direction that offers the greatest capacity improvement. The boundary deformations are shared by neighboring sectors within the region in a manner that preserves their shapes and sizes as much as possible. This reduces the controller workload involved with learning new sector designs. The algorithm that produces the altered sectors is based on a force-deflection mesh model that needs only nominal traffic patterns and the shape of the blocking weather for input. It does not require weather-affected traffic patterns that would have to be predicted by simulation. When compared to an existing optimal sector design method, the sectors produced by the new algorithm are more similar to the original sector shapes, resulting in sectors that may be more suitable for operational use because the change is not as drastic. Also, preliminary results show that this method produces sectors that can equitably distribute the workload of rerouted weather-affected traffic throughout the region where inclement weather is present. This is demonstrated by sector aircraft count distributions of simulated traffic in weather-affected regions.
Komagoe, Keiko; Katsu, Takashi
2006-02-01
A luminol chemiluminescence method was used to evaluate the porphyrin-induced photogeneration of hydrogen peroxide (H2O2). This method enabled us to detect H2O2 in the presence of a high concentration of porphyrin, which was not possible using conventional colorimetry. The limit of detection was about 1 microM. We compared the ability to generate H2O2 using uroporphyrin (UP), hexacarboxylporphyrin (HCP), coproporphyrin (CP), hematoporphyrin (HP), mesoporphyrin (MP), and protoporphyrin (PP). The amount of H2O2 photoproduced was strongly related to the state of the porphyrin in the aqueous solution. UP and HCP, which existed predominantly in a monomeric form, had a good ability to produce H2O2. HP and MP, existing as dimers, showed weak activity. CP, forming a mixture of monomer and dimer, had a moderate ability to produce H2O2. PP, which was highly aggregated, showed little ability. These results demonstrated that the efficiency of porphyrins to produce H2O2 was strongly dependent on their aggregated form, and the dimer suppressed the production of H2O2.
Production of vertical arrays of small diameter single-walled carbon nanotubes
Hauge, Robert H; Xu, Ya-Qiong
2013-08-13
A hot filament chemical vapor deposition method has been developed to grow at least one vertical single-walled carbon nanotube (SWNT). In general, various embodiments of the present invention disclose novel processes for growing and/or producing enhanced nanotube carpets with decreased diameters as compared to the prior art.
ERIC Educational Resources Information Center
Birkeland, Asta
2013-01-01
Photo elicitation has become an important method for producing data in qualitative research. There is a fairly extensive literature indicating the benefits of photo elicitation in facilitating collaboration in meaning making between the researcher and the interviewee. This article addresses dilemmas associated with using photo elicitation in a…
USDA-ARS?s Scientific Manuscript database
Pecan scab (Venturia effusa) is the major yield-limiting disease in the southeastern USA. Although conventional fungicides are available to manage the disease, there is no comparison of organic methods (organically produced nuts attract a higher price). In 2011, 2012, 2014, 2015 and 2016 trees of cv...
Are insular populations of the Philippine falconet (Microhierax erythrogenys) steps in a cline?
Todd E. Katzner; Nigel J. Collar
2013-01-01
Founder effects, new environments, and competition often produce changes in species colonizing islands, although the resulting endemism sometimes requires molecular identification. One method to identify fruitful areas for more detailed genetic study is through comparative morphological analyses. We measured 210 museum specimens to evaluate the potential morphological...
Application of Computer Aided Mathematics Teaching in a Secondary School
ERIC Educational Resources Information Center
Yenitepe, Mehmet Emin; Karadag, Zekeriya
2003-01-01
This is a case study examining the effect on students' academic achievement of using presentations developed by the teacher, in addition to a commercially produced educational software CD-ROM in the Audio-Visual Room/Computer Laboratory after classroom teaching, as a method of teaching mathematics compared with classroom teaching alone or after…
Synthesis of pulping processes with fiber loading methods for lightweight papers
John H. Klungness; Roland Gleisner; Masood Akhtar; Eric G. Horn; Mike Lentz
2003-01-01
Pulping technologies can be synthesized with fiber loading with simultaneous alkaline peroxide bleaching to produce lightweight high-opacity printing papers. We compared the results of recent experiments on combining oxalic acid pretreated wood chips used for thermomechanical pulp (TMP) with fiber loading and previous experiments on combining similar pulps treated with...
USDA-ARS?s Scientific Manuscript database
As scientists, producers, policymakers, and the general public become more concerned about impacts of climate change, there is an increasing need to understand and quantify greenhouse gas emissions from agricultural practices, which often feed into global, multi-institution databases. Current best p...
Aerial photo interpretation of understories in two Oregon oak stands.
H. Gyde Lund; George R. Fahnestock; John F. Wear
1967-01-01
Aerial color photography has shown promise for evaluating understory vegetation as a forest-fire fuel. Mapping understory vegetation from special aerial photography produced results reasonably similar to those obtained by an independent ground check. Differences in the methods used in the exploratory work prevented strict comparability, but agreement was close enough...
Effect of Curriculum Change on Exam Performance in a 4-Week Psychiatry Clerkship
ERIC Educational Resources Information Center
Niedermier, Julie; Way, David; Kasick, David; Kuperschmidt, Rada
2010-01-01
Objective: The authors investigated whether curriculum change could produce improved performance, despite a reduction in clerkship length from 8 to 4 weeks. Methods: The exam performance of medical students completing a 4-week clerkship in psychiatry was compared to national data from the National Board of Medical Examiners' Psychiatry Subject…
Comparing extinction risk and economic cost in wildlife conservation planning
Robert G. Haight
1995-01-01
Planning regulations pursuant to the National Forest Management Act of 1976 require the USDA Forest Service to produce cost-effective, multiple-use forest plans that ensure the viability of native wildlife populations within the planning area. In accordance with these regulations, this paper presents a method for determining cost-effective conservation plans for...
Tongue-Palate Contact Pressure, Oral Air Pressure, and Acoustics of Clear Speech
ERIC Educational Resources Information Center
Searl, Jeff; Evitts, Paul M.
2013-01-01
Purpose: The authors compared articulatory contact pressure (ACP), oral air pressure (Po), and speech acoustics for conversational versus clear speech. They also assessed the relationship of these measures to listener perception. Method: Twelve adults with normal speech produced monosyllables in a phrase using conversational and clear speech.…
Apparatus for measuring resistance change only in a cell analyzer and method for calibrating it
Hoffman, Robert A.
1980-01-01
The disclosure relates to resistance-only monitoring and calibration in an electrical cell analyzer. Sample and sheath fluid flows of different salinities are utilized, with the sample flow diameter-modulated to produce a selected pattern which is compared to the resistance measured across the flows.
ERIC Educational Resources Information Center
IRIS Center, 2014
2014-01-01
This report presents two studies that demonstrate superior performance of college students after use of free, online instructional Modules, produced by the IRIS Center. When compared to traditional teacher education methods, IRIS Modules yield better outcomes in terms of knowledge acquisition, application skills, and confidence in the use of…
A Curriculum Development Model Based on Deforestation and the Work of Kafka.
ERIC Educational Resources Information Center
Kember, David
1991-01-01
A tongue-in-cheek look at methods used for curriculum development by many colleges and universities compares the process to two others: destruction of trees and trial by ordeal. Forests are destroyed to produce large quantities of paper for printing of curricula in many versions, followed by Kafkaesque committee scrutiny. (MSE)
Autoclave method for rapid preparation of bacterial PCR-template DNA.
Simmon, Keith E; Steadman, Dewey D; Durkin, Sarah; Baldwin, Amy; Jeffrey, Wade H; Sheridan, Peter; Horton, Rene; Shields, Malcolm S
2004-02-01
An autoclave method for preparing bacterial DNA for PCR template is presented; it eliminates the use of detergents, organic solvents, and mechanical cellular disruption approaches, thereby significantly reducing processing time and costs while increasing reproducibility. Bacteria are lysed by rapid heating and depressurization in an autoclave. The lysate, cleared by microcentrifugation, was either used directly in the PCR or concentrated by ultrafiltration. This approach was compared with seven established methods of DNA template preparation from four bacterial sources, which included boiling, Triton X-100 and SDS, bead beating, lysozyme/proteinase K, and CTAB lysis method components. Bacteria examined were Enterococcus and Escherichia coli, a natural marine bacterial community, and an Antarctic cyanobacterial mat. DNAs were tested for their suitability as PCR templates by repetitive element random amplified polymorphic DNA (RAPD) and denaturing gradient gel electrophoresis (DGGE) analysis. The autoclave method produced PCR-amplifiable template comparable or superior to the other methods, with greater reproducibility, much shorter processing time, and significantly lower cost.
NASA Astrophysics Data System (ADS)
Royani, J. I.; Safarrida, A.; Rachmawati, I.; Khairiyah, H.; Mustika, I. P.; Suyono, A.; Rudiyana, Y.; Kubil; Nurjaya; Arianto, A.
2017-05-01
Rubber from Hevea brasiliensis is the only commercial natural rubber in the world. Propagation of rubber trees is usually done by grafting and seed germination. BPPT has been producing rubber trees by an in vitro technique using somatic embryo methods. Validation of the mother plant for in vitro propagation is important in order to compare the mother plant with the propagated plants. The aim of this research was to validate the PB 260 clone planted at the Cikumpay Plantation using SSR markers. Ten rubber leaf samples were collected at the Cikumpay Plantation based on GPS positions within the area of the PB 260 clone. DNA was obtained from the rubber leaves using a modified CTAB isolation method. Four rubber SSR primers, i.e., hmac 4, hmac 5, hmct 1, and hmct 5, were used to amplify the rubber DNA. The results showed no divergent bands among the 10 rubber samples of the PB 260 clone at the Cikumpay Plantation. This research will continue by comparing genomic validation between the mother plant and the propagated plants produced by BPPT.
Zhong, Sihua; Wang, Wenjie; Tan, Miao; Zhuang, Yufeng
2017-01-01
Large-scale (156 mm × 156 mm) quasi-omnidirectional solar cells are successfully realized, maintaining high cell performance over broad incident angles (θ), by employing Si nanopyramids (SiNPs) as the surface texture. SiNPs are produced by the proposed metal-assisted alkaline etching method, an all-solution-processed approach that is simple and cost-effective. Interestingly, compared to conventional Si micropyramid (SiMP)-textured solar cells, the SiNP-textured solar cells possess lower carrier recombination and thus superior electrical performance, showing notable distinctions from other Si nanostructure-textured solar cells. Furthermore, SiNP-textured solar cells show very little drop in quantum efficiency with increasing θ, demonstrating their quasi-omnidirectional characteristic. As an overall result, both the SiNP-textured homojunction and heterojunction solar cells deliver higher daily electric energy production, with a maximum relative enhancement approaching 2.5% compared to their SiMP-textured counterparts. The quasi-omnidirectional solar cell opens a new opportunity for photovoltaics to produce more electric energy at low cost. PMID:29201616
Zhong, Sihua; Wang, Wenjie; Tan, Miao; Zhuang, Yufeng; Shen, Wenzhong
2017-11-01
Large-scale (156 mm × 156 mm) quasi-omnidirectional solar cells are successfully realized, maintaining high cell performance over broad incident angles (θ), by employing Si nanopyramids (SiNPs) as the surface texture. SiNPs are produced by the proposed metal-assisted alkaline etching method, an all-solution-processed approach that is simple and cost-effective. Interestingly, compared to conventional Si micropyramid (SiMP)-textured solar cells, the SiNP-textured solar cells possess lower carrier recombination and thus superior electrical performance, showing notable distinctions from other Si nanostructure-textured solar cells. Furthermore, SiNP-textured solar cells show very little drop in quantum efficiency with increasing θ, demonstrating their quasi-omnidirectional characteristic. As an overall result, both the SiNP-textured homojunction and heterojunction solar cells deliver higher daily electric energy production, with a maximum relative enhancement approaching 2.5% compared to their SiMP-textured counterparts. The quasi-omnidirectional solar cell opens a new opportunity for photovoltaics to produce more electric energy at low cost.
A sampling and classification item selection approach with content balancing.
Chen, Pei-Hua
2015-03-01
Existing automated test assembly methods typically employ constrained combinatorial optimization. Constructing forms sequentially based on an optimization approach usually results in nonparallel forms and requires heuristic modifications. Methods based on a random search approach have the major advantage of producing parallel forms sequentially without further adjustment. This study incorporated a flexible content-balancing element into the statistical-perspective item selection method of the cell-only method (Chen et al. in Educational and Psychological Measurement, 72(6), 933-953, 2012). The new method was compared with a sequential interitem distance weighted deviation model (IID WDM) (Swanson & Stocking in Applied Psychological Measurement, 17(2), 151-166, 1993), a simultaneous IID WDM, and a big-shadow-test mixed integer programming (BST MIP) method to construct multiple parallel forms based on matching a reference form item-by-item. The results showed that the cell-only method with content balancing and the sequential and simultaneous versions of IID WDM yielded results comparable to those obtained using the BST MIP method. The cell-only method with content balancing is computationally less intensive than the sequential and simultaneous versions of IID WDM.
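The sampling-and-classification idea behind such cell-based selection can be sketched as follows: classify the pool into cells (here, hypothetically, by content area and a binned statistical property), then assemble a parallel form by sampling an unused item from the cell of each reference item. The cell definition, field names, and bin width below are illustrative, not those of the cell-only method:

```python
import random
from collections import defaultdict

def build_parallel_form(pool, reference, bin_width=0.2, rng=None):
    """Assemble a form parallel to `reference` item-by-item.

    Each item is a dict with a 'content' label and a 'difficulty' value;
    two items fall in the same cell when their content matches and their
    difficulties land in the same bin of width `bin_width`.
    """
    rng = rng or random.Random()
    cell = lambda it: (it["content"], int(it["difficulty"] / bin_width))
    cells = defaultdict(list)
    for it in pool:
        cells[cell(it)].append(it)
    form = []
    for ref in reference:
        # sample randomly among pool items in the matching cell
        candidates = [it for it in cells[cell(ref)] if it not in form]
        if not candidates:
            raise ValueError("pool has no unused item for this cell")
        form.append(rng.choice(candidates))
    return form
```

Because each selection is a random draw rather than an optimization step, repeated calls yield multiple parallel forms without the sequential-degradation problem the abstract notes for optimization-based assembly.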
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, L; Yin, F; Cai, J
Purpose: To develop a methodology for constructing a physiological-based virtual thorax phantom based on hyperpolarized (HP) gas tagging MRI for evaluating deformable image registration (DIR). Methods: Three healthy subjects were imaged at both the end-of-inhalation (EOI) and the end-of-exhalation (EOE) phases using a high-resolution (2.5mm isovoxel) 3D proton MRI, as well as a hybrid MRI which combines HP gas tagging MRI and a low-resolution (4.5mm isovoxel) proton MRI. A sparse tagging displacement vector field (tDVF) was derived from the HP gas tagging MRI by tracking the displacement of tagging grids between EOI and EOE. Using the tDVF and the high-resolution MR images, we determined the motion model of the entire thorax in the following two steps: 1) the DVF inside of the lungs was estimated based on the sparse tDVF using a novel multi-step natural neighbor interpolation method; 2) the DVF outside of the lungs was estimated from the DIR between the EOI and EOE images (Velocity AI). The derived motion model was then applied to the high-resolution EOI image to create a deformed EOE image, forming the virtual phantom where the motion model provides the ground truth of deformation. Five DIR methods were evaluated using the developed virtual phantom. Errors in DVF magnitude (Em) and angle (Ea) were determined and compared for each DIR method. Results: Among the five DIR methods, free-form deformation produced DVF results most closely resembling the ground truth (Em=1.04mm, Ea=6.63°). The two DIR methods based on B-splines produced comparable results (Em=2.04mm, Ea=13.66°; and Em=2.62mm, Ea=17.67°), and the two optical-flow methods produced the least accurate results (Em=7.8mm, Ea=53.04°; Em=4.45mm, Ea=31.02°). Conclusion: A methodology for constructing a physiological-based virtual thorax phantom based on HP gas tagging MRI has been developed. Initial evaluation demonstrated its potential as an effective tool for robust evaluation of DIR in the lung.
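The error measures Em and Ea used above — mean magnitude and mean angular deviation between an estimated DVF and the ground truth — reduce to a few array operations; a sketch under the assumption that both fields are stored as (..., 3) arrays of displacement vectors:

```python
import numpy as np

def dvf_errors(dvf_est, dvf_true, eps=1e-12):
    """Mean magnitude error (same units as the DVF, e.g. mm) and mean
    angular error (degrees) between two displacement vector fields of
    shape (..., 3)."""
    # Em: mean Euclidean distance between corresponding vectors
    em = np.mean(np.linalg.norm(dvf_est - dvf_true, axis=-1))
    # Ea: mean angle between corresponding vectors
    dot = np.sum(dvf_est * dvf_true, axis=-1)
    norms = np.linalg.norm(dvf_est, axis=-1) * np.linalg.norm(dvf_true, axis=-1)
    cos = np.clip(dot / np.maximum(norms, eps), -1.0, 1.0)
    ea = np.degrees(np.mean(np.arccos(cos)))
    return em, ea
```

The clipping guards against floating-point values marginally outside [-1, 1] before `arccos`; how near-zero vectors are handled (here via `eps`) is a choice the abstract does not specify.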
Primate comparative neuroscience using magnetic resonance imaging: promises and challenges
Mars, Rogier B.; Neubert, Franz-Xaver; Verhagen, Lennart; Sallet, Jérôme; Miller, Karla L.; Dunbar, Robin I. M.; Barton, Robert A.
2014-01-01
Primate comparative anatomy is an established field that has made rich and substantial contributions to neuroscience. However, the labor-intensive techniques employed mean that most comparisons are often based on a small number of species, which limits the conclusions that can be drawn. In this review we explore how new developments in magnetic resonance imaging have the potential to apply comparative neuroscience to a much wider range of species, allowing it to realize an even greater potential. We discuss (1) new advances in the types of data that can be acquired, (2) novel methods for extracting meaningful measures from such data that can be compared between species, and (3) methods to analyse these measures within a phylogenetic framework. Together these developments will allow researchers to characterize the relationship between different brains, the ecological niche they occupy, and the behavior they produce in more detail than ever before. PMID:25339857
2013-01-01
Background γ-Amino butyric acid (GABA) is a major inhibitory neurotransmitter of the mammalian central nervous system that plays a vital role in regulating neurological functions. The enzyme responsible for producing GABA is glutamate decarboxylase (GAD), an intracellular enzyme that both the food and pharmaceutical industries are currently using as the major catalyst in trial biotransformation processes for GABA. We have successfully isolated a novel strain of Aspergillus oryzae NSK that possesses a relatively high GABA biosynthesizing capability compared to other reported GABA-producing fungal strains, indicating the presence of an active GAD. This finding prompted us to explore an effective method to recover the maximum amount of GAD for further studies on the GAD's biochemical and kinetic properties. The extraction techniques examined were enzymatic lysis, chemical permeabilization, and mechanical disruption. Under the GAD activity assay used, one unit of GAD activity is expressed as 1 μmol of GABA produced per min per ml enzyme extract (U/ml), while the specific activity is expressed as U/mg protein. Results Mechanical disruption by sonication, which yielded 1.99 U/mg of GAD, was by far the most effective cell disintegration method compared with the other extraction procedures examined. In contrast, the second most effective method, freeze grinding followed by 10% v/v toluene permeabilization at 25°C for 120 min, yielded only 1.17 U/mg of GAD, about 1.7-fold lower than the sonication method. Optimized enzymatic lysis with 3 mg/ml Yatalase® at 60°C for 30 min was the least effective, yielding only 0.70 U/mg of GAD. Extraction using sonication was further optimized using a one-variable-at-a-time (OVAT) approach. The results obtained show that the yield of GAD increased from 1.99 U/mg to 3.50 U/mg, a 1.76-fold improvement. Conclusion Of the techniques used to extract GAD from A. oryzae NSK, sonication was found to be the best.
Under optimized conditions, about 1.76 times as much GAD was recovered as under non-optimized conditions. The high production level of GAD in this strain offers an opportunity to conduct further studies on GABA production at a larger scale. PMID:24321181
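The unit definitions above (1 U = 1 μmol GABA produced per min per ml extract; specific activity in U per mg protein) reduce to simple arithmetic; a small illustrative helper with hypothetical assay numbers, not values from the study:

```python
def gad_specific_activity(gaba_umol, minutes, extract_ml, protein_mg_per_ml):
    """Specific GAD activity in U/mg protein from one assay measurement.

    gaba_umol:          micromoles of GABA produced during the assay
    minutes:            assay duration
    extract_ml:         volume of enzyme extract assayed
    protein_mg_per_ml:  protein concentration of the extract
    """
    u_per_ml = gaba_umol / minutes / extract_ml  # 1 U = 1 umol/min/ml
    return u_per_ml / protein_mg_per_ml
```

For example, an extract producing 60 μmol GABA in 30 min from 1 ml at 1 mg/ml protein would come out at 2 U/mg, in the same range as the sonication yields reported above.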
Galaxy two-point covariance matrix estimation for next generation surveys
NASA Astrophysics Data System (ADS)
Howlett, Cullan; Percival, Will J.
2017-12-01
We perform a detailed analysis of the covariance matrix of the spherically averaged galaxy power spectrum and present a new, practical method for estimating this within an arbitrary survey without the need for running mock galaxy simulations that cover the full survey volume. The method uses theoretical arguments to modify the covariance matrix measured from a set of small-volume cubic galaxy simulations, which are computationally cheap to produce compared to larger simulations and match the measured small-scale galaxy clustering more accurately than is possible using theoretical modelling. We include prescriptions to analytically account for the window function of the survey, which convolves the measured covariance matrix in a non-trivial way. We also present a new method to include the effects of super-sample covariance and modes outside the small simulation volume which requires no additional simulations and still allows us to scale the covariance matrix. As validation, we compare the covariance matrix estimated using our new method to that from a brute-force calculation using 500 simulations originally created for analysis of the Sloan Digital Sky Survey Main Galaxy Sample. We find excellent agreement on all scales of interest for large-scale structure analysis, including those dominated by the effects of the survey window, and on scales where theoretical models of the clustering normally break down, but the new method produces a covariance matrix with significantly better signal-to-noise ratio. Although only formally correct in real space, we also discuss how our method can be extended to incorporate the effects of redshift space distortions.
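The brute-force estimate that the new method is validated against — the unbiased sample covariance of P(k) over mock realizations — is worth writing down explicitly; array shapes and names here are illustrative:

```python
import numpy as np

def pk_covariance(pk_mocks):
    """Unbiased sample covariance of P(k) over mock realizations.

    pk_mocks: array of shape (n_mocks, n_kbins), one measured power
    spectrum per mock. Returns the (n_kbins, n_kbins) covariance matrix.
    """
    diff = pk_mocks - pk_mocks.mean(axis=0)
    return diff.T @ diff / (pk_mocks.shape[0] - 1)
```

The inverse of such an estimate is noisy unless the number of mocks greatly exceeds the number of k-bins (the usual Hartlap-factor caveat), which is part of the motivation for building the covariance from cheap small-volume boxes plus analytic window and super-sample corrections, as the paper proposes.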