Yamada, Kageto; Kashiwa, Machiko; Arai, Katsumi; Nagano, Noriyuki; Saito, Ryoichi
2016-09-01
We compared three screening methods for carbapenemase-producing Enterobacteriaceae. While the Modified-Hodge test and Carba NP test produced false-negative results for OXA-48-like and mucoid NDM producers, the carbapenem inactivation method (CIM) showed positive results for these isolates. Although the CIM required cultivation time, it is well suited for general clinical laboratories. Copyright © 2016 Elsevier B.V. All rights reserved.
Improved Methods of Producing and Administering Extracellular Vesicles | Poster
An efficient method of producing purified extracellular vesicles (EVs), in conjunction with a method that blocks liver macrophages from clearing EVs from the body, has produced promising results for the use of EVs in cancer therapy.
Improvement of seawater salt quality by hydro-extraction and re-crystallization methods
NASA Astrophysics Data System (ADS)
Sumada, K.; Dewati, R.; Suprihatin
2018-01-01
Indonesia is one of the salt-producing countries that use seawater as a raw material, so the quality of the salt produced depends on the quality of the seawater; the resulting salt typically contains 85-90% NaCl. The Indonesian National Standard (SNI) requires a sodium chloride content of 94.7% (dry basis) for consumption salt and 98.5% for industrial salt. In this study, a chemical-free re-crystallization method and a hydro-extraction method were developed, with the objective of selecting the better method on the basis of efficiency. The results showed that the re-crystallization method produced salt with an NaCl content of 99.21%, while the hydro-extraction method reached 99.34% NaCl. Salt produced by either method can be used as consumption or industrial salt. The hydro-extraction method is more efficient than re-crystallization because re-crystallization requires heat energy.
Monotonically improving approximate answers to relational algebra queries
NASA Technical Reports Server (NTRS)
Smith, Kenneth P.; Liu, J. W. S.
1989-01-01
We present here a query processing method that produces approximate answers to queries posed in standard relational algebra. This method is monotone in the sense that the accuracy of the approximate result improves with the amount of time spent producing the result. This strategy enables us to trade the time to produce the result for the accuracy of the result. An approximate relational model that characterizes approximate relations and a partial order for comparing them is developed. Relational operators which operate on and return approximate relations are defined.
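To make this concrete, here is a minimal Python sketch of one way such an approximate relation might be represented: a pair of certain/possible tuple sets ordered by refinement, with a selection operator defined over both bounds. The representation and names are illustrative assumptions, not the paper's actual formalism.

```python
# Illustrative sketch (not the paper's formalism): an approximate relation
# as a pair (certain, possible) with certain ⊆ possible. Spending more time
# moves tuples out of `possible`, so accuracy improves monotonically.
from dataclasses import dataclass

@dataclass(frozen=True)
class ApproxRelation:
    certain: frozenset   # tuples known to belong to the exact answer
    possible: frozenset  # tuples that may still belong to the exact answer

    def select(self, pred):
        # Selection applies to both bounds, returning an approximate relation.
        return ApproxRelation(
            frozenset(t for t in self.certain if pred(t)),
            frozenset(t for t in self.possible if pred(t)),
        )

    def refines(self, other):
        # The partial order: self is at least as accurate as other.
        return self.certain >= other.certain and self.possible <= other.possible

r0 = ApproxRelation(frozenset({("ann", 30)}),
                    frozenset({("ann", 30), ("bob", 41), ("cy", 25)}))
r1 = ApproxRelation(frozenset({("ann", 30), ("bob", 41)}),
                    frozenset({("ann", 30), ("bob", 41)}))
assert r1.refines(r0)                  # more processing time, tighter answer
print(r0.select(lambda t: t[1] > 28))  # approximate answer to sigma_{age>28}
```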
The Effects of Different Fine Recycled Concrete Aggregates on the Properties of Mortar
Fan, Cheng-Chih; Huang, Ran; Hwang, Howard; Chao, Sao-Jeng
2015-01-01
The practical use of recycled concrete aggregate produced by crushing concrete waste reduces the consumption of natural aggregate and the amount of concrete waste that ends up in landfills. This study investigated two methods used in the production of fine recycled concrete aggregate: (1) a method that produces fine as well as coarse aggregate, and (2) a method that produces only fine aggregate. Mortar specimens were tested using a variety of mix proportions to determine how the characteristics of fine recycled concrete aggregate affect the physical and mechanical properties of the resulting mortars. Our results demonstrate the superiority of mortar made with aggregate from the second method. Nonetheless, far more energy is required to render concrete into fine aggregate alone than to produce coarse and fine aggregate simultaneously. Thus, the performance benefits of using only fine recycled concrete aggregate must be balanced against the increased impact on the environment.
Methods of expanding bacteriophage host-range and bacteriophage produced by the methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crown, Kevin K.; Santarpia, Joshua
A method of producing novel bacteriophages with expanded host range, and the bacteriophages produced by the method, are disclosed. The method produces mutant phage strains which are infectious to a second host and can be more infectious to their natural host than in their natural state. The method includes repeatedly passaging a selected phage strain into bacterial cultures that contain varied ratios of its natural host bacterial strain and a bacterial strain that the phage of interest is unable to infect: the target-host. After each passage the resulting phage are purified and screened for activity against the target-host via double-overlay assays. When mutant phages that are shown to infect the target-host are discovered, they are further propagated in culture that contains only the target-host to produce a stock of the resulting mutant phage.
Zhang, Guodong; Brown, Eric W.; González-Escalona, Narjol
2011-01-01
Contamination of foods, especially produce, with Salmonella spp. is a major concern for public health. Several methods are available for the detection of Salmonella in produce, but their relative efficiency for detecting Salmonella in commonly consumed vegetables, often associated with outbreaks of food poisoning, needs to be confirmed. In this study, the effectiveness of three molecular methods for detection of Salmonella in six produce matrices was evaluated and compared to the FDA microbiological detection method. Samples of cilantro (coriander leaves), lettuce, parsley, spinach, tomato, and jalapeno pepper were inoculated with Salmonella serovars at two different levels (10^5 and <10^1 CFU/25 g of produce). The inoculated produce was assayed by the FDA Salmonella culture method (Bacteriological Analytical Manual) and by three molecular methods: quantitative real-time PCR (qPCR), quantitative reverse transcriptase real-time PCR (RT-qPCR), and loop-mediated isothermal amplification (LAMP). Comparable results were obtained by these four methods, which all detected as little as 2 CFU of Salmonella cells/25 g of produce. All control samples (not inoculated) were negative by the four methods. RT-qPCR detects only live Salmonella cells, obviating the danger of false-positive results from nonviable cells. False negatives (inhibition of either qPCR or RT-qPCR) were avoided by the use of either a DNA or an RNA amplification internal control (IAC). Compared to the conventional culture method, the qPCR, RT-qPCR, and LAMP assays allowed faster and equally accurate detection of Salmonella spp. in six high-risk produce commodities. PMID:21803916
Airship stresses due to vertical velocity gradients and atmospheric turbulence
NASA Technical Reports Server (NTRS)
Sheldon, D.
1975-01-01
Munk's potential flow method is used to calculate the resultant moment experienced by an ellipsoidal airship. This method is first used to calculate the moment arising from basic maneuvers considered by early designers, and then extended to calculate the moment arising from vertical velocity gradients and atmospheric turbulence. This resultant moment must be neutralized by the transverse force of the fins. The results show that vertical velocity gradients at a height of 6000 feet in thunderstorms produce a resultant moment approximately three to four times greater than the moment produced in still air by realistic values of pitch angle or steady turning. Realistic values of atmospheric turbulence produce a moment which is significantly less than the moment produced by maneuvers in still air.
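For reference, the destabilizing moment Munk's method yields for an ellipsoid in steady pitched flow has the classical closed form M = (k2 - k1) q V sin 2α, with q the dynamic pressure, V the hull volume, and k1, k2 Lamb's inertia coefficients. A minimal sketch, with illustrative coefficient values rather than those of the report:

```python
# Munk destabilizing moment on an ellipsoidal hull (classical closed form):
#   M = (k2 - k1) * q * Vol * sin(2*alpha),  q = 0.5*rho*U^2.
# k1, k2 below are illustrative Lamb inertia coefficients, not the report's.
import math

def munk_moment(rho, U, volume, alpha_rad, k1=0.05, k2=0.90):
    q = 0.5 * rho * U**2                # dynamic pressure, Pa
    return (k2 - k1) * q * volume * math.sin(2.0 * alpha_rad)   # N*m

# Example: 5 degree pitch at 30 m/s for a 200,000 m^3 hull at sea level.
print(munk_moment(rho=1.225, U=30.0, volume=2.0e5, alpha_rad=math.radians(5.0)))
```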
Withers, Sydnor T.; Gottlieb, Shayin S.; Lieu, Bonny; Newman, Jack D.; Keasling, Jay D.
2007-01-01
We have developed a novel method to clone terpene synthase genes. This method relies on the inherent toxicity of the prenyl diphosphate precursors to terpenes, which resulted in a reduced-growth phenotype. When these precursors were consumed by a terpene synthase, normal growth was restored. We have demonstrated that this method is capable of enriching a population of engineered Escherichia coli for those clones that express the sesquiterpene-producing amorphadiene synthase. In addition, we enriched a library of genomic DNA from the isoprene-producing bacterium Bacillus subtilis strain 6051 in E. coli engineered to produce elevated levels of isopentenyl diphosphate and dimethylallyl diphosphate. The selection resulted in the discovery of two genes (yhfR and nudF) whose protein products acted directly on the prenyl diphosphate precursors and produced isopentenol. Expression of nudF in E. coli engineered with the mevalonate-based isopentenyl pyrophosphate biosynthetic pathway resulted in the production of isopentenol. PMID:17693564
Mohanram, Rajamani; Jagtap, Chandrakant; Kumar, Pradeep
2016-04-15
Diverse marine bacterial species predominantly found in oil-polluted seawater produce diverse surface-active agents. Surface-active agents produced by bacteria are classified into two groups based on their molecular weights, namely biosurfactants and bioemulsifiers. In this study, surface-active agent-producing, oil-degrading marine bacteria were isolated from three oil-polluted sites of Mumbai Harbor using a modified Bushnell-Haas medium with high-speed diesel as a carbon source. Surface-active agent-producing bacterial strains were screened using nine widely used methods. Nineteen bacterial strains showed positive results in more than four of the screening methods; these strains were further characterized using biochemical and nucleic acid sequencing methods. Based on the results, the organisms belonged to the genera Acinetobacter, Alcanivorax, Bacillus, Comamonas, Chryseomicrobium, Halomonas, Marinobacter, Nesterenkonia, Pseudomonas, and Serratia. The present study confirmed the prevalence of surface-active agent-producing bacteria in the oil-polluted waters of Mumbai Harbor. Copyright © 2016 Elsevier Ltd. All rights reserved.
Herbert, Wendy J; Davidson, Adam G; Buford, John A
2010-06-01
The pontomedullary reticular formation (PMRF) of the monkey produces motor outputs to both upper limbs. EMG effects evoked from stimulus-triggered averaging (StimulusTA) were compared with effects from stimulus trains to determine whether both stimulation methods produced comparable results. Flexor and extensor muscles of scapulothoracic, shoulder, elbow, and wrist joints were studied bilaterally in two male M. fascicularis monkeys trained to perform a bilateral reaching task. The frequency of facilitation versus suppression responses evoked in the muscles was compared between methods. Stimulus trains were more efficient (94% of PMRF sites) in producing responses than StimulusTA (55%), and stimulus trains evoked responses from more muscles per site than from StimulusTA. Facilitation (72%) was more common from stimulus trains than StimulusTA (39%). In the overall results, a bilateral reciprocal activation pattern of ipsilateral flexor and contralateral extensor facilitation was evident for StimulusTA and stimulus trains. When the comparison was restricted to cases where both methods produced a response in a given muscle from the same site, agreement was very high, at 80%. For the remaining 20%, discrepancies were accounted for mainly by facilitation from stimulus trains when StimulusTA produced suppression, which was in agreement with the under-representation of suppression in the stimulus train data as a whole. To the extent that the stimulus train method may favor transmission through polysynaptic pathways, these results suggest that polysynaptic pathways from the PMRF more often produce facilitation in muscles that would typically demonstrate suppression with StimulusTA.
MRL and SuperFine+MRL: new supertree methods
2012-01-01
Background Supertree methods combine trees on subsets of the full taxon set together to produce a tree on the entire set of taxa. Of the many supertree methods, the most popular is MRP (Matrix Representation with Parsimony), a method that operates by first encoding the input set of source trees by a large matrix (the "MRP matrix") over {0,1, ?}, and then running maximum parsimony heuristics on the MRP matrix. Experimental studies evaluating MRP in comparison to other supertree methods have established that for large datasets, MRP generally produces trees of equal or greater accuracy than other methods, and can run on larger datasets. A recent development in supertree methods is SuperFine+MRP, a method that combines MRP with a divide-and-conquer approach, and produces more accurate trees in less time than MRP. In this paper we consider a new approach for supertree estimation, called MRL (Matrix Representation with Likelihood). MRL begins with the same MRP matrix, but then analyzes the MRP matrix using heuristics (such as RAxML) for 2-state Maximum Likelihood. Results We compared MRP and SuperFine+MRP with MRL and SuperFine+MRL on simulated and biological datasets. We examined the MRP and MRL scores of each method on a wide range of datasets, as well as the resulting topological accuracy of the trees. Our experimental results show that MRL, coupled with a very good ML heuristic such as RAxML, produced more accurate trees than MRP, and MRL scores were more strongly correlated with topological accuracy than MRP scores. Conclusions SuperFine+MRP, when based upon a good MP heuristic, such as TNT, produces among the best scores for both MRP and MRL, and is generally faster and more topologically accurate than other supertree methods we tested. PMID:22280525
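Both MRP and MRL begin from the same matrix encoding, so that step is worth making concrete. A minimal sketch of the encoding, under the simplifying assumption that each source tree is given as its taxon set plus its non-trivial clades (an illustrative representation, not the authors' code):

```python
# MRP/MRL matrix encoding: every non-trivial clade of every source tree
# becomes one binary character. Taxa inside the clade are coded '1', taxa
# present in that source tree but outside the clade '0', and taxa absent
# from that tree '?'.
def mrp_matrix(taxa, source_trees):
    matrix = {t: [] for t in taxa}
    for tree_taxa, clades in source_trees:
        for clade in clades:
            for t in taxa:
                if t not in tree_taxa:
                    matrix[t].append("?")
                else:
                    matrix[t].append("1" if t in clade else "0")
    return matrix

taxa = ["A", "B", "C", "D", "E"]
trees = [
    ({"A", "B", "C", "D"}, [{"A", "B"}, {"A", "B", "C"}]),  # (((A,B),C),D)
    ({"B", "C", "D", "E"}, [{"D", "E"}]),                   # (B,C,(D,E))
]
for taxon, row in mrp_matrix(taxa, trees).items():
    print(taxon, "".join(row))
# MRP analyzes this 0/1/? alignment with parsimony heuristics (e.g., TNT);
# MRL analyzes it under a 2-state likelihood model (e.g., with RAxML).
```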
Composition and methods for improved fuel production
Steele, Philip H.; Tanneru, Sathishkumar; Gajjela, Sanjeev K.
2015-12-29
Certain embodiments of the present invention are configured to produce boiler and transportation fuels. A first phase of the method may include oxidation and/or hyper-acidification of bio-oil to produce an intermediate product. A second phase of the method may include catalytic deoxygenation, esterification, or olefination/esterification of the intermediate product under pressurized syngas. The composition of the resulting product--e.g., a boiler fuel--produced by these methods may be used directly or further upgraded to a transportation fuel. Certain embodiments of the present invention also include catalytic compositions configured for use in the method embodiments.
Razban, Behrooz; Nelson, Kristina Y; McMartin, Dena W; Cullimore, D Roy; Wall, Michelle; Wang, Dunling
2012-01-01
An analytical method to produce profiles of bacterial biomass fatty acid methyl esters (FAME) was developed, employing rapid agitation followed by static incubation (RASI) with selective media for wastewater microbial communities. The results were compiled to produce a unique library for comparison and performance analysis at a Wastewater Treatment Plant (WWTP). A total of 146 samples from the aerated WWTP, comprising 73 samples each of secondary and tertiary effluent, were analyzed. For comparison purposes, all samples were evaluated via a similarity index (SI), with secondary effluents producing an SI of 0.88 with 2.7% variation and tertiary samples producing an SI of 0.86 with 5.0% variation. The results also highlighted significant differences between the fatty acid profiles of the tertiary and secondary effluents, indicating considerable shifts in the bacterial community profile between these treatment phases. The WWTP performance results using this method were highly replicable and reproducible, indicating that the protocol has potential as a performance-monitoring tool for aerated WWTPs. The results quickly and accurately reflect shifts in dominant bacterial communities that result when process operations and performance change.
Method of producing a high pressure gas
Bingham, Dennis N.; Klingler, Kerry M.; Zollinger, William T.
2006-07-18
A method of producing a high pressure gas is disclosed and which includes providing a container; supplying the container with a liquid such as water; increasing the pressure of the liquid within the container; supplying a reactant composition such as a chemical hydride to the liquid under pressure in the container and which chemically reacts with the liquid to produce a resulting high pressure gas such as hydrogen at a pressure of greater than about 100 pounds per square inch of pressure; and drawing the resulting high pressure gas from the container.
Sornpaisarn, Bundit; Kaewmungkun, Chuthaporn; Rehm, Jürgen
2015-11-01
To examine patterns of tax burden produced by specific, ad valorem, and various types of combination taxation. First, one hundred unique hypothetical alcoholic beverages were mathematically simulated based on the amount of ethanol and the perceived quality they contained. Second, the beverages were assigned values of various costs and tax rates, and third, patterns of tax burden per unit of ethanol were assessed for each type of tax method. Different tax methods produced different tax burdens per unit of ethanol for different alcoholic beverages. The ad valorem tax resulted in a lower tax burden for low perceived-quality alcoholic beverages. The specific tax method produced the same tax burden for both low and high perceived-quality alcoholic beverages; however, high perceived-quality beverages benefited from a lower tax burden per beverage price. Lastly, the combination tax method resulted in a lower tax burden for medium perceived-quality alcoholic beverages. Under an oligopoly market, ad valorem taxation encourages consumption of low perceived-quality beverages; specific taxation encourages consumption of high perceived-quality beverages; and combination tax methods encourage consumption of medium perceived-quality beverages. © The Author 2015. Medical Council on Alcohol and Oxford University Press. All rights reserved.
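The simulation logic is easy to reproduce in miniature. A hedged sketch in Python, with illustrative tax rates and prices (none of the study's actual parameters), showing how the burden per unit of ethanol diverges across the three schemes:

```python
# Tax burden per unit of ethanol under three schemes; rates and prices are
# illustrative stand-ins, not the study's parameters.
def specific_tax(ethanol_l, price):       # levied per litre of pure ethanol
    return 10.0 * ethanol_l

def ad_valorem_tax(ethanol_l, price):     # levied on beverage value
    return 0.25 * price

def combination_tax(ethanol_l, price):    # both components at reduced rates
    return 5.0 * ethanol_l + 0.125 * price

beverages = [("low quality", 0.10, 2.0),      # (label, L ethanol, price)
             ("medium quality", 0.10, 6.0),
             ("high quality", 0.10, 20.0)]

for label, eth, price in beverages:
    for tax in (specific_tax, ad_valorem_tax, combination_tax):
        burden = tax(eth, price) / eth    # tax per litre of ethanol
        print(f"{label:15s} {tax.__name__:16s} {burden:7.2f}")
# As in the abstract: the ad valorem burden rises with perceived quality,
# the specific burden is flat, and the combination falls in between.
```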
A Novel Method for Producing Light GMT Sheets by a Pneumatic Technique
NASA Astrophysics Data System (ADS)
Dai, H.-L.; Rao, Y.-N.
2015-09-01
A novel method for producing light glass-mat-reinforced thermoplastic (GMT) sheets by using a pneumatic technique is presented. The tensile and flexural properties of the produced light GMT sheets, with various glass-fiber lengths and PP contents, were determined experimentally. The results of the experimental investigation show that the light GMT sheets are fully suitable for engineering applications.
NASA Astrophysics Data System (ADS)
Nasir, N. F.; Mirus, M. F.; Ismail, M.
2017-09-01
Crude glycerol produced by the transesterification reaction has limited usage unless it undergoes purification, as it contains excess methanol, catalyst, and soap. Conventional purification of crude glycerol involves high cost and complex processes. This study aimed to determine the effects of two different purification methods: a direct method (comprising ion exchange and methanol removal steps) and a multistep method (comprising neutralization, filtration, ion exchange, and methanol removal steps). Two crude glycerol samples were investigated: a sample self-produced through the transesterification of palm oil, and a sample obtained from a biodiesel plant. Samples were analysed using Fourier Transform Infrared Spectroscopy, Gas Chromatography and High Performance Liquid Chromatography. For both samples, the results after purification showed that pure glycerol was successfully produced and fatty acid salts were eliminated. The results also indicated the absence of methanol in both samples after the purification process. In short, the combination of four purification steps yielded a higher quality of glycerol. The multistep purification method gave a better result than the direct method, as the neutralization and filtration steps helped remove most of the excess salt, fatty acid, and catalyst.
Ptak, Aaron Joseph; Lin, Yong; Norman, Andrew; Alberi, Kirstin
2015-05-26
A method of producing semiconductor materials and devices that incorporate the semiconductor materials are provided. In particular, a method is provided of producing a semiconductor material, such as a III-V semiconductor, on a spinel substrate using a sacrificial buffer layer, and devices such as photovoltaic cells that incorporate the semiconductor materials. The sacrificial buffer material and semiconductor materials may be deposited using lattice-matching epitaxy or coincident site lattice-matching epitaxy, resulting in a close degree of lattice matching between the substrate material and deposited material for a wide variety of material compositions. The sacrificial buffer layer may be dissolved using an epitaxial liftoff technique in order to separate the semiconductor device from the spinel substrate, and the spinel substrate may be reused in the subsequent fabrication of other semiconductor devices. The low-defect density semiconductor materials produced using this method result in the enhanced performance of the semiconductor devices that incorporate the semiconductor materials.
Comparison of risk assessment procedures used in OCRA and ULRA methods
Roman-Liu, Danuta; Groborz, Anna; Tokarski, Tomasz
2013-01-01
The aim of this study was to analyse the convergence of two methods by comparing exposure and the assessed risk of developing musculoskeletal disorders at 18 repetitive task workstations. The already established occupational repetitive actions (OCRA) method and the recently developed upper limb risk assessment (ULRA) method produce correlated results (R = 0.84, p = 0.0001). A discussion of the factors that influence the values of the OCRA index and ULRA's repetitive task indicator shows that both similarities and differences in the results produced by the two methods can arise from the concepts that underlie them. The assessment procedure and the mathematical calculations that the basic parameters are subjected to are crucial to the results of risk assessment; the way the basic parameters are defined influences the assessment of exposure and risk to a lesser degree. The analysis also showed that large differences in load indicator values do not always result in different risk zones. Practitioner Summary: We focused on comparing methods that, even though based on different concepts, serve the same purpose. The results proved that different methods with different assumptions can produce similar assessments of upper limb load; sharp criteria in risk assessment are not the best solution. PMID:24041375
Comparison of bursting pressure results of LPG tank using experimental and finite element method.
Aksoley, M Egemen; Ozcelik, Babur; Bican, Ismail
2008-03-01
In this study, the resistance of liquefied-petroleum gas (LPG) tanks produced from carbon steel sheet metal of different thicknesses was investigated by bursting pressure experiments and the non-linear finite element method (FEM) under increasing internal pressure values. The designs of the sheet-metal LPG tanks used in the study were developed through analytical calculations based on the relevant standards. Bursting pressure tests were performed with the aim of reducing the sheet thickness of LPG tanks used in industry. It has been shown that the LPG tanks can be produced in compliance with the standards when the sheet thickness is lowered from 3 to 2.8 mm. The FEM results displayed values close to the bursting results obtained from the experiments.
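A quick analytical cross-check of the kind that typically accompanies such studies is the thin-walled cylinder estimate, where burst pressure scales linearly with wall thickness: p ≈ 2·t·σuts/D, from the hoop stress relation σ = pD/(2t). A hedged sketch with illustrative geometry and material values (not the paper's):

```python
# Thin-walled cylinder burst estimate from hoop stress sigma = p*D/(2*t):
#   p_burst ≈ 2 * t * sigma_uts / D.
# Diameter and tensile strength below are illustrative, not the paper's.
def burst_pressure_mpa(thickness_mm, diameter_mm, uts_mpa):
    return 2.0 * thickness_mm * uts_mpa / diameter_mm   # MPa

for t in (3.0, 2.8):    # the two sheet thicknesses compared in the study
    p = burst_pressure_mpa(thickness_mm=t, diameter_mm=300.0, uts_mpa=350.0)
    print(f"t = {t} mm -> estimated burst pressure ~ {p:.2f} MPa")
# The roughly 7% reduction from 3.0 to 2.8 mm indicates why the thinner
# sheet can still satisfy the standard's bursting-pressure requirement.
```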
Testing an automated method to estimate ground-water recharge from streamflow records
Rutledge, A.T.; Daniel, C.C.
1994-01-01
The computer program, RORA, allows automated analysis of streamflow hydrographs to estimate ground-water recharge. Output from the program, which is based on the recession-curve-displacement method (often referred to as the Rorabaugh method, for whom the program is named), was compared to estimates of recharge obtained from a manual analysis of 156 years of streamflow record from 15 streamflow-gaging stations in the eastern United States. Statistical tests showed that there was no significant difference between paired estimates of annual recharge by the two methods. Tests of results produced by the four workers who performed the manual method showed that results can differ significantly between workers. Twenty-two percent of the variation between manual and automated estimates could be attributed to having different workers perform the manual method. The program RORA will produce estimates of recharge equivalent to estimates produced manually, greatly increase the speed of analysis, and reduce the subjectivity inherent in manual analysis.
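For orientation, the calculation RORA automates can be sketched for a single streamflow peak: recharge is proportional to the displacement between the pre-peak and post-peak recessions extrapolated to a critical time after the peak, scaled by the recession index. The sketch below is a simplification under stated assumptions; the critical-time factor, unit conversions, and example numbers are illustrative, and the program's full bookkeeping differs in detail.

```python
# Single-peak sketch of the recession-curve-displacement (Rorabaugh) method.
# q1, q2: pre- and post-peak recessions extrapolated to the critical time
# (about 0.2144*K after the peak); K: recession index, days per log cycle
# of flow decline; recharge volume ~ 2 * dQ * K / ln(10).
import math

def recharge_per_peak_inches(q1_cfs, q2_cfs, K_days, area_mi2):
    delta_q = q2_cfs - q1_cfs                               # displacement, cfs
    volume_cfs_days = 2.0 * delta_q * K_days / math.log(10.0)
    ft3 = volume_cfs_days * 86400.0                         # cfs-days -> ft^3
    area_ft2 = area_mi2 * 5280.0**2
    return 12.0 * ft3 / area_ft2                            # basin inches

# Illustrative event on a 100 mi^2 basin with a 45 day/log-cycle recession.
print(recharge_per_peak_inches(q1_cfs=40.0, q2_cfs=95.0, K_days=45.0,
                               area_mi2=100.0))
```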
Clare, John; McKinney, Shawn T.; DePue, John E.; Loftin, Cynthia S.
2017-01-01
It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture–recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters.
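The covariance problem the models address can be seen in a short simulation: when two co-located methods both depend on a shared latent "availability" state, their detections are positively correlated, violating the independence assumption. The sketch below uses illustrative parameter values, not estimates from the marten data.

```python
# Two survey methods at the same sites share a latent availability state,
# so detections covary. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_surveys = 10_000, 4
psi, theta, p_cam, p_trk = 0.6, 0.5, 0.7, 0.6   # occupancy, availability,
                                                # per-method detection probs
z = rng.random(n_sites) < psi                          # true occupancy state
avail = (rng.random((n_sites, n_surveys)) < theta) & z[:, None]
y_cam = (rng.random(avail.shape) < p_cam) & avail      # camera detections
y_trk = (rng.random(avail.shape) < p_trk) & avail      # snow-track detections

p1, p2, both = y_cam.mean(), y_trk.mean(), (y_cam & y_trk).mean()
print(f"P(both) = {both:.3f} vs independence P(cam)*P(trk) = {p1*p2:.3f}")
# P(both) far exceeds the product of the marginals: a model assuming the
# methods detect independently overstates the information in paired surveys
# and understates the uncertainty of the resulting estimates.
```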
A novel scalable manufacturing process for the production of hydrogel-forming microneedle arrays.
Lutton, Rebecca E M; Larrañeta, Eneko; Kearney, Mary-Carmel; Boyd, Peter; Woolfson, A David; Donnelly, Ryan F
2015-10-15
A novel manufacturing process for fabricating microneedle arrays (MN) has been designed and evaluated. The prototype is able to successfully produce 14×14 MN arrays and is easily capable of scale-up, enabling the transition from laboratory to industry and subsequent commercialisation. The method requires the custom design of metal MN master templates to produce silicone MN moulds using an injection moulding process. The MN arrays produced using this novel method were compared with those made by centrifugation, the traditional method of producing aqueous hydrogel-forming MN arrays. The results showed a negligible difference between the two methods, with each producing MN arrays of comparable quality. Both types of MN arrays can be successfully inserted in a skin simulant; in both cases the insertion depth was approximately 60% of the needle length and the height reduction after insertion was approximately 3%. Copyright © 2015 Elsevier B.V. All rights reserved.
Liu, Jien-Wei; Ko, Wen-Chien; Huang, Cheng-Hua; Liao, Chun-Hsing; Lu, Chin-Te; Chuang, Yin-Ching; Tsao, Shih-Ming; Chen, Yao-Shen; Liu, Yung-Ching; Chen, Wei-Yu; Jang, Tsrang-Neng; Lin, Hsiu-Chen; Chen, Chih-Ming; Shi, Zhi-Yuan; Pan, Sung-Ching; Yang, Jia-Ling; Kung, Hsiang-Chi; Liu, Chun-Eng; Cheng, Yu-Jen; Chen, Yen-Hsu; Lu, Po-Liang; Sun, Wu; Wang, Lih-Shinn; Yu, Kwok-Woon; Chiang, Ping-Cherng; Lee, Ming-Hsun; Lee, Chun-Ming; Hsu, Gwo-Jong
2012-01-01
The Tigecycline In Vitro Surveillance in Taiwan (TIST) study, initiated in 2006, is a nationwide surveillance program designed to longitudinally monitor the in vitro activity of tigecycline against commonly encountered drug-resistant bacteria. This study compared the in vitro activity of tigecycline against 3,014 isolates of clinically important drug-resistant bacteria using the standard broth microdilution and disk diffusion methods. Species studied included methicillin-resistant Staphylococcus aureus (MRSA; n = 759), vancomycin-resistant Enterococcus faecium (VRE; n = 191), extended-spectrum β-lactamase (ESBL)-producing Escherichia coli (n = 602), ESBL-producing Klebsiella pneumoniae (n = 736), and Acinetobacter baumannii (n = 726) that had been collected from patients treated between 2008 and 2010 at 20 hospitals in Taiwan. MICs and inhibition zone diameters were interpreted according to the currently recommended U.S. Food and Drug Administration (FDA) criteria and the European Committee on Antimicrobial Susceptibility Testing (EUCAST) criteria. The MIC90 values of tigecycline against MRSA, VRE, ESBL-producing E. coli, ESBL-producing K. pneumoniae, and A. baumannii were 0.5, 0.125, 0.5, 2, and 8 μg/ml, respectively. The total error rates between the two methods using the FDA criteria were high: 38.4% for ESBL-producing K. pneumoniae and 33.8% for A. baumannii. Using the EUCAST criteria, the total error rate was also high (54.6%) for A. baumannii isolates. The total error rates between these two methods were <5% for MRSA, VRE, and ESBL-producing E. coli. For routine susceptibility testing of ESBL-producing K. pneumoniae and A. baumannii against tigecycline, the broth microdilution method should be used because of the poor correlation of results between these two methods. PMID:22155819
Hydrologic Implications of Dynamical and Statistical Approaches to Downscaling Climate Model Outputs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Andrew W; Leung, Lai R; Sridhar, V
Six approaches for downscaling climate model outputs for use in hydrologic simulation were evaluated, with particular emphasis on each method's ability to produce precipitation and other variables used to drive a macroscale hydrology model applied at much higher spatial resolution than the climate model. Comparisons were made on the basis of a twenty-year retrospective (1975–1995) climate simulation produced by the NCAR-DOE Parallel Climate Model (PCM), and the implications of the comparison for a future (2040–2060) PCM climate scenario were also explored. The six approaches were made up of three relatively simple statistical downscaling methods – linear interpolation (LI), spatial disaggregation (SD), and bias-correction and spatial disaggregation (BCSD) – each applied to both PCM output directly (at T42 spatial resolution), and after dynamical downscaling via a Regional Climate Model (RCM – at ½-degree spatial resolution), for downscaling the climate model outputs to the 1/8-degree spatial resolution of the hydrological model. For the retrospective climate simulation, results were compared to an observed gridded climatology of temperature and precipitation, and gridded hydrologic variables resulting from forcing the hydrologic model with observations. The most significant findings are that the BCSD method was successful in reproducing the main features of the observed hydrometeorology from the retrospective climate simulation, when applied to both PCM and RCM outputs. Linear interpolation produced better results using RCM output than PCM output, but both methods (PCM-LI and RCM-LI) led to unacceptably biased hydrologic simulations. Spatial disaggregation of the PCM output produced results similar to those achieved with the RCM interpolated output; nonetheless, neither PCM nor RCM output was useful for hydrologic simulation purposes without a bias-correction step. For the future climate scenario, only the BCSD method (using PCM or RCM) was able to produce hydrologically plausible results. With the BCSD method, the RCM-derived hydrology was more sensitive to climate change than the PCM-derived hydrology.
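The bias-correction step at the heart of BCSD is quantile mapping: each simulated value is replaced by the observed value at the same quantile of the training-period distribution. A minimal sketch with synthetic data (the real method works per month and per grid cell, and then spatially disaggregates the corrected anomalies):

```python
# Quantile-mapping bias correction, the "BC" step of BCSD, on synthetic data.
import numpy as np

def quantile_map(sim_train, obs_train, sim_new):
    # empirical quantile of each new value within the simulated climatology
    q = np.searchsorted(np.sort(sim_train), sim_new) / len(sim_train)
    return np.quantile(obs_train, np.clip(q, 0.0, 1.0))

rng = np.random.default_rng(0)
obs = rng.gamma(shape=2.0, scale=2.0, size=7300)   # "observed" daily precip
sim = rng.gamma(shape=2.0, scale=3.0, size=7300)   # biased (too wet) model run
fut = rng.gamma(shape=2.0, scale=3.3, size=7300)   # future scenario run

print("observed mean:       ", round(obs.mean(), 2))
print("raw future mean:     ", round(fut.mean(), 2))
print("bias-corrected mean: ", round(quantile_map(sim, obs, fut).mean(), 2))
# The corrected series retains the scenario's change signal while matching
# the observed distribution over the training period.
```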
Method and apparatus for producing cryogenic targets
Murphy, James T.; Miller, John R.
1984-01-01
An improved method and apparatus are given for producing cryogenic inertially driven fusion targets in the fast isothermal freezing (FIF) method. Improved coupling efficiency and greater availability of volume near the target for diagnostic purposes and for fusion driver beam propagation result. Other embodiments include a new electrical switch and a new explosive detonator, all embodiments making use of a purposeful heating by means of optical fibers.
Lee, Wan-Ning; Huang, Ching-Hua; Zhu, Guangxuan
2018-08-01
Chlorine sanitizers used in washing fresh and fresh-cut produce can lead to generation of disinfection by-products (DBPs) that are harmful to human health. Monitoring of DBPs is necessary to protect food safety but comprehensive analytical methods have been lacking. This study has optimized three U.S. Environmental Protection Agency methods for drinking water DBPs to improve their performance for produce wash water. The method development encompasses 40 conventional and emerging DBPs. Good recoveries (60-130%) were achieved for most DBPs in deionized water and in lettuce, strawberry and cabbage wash water. The method detection limits are in the range of 0.06-0.58 μg/L for most DBPs and 10-24 ng/L for nitrosamines in produce wash water. Preliminary results revealed the formation of many DBPs when produce is washed with chlorine. The optimized analytical methods by this study effectively reduce matrix interference and can serve as useful tools for future research on food DBPs. Copyright © 2018 Elsevier Ltd. All rights reserved.
One step linear reconstruction method for continuous wave diffuse optical tomography
NASA Astrophysics Data System (ADS)
Ukhrowiyah, N.; Yasin, M.
2017-09-01
A one-step linear reconstruction method for continuous-wave diffuse optical tomography is proposed and demonstrated on polyvinyl chloride based material and a breast phantom. The approximation used in this method consists of selecting a regularization coefficient and evaluating the difference between two states corresponding to data acquired without and with a change in optical properties. The method is used to recover optical parameters from measured boundary data of light propagation in the object. The approach is demonstrated with both simulated and experimental data: a numerical object is used to produce simulation data, while polyvinyl chloride based material and a breast phantom are used to produce experimental data. Comparisons between experimental and simulation results are conducted to validate the proposed method. The reconstructed images produced by the one-step linear reconstruction method are nearly identical to the original objects. This approach provides a means of imaging that is sensitive to changes in optical properties, which may be particularly useful for functional imaging with continuous-wave diffuse optical tomography in the early diagnosis of breast cancer.
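In linearized difference imaging of this kind, the reconstruction reduces to a single regularized linear solve against a precomputed sensitivity (Jacobian) matrix. A hedged sketch of that one-step update, with a random stand-in Jacobian instead of a real photon-transport forward model:

```python
# One-step linear reconstruction: delta_mu = J^T (J J^T + lam*I)^(-1) delta_y,
# where delta_y is the measurement difference between the two states and lam
# is the regularization coefficient. J is random stand-in data here, not a
# real diffusion forward model.
import numpy as np

rng = np.random.default_rng(42)
n_meas, n_vox = 100, 200
J = rng.normal(size=(n_meas, n_vox))                 # stand-in Jacobian
true_mu = np.zeros(n_vox)
true_mu[80:90] = 0.02                                # small absorbing inclusion
delta_y = J @ true_mu + rng.normal(scale=1e-3, size=n_meas)

lam = 1e-2 * np.trace(J @ J.T) / n_meas              # heuristic choice
delta_mu = J.T @ np.linalg.solve(J @ J.T + lam * np.eye(n_meas), delta_y)
print("recovered peak voxel:", int(delta_mu.argmax()))  # typically in 80..89
```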
Wet-chemical systems and methods for producing black silicon substrates
Yost, Vernon; Yuan, Hao-Chih; Page, Matthew
2015-05-19
A wet-chemical method of producing a black silicon substrate. The method comprises soaking single-crystalline silicon wafers in a predetermined volume of a diluted inorganic compound solution. The substrate is combined with an etchant solution that forms a uniform noble-metal-nanoparticle-induced black etch of the silicon wafer, resulting in nanoparticles that are kinetically stabilized. The method further comprises combining the substrate with an etchant solution having equal volumes of acetonitrile/acetic acid:hydrofluoric acid:hydrogen peroxide.
Measuring signal-to-noise ratio in partially parallel imaging MRI
Goerner, Frank L.; Clarke, Geoffrey D.
2011-01-01
Purpose: To assess five different methods of signal-to-noise ratio (SNR) measurement for partially parallel imaging (PPI) acquisitions. Methods: Measurements were performed on a spherical phantom and three volunteers using a multichannel head coil on a clinical 3T MRI system to produce echo planar, fast spin echo, gradient echo, and balanced steady state free precession image acquisitions. Two different PPI acquisitions, generalized autocalibrating partially parallel acquisition algorithm and modified sensitivity encoding with acceleration factors (R) of 2–4, were evaluated and compared to nonaccelerated acquisitions. Five standard SNR measurement techniques were investigated and Bland–Altman analysis was used to determine agreement between the various SNR methods. The estimated g-factor values, associated with each method of SNR calculation and PPI reconstruction method, were also subjected to assessments that considered the effects on SNR due to reconstruction method, phase encoding direction, and R-value. Results: Only two SNR measurement methods produced g-factors in agreement with theoretical expectations (g ≥ 1). Bland–Altman tests demonstrated that these two methods also gave the most similar results relative to the other three measurements. R-value was the only factor of the three we considered that showed a significant influence on SNR changes. Conclusions: Non-signal methods used in SNR evaluation do not produce results consistent with expectations in the investigated PPI protocols. Two of the methods studied provided the most accurate and useful results. Of these two methods, it is recommended that the image subtraction method be used for SNR calculations when evaluating PPI protocols, due to its relative accuracy and ease of implementation. PMID:21978049
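The recommended image-subtraction method estimates signal from two identical acquisitions and noise from their difference image, whose standard deviation is √2 times the per-image noise. A minimal sketch on synthetic data (illustrative numbers, not the study's acquisitions):

```python
# Image-subtraction SNR: signal from the average of two identical scans,
# noise from their difference (std of difference = sqrt(2) * per-image std).
import numpy as np

def snr_subtraction(img1, img2, roi):
    signal = 0.5 * (img1[roi] + img2[roi]).mean()
    noise = (img1[roi] - img2[roi]).std(ddof=1) / np.sqrt(2.0)
    return signal / noise

rng = np.random.default_rng(7)
truth = np.full((64, 64), 100.0)                 # uniform phantom signal
img1 = truth + rng.normal(scale=5.0, size=truth.shape)
img2 = truth + rng.normal(scale=5.0, size=truth.shape)
roi = (slice(16, 48), slice(16, 48))             # central ROI
print(f"measured SNR ~ {snr_subtraction(img1, img2, roi):.1f} (true 20.0)")
# Because PPI reconstructions have spatially varying noise, single-image
# background-ROI methods break down, while this subtraction approach does not.
```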
[An improved low spectral distortion PCA fusion method].
Peng, Shi; Zhang, Ai-Wu; Li, Han-Lun; Hu, Shao-Xing; Meng, Xian-Gang; Sun, Wei-Dong
2013-10-01
Aiming at the spectral distortion produced in the PCA fusion process, the present paper proposes an improved low spectral distortion PCA fusion method. The method uses the NCUT (normalized cut) image segmentation algorithm to partition a complex hyperspectral remote sensing image into multiple sub-images, increasing the separability of samples and thereby weakening the spectral distortion of traditional PCA fusion. A pixel-similarity weighting matrix and masks are produced using graph theory and clustering theory; the masks are used to cut the hyperspectral image and the high-resolution image into corresponding sub-region objects. Each pair of corresponding sub-region objects is fused using the PCA method, and all sub-regional fusion results are spliced together to produce a new image. In the experiment, Hyperion hyperspectral data and RapidEye data were used. The experimental results show that the proposed method matches traditional PCA fusion in enhancing spatial resolution while providing greater spectral fidelity.
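The PCA fusion core that the method applies inside each segment can be sketched compactly: transform the multispectral pixels to principal components, substitute the histogram-matched high-resolution band for the first component, and invert the transform. The sketch below runs on stand-in data and shows a single region; in the paper this is repeated per NCUT-derived mask, which is what limits the spectral distortion.

```python
# PCA pan-sharpening core, applied per segment in the paper; stand-in data.
import numpy as np

def pca_fuse(ms, pan):                 # ms: (n_pix, n_bands); pan: (n_pix,)
    mean = ms.mean(axis=0)
    X = ms - mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    pcs = X @ Vt.T                     # principal components
    # match the sharp band to PC1's statistics before substitution
    pcs[:, 0] = ((pan - pan.mean()) / pan.std()) * pcs[:, 0].std() \
                + pcs[:, 0].mean()
    return pcs @ Vt + mean             # back to band space

rng = np.random.default_rng(0)
ms = rng.random((1000, 6))                         # stand-in spectral pixels
pan = ms.mean(axis=1) + 0.05 * rng.random(1000)    # stand-in sharp band
print(pca_fuse(ms, pan).shape)                     # (1000, 6) fused pixels
```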
CdTe devices and method of manufacturing same
Gessert, Timothy A.; Noufi, Rommel; Dhere, Ramesh G.; Albin, David S.; Barnes, Teresa; Burst, James; Duenow, Joel N.; Reese, Matthew
2015-09-29
A method of producing polycrystalline CdTe materials and devices that incorporate the polycrystalline CdTe materials are provided. In particular, a method of producing polycrystalline p-doped CdTe thin films for use in CdTe solar cells in which the CdTe thin films possess enhanced acceptor densities and minority carrier lifetimes, resulting in enhanced efficiency of the solar cells containing the CdTe material are provided.
Microorganisms having enhanced resistance to acetate and methods of use
Brown, Steven D; Yang, Shihui
2014-10-21
The present invention provides isolated or genetically modified strains of microorganisms that display enhanced resistance to acetate as a result of increased expression of a sodium proton antiporter. The present invention also provides methods for producing such microbial strains, as well as related promoter sequences and expression vectors. Further, the present invention provides methods of producing alcohol from biomass materials by using microorganisms with enhanced resistance to acetate.
Shen, L; Levine, S H; Catchen, G L
1987-07-01
This paper describes an optimization method for determining the beta dose distribution in tissue, and it describes the associated testing and verification. The method uses electron transport theory and optimization techniques to analyze the responses of a three-element thermoluminescent dosimeter (TLD) system. Specifically, the method determines the effective beta energy distribution incident on the dosimeter system, and thus the system performs as a beta spectrometer. Electron transport theory provides the mathematical model for performing the optimization calculation. In this calculation, parameters are determined that produce calculated doses for each of the chip/absorber components in the three-element TLD system. The resulting optimized parameters describe an effective incident beta distribution. This method can be used to determine the beta dose specifically at 7 mg·cm⁻² or at any depth of interest. The doses at 7 mg·cm⁻² in tissue determined by this method are compared to those experimentally determined using an extrapolation chamber. For a great variety of pure beta sources having different incident beta energy distributions, good agreement is found. The results are also compared to those produced by a commonly used empirical algorithm. Although the optimization method produces somewhat better results, its chief advantage is that its performance is not sensitive to the specific method of calibration.
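The inverse problem here is small and linear: find non-negative weights for a set of candidate beta energy bins such that the predicted doses in the three chip/absorber channels match the measured ones. A hedged sketch using non-negative least squares, with a stand-in response matrix rather than one computed from electron transport theory:

```python
# Unfold an effective incident beta spectrum from three TLD chip responses.
# R[i, j] = dose in chip i per unit fluence in energy bin j; the values here
# are stand-ins, not transport-theory results.
import numpy as np
from scipy.optimize import nnls

R = np.array([[0.9, 0.6, 0.4, 0.3],    # thin chip (shallow depth)
              [0.2, 0.5, 0.5, 0.4],    # chip under medium absorber
              [0.0, 0.1, 0.3, 0.4]])   # chip under thick absorber
true_w = np.array([0.0, 1.0, 0.5, 0.0])        # "unknown" spectrum weights
measured = R @ true_w                          # the three measured doses

weights, resid = nnls(R, measured)             # optimized beta distribution
# With the spectrum in hand, the dose at any depth follows by folding the
# weights with a depth-dose kernel; the kernel values below are stand-ins.
kernel_7mg = np.array([0.8, 0.7, 0.6, 0.5])
print("weights:", weights.round(3), " dose at 7 mg/cm^2:",
      round(float(weights @ kernel_7mg), 3))
```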
Dynamic one-dimensional modeling of secondary settling tanks and system robustness evaluation.
Li, Ben; Stenstrom, M K
2014-01-01
One-dimensional secondary settling tank models are widely used in current engineering practice for design and optimization, and usually can be expressed as a nonlinear hyperbolic or nonlinear strongly degenerate parabolic partial differential equation (PDE). Reliable numerical methods are needed to produce approximate solutions that converge to the exact analytical solutions. In this study, we introduced a reliable numerical technique, the Yee-Roe-Davis (YRD) method as the governing PDE solver, and compared its reliability with the prevalent Stenstrom-Vitasovic-Takács (SVT) method by assessing their simulation results at various operating conditions. The YRD method also produced a similar solution to the previously developed Method G and Enquist-Osher method. The YRD and SVT methods were also used for a time-to-failure evaluation, and the results show that the choice of numerical method can greatly impact the solution. Reliable numerical methods, such as the YRD method, are strongly recommended.
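The numerical difficulty the comparison turns on is that the settling flux is non-convex, so a scheme's numerical flux determines whether it converges to the physically correct (entropy) solution. A much simpler relative of the YRD scheme is the first-order Godunov method sketched below for the convective part of the model, with illustrative Vesilind settling parameters and the tank's feed and underflow terms omitted:

```python
# First-order Godunov update for dX/dt + d f(X)/dz = 0 with the non-convex
# batch settling flux f(X) = v0*X*exp(-r*X). Parameters, grid, and initial
# condition are illustrative; feed/underflow terms of a full SST are omitted.
import numpy as np

v0, r = 6.0, 0.4                          # assumed Vesilind parameters
f = lambda X: v0 * X * np.exp(-r * X)

def godunov_flux(a, b, samples=64):
    # min/max of f over the interval handles non-convex fluxes correctly
    s = np.linspace(min(a, b), max(a, b), samples)
    return f(s).min() if a <= b else f(s).max()

nz, dz, dt = 100, 0.04, 0.002             # CFL-safe: dt/dz * max|f'| < 1
X = np.where(np.arange(nz) < 50, 3.0, 0.0)    # initial sludge blanket, kg/m^3
for _ in range(500):
    F = [0.0] + [godunov_flux(X[i], X[i + 1]) for i in range(nz - 1)] + [0.0]
    X = X - dt / dz * (np.array(F[1:]) - np.array(F[:-1]))
print("settled profile, every 10th cell:", X[::10].round(2))
```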
Lee, C J; Park, J H; Ciesielski, T E; Thomson, J G; Persing, J A
2008-11-01
A variety of new methods for treating photoaging have been recently introduced, and there has been increasing interest in comparing the relative efficacy of multiple methods. However, the efficacy of a single method is difficult to assess from the data reported in the literature. Photoaged hairless mice were randomly divided into seven treatment groups: control, retinoids (tretinoin and adapalene), lasers (585 nm and CO2), and combination groups (585 nm + adapalene and CO2 + adapalene). Biopsies were taken from the treated regions, and the results were analyzed based on the repair zone. The repair zones of the various methods for photoaging were compared. Retinoids produced a wider repair zone than the control condition. The 585-nm and CO2 laser resurfacing produced results equivalent to those of the control condition. A combination of these lasers with adapalene produced a wider repair zone than the lasers alone, but the combination produced a result equivalent to that of adapalene alone. Retinoids are potent stimuli for neocollagen formation. The 585-nm or CO2 laser alone did not induce more neocollagen than the control condition. In addition, no synergistic effect was observed with the combination treatments. The repair zone of the combination treatment is mainly attributable to adapalene.
2009-01-01
Background Increasing reports of carbapenem-resistant Acinetobacter baumannii infections are of serious concern. Reliable susceptibility testing results remain a critical issue for the clinical outcome. Automated systems are increasingly used for species identification and susceptibility testing. This study was organized to evaluate the accuracies of three widely used automated susceptibility testing methods for testing the imipenem susceptibilities of A. baumannii isolates, by comparison to validated test methods. Methods Selected 112 clinical isolates of A. baumannii collected between January 2003 and May 2006 were tested to confirm imipenem susceptibility results. Strains were tested against imipenem by the reference broth microdilution (BMD), disk diffusion (DD), Etest, BD Phoenix, MicroScan WalkAway and Vitek 2 automated systems. Data were analysed by comparing the results from each test method to those produced by the reference BMD test. Results MicroScan correctly identified all A. baumannii strains, while Vitek 2 failed to identify one strain and Phoenix failed to identify two strains and misidentified two others. Eighty-seven of the strains (78%) were resistant to imipenem by BMD. Etest, Vitek 2 and BD Phoenix produced acceptable error rates when tested against imipenem. Etest showed the best performance with only two minor errors (1.8%). Vitek 2 produced eight minor errors (7.2%). BD Phoenix produced three major errors (2.8%). DD produced two very major errors (1.8%) (slightly higher (0.3%) than the acceptable limit) and three major errors (2.7%). MicroScan showed the worst performance in susceptibility testing with unacceptable error rates: 28 very major (25%) and 50 minor errors (44.6%). Conclusion Reporting errors for A. baumannii against imipenem do exist in susceptibility testing systems. We suggest that clinical laboratories using the MicroScan system for routine use consider employing a second, independent antimicrobial susceptibility testing method to validate imipenem susceptibility. Etest, wherever available, may be used as an easy method to confirm imipenem susceptibility. PMID:19291298
Larouche, Danielle; Cantin-Warren, Laurence; Desgagné, Maxime; Guignard, Rina; Martel, Israël; Ayoub, Akram; Lavoie, Amélie; Gauvin, Robert; Auger, François A.; Moulin, Véronique J.; Germain, Lucie
2016-01-01
There is a clinical need for skin substitutes to replace full-thickness skin loss. Our group has developed a bilayered skin substitute produced from the patient's own fibroblasts and keratinocytes, referred to as Self-Assembled Skin Substitute (SASS). After cell isolation and expansion, the current time required to produce SASS is 45 days. We aimed to optimize the manufacturing process to standardize the production of SASS and to reduce production time. The new approach consisted of seeding keratinocytes on a fibroblast-derived tissue sheet before its detachment from the culture plate. Four days following keratinocyte seeding, the resulting tissue was stacked on two fibroblast-derived tissue sheets and cultured at the air-liquid interface for 10 days, for a total production time of 31 days. An alternative method adapted to more contractile fibroblasts was also developed; it consisted of adding a peripheral frame before seeding fibroblasts in the culture plate. SASSs produced by both new methods shared similar histology, contractile behavior in vitro and in vivo evolution after grafting onto mice when compared with SASSs produced by the 45-day standard method. In conclusion, the new approach for the production of high-quality human skin substitutes should allow earlier autologous grafting for the treatment of severely burned patients. PMID:27872793
Monte Carlo method for calculating the radiation skyshine produced by electron accelerators
NASA Astrophysics Data System (ADS)
Kong, Chaocheng; Li, Quanfeng; Chen, Huaibi; Du, Taibin; Cheng, Cheng; Tang, Chuanxiang; Zhu, Li; Zhang, Hui; Pei, Zhigang; Ming, Shenjin
2005-06-01
Using the MCNP4C Monte Carlo code, the X-ray skyshine produced by 9 MeV, 15 MeV and 21 MeV electron linear accelerators was calculated with a new two-step method combined with the split-and-roulette variance reduction technique. Results from the Monte Carlo simulation, from the empirical formulas used for skyshine calculation, and from dose measurements were analyzed and compared. In conclusion, the skyshine dose measurements agreed reasonably with the results computed by the Monte Carlo method, but deviated from the computational results given by the empirical formulas. The effect on skyshine dose caused by different accelerator head structures is also discussed in this paper.
Validation of the ANSR Listeria method for detection of Listeria spp. in environmental samples.
Wendorf, Michael; Feldpausch, Emily; Pinkava, Lisa; Luplow, Karen; Hosking, Edan; Norton, Paul; Biswas, Preetha; Mozola, Mark; Rice, Jennifer
2013-01-01
ANSR Listeria is a new diagnostic assay for detection of Listeria spp. in sponge or swab samples taken from a variety of environmental surfaces. The method is an isothermal nucleic acid amplification assay based on the nicking enzyme amplification reaction technology. Following single-step sample enrichment for 16-24 h, the assay is completed in 40 min, requiring only simple instrumentation. In inclusivity testing, 48 of 51 Listeria strains tested positive, with only the three strains of L. grayi producing negative results. Further investigation showed that L. grayi is reactive in the ANSR assay, but its ability to grow under the selective enrichment conditions used in the method is variable. In exclusivity testing, 32 species of non-Listeria, Gram-positive bacteria all produced negative ANSR assay results. Performance of the ANSR method was compared to that of the U.S. Department of Agriculture-Food Safety and Inspection Service reference culture procedure for detection of Listeria spp. in sponge or swab samples taken from inoculated stainless steel, plastic, ceramic tile, sealed concrete, and rubber surfaces. Data were analyzed using Chi-square and probability of detection models. Only one surface, stainless steel, showed a significant difference in performance between the methods, with the ANSR method producing more positive results. Results of internal trials were supported by findings from independent laboratory testing. The ANSR Listeria method can be used as an accurate, rapid, and simple alternative to standard culture methods for detection of Listeria spp. in environmental samples.
NASA Technical Reports Server (NTRS)
Chen, D. W.; Sengupta, S. K.; Welch, R. M.
1989-01-01
This paper compares the results of cloud-field classification derived from two simplified vector approaches, the Sum and Difference Histogram (SADH) and the Gray Level Difference Vector (GLDV), with the results produced by the Gray Level Cooccurrence Matrix (GLCM) approach described by Welch et al. (1988). It is shown that the SADH method produces accuracies equivalent to those obtained using the GLCM method, while the GLDV method fails to resolve error clusters. Compared to the GLCM method, the SADH method leads to a 31 percent saving in run time and a 50 percent saving in storage requirements, while the GLDV approach leads to a 40 percent saving in run time and an 87 percent saving in storage requirements.
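The storage saving comes from replacing the 2-D co-occurrence matrix with two 1-D histograms of pixel-pair sums and differences, from which the texture features are then derived. A hedged sketch of that encoding (feature definitions follow Unser's sum-and-difference formulation; the image is stand-in data):

```python
# Sum-and-difference histograms (SADH): for a displacement (dy, dx), build
# 1-D histograms of pixel-pair sums and differences, then derive texture
# features from them instead of from a 2-D co-occurrence matrix.
import numpy as np

def sadh_features(img, dy=0, dx=1, levels=256):
    a = img[: img.shape[0] - dy, : img.shape[1] - dx].astype(int)
    b = img[dy:, dx:].astype(int)
    Ps = np.bincount((a + b).ravel(), minlength=2 * levels) / a.size
    Pd = np.bincount((a - b).ravel() + levels, minlength=2 * levels) / a.size
    i_sum = np.arange(2 * levels)
    j_diff = np.arange(2 * levels) - levels
    mean = 0.5 * (i_sum * Ps).sum()
    contrast = (j_diff**2 * Pd).sum()
    homogeneity = (Pd / (1.0 + j_diff**2)).sum()
    return mean, contrast, homogeneity

rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(128, 128))      # stand-in cloud-field image
print(sadh_features(img))                        # (mean, contrast, homogeneity)
```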
Pugazhendhi, Sugandhi; Dorairaj, Arvind Prasanth
Diabetic patients are more prone to the development of foot ulcers because their underlying tissues are exposed to colonization by various pathogenic organisms. Biofilm formation therefore plays a vital role in disease progression by conferring antibiotic resistance on the pathogens found in foot infections. The present study demonstrated the correlation of a biofilm assay with the clinical characteristics of diabetic foot infection. Clinical characteristics such as ulcer duration, size, nature, and grade were associated with biofilm production. Our results suggest that as the size of the ulcer increased under poor glycemic control, the organism was more likely to be positive for biofilm formation. A high degree of antibiotic resistance was exhibited by the biofilm-producing gram-positive isolates for erythromycin and by the gram-negative isolates for cefpodoxime. Biofilm production was compared using three different conventional methods. Strong producers by the tube adherence method were also able to produce biofilm by the cover slip assay method, whereas weak producers by the tube adherence method had difficulty producing biofilm by the other two methods, indicating that the tube adherence method is the best method for assessing biofilm formation. Strong biofilm production detected by the conventional methods was further confirmed by scanning electron microscopy analysis, in which bacteria attached as a distinct biofilm layer. Overall, biofilm producers exhibited a higher degree of antibiotic resistance than nonbiofilm producers, and the tube adherence and cover slip assays were found to be the better methods for biofilm evaluation. Copyright © 2018 The American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Czarnecki, S.; Williams, S.
2017-12-01
The accuracy of a method for measuring the effective atomic numbers of minerals using bremsstrahlung intensities has been investigated. The method is independent of detector efficiency and maximum accelerating voltage. In order to test the method, experiments were performed which involved low-energy electrons incident on thick malachite, pyrite, and galena targets. The resultant thick-target bremsstrahlung was compared to bremsstrahlung produced using a standard target, and experimental effective atomic numbers were calculated using data from a previous study (in which the Z-dependence of thick-target bremsstrahlung was studied). Comparisons of the results to theoretical values suggest that the method has potential for implementation in energy-dispersive X-ray spectroscopy systems.
Clare, John; McKinney, Shawn T; DePue, John E; Loftin, Cynthia S
2017-10-01
It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture-recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters. © 2017 by the Ecological Society of America.
Producer-level benefits of sustainability certification.
Blackman, Allen; Rivera, Jorge
2011-12-01
Initiatives certifying that producers of goods and services adhere to defined environmental and social-welfare production standards are increasingly popular. According to proponents, these initiatives create financial incentives for producers to improve their environmental, social, and economic performance. We reviewed the evidence on whether these initiatives have such benefits. We identified peer-reviewed, ex post, producer-level studies in economic sectors in which certification is particularly prevalent (bananas, coffee, fish products, forest products, and tourism operations), classified these studies on the basis of whether their design and methods likely generated credible results, summarized findings from the studies with credible results, and considered how these findings might guide future research. We found 46 relevant studies, most of which focused on coffee and forest products and examined fair-trade and Forest Stewardship Council certification. The methods used in 11 studies likely generated credible results. Of these 11 studies, nine examined the economic effects and two the environmental effects of certification. The results of four of the 11 studies, all of which examined economic effects, showed that certification has producer-level benefits. Hence, the evidence to support the hypothesis that certification benefits the environment or producers is limited. More evidence could be generated by incorporating rigorous, independent evaluation into the design and implementation of projects promoting certification. ©2011 Society for Conservation Biology.
Schulze, Katja; Lang, Imke; Enke, Heike; Grohme, Diana; Frohme, Marcus
2015-04-17
Ethanol production via genetically engineered cyanobacteria is a promising route to biofuels. Through the introduction of a pyruvate decarboxylase and an alcohol dehydrogenase, direct ethanol production becomes possible within the cells. However, during cultivation genetic instability can lead to mutations and thus to loss of ethanol production; cells then revert to the wild-type phenotype. A method for rapid and simple detection of these non-producing revertant cells in an ethanol-producing cell population is an important quality control measure for predicting the genetic stability and longevity of a producing culture. Several comparable cultivation experiments revealed a difference in pigmentation between non-producing and producing cells: the accessory pigment phycocyanin (PC) is reduced in the ethanol producer, giving the culture a yellowish appearance. Microarray and western blot studies of Synechocystis sp. PCC6803 and Synechococcus sp. PCC7002 confirmed this PC reduction at the RNA and protein levels. Based on these findings we developed a fluorescence microscopy method to distinguish producing and non-producing cells by their pigmentation phenotype. With a specific filter set, the emitted fluorescence of a producer cell with reduced PC content appeared orange, the fluorescence of a non-producing cell with a wild-type pigmentation phenotype was detected in red, and dead cells appeared green. In an automated process, multiple images of each sample were taken and analyzed with a plugin for the image analysis software ImageJ to identify dead (green), non-producing (red), and producing (orange) cells. The validation experiments showed good identification, with 98% red cells in the wild-type sample and 90% orange cells in the producer sample. Cells with the wild-type pigmentation phenotype (red) detected in the producer sample were either not yet fully induced (in 48-h induced cultures) or had already reverted to non-producing cells (in long-term photobioreactor cultivations), emphasizing the sensitivity and resolution of the method. The fluorescence microscopy method is a useful technique for rapid detection of non-producing single cells in an ethanol-producing cell population.
Method of Curved Models and Its Application to the Study of Curvilinear Flight of Airships. Part II
NASA Technical Reports Server (NTRS)
Gourjienko, G A
1937-01-01
This report compares the results obtained by the aid of curved models with the results of tests made by the method of damped oscillations, and with flight tests. Consequently we shall be able to judge which method of testing in the tunnel produces results that are in closer agreement with flight test results.
USDA-ARS?s Scientific Manuscript database
Aim: Identification of exopolysaccharide (EPS)-producing lactobacilli as EPS production is potentially a very important trait among probiotic lactobacilli from technological and health promoting perspectives. Methods and Results: Characterization of EPS-producing Lactobacillus mucosae DPC 6426 in de...
Saito, Ryoichi; Koyano, Saho; Dorin, Misato; Higurashi, Yoshimi; Misawa, Yoshiki; Nagano, Noriyuki; Kaneko, Takamasa; Moriya, Kyoji
2015-01-01
We investigated the performance of a phenotypic test, the Carbapenemase Detection Set (MAST-CDS), for the identification of carbapenemase-producing Enterobacteriaceae. Our results indicated that MAST-CDS is rapid, easily performed, simple to interpret, and highly sensitive for the identification of carbapenemase producers, particularly imipenemase producers. Copyright © 2014 Elsevier B.V. All rights reserved.
Methods for gas detection using stationary hyperspectral imaging sensors
Conger, James L [San Ramon, CA; Henderson, John R [Castro Valley, CA
2012-04-24
According to one embodiment, a method comprises producing a first hyperspectral imaging (HSI) data cube of a location at a first time using data from a HSI sensor; producing a second HSI data cube of the same location at a second time using data from the HSI sensor; subtracting on a pixel-by-pixel basis the second HSI data cube from the first HSI data cube to produce a raw difference cube; calibrating the raw difference cube to produce a calibrated raw difference cube; selecting at least one desired spectral band based on a gas of interest; producing a detection image based on the at least one selected spectral band and the calibrated raw difference cube; examining the detection image to determine presence of the gas of interest; and outputting a result of the examination. Other methods, systems, and computer program products for detecting the presence of a gas are also described.
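A hedged numpy sketch of the claimed pipeline: difference two data cubes pixel by pixel, select a spectral band, and threshold a detection image. The band index and threshold are placeholders, and the calibration step is only indicated by a comment.

```python
# Illustrative sketch of HSI change detection, not the patented implementation.
import numpy as np

def detect_gas(cube_t1, cube_t2, band_index, threshold):
    # pixel-by-pixel raw difference cube
    diff = cube_t2.astype(float) - cube_t1.astype(float)
    # (a real system would radiometrically calibrate `diff` here)
    detection_image = np.abs(diff[:, :, band_index])  # selected spectral band
    return detection_image > threshold                # presence mask

cube1 = np.random.rand(128, 128, 64)                  # synthetic (rows, cols, bands)
cube2 = cube1 + 0.02 * np.random.rand(128, 128, 64)
mask = detect_gas(cube1, cube2, band_index=40, threshold=0.015)
print(f"{mask.sum()} pixels flagged")
```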
NASA Astrophysics Data System (ADS)
Yuliasmi, S.; Pardede, T. R.; Nerdy; Syahputra, H.
2017-03-01
Oil palm midrib is one of the wastes generated by oil palm plants and contains 34.89% cellulose. This cellulose has the potential to produce microcrystalline cellulose, which can be used as an excipient in tablet formulations for direct compression. Microcrystalline cellulose is the result of a controlled hydrolysis of alpha cellulose, so the process of extracting alpha cellulose from oil palm midrib greatly affects the quality of the resulting microcrystalline cellulose. The purpose of this study was to compare the microcrystalline cellulose produced from alpha cellulose extracted from oil palm midrib by two different methods. The first delignification method uses sodium hydroxide. The second method uses a mixture of nitric acid and sodium nitrite, followed by sodium hydroxide and sodium sulfite. The microcrystalline cellulose obtained by each method was characterized separately, including an organoleptic test, color reagent tests, a dissolution test, a pH test, and determination of functional groups by FTIR. The results were compared with microcrystalline cellulose already available on the market. The characterization results showed that the microcrystalline cellulose obtained by the first method has characteristics most similar to the microcrystalline cellulose available on the market.
Method for producing tetraphenoxide-n-aniloxy cyclophosphazotriene
NASA Technical Reports Server (NTRS)
Khofbauer, Y. I.; Kolesnikov, V. G.
1986-01-01
A method for producing tetraphenoxide-n-aniloxy cyclophosphazotriene, distinguished by the fact that tetraphenoxide dichlorocyclophosphazotriene is processed with an alkali metal n-acetamidophenolate in an organic solvent, for example pyridine, during heating, after which the resulting compound is saponified in the presence of a mineral or organic acid, for example hydrochloric acid, and the desired product is separated by well-known techniques.
Woksepp, Hanna; Jernberg, Cecilia; Tärnberg, Maria; Ryberg, Anna; Brolund, Alma; Nordvall, Michaela; Olsson-Liljequist, Barbro; Wisell, Karin Tegmark; Monstein, Hans-Jürg; Nilsson, Lennart E.; Schön, Thomas
2011-01-01
Methods for the confirmation of nosocomial outbreaks of bacterial pathogens are complex, expensive, and time-consuming. Recently, a method based on ligation-mediated PCR (LM/PCR) using a low denaturation temperature which produces specific melting-profile patterns of DNA products has been described. Our objective was to further develop this method for real-time PCR and high-resolution melting analysis (HRM) in a single-tube system optimized in order to achieve results within 1 day. Following the optimization of LM/PCR for real-time PCR and HRM (LM/HRM), the method was applied for a nosocomial outbreak of extended-spectrum-beta-lactamase (ESBL)-producing and ST131-associated Escherichia coli isolates (n = 15) and control isolates (n = 29), including four previous clusters. The results from LM/HRM were compared to results from pulsed-field gel electrophoresis (PFGE), which served as the gold standard. All isolates from the nosocomial outbreak clustered by LM/HRM, which was confirmed by gel electrophoresis of the LM/PCR products and PFGE. Control isolates that clustered by LM/PCR (n = 4) but not by PFGE were resolved by confirmatory gel electrophoresis. We conclude that LM/HRM is a rapid method for the detection of nosocomial outbreaks of bacterial infections caused by ESBL-producing E. coli strains. It allows the analysis of isolates in a single-tube system within a day, and the discriminatory power is comparable to that of PFGE. PMID:21956981
Salganik, Matthew J; Fazito, Dimitri; Bertoni, Neilane; Abdo, Alexandre H; Mello, Maeve B; Bastos, Francisco I
2011-11-15
One of the many challenges hindering the global response to the human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome (AIDS) epidemic is the difficulty of collecting reliable information about the populations most at risk for the disease. Thus, the authors empirically assessed a promising new method for estimating the sizes of most at-risk populations: the network scale-up method. Using 4 different data sources, 2 of which were from other researchers, the authors produced 5 estimates of the number of heavy drug users in Curitiba, Brazil. The authors found that the network scale-up and generalized network scale-up estimators produced estimates 5-10 times higher than estimates made using standard methods (the multiplier method and the direct estimation method using data from 2004 and 2010). Given that equally plausible methods produced such a wide range of results, the authors recommend that additional studies be undertaken to compare estimates based on the scale-up method with those made using other methods. If scale-up-based methods routinely produce higher estimates, this would suggest that scale-up-based methods are inappropriate for populations most at risk of HIV/AIDS or that standard methods may tend to underestimate the sizes of these populations.
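The basic network scale-up estimator has a simple closed form; the sketch below uses illustrative numbers, not the Curitiba data.

```python
# Basic network scale-up estimator (NSUM), sketched under the usual assumption
# that ties to the hidden population are reported without bias:
# N_hidden ~= (sum of reported ties to hidden population) /
#             (sum of respondents' personal network sizes) * N_total
def scale_up_estimate(ties_to_hidden, network_sizes, total_population):
    return sum(ties_to_hidden) / sum(network_sizes) * total_population

ties = [2, 0, 1, 3, 0, 1]                 # reported acquaintances in hidden group
degrees = [290, 310, 250, 400, 180, 320]  # estimated personal network sizes
print(scale_up_estimate(ties, degrees, total_population=1_750_000))
```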
A novel method for producing microspheres with semipermeable polymer membranes
NASA Technical Reports Server (NTRS)
Lin, K. C.; Wang, Taylor G.
1992-01-01
A new and systematic approach for producing polymer microspheres has been demonstrated. The membrane of the microsphere is formed by immersing the polyanionic droplet into a collapsing annular sheet, which is made of another polycation polymer solution. This method minimizes the impact force during the time when the chemical reaction takes place, hence eliminating the shortcomings of the current encapsulation techniques. The results of this study show the feasibility of this method for mass production of microcapsules.
Chiu, Yi-Ting; Yen, Hung-Kai; Lin, Tsair-Fuh
2016-11-01
2-Methylisoborneol (2-MIB) is a commonly detected cyanobacterial odorant in drinking water sources in many countries. To provide safe and high-quality water, development of a monitoring method for the detection of 2-MIB-synthesis (mibC) genes is very important. In this study, new primers MIBS02F/R intended specifically for the mibC gene were developed and tested. Experimental results show that the MIBS02F/R primer set was able to capture 13 2-MIB-producing cyanobacterial strains grown in the laboratory, and to effectively amplify the targeted DNA region from 17 2-MIB-producing cyanobacterial strains listed in the literature. The primers were further coupled with a TaqMan probe to detect 2-MIB producers in 29 drinking water reservoirs (DWRs). The results showed statistically significant correlations between mibC genes and 2-MIB concentrations for the data from each reservoir (R² = 0.413-0.998; p < 0.05), from all reservoirs in each of the three islands (R² = 0.302-0.796; p < 0.01), and from all data of the three islands (R² = 0.473-0.479; p < 0.01). The results demonstrate that real-time PCR can be an alternative method to provide information to managers of reservoirs and water utilities facing 2-MIB-related incidents. Copyright © 2016 Elsevier Inc. All rights reserved.
Tresadern, Gary; Agrafiotis, Dimitris K
2009-12-01
Stochastic proximity embedding (SPE) and self-organizing superimposition (SOS) are two recently introduced methods for conformational sampling that have shown great promise in several application domains. Our previous validation studies aimed at exploring the limits of these methods and have involved rather exhaustive conformational searches producing a large number of conformations. However, from a practical point of view, such searches have become the exception rather than the norm. The increasing popularity of virtual screening has created a need for 3D conformational search methods that produce meaningful answers in a relatively short period of time and work effectively on a large scale. In this work, we examine the performance of these algorithms and the effects of different parameter settings at varying levels of sampling. Our goal is to identify search protocols that can produce a diverse set of chemically sensible conformations and have a reasonable probability of sampling biologically active space within a small number of trials. Our results suggest that both SPE and SOS are extremely competitive in this regard and produce very satisfactory results with as few as 500 conformations per molecule. The results improve even further when the raw conformations are minimized with a molecular mechanics force field to remove minor imperfections and any residual strain. These findings provide additional evidence that these methods are suitable for many everyday modeling tasks, both high- and low-throughput.
Comparison of Climatological Planetary Boundary Layer Depth Estimates Using the GEOS-5 AGCM
NASA Technical Reports Server (NTRS)
Mcgrath-Spangler, Erica Lynn; Molod, Andrea M.
2014-01-01
Planetary boundary layer (PBL) processes, including those influencing the PBL depth, control many aspects of weather and climate, and accurate models of these processes are important for forecasting changes in the future. However, evaluation of model estimates of PBL depth is difficult because no consensus on the definition of PBL depth currently exists and various methods for estimating this parameter can give results that differ by hundreds of meters or more. In order to facilitate comparisons between the Goddard Earth Observation System (GEOS-5) and other modeling and observational systems, seven PBL depth estimation methods are used to produce PBL depth climatologies and are evaluated and compared here. All seven methods evaluate the same atmosphere, so all differences are related solely to the definition chosen. These methods depend on the scalar diffusivity, bulk and local Richardson numbers, and the diagnosed horizontal turbulent kinetic energy (TKE). Results are aggregated by climate class in order to allow broad generalizations. The various PBL depth estimations give similar midday results with some exceptions. One method based on horizontal turbulent kinetic energy produces deeper PBL depths in the winter associated with winter storms. In warm, moist conditions, the method based on a bulk Richardson number gives results that are shallower than those given by the methods based on the scalar diffusivity. The impact of turbulence driven by radiative cooling at cloud top is most significant during the evening transition and along several regions across the oceans, and methods sensitive to this cooling produce deeper PBL depths where it is most active. Additionally, Richardson number-based methods collapse better at night than methods that depend on the scalar diffusivity. This feature potentially affects tracer transport.
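As one concrete example of how a single definition fixes the PBL depth, the sketch below implements a bulk-Richardson-number criterion on a synthetic profile; the critical value of 0.25 is a common choice in the literature, not necessarily the GEOS-5 setting.

```python
# Sketch of a bulk Richardson number PBL depth diagnostic on synthetic data.
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def pbl_depth_bulk_ri(z, theta_v, u, v, ri_crit=0.25):
    """z: heights above surface (m); theta_v: virtual potential temperature (K);
    u, v: wind components (m/s). Returns first height where Ri_b > ri_crit."""
    ri = G * z * (theta_v - theta_v[0]) / (theta_v[0] * (u**2 + v**2 + 1e-6))
    above = np.where(ri > ri_crit)[0]
    return z[above[0]] if above.size else z[-1]

z = np.arange(10, 3000, 50.0)
theta_v = 300 + 0.002 * np.maximum(z - 800, 0)   # well-mixed layer up to ~800 m
u = np.full_like(z, 5.0)
v = np.full_like(z, 1.0)
print(pbl_depth_bulk_ri(z, theta_v, u, v), "m")
```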
Social network extraction based on Web: 3. the integrated superficial method
NASA Astrophysics Data System (ADS)
Nasution, M. K. M.; Sitompul, O. S.; Noah, S. A.
2018-03-01
The Web as a source of information has become part of social behavior information. Although it involves only the limited information disclosed by search engines, in the form of hit counts, snippets, and URL addresses of web pages, the integrated extraction method produces a social network that is not only trusted but enriched. Unintegrated extraction methods may produce social networks without explanation, resulting in poor supplemental information, or in a surmise-laden and consequently unrepresentative social structure. The integrated superficial method, in addition to generating the core social network, also generates an expanded network so as to reach the scope of relation clues, with a number of edges computationally approaching n(n - 1)/2 for n social actors.
Method for excluding salt and other soluble materials from produced water
Phelps, Tommy J [Knoxville, TN; Tsouris, Costas [Oak Ridge, TN; Palumbo, Anthony V [Oak Ridge, TN; Riestenberg, David E [Knoxville, TN; McCallum, Scott D [Knoxville, TN
2009-08-04
A method for reducing the salinity, as well as the hydrocarbon concentration, of produced water to levels sufficient to meet surface water discharge standards. Pressure vessel and coflow injection technology developed at the Oak Ridge National Laboratory is used to mix produced water and a gas hydrate-forming fluid to form a solid or semi-solid gas hydrate mixture. Salts and solids are excluded from the water that becomes part of the hydrate cage. A three-step process of dissociating the hydrate results in purified water suitable for irrigation.
Optimizing Robinson Operator with Ant Colony Optimization As a Digital Image Edge Detection Method
NASA Astrophysics Data System (ADS)
Yanti Nasution, Tarida; Zarlis, Muhammad; K. M Nasution, Mahyuddin
2017-12-01
Edge detection serves to identify the boundaries of an object against a background with which it overlaps. One of the classic methods for edge detection is the Robinson operator. The Robinson operator produces thin, faint, grey edge lines. To overcome these deficiencies, an improved edge detection method is proposed using a graph-based approach with the Ant Colony Optimization algorithm. The repairs that can be performed are thickening the edges and reconnecting edges that have been cut off. This research aims to optimize the Robinson operator with Ant Colony Optimization, compare the outputs, and infer the extent to which Ant Colony Optimization can improve unoptimized edge detection results and the accuracy of Robinson edge detection. The parameters used in the performance measurement of edge detection are the morphology of the resulting edge lines, MSE, and PSNR. The results showed that the combined Robinson and Ant Colony Optimization method produces images with thicker, more distinct edges. The Ant Colony Optimization method can be used to optimize the Robinson operator, improving the image result of Robinson detection by an average of 16.77% over the classic Robinson result.
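For reference, a sketch of the classic Robinson compass operator that is being optimized: eight directional masks obtained by circularly rotating the border coefficients of the north mask, with the edge strength taken as the strongest directional response. This is the unoptimized baseline only, not the paper's ACO stage.

```python
# Illustrative Robinson compass-operator edge detection using scipy.
import numpy as np
from scipy.ndimage import convolve

def compass_masks():
    # Circularly rotate the border coefficients of the north mask in 45-degree steps.
    border = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    coeffs = [1, 2, 1, 0, -1, -2, -1, 0]  # north mask border values
    masks = []
    for shift in range(8):
        m = np.zeros((3, 3))
        for (r, c), v in zip(border, np.roll(coeffs, shift)):
            m[r, c] = v
        masks.append(m)
    return masks

def robinson_edges(img):
    img = img.astype(float)
    # strongest response over the eight compass directions
    return np.max([np.abs(convolve(img, m)) for m in compass_masks()], axis=0)

img = np.zeros((32, 32))
img[:, 16:] = 255                      # vertical step edge
print(robinson_edges(img).max())
```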
NASA Astrophysics Data System (ADS)
Kilpatrick, Brian M.; Lewis, Nikole K.; Kataria, Tiffany; Deming, Drake; Ingalls, James G.; Krick, Jessica E.; Tucker, Gregory S.
2017-01-01
We measure the 4.5 μm thermal emission of five transiting hot Jupiters, WASP-13b, WASP-15b, WASP-16b, WASP-62b, and HAT-P-22b using channel 2 of the Infrared Array Camera (IRAC) on the Spitzer Space Telescope. Significant intrapixel sensitivity variations in Spitzer IRAC data require careful correction in order to achieve precision on the order of several hundred parts per million (ppm) for the measurement of exoplanet secondary eclipses. We determine eclipse depths by first correcting the raw data using three independent data reduction methods. The Pixel Gain Map (PMAP), Nearest Neighbors (NNBR), and Pixel Level Decorrelation (PLD) each correct for the intrapixel sensitivity effect in Spitzer photometric time-series observations. The results from each methodology are compared against each other to establish if they reach a statistically equivalent result in every case and to evaluate their ability to minimize uncertainty in the measurement. We find that all three methods produce reliable results. For every planet examined here NNBR and PLD produce results that are in statistical agreement. However, the PMAP method appears to produce results in slight disagreement in cases where the stellar centroid is not kept consistently on the most well characterized area of the detector. We evaluate the ability of each method to reduce the scatter in the residuals as well as in the correlated noise in the corrected data. The NNBR and PLD methods consistently minimize both white and red noise levels and should be considered reliable and consistent. The planets in this study span equilibrium temperatures from 1100 to 2000 K and have brightness temperatures that require either high albedo or efficient recirculation. However, it is possible that other processes such as clouds or disequilibrium chemistry may also be responsible for producing these brightness temperatures.
Disconfirmed hedonic expectations produce perceptual contrast, not assimilation.
Zellner, Debra A; Strickhouser, Dinah; Tornow, Carina E
2004-01-01
In studies of hedonic ratings, contrast is the usual result when expectations about test stimuli are produced through the presentation of context stimuli, whereas assimilation is the usual result when expectations about test stimuli are produced through labeling, advertising, or the relaying of information to the subject about the test stimuli. Both procedures produce expectations that are subsequently violated, but the outcomes are different. The present studies demonstrate that both assimilation and contrast can occur even when expectations are produced by verbal labels and the degree of violation of the expectation is held constant. One factor determining whether assimilation or contrast occurs appears to be the certainty of the expectation. Expectations that convey certainty are produced by methods that lead to social influence on subjects' ratings, producing assimilation. When social influence is not a factor and subjects give judgments influenced only by the perceived hedonic value of the stimulus, contrast is the result.
Bischel, William K. [Menlo Park, CA; Jacobs, Ralph R. [Livermore, CA; Prosnitz, Donald [Hamden, CT; Rhodes, Charles K. [Palo Alto, CA; Kelly, Patrick J. [Fort Lewis, WA
1979-02-20
Method and apparatus for producing laser radiation by two-photon optical pumping of an atomic or molecular gaseous medium and subsequent lasing action. A population inversion is created as a result of two-photon absorption of the gaseous species. Stark tuning is utilized, if necessary, in order to tune the two-photon transition into exact resonance. In particular, gaseous ammonia (NH.sub.3) or methyl fluoride (CH.sub.3 F) is optically pumped by a pair of CO.sub.2 lasers to create a population inversion resulting from simultaneous two-photon excitation of a high-lying vibrational state, and laser radiation is produced by stimulated emission of coherent radiation from the inverted level.
Beck, H J; Birch, G F
2013-06-01
Stormwater contaminant loading estimates using event mean concentration (EMC), rainfall/runoff relationship calculations and computer modelling (Model of Urban Stormwater Infrastructure Conceptualisation--MUSIC) demonstrated high variability in common methods of water quality assessment. Predictions of metal, nutrient and total suspended solid loadings for three highly urbanised catchments in Sydney estuary, Australia, varied greatly within and amongst methods tested. EMC and rainfall/runoff relationship calculations produced similar estimates (within 1 SD) in a statistically significant number of trials; however, considerable variability within estimates (∼50 and ∼25 % relative standard deviation, respectively) questions the reliability of these methods. Likewise, upper and lower default inputs in a commonly used loading model (MUSIC) produced an extensive range of loading estimates (3.8-8.3 times above and 2.6-4.1 times below typical default inputs, respectively). Default and calibrated MUSIC simulations produced loading estimates that agreed with EMC and rainfall/runoff calculations in some trials (4-10 from 18); however, they were not frequent enough to statistically infer that these methods produced the same results. Great variance within and amongst mean annual loads estimated by common methods of water quality assessment has important ramifications for water quality managers requiring accurate estimates of the quantities and nature of contaminants requiring treatment.
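The EMC approach reduces to a simple product of concentration and runoff volume; a sketch with placeholder values (not the Sydney estuary data) follows.

```python
# Event-mean-concentration (EMC) loading sketch; all inputs are illustrative.
def annual_load_kg(emc_mg_per_L, runoff_coeff, rainfall_mm, area_ha):
    runoff_volume_L = rainfall_mm * area_ha * 10_000  # 1 mm over 1 ha = 10,000 L
    return emc_mg_per_L * runoff_coeff * runoff_volume_L / 1e6  # mg -> kg

# e.g., total suspended solids at 150 mg/L over a 50 ha urban catchment
print(annual_load_kg(emc_mg_per_L=150, runoff_coeff=0.7,
                     rainfall_mm=1200, area_ha=50), "kg/yr")
```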
Aluminum transfer method for plating plastics
NASA Technical Reports Server (NTRS)
Goodrich, W. D.; Stalmach, C. J., Jr.
1977-01-01
Electroless plating technique produces plate of uniform thickness. Hardness and abrasion resistance can be increased further by heat treatment. Method results in seamless coating over many materials, has low thermal conductivity, and is relatively inexpensive compared to conventional methods.
Validation and implementation of a novel high-throughput behavioral phenotyping instrument for mice
Brodkin, Jesse; Frank, Dana; Grippo, Ryan; Hausfater, Michal; Gulinello, Maria; Achterholt, Nils; Gutzen, Christian
2015-01-01
Background: Behavioral assessment of mutant mouse models and novel candidate drugs is a slow and labor-intensive process. This limitation produces a significant impediment to CNS drug discovery. New method: By combining video and vibration analysis we created an automated system that provides the most detailed description of mouse behavior available. Our system (the Behavioral Spectrometer) allowed for the rapid assessment of behavioral abnormalities in the BTBR model of autism, the restraint model of stress and the irritant model of inflammatory pain. Results: We found that each model produced a unique alteration of the spectrum of behavior emitted by the mice. BTBR mice engaged in more grooming and less rearing behaviors. Prior restraint stress produced dramatic increases in grooming activity at the expense of locomotor behavior. Pain produced profound decreases in emitted behavior that were reversible with analgesic treatment. Comparison with existing method(s): We evaluated our system through a direct comparison on the same subjects with the current “gold standard” of human observation of video recordings. Using the same mice evaluated over the same range of behaviors, the Behavioral Spectrometer produced a quantitative categorization of behavior that was highly correlated with the scores produced by trained human observers (r = 0.97). Conclusions: Our results show that this new system is a highly valid and sensitive method to characterize behavioral effects in mice. As a fully automated and easily scalable instrument the Behavioral Spectrometer represents a high-throughput behavioral tool that reduces the time and labor involved in behavioral research. PMID:24384067
Production and assessment of red alder planting stock.
M.A. Radwan; Y. Tanaka; A. Dobkowski; W. Fangen
1992-01-01
A series of experiments was conducted over 4 years to test and develop methods to produce acceptable red alder planting stock and to assess quality and outplanting performance of resulting stock. Results indicated that red alder planting stock can be produced as containerized seedlings (plugs) or as bare-root nontransplant and transplant trees. In general, bare-root...
Glycemic index of cereals and tubers produced in China
Yang, Yue-Xin; Wang, Hong-Wei; Cui, Hong-Mei; Wang, Yan; Yu, Lian-Da; Xiang, Shi-Xue; Zhou, Shui-Ying
2006-01-01
AIM: To determine the GI of some cereals and tubers produced in China in an effort to establish a database of the glycemic index (GI) of Chinese foods. METHODS: Food containing 50 g carbohydrate was consumed by 8-12 healthy adults after they had fasted for 10 h, and blood glucose was monitored for 2 h. Glucose was used as the reference food. The GI of each food was calculated according to a standard method. RESULTS: The GI of 9 types of sugar and 60 kinds of food were determined. CONCLUSION: Food GI is mainly determined by the nature of the carbohydrate and its processing. Most cereals and tubers produced in China have GIs similar to their counterparts produced in other countries. PMID:16733864
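The standard GI calculation referred to is the incremental area under the 2-h glucose curve, expressed as a percentage of the same subject's response to the glucose reference; a sketch with synthetic glucose curves follows.

```python
# Hedged sketch of the standard GI calculation: trapezoidal incremental AUC
# (ignoring area below baseline) relative to the glucose reference.
import numpy as np

def incremental_auc(t_min, glucose, baseline=None):
    baseline = glucose[0] if baseline is None else baseline
    rise = np.maximum(np.asarray(glucose, float) - baseline, 0.0)
    return np.trapz(rise, t_min)

def glycemic_index(t, test_response, reference_response):
    return 100.0 * incremental_auc(t, test_response) / incremental_auc(t, reference_response)

t = [0, 15, 30, 45, 60, 90, 120]            # minutes
ref = [5.0, 7.8, 8.9, 8.1, 7.2, 6.0, 5.2]   # glucose drink response (mmol/L)
test = [5.0, 6.5, 7.4, 7.0, 6.4, 5.6, 5.1]  # test food response (synthetic)
print(f"GI = {glycemic_index(t, test, ref):.0f}")
```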
Graphical method for comparative statistical study of vaccine potency tests.
Pay, T W; Hingley, P J
1984-03-01
Producers and consumers are interested in some of the intrinsic characteristics of vaccine potency assays for the comparative evaluation of suitable experimental design. A graphical method is developed which represents the precision of test results, the sensitivity of such results to changes in dosage, and the relevance of the results in the way they reflect the protection afforded in the host species. The graphs can be constructed from Producer's scores and Consumer's scores on each of the scales of test score, antigen dose and probability of protection against disease. A method for calculating these scores is suggested and illustrated for single and multiple component vaccines, for tests which do or do not employ a standard reference preparation, and for tests which employ quantitative or quantal systems of scoring.
NASA Astrophysics Data System (ADS)
Nakhjavani, Maryam; Nikkhah, V.; Sarafraz, M. M.; Shoja, Saeed; Sarafraz, Marzieh
2017-10-01
In this paper, silver nanoparticles are produced via a green synthesis method using green tea leaves. The introduced method is cost-effective and accessible, and provides conditions to manipulate and control the average nanoparticle size. The produced particles were characterized using x-ray diffraction, scanning electron microscopy, UV-vis spectroscopy, dynamic light scattering, zeta potential measurement and thermal conductivity measurement. Results demonstrated that the produced samples of silver nanoparticles are pure in structure (based on the x-ray diffraction test), almost identical in morphology (spherical and to some extent cubic) and show long-term stability when dispersed in deionized water. The UV-vis spectra showed a peak at 450 nm, which is in accordance with previous studies reported in the literature. Results also showed that small particles have higher thermal and antimicrobial performance. As green tea leaves are used for extracting the silver nanoparticles, the method is eco-friendly. The thermal behaviour of the silver nanoparticles was also analysed by dispersing the nanoparticles in deionized water. Results showed that the thermal conductivity of the silver nanofluid is higher than that obtained for deionized water. The activity of Ag nanoparticles against some bacteria was also examined to find suitable antibacterial applications for the produced particles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, Samuel A.; Pajer, Gary A.; Paluszek, Michael A.
A system and method for producing and controlling high thrust and desirable specific impulse from a continuous fusion reaction is disclosed. The resultant relatively small rocket engine will have lower cost to develop, test, and operate than the prior art, allowing spacecraft missions throughout the planetary system and beyond. The rocket engine method and system includes a reactor chamber and a heating system for heating a stable plasma to produce fusion reactions in the stable plasma. Magnets produce a magnetic field that confines the stable plasma. A fuel injection system and a propellant injection system are included. The propellant injection system injects cold propellant into a gas box at one end of the reactor chamber, where the propellant is ionized into a plasma. The propellant and fusion products are directed out of the reactor chamber through a magnetic nozzle and are detached from the magnetic field lines, producing thrust.
Goerner, Frank L.; Duong, Timothy; Stafford, R. Jason; Clarke, Geoffrey D.
2013-01-01
Purpose: To investigate the utility of five different standard measurement methods for determining image uniformity for partially parallel imaging (PPI) acquisitions in terms of consistency across a variety of pulse sequences and reconstruction strategies. Methods: Images were produced with a phantom using a 12-channel head matrix coil in a 3T MRI system (TIM TRIO, Siemens Medical Solutions, Erlangen, Germany). Images produced using echo-planar, fast spin echo, gradient echo, and balanced steady state free precession pulse sequences were evaluated. Two different PPI reconstruction methods were investigated, generalized autocalibrating partially parallel acquisition algorithm (GRAPPA) and modified sensitivity-encoding (mSENSE) with acceleration factors (R) of 2, 3, and 4. Additionally images were acquired with conventional, two-dimensional Fourier imaging methods (R = 1). Five measurement methods of uniformity, recommended by the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA) were considered. The methods investigated were (1) an ACR method and a (2) NEMA method for calculating the peak deviation nonuniformity, (3) a modification of a NEMA method used to produce a gray scale uniformity map, (4) determining the normalized absolute average deviation uniformity, and (5) a NEMA method that focused on 17 areas of the image to measure uniformity. Changes in uniformity as a function of reconstruction method at the same R-value were also investigated. Two-way analysis of variance (ANOVA) was used to determine whether R-value or reconstruction method had a greater influence on signal intensity uniformity measurements for partially parallel MRI. Results: Two of the methods studied had consistently negative slopes when signal intensity uniformity was plotted against R-value. The results obtained comparing mSENSE against GRAPPA found no consistent difference between GRAPPA and mSENSE with regard to signal intensity uniformity. The results of the two-way ANOVA analysis suggest that R-value and pulse sequence type produce the largest influences on uniformity and PPI reconstruction method had relatively little effect. Conclusions: Two of the methods of measuring signal intensity uniformity, described by the (NEMA) MRI standards, consistently indicated a decrease in uniformity with an increase in R-value. Other methods investigated did not demonstrate consistent results for evaluating signal uniformity in MR images obtained by partially parallel methods. However, because the spatial distribution of noise affects uniformity, it is recommended that additional uniformity quality metrics be investigated for partially parallel MR images. PMID:23927345
Fatigue loading history reconstruction based on the rain-flow technique
NASA Technical Reports Server (NTRS)
Khosrovaneh, A. K.; Dowling, N. E.
1989-01-01
Methods are considered for reducing a non-random fatigue loading history to a concise description and then for reconstructing a time history similar to the original. In particular, three methods of reconstruction based on a rain-flow cycle counting matrix are presented. A rain-flow matrix consists of the numbers of cycles at various peak and valley combinations. Two methods are based on a two-dimensional rain-flow matrix, and the third on a three-dimensional rain-flow matrix. Histories reconstructed by any of these methods produce a rain-flow matrix identical to that of the original history, and consequently the reconstructed time history is expected to produce a fatigue life similar to that of the original. The procedures described allow lengthy loading histories to be stored in compact form.
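A simplified three-point rain-flow counter shows how a loading history is condensed into the from-to cycle matrix that the reconstruction methods start from; this sketch ignores the residual half-cycles and is not the paper's reconstruction algorithm.

```python
# Minimal three-point rain-flow cycle counter producing a from-to cycle matrix.
import numpy as np

def rainflow_matrix(peaks_valleys, nbins=8, lo=-1.0, hi=1.0):
    edges = np.linspace(lo, hi, nbins + 1)
    bin_of = lambda x: min(np.searchsorted(edges, x, side="right") - 1, nbins - 1)
    M = np.zeros((nbins, nbins))          # M[i, j]: cycles from level i to level j
    stack = []
    for p in peaks_valleys:
        stack.append(p)
        while len(stack) >= 3:
            a, b, c = stack[-3], stack[-2], stack[-1]
            if abs(b - a) <= abs(c - b):  # range a-b is enclosed: count a cycle
                M[bin_of(a), bin_of(b)] += 1
                del stack[-3:-1]          # remove a and b, keep c
            else:
                break
    return M

history = [0.0, 0.8, -0.5, 0.6, -0.9, 0.3, -0.2, 0.7, -0.6]  # peak/valley sequence
print(rainflow_matrix(history))
```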
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rowe, M.D.; Pierce, B.L.
This report presents results of tests of different final site selection methods used for siting large-scale facilities such as nuclear power plants. Test data are adapted from a nuclear power plant siting study conducted on Long Island, New York. The purpose of the tests is to determine whether or not different final site selection methods produce different results, and to obtain some understanding of the nature of any differences found. Decision rules and weighting methods are included. Decision rules tested are Weighting Summation, Power Law, Decision Analysis, Goal Programming, and Goal Attainment; weighting methods tested are Categorization, Ranking, Rating, Ratio Estimation, Metfessel Allocation, Indifferent Tradeoff, Decision Analysis lottery, and Global Evaluation. Results show that different methods can, indeed, produce different results, but that the probability that they will do so is controlled by the structure of differences among the sites being evaluated. Differences in weights and suitability scores attributable to methods have reduced significance if the alternatives include one or two sites that are superior to all others in many attributes. The more tradeoffs there are among good and bad levels of different attributes at different sites, the more important are the specifics of methods to the final decision. 5 refs., 14 figs., 19 tabs.
Method for producing a compressed body of mix-powder for ceramic
NASA Technical Reports Server (NTRS)
Okawa, K.
1983-01-01
Under the invented method, a compressed body of mixed powder for ceramic is produced by mixing and stirring several raw powder materials with a mixing liquid such as water and, in the process of feeding the resulting viscous material pressurized at 5 kg/cm² to 7 kg/cm², using 1.5 to 2 times that pressure to filter and dehydrate it, adjusting the water content to 10 to 20%.
Hue-preserving and saturation-improved color histogram equalization algorithm.
Song, Ki Sun; Kang, Hee; Kang, Moon Gi
2016-06-01
In this paper, an algorithm is proposed to improve contrast and saturation without color degradation. The local histogram equalization (HE) method offers better performance than the global HE method, but it sometimes produces undesirable results due to its block-based processing. The proposed contrast-enhancement (CE) algorithm reflects the characteristics of the global HE method within the local HE method to avoid these artifacts while enhancing both global and local contrast. There are two common ways to apply a CE algorithm to color images: processing the luminance channel only, or processing each color channel independently. However, these approaches incur excessive or reduced saturation and color degradation. The proposed algorithm solves these problems by using channel-adaptive equalization and the similarity of ratios between the channels. Experimental results show that the proposed algorithm enhances contrast and saturation while preserving hue, and performs better than existing methods in terms of objective evaluation metrics.
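The hue-preserving idea rests on scaling R, G and B by a common per-pixel ratio so that channel proportions, and hence hue, are unchanged. The sketch below illustrates that principle with a plain global equalization of intensity; it is not the paper's channel-adaptive method.

```python
# Hue-preserving enhancement sketch: equalize intensity, then scale all three
# channels by the same per-pixel ratio.
import numpy as np

def hue_preserving_he(rgb):
    rgb = rgb.astype(float)
    intensity = rgb.mean(axis=2)
    hist, _ = np.histogram(intensity, bins=256, range=(0, 255))
    cdf = hist.cumsum() / hist.sum()
    equalized = 255.0 * cdf[np.clip(intensity, 0, 255).astype(int)]
    ratio = equalized / np.maximum(intensity, 1e-6)  # common per-pixel scale
    out = rgb * ratio[..., None]                     # channel ratios preserved
    return np.clip(out, 0, 255).astype(np.uint8)

img = (np.random.rand(32, 32, 3) * 120).astype(np.uint8)
print(hue_preserving_he(img).shape)
```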
Viscoacoustic anisotropic full waveform inversion
NASA Astrophysics Data System (ADS)
Qu, Yingming; Li, Zhenchun; Huang, Jianping; Li, Jinli
2017-01-01
A viscoacoustic vertical transverse isotropic (VTI) quasi-differential wave equation, which accounts for both the viscosity and the anisotropy of media, is proposed for wavefield simulation in this study. The finite difference method is used to solve the equations; the attenuation terms are solved in the wavenumber domain, and all remaining terms in the time-space domain. To stabilize the adjoint wavefield, robust regularization operators are applied to the wave equation to eliminate the high-frequency component of the numerical noise produced during backward propagation of the viscoacoustic wavefield. Based on these strategies, we derive the corresponding gradient formula and implement a viscoacoustic VTI full waveform inversion (FWI). Numerical tests verify that the proposed viscoacoustic VTI FWI produces accurate and stable inversion results for viscoacoustic VTI data sets. In addition, we test the method's sensitivity to velocity, Q, and anisotropic parameters. Our results show that the sensitivity to velocity is much higher than that to Q and anisotropic parameters. As such, the proposed method can produce acceptable inversion results as long as the Q and anisotropic parameters are within predefined thresholds.
Spelleken, E; Crowe, S B; Sutherland, B; Challens, C; Kairn, T
2018-03-01
Gafchromic EBT3 film is widely used for patient specific quality assurance of complex treatment plans. Film dosimetry techniques commonly involve the use of transmission scanning to produce TIFF files, which are analysed using a non-linear calibration relationship between the dose and red channel net optical density (netOD). Numerous film calibration techniques featured in the literature have not been independently verified or evaluated. A range of previously published film dosimetry techniques were re-evaluated, to identify whether these methods produce better results than the commonly-used non-linear, netOD method. EBT3 film was irradiated at calibration doses between 0 and 4000 cGy and 25 pieces of film were irradiated at 200 cGy to evaluate uniformity. The film was scanned using two different scanners: The Epson Perfection V800 and the Epson Expression 10000XL. Calibration curves, uncertainty in the fit of the curve, overall uncertainty and uniformity were calculated following the methods described by the different calibration techniques. It was found that protocols based on a conventional film dosimetry technique produced results that were accurate and uniform to within 1%, while some of the unconventional techniques produced much higher uncertainties (> 25% for some techniques). Some of the uncommon methods produced reliable results when irradiated to the standard treatment doses (< 400 cGy), however none could be recommended as an efficient or accurate replacement for a common film analysis technique which uses transmission scanning, red colour channel analysis, netOD and a non-linear calibration curve for measuring doses up to 4000 cGy when using EBT3 film.
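The conventional technique being re-evaluated computes a red-channel net optical density and fits a non-linear calibration of the form dose = a·netOD + b·netOD^n; the sketch below fits synthetic film readings, with all pixel values and coefficients illustrative.

```python
# Hedged sketch of the common red-channel netOD film calibration.
import numpy as np
from scipy.optimize import curve_fit

def net_od(pv_unexposed, pv_exposed):
    return np.log10(pv_unexposed / pv_exposed)

def dose_model(netod, a, b, n):
    return a * netod + b * netod**n

doses = np.array([0, 50, 100, 200, 400, 800, 1600, 3200], float)  # cGy
pv0 = 42000.0                                  # unexposed red-channel pixel value
pv = pv0 * 10 ** -(0.0004 * doses ** 0.85)     # synthetic film response
netod = net_od(pv0, pv)
(a, b, n), _ = curve_fit(dose_model, netod, doses,
                         p0=(1000, 5000, 2.5), maxfev=10000)
print(f"dose(netOD=0.3) ~ {dose_model(0.3, a, b, n):.0f} cGy")
```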
Production of radionuclide molybdenum 99 in a distributed and in situ fashion
Gentile, Charles A.; Cohen, Adam B.; Ascione, George
2016-04-19
A method and apparatus for producing Mo-99 from Mo-100, so that the produced Mo-99 can be used in a Tc-99m generator without the use of uranium, is presented. Both the method and apparatus employ high-energy gamma rays for the transformation of Mo-100 to Mo-99. The high-energy gamma rays are produced by exposing a metal target to a moderated neutron output of between 6 MeV and 14 MeV. The resulting Mo-99 spontaneously decays into Tc-99m and can therefore be used in a Tc-99m generator.
Method for producing microcomposite powders using a soap solution
Maginnis, Michael A.; Robinson, David A.
1996-01-01
A method for producing microcomposite powders for use in superconducting and non-superconducting applications. A particular method to produce microcomposite powders for use in superconducting applications includes the steps of: (a) preparing a solution including ammonium soap; (b) dissolving a preselected amount of a soluble metallic such as silver nitrate in the solution including ammonium soap to form a first solution; (c) adding a primary phase material such as a single phase YBC superconducting material in particle form to the first solution; (d) preparing a second solution formed from a mixture of a weak acid and an alkyl-mono-ether; (e) adding the second solution to the first solution to form a resultant mixture; (f) allowing the resultant mixture to set until the resultant mixture begins to cloud and thicken into a gel precipitating around individual particles of the primary phase material; (g) thereafter drying the resultant mixture to form a YBC superconducting material/silver nitrate precursor powder; and (h) calcining the YBC superconducting material/silver nitrate precursor powder to convert the silver nitrate to silver and thereby form a YBC/silver microcomposite powder wherein the silver is substantially uniformly dispersed in the matrix of the YBC material.
Porous alumina scaffold produced by sol-gel combined polymeric sponge method
NASA Astrophysics Data System (ADS)
Hasmaliza, M.; Fazliah, M. N.; Shafinaz, R. J.
2012-09-01
Sol-gel is a novel method used to produce high-purity alumina at the nanometric scale. In this study, a three-dimensional porous alumina scaffold was produced using a sol-gel polymeric sponge method. Briefly, alumina sol-gel was prepared by evaporation, and polymeric sponges cut to designated sizes were immersed in the sol-gel, followed by sintering at 1250 and 1550°C. In order to study cell interaction, the porous alumina scaffold was sterilized using an autoclave prior to seeding Human Mesenchymal Stem Cells (HMSCs) on the scaffold, and cell proliferation was assessed by the alamarBlue® assay. SEM results showed that during the 21-day period, HMSCs were able to attach to the scaffold surface and the interconnecting pores while maintaining proliferation. These findings suggest the potential use of the porous alumina produced as a scaffold for implantation procedures.
Controllable reductive method for synthesizing metal-containing particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moon, Ji-Won; Jung, Hyunsung; Phelps, Tommy Joe
The invention is directed to a method for producing metal-containing particles, the method comprising subjecting an aqueous solution comprising a metal salt, E.sub.h-lowering reducing agent, pH adjusting agent, and water to conditions that maintain the E.sub.h value of the solution within the bounds of an E.sub.h-pH stability field corresponding to the composition of the metal-containing particles to be produced, and producing said metal-containing particles in said aqueous solution at a selected E.sub.h value within the bounds of said E.sub.h-pH stability field. The invention is also directed to the resulting metal-containing particles as well as devices in which they are incorporated.
Optical limiting device and method of preparation thereof
Wang, Hsing-Lin; Xu, Su; McBranch, Duncan W.
2003-01-01
Optical limiting device and method of preparation thereof. The optical limiting device includes a transparent substrate and at least one homogeneous layer of an RSA material in polyvinylbutyral attached to the substrate. The device may be produced by preparing a solution of an RSA material, preferably a metallophthalocyanine complex, and a solution of polyvinylbutyral, and then mixing the two solutions together to remove air bubbles. The resulting solution is layered onto the substrate and the solvent is evaporated. The method can be used to produce a dual tandem optical limiting device.
Estimating costs and performance of systems for machine processing of remotely sensed data
NASA Technical Reports Server (NTRS)
Ballard, R. J.; Eastwood, L. F., Jr.
1977-01-01
This paper outlines a method for estimating computer processing times and costs incurred in producing information products from digital remotely sensed data. The method accounts for both computation and overhead, and may be applied to any serial computer. The method is applied to estimate the cost and computer time involved in producing Level II Land Use and Vegetative Cover Maps for a five-state midwestern region. The results show that the amount of data to be processed overloads some example computer systems, but that the processing is feasible on others.
Gross domestic product estimation based on electricity utilization by artificial neural network
NASA Astrophysics Data System (ADS)
Stevanović, Mirjana; Vujičić, Slađana; Gajić, Aleksandar M.
2018-01-01
The main goal of this paper was to estimate gross domestic product (GDP) based on electricity utilization by means of an artificial neural network (ANN). Electricity utilization was analyzed for different sources, such as renewable, coal, and nuclear sources. The ANN was trained with two training algorithms, namely the extreme learning method and the back-propagation algorithm, in order to produce the best prediction of GDP. According to the results, it can be concluded that the ANN model with the extreme learning method can produce an acceptable prediction of GDP based on electricity utilization.
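As a hedged illustration of the regression task (not the authors' network or data), scikit-learn's MLPRegressor can stand in for a back-propagation-trained ANN mapping electricity-use features to GDP.

```python
# Sketch: regress GDP on electricity-use features with a small neural network.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# features: electricity use by source [renewable, coal, nuclear], arbitrary units
X = rng.uniform(10, 100, size=(200, 3))
# synthetic "GDP" with a linear signal plus noise, as placeholder data
y = 0.8 * X[:, 0] + 1.5 * X[:, 1] + 1.1 * X[:, 2] + rng.normal(0, 5, 200)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X[:150], y[:150])
print("held-out R^2:", model.score(X[150:], y[150:]))
```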
A Comparison of Component and Factor Patterns: A Monte Carlo Approach.
ERIC Educational Resources Information Center
Velicer, Wayne F.; And Others
1982-01-01
Factor analysis, image analysis, and principal component analysis are compared with respect to the factor patterns they would produce under various conditions. The general conclusion reached is that the three methods produce equivalent results. (Author/JKS)
Testing prediction methods: Earthquake clustering versus the Poisson model
Michael, A.J.
1997-01-01
Testing earthquake prediction methods requires statistical techniques that compare observed success to random chance. One technique is to produce simulated earthquake catalogs and measure the relative success of predicting real and simulated earthquakes. The accuracy of these tests depends on the validity of the statistical model used to simulate the earthquakes. This study tests the effect of clustering in the statistical earthquake model on the results. Three simulation models were used to produce significance levels for a VLF earthquake prediction method. As the degree of simulated clustering increases, the statistical significance drops. Hence, the use of a seismicity model with insufficient clustering can lead to overly optimistic results. A successful method must pass the statistical tests with a model that fully replicates the observed clustering. However, a method can be rejected based on tests with a model that contains insufficient clustering. U.S. copyright. Published in 1997 by the American Geophysical Union.
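The abstract's test logic can be made concrete with a small simulation: count the prediction method's "successes" on the real catalog, then on many random catalogs, and report the fraction of random catalogs that do at least as well. The sketch below uses a plain Poisson (unclustered) generator; the alarm windows, rates, and event times are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def successes(event_times, alarms):
    """Count events falling inside any (start, stop) alarm window."""
    return sum(any(s <= t < e for s, e in alarms) for t in event_times)

def significance(real_times, alarms, rate, t_max, n_sim=10_000):
    real = successes(real_times, alarms)
    sims = np.empty(n_sim)
    for i in range(n_sim):
        n = rng.poisson(rate * t_max)                 # Poisson (unclustered) catalog
        sims[i] = successes(rng.uniform(0, t_max, n), alarms)
    return (sims >= real).mean()   # chance of doing this well at random

alarms = [(10, 12), (40, 45), (70, 71)]               # invented alarm windows
print(significance([11.0, 44.0, 90.0], alarms, rate=0.1, t_max=100))
```

Replacing the uniform Poisson generator with one that adds aftershock-like clusters raises the simulated success counts, which is exactly the effect the study reports: insufficient clustering in the simulation inflates apparent significance.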
Methods for producing complex films, and films produced thereby
Duty, Chad E.; Bennett, Charlee J. C.; Moon, Ji -Won; Phelps, Tommy J.; Blue, Craig A.; Dai, Quanqin; Hu, Michael Z.; Ivanov, Ilia N.; Jellison, Jr., Gerald E.; Love, Lonnie J.; Ott, Ronald D.; Parish, Chad M.; Walker, Steven
2015-11-24
A method for producing a film, the method comprising melting a layer of precursor particles on a substrate until at least a portion of the melted particles are planarized and merged to produce the film. The invention is also directed to a method for producing a photovoltaic film, the method comprising depositing particles having a photovoltaic or other property onto a substrate, and affixing the particles to the substrate, wherein the particles may or may not be subsequently melted. Also described herein are films produced by these methods, methods for producing a patterned film on a substrate, and methods for producing a multilayer structure.
Brown, Bryan N; Freund, John M; Han, Li; Rubin, J Peter; Reing, Janet E; Jeffries, Eric M; Wolf, Mathew T; Tottey, Stephen; Barnes, Christopher A; Ratner, Buddy D; Badylak, Stephen F
2011-04-01
Extracellular matrix (ECM)-based scaffold materials have been used successfully in both preclinical and clinical tissue engineering and regenerative medicine approaches to tissue reconstruction. Results of numerous studies have shown that ECM scaffolds are capable of supporting the growth and differentiation of multiple cell types in vitro and of acting as inductive templates for constructive tissue remodeling after implantation in vivo. Adipose tissue represents a potentially abundant source of ECM and may represent an ideal substrate for the growth and adipogenic differentiation of stem cells harvested from this tissue. Numerous studies have shown that the methods by which ECM scaffold materials are prepared have a dramatic effect upon both the biochemical and structural properties of the resultant ECM scaffold material as well as the ability of the material to support a positive tissue remodeling outcome after implantation. The objective of the present study was to characterize the adipose ECM material resulting from three methods of decellularization to determine the most effective method for the derivation of an adipose tissue ECM scaffold that was largely free of potentially immunogenic cellular content while retaining tissue-specific structural and functional components as well as the ability to support the growth and adipogenic differentiation of adipose-derived stem cells. The results show that each of the decellularization methods produced an adipose ECM scaffold that was distinct from both a structural and biochemical perspective, emphasizing the importance of the decellularization protocol used to produce adipose ECM scaffolds. Further, the results suggest that the adipose ECM scaffolds produced using the methods described herein are capable of supporting the maintenance and adipogenic differentiation of adipose-derived stem cells and may represent effective substrates for use in tissue engineering and regenerative medicine approaches to soft tissue reconstruction.
Determining Reduced Order Models for Optimal Stochastic Reduced Order Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonney, Matthew S.; Brake, Matthew R.W.
2015-08-01
The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against one another and against a truth model for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses hyper-dual numbers to determine the sensitivities, and a meta-model method that uses the hyper-dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction being the least accurate. The models are also compared based on the time required to evaluate each model, where the meta-model requires the least computation time by a significant amount. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use depends on the availability of the high-fidelity model and how many evaluations can be performed. The output distribution is examined using a large Monte Carlo simulation along with reduced simulations using Latin hypercube sampling and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to exhaustive sampling for the majority of methods.
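As a minimal illustration of the sampling comparison in the report, the sketch below estimates output moments with a large Monte Carlo run and a much smaller Latin hypercube design; the response function is an arbitrary stand-in, not the Brake-Reuss beam model, and the sample sizes are assumptions.

```python
import numpy as np
from scipy.stats import qmc  # SciPy >= 1.7

# An arbitrary stand-in response; the report's truth model is the
# Brake-Reuss beam, which is not reproduced here.
def response(x):
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(2)
mc = response(rng.random((100_000, 2)))                      # exhaustive Monte Carlo
lhs = response(qmc.LatinHypercube(d=2, seed=2).random(500))  # reduced design

# compare the first moments produced by the two sampling schemes
for name, s in (("MC ", mc), ("LHS", lhs)):
    print(name, round(s.mean(), 4), round(s.std(), 4),
          round(float(((s - s.mean()) ** 3).mean()), 4))
```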
NASA Astrophysics Data System (ADS)
Sun, Pei; Fang, Z. Zak; Zhang, Ying; Xia, Yang
2017-12-01
Commercial spherical Ti powders for additive manufacturing applications are produced today by melt-atomization methods at relatively high cost. A meltless production method, called granulation-sintering-deoxygenation (GSD), was recently developed to produce spherical Ti alloy powder at significantly reduced cost. In this new process, fine hydrogenated Ti particles are agglomerated to form spherical granules, which are then sintered to dense spherical particles. After sintering, the fully dense spherical Ti alloy particles are deoxygenated using novel low-temperature deoxygenation processes with either Mg or Ca. This technical communication presents results of 3D printing using GSD powder and the selective laser melting (SLM) technique. The results showed that the tensile properties of parts fabricated from spherical GSD Ti-6Al-4V powder by SLM are comparable with those of typical mill-annealed Ti-6Al-4V. The characteristics of 3D-printed Ti-6Al-4V from GSD powder are also compared with those of commercial materials.
A demonstration of the antimicrobial effectiveness of various copper surfaces
2013-01-01
Background Bacterial contamination on touch surfaces results in increased risk of infection. In the last few decades, work has been done on the antimicrobial properties of copper and its alloys against a range of micro-organisms threatening public health in food processing, healthcare and air conditioning applications; however, an optimum method of copper surface deposition and mass structure has not been identified. Results A proof-of-concept study of the disinfection effectiveness of three copper surfaces was performed. The surfaces were produced by the deposition of copper using three methods of thermal spray, namely, plasma spray, wire arc spray and cold spray. The surfaces were then inoculated with meticillin-resistant Staphylococcus aureus (MRSA). After a two-hour exposure to the surfaces, the surviving MRSA were assayed and the results compared. The differences in the copper depositions produced by the three thermal spray methods were examined in order to explain the mechanism that causes the observed differences in MRSA killing efficiencies. The cold spray deposition method was significantly more effective than the other methods. It was determined that work hardening caused by the high-velocity particle impacts created by the cold spray technique results in a copper microstructure that enhances ionic diffusion, and copper ions are principally responsible for antimicrobial activity. Conclusions This test showed significant microbiologic differences between coatings produced by different spray techniques and demonstrates the importance of the copper application technique. The cold spray technique shows superior antimicrobial effectiveness caused by the high impact velocity imparted to the sprayed particles, which results in high dislocation density and high ionic diffusivity. PMID:23537176
Carbon nanotube: the inside story.
Ando, Yoshinori
2010-06-01
Carbon nanotubes (CNTs) were serendipitously discovered as a byproduct of fullerenes by direct current (DC) arc discharge, and today this is the most-wanted material in nanotechnology research. In this brief review, I begin with the history of the discovery of CNTs and focus on CNTs produced by arc discharge in a hydrogen atmosphere, which is little explored outside my laboratory. DC arc discharge evaporation of a pure graphite rod in pure hydrogen gas results in multi-walled carbon nanotubes (MWCNTs) of high crystallinity in the cathode deposit. As-grown MWCNTs have very narrow inner diameters. Raman spectra of these MWCNTs show a high-intensity G-band, an unusual high-frequency radial breathing mode at 570 cm(-1), and a new characteristic peak near 1850 cm(-1). Exciting carbon nanowires (CNWs), consisting of a linear carbon chain in the center of MWCNTs, are also produced. Arc evaporation of a graphite rod containing metal catalysts results in single-wall carbon nanotubes (SWCNTs) throughout the whole chamber, like macroscopic webs. Two kinds of arc method have been developed to produce SWCNTs: the arc plasma jet (APJ) and Ferrum-Hydrogen (FH) arc methods. Some new purification methods for as-produced SWCNTs are reviewed. Finally, double-walled carbon nanotubes (DWCNTs) are also described.
Solving a real-world problem using an evolving heuristically driven schedule builder.
Hart, E; Ross, P; Nelson, J
1998-01-01
This work addresses the real-life scheduling problem of a Scottish company that must produce daily schedules for the catching and transportation of large numbers of live chickens. The problem is complex and highly constrained. We show that it can be successfully solved by division into two subproblems and solving each using a separate genetic algorithm (GA). We address the problem of whether this produces locally optimal solutions and how to overcome this. We extend the traditional approach of evolving a "permutation + schedule builder" by concentrating on evolving the schedule builder itself. This results in a unique schedule builder being built for each daily scheduling problem, each individually tailored to deal with the particular features of that problem. This results in a robust, fast, and flexible system that can cope with most of the circumstances imaginable at the factory. We also compare the performance of a GA approach to several other evolutionary methods and show that population-based methods are superior to both hill-climbing and simulated annealing in the quality of solutions produced. Population-based methods also have the distinct advantage of producing multiple, equally fit solutions, which is of particular importance when considering the practical aspects of the problem.
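The sketch below illustrates the traditional "permutation + schedule builder" representation that the paper takes as its starting point (not the evolved builder that is its contribution): a GA evolves job orders, and a fixed greedy builder decodes each order into a schedule. Job durations, crew count, and GA settings are invented for illustration.

```python
import random

# Catching-task durations and crew count are invented for illustration.
random.seed(0)
JOBS = [random.randint(1, 9) for _ in range(20)]
CREWS = 3

def makespan(order):
    """Greedy schedule builder: assign each job to the earliest-free crew."""
    crews = [0] * CREWS
    for j in order:
        crews[crews.index(min(crews))] += JOBS[j]
    return max(crews)

def crossover(a, b):
    """Order crossover: head from parent a, remainder in parent b's order."""
    cut = random.randrange(1, len(a))
    head = a[:cut]
    return head + [j for j in b if j not in head]

def mutate(p):
    i, j = random.sample(range(len(p)), 2)   # swap two positions
    p[i], p[j] = p[j], p[i]
    return p

pop = [random.sample(range(len(JOBS)), len(JOBS)) for _ in range(50)]
for _ in range(200):
    pop.sort(key=makespan)                   # fitness = schedule length
    parents = pop[:25]
    pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                     for _ in range(25)]
print("best makespan:", makespan(min(pop, key=makespan)))
```

The paper's extension is to evolve the builder itself rather than only the permutation, so each day's problem gets a decoding heuristic tailored to its own constraints.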
Quantitative prediction of drug side effects based on drug-related features.
Niu, Yanqing; Zhang, Wen
2017-09-01
Unexpected side effects of drugs are of great concern in drug development, and the identification of side effects is an important task. Recently, machine learning methods have been proposed to predict the presence or absence of side effects of interest for drugs, but it is difficult to make accurate predictions for all of them. In this paper, we transform the side effect profiles of drugs into quantitative scores by summing up their side effects with weights. The quantitative scores may measure the dangers of drugs and thus help to compare the risk of different drugs. Here, we attempt to predict the quantitative scores of drugs, namely the quantitative prediction. Specifically, we explore a variety of drug-related features and evaluate their discriminative power for the quantitative prediction. Then, we consider several feature combination strategies (direct combination, average-scoring ensemble combination) to integrate three informative features: chemical substructures, targets, and treatment indications. Finally, the average-scoring ensemble model, which produces the better performance, is used as the final quantitative prediction model. Since weights for side effects are empirical values, we randomly generate different weights in the simulation experiments. The experimental results show that the quantitative method is robust to different weights and produces satisfying results. Although other state-of-the-art methods cannot make the quantitative prediction directly, their prediction results can be transformed into quantitative scores. By indirect comparison, the proposed method produces much better results than benchmark methods in the quantitative prediction. In conclusion, the proposed method is promising for the quantitative prediction of side effects and may work cooperatively with existing state-of-the-art methods to reveal the dangers of drugs.
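A minimal sketch of the scoring step described above: a drug's side-effect profile is collapsed to one number by a weighted sum, and the ensemble averages the outputs of the three feature-specific predictors. The weights and predictor outputs are invented; the paper's actual models are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented data: presence/absence profile over 100 side effects and
# empirical per-effect weights (the paper randomizes these in simulation).
profile = rng.integers(0, 2, size=100)
weights = rng.random(100)

score = float(profile @ weights)     # quantitative (danger) score of one drug

# Average-scoring ensemble over three feature-specific predictions
# (chemical substructures, targets, treatment indications); values invented.
pred_substructure, pred_target, pred_indication = 12.1, 10.7, 11.5
ensemble = np.mean([pred_substructure, pred_target, pred_indication])
print(round(score, 2), round(float(ensemble), 2))
```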
Method for synthesizing boracites
Wolf, Gary A [Kennewick, WA
1982-01-01
A method for producing boracites is disclosed in which a solution of divalent metal acetate, boric acid, and halogen acid is evaporated to dryness and the resulting solid is heated in an inert atmosphere under pressure.
A note on the computation of antenna-blocking shadows
NASA Technical Reports Server (NTRS)
Levy, R.
1993-01-01
A simple and readily applied method is provided to compute the shadow on the main reflector of a Cassegrain antenna, when cast by the subreflector and the subreflector supports. The method entails some convenient minor approximations that will produce results similar to results obtained with a lengthier, mainframe computer program.
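The note's exact approximations are not spelled out in the abstract; the sketch below shows a common plane-wave simplification of the same computation, treating the subreflector shadow as a disc and each support strut as a radial rectangular strip on the main reflector. All dimensions are illustrative assumptions.

```python
import math

# Plane-wave approximation; all dimensions are illustrative assumptions.
def blocked_fraction(D_main, d_sub, n_struts, strut_width):
    """Fraction of the main-reflector aperture shadowed by the subreflector
    and its supports (struts treated as radial rectangular strips)."""
    A_main = math.pi * (D_main / 2) ** 2
    A_sub = math.pi * (d_sub / 2) ** 2
    # each strut shadows a strip from the subreflector rim to the main rim
    A_struts = n_struts * strut_width * (D_main - d_sub) / 2
    return (A_sub + A_struts) / A_main

print(f"{blocked_fraction(D_main=34.0, d_sub=3.5, n_struts=4, strut_width=0.3):.3%}")
```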
Comparison of Dam Breach Parameter Estimators
2008-01-01
... of the methods, when used in the HEC-RAS simulation model, produced comparable results. The methods tested suggest use of ... characteristics of a dam breach, use of those parameters within the unsteady flow routing model HEC-RAS, and the computation and display of the resulting ... implementation of these breach parameters in ...
Myocardial strains from 3D displacement encoded magnetic resonance imaging
2012-01-01
Background The ability to measure and quantify myocardial motion and deformation provides a useful tool to assist in the diagnosis, prognosis and management of heart disease. The recent development of magnetic resonance imaging methods, such as harmonic phase analysis of tagging and displacement encoding with stimulated echoes (DENSE), make detailed non-invasive 3D kinematic analyses of human myocardium possible in the clinic and for research purposes. A robust analysis method is required, however. Methods We propose to estimate strain using a polynomial function which produces local models of the displacement field obtained with DENSE. Given a specific polynomial order, the model is obtained as the least squares fit of the acquired displacement field. These local models are subsequently used to produce estimates of the full strain tensor. Results The proposed method is evaluated on a numerical phantom as well as in vivo on a healthy human heart. The evaluation showed that the proposed method produced accurate results and showed low sensitivity to noise in the numerical phantom. The method was also demonstrated in vivo by assessment of the full strain tensor and to resolve transmural strain variations. Conclusions Strain estimation within a 3D myocardial volume based on polynomial functions yields accurate and robust results when validated on an analytical model. The polynomial field is capable of resolving the measured material positions from the in vivo data, and the obtained in vivo strains values agree with previously reported myocardial strains in normal human hearts. PMID:22533791
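A minimal sketch of the strain estimation described in the Methods above: fit a (here first-order) polynomial to the measured displacements in a local neighbourhood by least squares, read the displacement gradient off the coefficients, and form the Green-Lagrange strain tensor E = 1/2(FᵀF − I). Array shapes and the synthetic test field are assumptions, not the paper's data.

```python
import numpy as np

def local_strain(points, disps):
    """points: (N, 3) material positions; disps: (N, 3) displacements."""
    A = np.hstack([np.ones((len(points), 1)), points])   # basis [1, x, y, z]
    coef, *_ = np.linalg.lstsq(A, disps, rcond=None)     # (4, 3) coefficients
    grad_u = coef[1:].T                                  # du_i / dx_j
    F = np.eye(3) + grad_u                               # deformation gradient
    return 0.5 * (F.T @ F - np.eye(3))                   # Green-Lagrange strain

# Synthetic check: a uniform stretch field with a known answer.
rng = np.random.default_rng(4)
pts = rng.random((50, 3))
u = pts @ np.diag([0.05, -0.02, 0.01])
print(np.round(local_strain(pts, u), 4))
```

Because the fit is over a neighbourhood rather than a single voxel, noise in the DENSE displacements is averaged out, which is consistent with the low noise sensitivity the paper reports.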
Effect of chlorination by-products on the quantitation of microcystins in finished drinking water.
Rosenblum, Laura; Zaffiro, Alan; Adams, William A; Wendelken, Steven C
2017-11-01
Microcystins are toxic peptides that can be produced by cyanobacteria in harmful algal blooms (HABs). Various analytical techniques have been developed to quantify microcystins in drinking water, including liquid chromatography tandem mass spectrometry (LC/MS/MS), enzyme linked immunosorbent assay (ELISA), and oxidative cleavage to produce 2-methyl-3-methoxy-4-phenylbutyric acid (MMPB) with detection by LC/MS/MS, the "MMPB method". Both the ELISA and MMPB methods quantify microcystins by detecting a portion of the molecule common to most microcystins. However, there is little research evaluating the effect of microcystin chlorination by-products potentially produced during drinking water treatment on analytical results. To evaluate this potential, chlorinated drinking water samples were fortified with various microcystin congeners in bench-scale studies. The samples were allowed to react, followed by a comparison of microcystin concentrations measured using the three methods. The congener-specific LC/MS/MS method selectively quantified microcystins and was not affected by the presence of chlorination by-products. The ELISA results were similar to those obtained by LC/MS/MS for most microcystin congeners, but results deviated for a particular microcystin containing a variable amino acid susceptible to oxidation. The concentrations measured by the MMPB method were at least five-fold higher than the concentrations of microcystin measured by the other methods and demonstrate that detection of MMPB does not necessarily correlate to intact microcystin toxins in finished drinking water. Published by Elsevier Ltd.
Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H
2017-03-01
To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests), we constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples (thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities) to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty, and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.
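For concreteness, the sketch below implements the simplest of the compared approaches: univariate fixed-effect inverse-variance pooling of logit sensitivity under the normal approximation, with 0.5 continuity corrections. The 2 × 2 counts are invented. This is the approximation the abstract warns pulls estimates toward 50%; the recommended binomial-likelihood models would instead be fit, for example, as a generalized linear mixed model.

```python
import numpy as np

# Invented 2x2 counts: true positives and false negatives for four studies.
tp = np.array([20, 45, 9, 60])
fn = np.array([5, 10, 3, 12])

tp_c, fn_c = tp + 0.5, fn + 0.5               # continuity correction
logit = np.log(tp_c / fn_c)                   # logit sensitivity per study
var = 1 / tp_c + 1 / fn_c                     # normal-approximation variance
w = 1 / var                                   # inverse-variance weights
pooled = (w * logit).sum() / w.sum()
se = np.sqrt(1 / w.sum())
sens = 1 / (1 + np.exp(-pooled))
print(f"pooled sensitivity {sens:.3f}  (logit {pooled:.2f} +/- {1.96 * se:.2f})")
```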
Biofilm formation by strains of Leuconostoc citreum and L. mesenteroides
USDA-ARS?s Scientific Manuscript database
Aims: To compare for the first time biofilm formation among strains of Leuconostoc citreum and L. mesenteroides that produce varying types of extracellular glucans. Methods and Results: Twelve strains of Leuconostoc sp. that produce extracellular glucans were compared for their capacity to produ...
Surface-from-gradients without discrete integrability enforcement: A Gaussian kernel approach.
Ng, Heung-Sun; Wu, Tai-Pang; Tang, Chi-Keung
2010-11-01
Representative surface reconstruction algorithms taking a gradient field as input enforce the integrability constraint in a discrete manner. While enforcing integrability allows the subsequent integration to produce surface heights, existing algorithms have one or more of the following disadvantages: They can only handle dense per-pixel gradient fields, smooth out sharp features in a partially integrable field, or produce severe surface distortion in the results. In this paper, we present a method which does not enforce discrete integrability and reconstructs a 3D continuous surface from a gradient or a height field, or a combination of both, which can be dense or sparse. The key to our approach is the use of kernel basis functions, which transfer the continuous surface reconstruction problem into high-dimensional space, where a closed-form solution exists. By using the Gaussian kernel, we can derive a straightforward implementation which is able to produce results better than traditional techniques. In general, an important advantage of our kernel-based method is that the method does not suffer discretization and finite approximation, both of which lead to surface distortion, which is typical of Fourier or wavelet bases widely adopted by previous representative approaches. We perform comparisons with classical and recent methods on benchmark as well as challenging data sets to demonstrate that our method produces accurate surface reconstruction that preserves salient and sharp features. The source code and executable of the system are available for downloading.
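A minimal sketch of the kernel idea, under our reading of the abstract: represent the surface as a sum of Gaussian basis functions and solve one ridge least-squares system whose rows are the analytic kernel derivatives at (possibly sparse) gradient samples, with no discrete integrability enforcement. Grid size, sigma, and the test field are assumptions, not the paper's settings.

```python
import numpy as np

def fit_surface(xy, p, q, centers, sigma=0.15, lam=1e-8):
    """Solve for Gaussian-basis coefficients from sparse gradient samples."""
    def kernel(pts):
        d = pts[:, None, :] - centers[None, :, :]          # (N, M, 2)
        return d, np.exp(-(d ** 2).sum(-1) / (2 * sigma ** 2))
    d, K = kernel(xy)
    Gx = -d[:, :, 0] / sigma ** 2 * K                      # analytic dK/dx
    Gy = -d[:, :, 1] / sigma ** 2 * K                      # analytic dK/dy
    A = np.vstack([Gx, Gy])                                # gradient constraints
    b = np.concatenate([p, q])
    coef = np.linalg.solve(A.T @ A + lam * np.eye(len(centers)), A.T @ b)
    return lambda pts: kernel(pts)[1] @ coef               # heights, up to a constant

grid = np.linspace(0, 1, 15)
centers = np.array(np.meshgrid(grid, grid)).reshape(2, -1).T
xy = np.random.default_rng(5).random((400, 2))             # sparse sample sites
p, q = np.cos(3 * xy[:, 0]), np.zeros(len(xy))             # gradients of sin(3x)/3
surface = fit_surface(xy, p, q, centers)
print(surface(np.array([[0.25, 0.5], [0.75, 0.5]])))
```

Height constraints could be appended as additional rows using the kernel values themselves, which is how a combined gradient-and-height field would be handled in this formulation.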
Assessing the feasibility of using produced water for irrigation in Colorado.
Dolan, Flannery C; Cath, Tzahi Y; Hogue, Terri S
2018-06-01
The Colorado Water Plan estimates as much as 0.8 million irrigated acres may dry up statewide from agricultural to municipal and industrial transfers. To help mitigate this loss, new sources of water are being explored in Colorado. One such source may be produced water. Oil and gas production in 2016 alone produced over 300 million barrels of produced water. Currently, the most common method of disposal of produced water is deep well injection, which is costly and has been shown to cause induced seismicity. Treating this water to agricultural standards eliminates the need to dispose of this water and provides a new source of water. This research explores which counties in Colorado may be best suited to reusing produced water for agriculture based on a combined index of need, quality of produced water, and quantity of produced water. The volumetric impact of using produced water for agricultural needs is determined for the top six counties. Irrigation demand is obtained using evapotranspiration estimates from a range of methods, including remote sensing products and ground-based observations. The economic feasibility of treating produced water to irrigation standards is also determined using an integrated decision selection tool (iDST). We find that produced water can make a substantial volumetric impact on irrigation demand in some counties. Results from the iDST indicate that while costs of treating produced water are higher than the cost of injection into private disposal wells, the costs are much less than disposal into commercial wells. The results of this research may aid in the transition between viewing produced water as a waste product and using it as a tool to help secure water for the arid west. Copyright © 2018 Elsevier B.V. All rights reserved.
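The abstract does not give the functional form of the combined index; the sketch below shows one plausible reading, a weighted sum of normalized need, quality, and quantity scores used to rank counties. The weights, county labels, and example values are invented, not the study's.

```python
# Weights and example scores are invented; the study's actual index
# construction and county values are not reproduced here.
def county_index(need, quality, quantity, w=(0.4, 0.3, 0.3)):
    """All inputs normalized to [0, 1]; higher means a better reuse candidate."""
    return w[0] * need + w[1] * quality + w[2] * quantity

counties = {"County A": (0.9, 0.6, 0.95), "County B": (0.7, 0.8, 0.30)}
for name in sorted(counties, key=lambda c: county_index(*counties[c]), reverse=True):
    print(name, round(county_index(*counties[name]), 3))
```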
Durbin, Gregory W; Salter, Robert
2006-01-01
The Ecolite High Volume Juice (HVJ) presence-absence method for a 10-ml juice sample was compared with the U.S. Food and Drug Administration Bacteriological Analytical Manual most-probable-number (MPN) method for the analysis of artificially contaminated orange juices. Samples were added to Ecolite-HVJ medium and incubated at 35 degrees C for 24 to 48 h. Fluorescent blue results were positive for glucuronidase- and galactosidase-producing microorganisms, specifically indicative of about 94% of Escherichia coli strains. Four strains of E. coli were added to juices at concentrations of 0.21 to 6.8 CFU/ml. Mixtures of enteric bacteria (Enterobacter plus Klebsiella, Citrobacter plus Proteus, or Hafnia plus Citrobacter plus Enterobacter) were added to simulate background flora. Three orange juice types were evaluated (n = 10) with and without the addition of the E. coli strains. Ecolite-HVJ produced 90 of 90 (10 of 10 samples of three juice types, each inoculated with three different E. coli strains) positive (blue-fluorescent) results for artificially contaminated samples with E. coli MPN concentrations of <0.3 to 9.3 CFU/ml. Ten of 30 E. coli ATCC 11229 samples with MPN concentrations of <0.3 CFU/ml were identified as positive with Ecolite-HVJ. Isolated colonies recovered from positive Ecolite-HVJ samples were confirmed biochemically as E. coli. Thirty (10 samples each of three juice types) negative (not fluorescent) results were obtained for samples contaminated with only enteric bacteria and for uninoculated control samples. A juice manufacturer evaluated citrus juice production with both the Ecolite-HVJ and Colicomplete methods and recorded identical negative results for 95 20-ml samples and identical positive results for 5 20-ml samples artificially contaminated with E. coli. The Ecolite-HVJ method requires no preenrichment and subsequent transfer steps, which makes it a simple and easy method for juice producers to use.
A method to estimate the effect of deformable image registration uncertainties on daily dose mapping
Murphy, Martin J.; Salguero, Francisco J.; Siebers, Jeffrey V.; Staub, David; Vaman, Constantin
2012-01-01
Purpose: To develop a statistical sampling procedure for spatially-correlated uncertainties in deformable image registration and then use it to demonstrate their effect on daily dose mapping. Methods: Sequential daily CT studies are acquired to map anatomical variations prior to fractionated external beam radiotherapy. The CTs are deformably registered to the planning CT to obtain displacement vector fields (DVFs). The DVFs are used to accumulate the dose delivered each day onto the planning CT. Each DVF has spatially-correlated uncertainties associated with it. Principal components analysis (PCA) is applied to measured DVF error maps to produce decorrelated principal component modes of the errors. The modes are sampled independently and reconstructed to produce synthetic registration error maps. The synthetic error maps are convolved with dose mapped via deformable registration to model the resulting uncertainty in the dose mapping. The results are compared to the dose mapping uncertainty that would result from uncorrelated DVF errors that vary randomly from voxel to voxel. Results: The error sampling method is shown to produce synthetic DVF error maps that are statistically indistinguishable from the observed error maps. Spatially-correlated DVF uncertainties modeled by our procedure produce patterns of dose mapping error that are different from that due to randomly distributed uncertainties. Conclusions: Deformable image registration uncertainties have complex spatial distributions. The authors have developed and tested a method to decorrelate the spatial uncertainties and make statistical samples of highly correlated error maps. The sample error maps can be used to investigate the effect of DVF uncertainties on daily dose mapping via deformable image registration. An initial demonstration of this methodology shows that dose mapping uncertainties can be sensitive to spatial patterns in the DVF uncertainties. PMID:22320766
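A minimal sketch of the sampling procedure described in the Methods above: PCA-decompose observed DVF error maps (here via SVD), sample each decorrelated mode coefficient independently, and reconstruct synthetic, spatially correlated error maps. The map dimensions and data are invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(6)

# Invented stand-in for observed DVF error maps: 30 maps x 5000 voxels.
errors = rng.standard_normal((30, 5000))

mean = errors.mean(axis=0)
U, s, Vt = np.linalg.svd(errors - mean, full_matrices=False)
mode_std = s / np.sqrt(len(errors) - 1)      # per-mode coefficient std dev

def synthetic_error_map():
    """Sample each decorrelated mode independently, then reconstruct."""
    z = rng.standard_normal(len(mode_std))
    return mean + (z * mode_std) @ Vt         # spatially correlated synthetic map

print(synthetic_error_map().shape)            # (5000,)
```

Each synthetic map can then be added to (or convolved with) the mapped dose to propagate registration uncertainty into the accumulated dose, as the abstract describes.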
Tuominen, Mark; Schotter, Joerg; Thurn-Albrecht, Thomas; Russell, Thomas P.
2007-03-13
Pathways to rapid and reliable fabrication of nanocylinder arrays are provided. Simple methods are described for the production of well-ordered arrays of nanopores, nanowires, and other materials. This is accomplished by orienting copolymer films and removing a component from the film to produce nanopores, that in turn, can be filled with materials to produce the arrays. The resulting arrays can be used to produce nanoscale media, devices, and systems.
Brooks, M.H.; Schroder, L.J.; Malo, B.A.
1985-01-01
Four laboratories were evaluated in their analysis of identical natural and simulated precipitation water samples. Interlaboratory comparability was evaluated using analysis of variance coupled with Duncan's multiple range test, and linear-regression models describing the relations between individual laboratory analytical results for natural precipitation samples. Results of the statistical analyses indicate that certain pairs of laboratories produce different results when analyzing identical samples. Analyte bias for each laboratory was examined using analysis of variance coupled with Duncan's multiple range test on data produced by the laboratories from the analysis of identical simulated precipitation samples. Bias for a given analyte produced by a single laboratory has been indicated when the laboratory mean for that analyte is shown to be significantly different from the mean for the most-probable analyte concentrations in the simulated precipitation samples. Ion-chromatographic methods for the determination of chloride, nitrate, and sulfate have been compared with the colorimetric methods that were also in use during the study period. Comparisons were made using analysis of variance coupled with Duncan's multiple range test for means produced by the two methods. Analyte precision for each laboratory has been estimated by calculating a pooled variance for each analyte. Estimated analyte precisions have been compared using F-tests, and differences in analyte precisions for laboratory pairs have been reported. (USGS)
Gebreyohannes, Gebreselema; Moges, Feleke; Sahile, Samuel; Raja, Nagappan
2013-01-01
Objective To isolate, evaluate and characterize potential antibiotic-producing actinomycetes from water and sediments of Lake Tana, Ethiopia. Methods A total of 31 strains of actinomycetes were isolated and tested against Gram-positive and Gram-negative bacterial strains by primary screening. In the primary screening, 11 promising isolates were identified and subjected to solid-state and submerged-state fermentation methods to produce crude extracts. The fermented biomass was extracted by an organic solvent extraction method and tested against bacterial strains by disc and agar well diffusion methods. The isolates were characterized using morphological, physiological and biochemical methods. Results The results obtained from the agar well diffusion method were better than those from the disc diffusion method. The crude extracts showed larger inhibition zones against Gram-positive bacteria than against Gram-negative bacteria. One-way analysis of variance confirmed that the effects of most of the crude extracts were statistically significant at the 95% confidence level. The minimum inhibitory and minimum bactericidal concentrations of the crude extracts were 1.65 mg/mL and 3.30 mg/mL against Staphylococcus aureus, and 1.84 mg/mL and 3.80 mg/mL against Escherichia coli, respectively. The growth of aerial and substrate mycelium varied in the different culture media used. Most of the isolates were able to hydrolyze starch and urea and to survive at a 5% concentration of sodium chloride; the optimum temperature for their growth was 30 °C. Conclusions The results of the present study revealed that freshwater actinomycetes of Lake Tana appear to have immense potential as a source of antibacterial compounds. PMID:23730554
Method for producing highly reflective metal surfaces
Arnold, J.B.; Steger, P.J.; Wright, R.R.
1982-03-04
The invention is a novel method for producing mirror surfaces which are extremely smooth and which have high optical reflectivity. The method includes depositing, by electrolysis, an amorphous layer of nickel on an article and then diamond-machining the resulting nickel surface to increase its smoothness and reflectivity. The machined nickel surface then is passivated with respect to the formation of bonds with electrodeposited nickel. Nickel then is electrodeposited on the passivated surface to form a layer of electroplated nickel whose inside surface is a replica of the passivated surface. The mandrel then may be re-passivated and provided with a layer of electrodeposited nickel, which is then recovered from the mandrel providing a second replica. The mandrel can be so re-used to provide many such replicas. As compared with producing each mirror-finished article by plating and diamond-machining, the new method is faster and less expensive.
Method for producing highly reflective metal surfaces
Arnold, Jones B.; Steger, Philip J.; Wright, Ralph R.
1983-01-01
The invention is a novel method for producing mirror surfaces which are extremely smooth and which have high optical reflectivity. The method includes electrolessly depositing an amorphous layer of nickel on an article and then diamond-machining the resulting nickel surface to increase its smoothness and reflectivity. The machined nickel surface then is passivated with respect to the formation of bonds with electrodeposited nickel. Nickel then is electrodeposited on the passivated surface to form a layer of electroplated nickel whose inside surface is a replica of the passivated surface. The electroplated nickel layer then is separated from the passivated surface. The mandrel then may be re-passivated and provided with a layer of electrodeposited nickel, which is then recovered from the mandrel providing a second replica. The mandrel can be so re-used to provide many such replicas. As compared with producing each mirror-finished article by plating and diamond-machining, the new method is faster and less expensive.
A new method for the preparation of polymeric porous layer open tubular columns for GC application
NASA Technical Reports Server (NTRS)
Shen, T. C.; Wang, M. L.
1995-01-01
A new method to prepare polymeric PLOT columns by using in situ polymerization technology is described. The method involves a straightforward in situ polymerization of the monomer. The polymer produced is directly coated on the metal tubing. This eliminates many of the steps needed in conventional polymeric PLOT column preparation. Our method is easy to operate and produces very reproducible columns, as shown previously (T. C. Shen. J. Chromatogr. Sci. 30, 239, 1992). The effects of solvents, tubing pretreatments, initiators and reaction temperatures in the preparation of PLOT columns are studied. Several columns have been developed to separate (1) highly polar compounds, such as water and ammonia or water and HCN, and (2) hydrocarbons and inert gases. A recent improvement has allowed us to produce bonded polymeric PLOT columns. These were studied, and the results are included also.
Development of Infrared Radiation Heating Method for Sustainable Tomato Peeling
USDA-ARS?s Scientific Manuscript database
Although lye peeling is the widely industrialized method for producing high-quality peeled fruit and vegetable products, the peeling method has had negative impacts, exerting significant environmental and economic pressure on the tomato processing industry due to its associated sali...
Design of k-Space Channel Combination Kernels and Integration with Parallel Imaging
Beatty, Philip J.; Chang, Shaorong; Holmes, James H.; Wang, Kang; Brau, Anja C. S.; Reeder, Scott B.; Brittain, Jean H.
2014-01-01
Purpose In this work, a new method is described for producing local k-space channel combination kernels using a small amount of low-resolution multichannel calibration data. Additionally, this work describes how these channel combination kernels can be combined with local k-space unaliasing kernels produced by the calibration phase of parallel imaging methods such as GRAPPA, PARS and ARC. Methods Experiments were conducted to evaluate both the image quality and computational efficiency of the proposed method compared to a channel-by-channel parallel imaging approach with image-space sum-of-squares channel combination. Results Results indicate comparable image quality overall, with some very minor differences seen in reduced field-of-view imaging. It was demonstrated that this method enables a speed up in computation time on the order of 3–16X for 32-channel data sets. Conclusion The proposed method enables high quality channel combination to occur earlier in the reconstruction pipeline, reducing computational and memory requirements for image reconstruction. PMID:23943602
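For reference, the image-space sum-of-squares combination used as the comparison baseline above reduces to a one-liner; the proposed method's contribution is to precompute kernels that perform an equivalent combination in k-space, so only one image (rather than one per channel) has to be transformed. Array shapes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def sos_combine(channel_images):
    """channel_images: (n_channels, ny, nx) complex -> (ny, nx) magnitude."""
    return np.sqrt((np.abs(channel_images) ** 2).sum(axis=0))

# 32-channel complex image stack with invented content
imgs = rng.standard_normal((32, 256, 256)) + 1j * rng.standard_normal((32, 256, 256))
print(sos_combine(imgs).shape)   # (256, 256)
```

Combining 32 channels before the inverse FFT removes 31 of the 32 per-channel transforms, which is consistent with the 3-16x speedups the paper reports.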
Method for producing small hollow spheres
Hendricks, C.D.
1979-01-09
Method is disclosed for producing small hollow spheres of glass, metal or plastic, wherein the sphere material is mixed with or contains as part of the composition a blowing agent which decomposes at high temperature (T ≳ 600 °C). As the temperature is quickly raised, the blowing agent decomposes and the resulting gas expands from within, thus forming a hollow sphere of controllable thickness. The hollow spheres thus produced (20 to 10³ µm) have a variety of applications, and are particularly useful in the fabrication of targets for laser implosion such as neutron sources, laser fusion physics studies, and laser-initiated fusion power plants.
Method and apparatus for producing small hollow spheres
Hendricks, Charles D.
1979-01-01
Method and apparatus for producing small hollow spheres of glass, metal or plastic, wherein the sphere material is mixed with or contains as part of the composition a blowing agent which decomposes at high temperature (T ≥ 600 °C). As the temperature is quickly raised, the blowing agent decomposes and the resulting gas expands from within, thus forming a hollow sphere of controllable thickness. The hollow spheres thus produced (20 to 10³ µm) have a variety of applications, and are particularly useful in the fabrication of targets for laser implosion such as neutron sources, laser fusion physics studies, and laser-initiated fusion power plants.
Method for producing small hollow spheres
Hendricks, Charles D. [Livermore, CA
1979-01-09
Method for producing small hollow spheres of glass, metal or plastic, wherein the sphere material is mixed with or contains as part of the composition a blowing agent which decomposes at high temperature (T ≳ 600 °C). As the temperature is quickly raised, the blowing agent decomposes and the resulting gas expands from within, thus forming a hollow sphere of controllable thickness. The hollow spheres thus produced (20 to 10³ µm) have a variety of applications, and are particularly useful in the fabrication of targets for laser implosion such as neutron sources, laser fusion physics studies, and laser-initiated fusion power plants.
Human-machine interaction to disambiguate entities in unstructured text and structured datasets
NASA Astrophysics Data System (ADS)
Ward, Kevin; Davenport, Jack
2017-05-01
Creating entity network graphs is a manual, time-consuming process for an intelligence analyst. Beyond the traditional big-data problems of information overload, individuals are often referred to by multiple names and shifting titles as they advance in their organizations over time, which quickly makes simple string or phonetic alignment methods for entities insufficient. Conversely, automated methods for relationship extraction and entity disambiguation typically produce questionable results with no way for users to vet results, correct mistakes or influence the algorithm's future results. We present an entity disambiguation tool, DRADIS, which aims to bridge the gap between human-centric and machine-centric methods. DRADIS automatically extracts entities from multi-source datasets and models them as a complex set of attributes and relationships. Entities are disambiguated across the corpus using a hierarchical model executed in Spark, allowing it to scale to operational-sized data. Resolution results are presented to the analyst complete with sourcing information for each mention and relationship, allowing analysts to quickly vet the correctness of results as well as correct mistakes. Corrected results are used by the system to refine the underlying model, allowing analysts to optimize the general model to better deal with their operational data. Providing analysts with the ability to validate and correct the model to produce a system they can trust enables them to better focus their time on producing higher-quality analysis products.
Smoking education programs 1960-1976.
Thompson, E L
1978-01-01
This paper is a review of published reports, in English, of educational programs designed to change smoking behavior. Attempts to change the smoking behavior of young people have included anti-smoking campaigns, youth-to-youth programs, and a variety of message themes and teaching methods. Instruction has been presented both by teachers who were committed or persuasive and by teachers who were neutral or presented both sides of the issue. Didactic teaching, group discussion, individual study, peer instruction, and mass media have been employed. Health effects of smoking, both short- and long-term effects, have been emphasized. Most methods used with youth have shown little success. Studies of other methods have produced contradictory results. Educational programs for adults have included large scale anti-smoking campaigns, smoking cessation clinics, and a variety of more specific withdrawal methods. These methods have included individual counseling, emotional role playing, aversive conditioning, desensitization, and specific techniques to reduce the likelihood that smoking will occur in situations previously associated with smoking. Some of these techniques have produced poor results while studies of other methods have shown inconsistent results. The two methods showing the most promise are individual counseling and smoking withdrawal clinics. PMID:25026
Szczygiel, Edward J; Harte, Janice B; Strasburg, Gale M; Cho, Sungeun
2017-09-01
Food products produced with bean ingredients are gaining in popularity among consumers due to the reported health benefits. Navy bean (Phaseolus vulgaris) powder produced through extrusion can be considered as a resource-efficient alternative to conventional methods, which often involve high water inputs. Therefore, navy bean powders produced with extrusion and conventional methods were assessed for the impact of processing on consumer liking in end-use products and odor-active compounds. Consumer acceptance results reveal significant differences in flavor, texture and overall acceptance scores of several products produced with navy bean powder. Crackers produced with extruded navy bean powder received higher hedonic flavor ratings than those produced with commercial navy bean powder (P < 0.001). GC-O data showed that the commercial powder produced through conventional processing had much greater contents of several aliphatic aldehydes commonly formed via lipid oxidation, such as hexanal, octanal and nonanal with descriptors of 'grassy', 'nutty', 'fruity', 'dusty', and 'cleaner', compared to the extruded powder. Extrusion processed navy bean powders were preferred over commercial powders for certain navy bean powder applications. This is best explained by substantial differences in aroma profiles of the two powders that may have been caused by lipid oxidation. © 2017 Society of Chemical Industry.
Measurement of Charged Pions from Neutrino-produced Nuclear Resonance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simon, Clifford N.
2014-01-01
A method for identifying stopped pions in a high-resolution scintillator bar detector is presented. I apply my technique to measure the axial mass M_A^Δ for production of the Δ(1232) resonance by neutrinos, with the result M_A^Δ = 1.16 ± 0.20 GeV (68% CL) (limited by statistics). The result is produced from the measured spectrum of reconstructed momentum transfer Q². I proceed by varying the value of M_A^Δ in a Rein-Sehgal-based Monte Carlo to produce the best agreement, using shape only (not normalization). The consistency of this result with recent reanalyses of previous bubble-chamber experiments is discussed.
Mu, Yang; Yang, Hou-Yun; Wang, Ya-Zhou; He, Chuan-Shu; Zhao, Quan-Bao; Wang, Yi; Yu, Han-Qing
2014-01-01
Fermentative hydrogen production from wastes has many advantages compared to various chemical methods. Methodology for characterizing the hydrogen-producing activity of anaerobic mixed cultures is essential for monitoring reactor operation in fermentative hydrogen production; however, such standardized methodologies are lacking. In the present study, a new index, the maximum specific hydrogen-producing activity (SHAm) of anaerobic mixed cultures, was proposed, and a reliable and simple method, named the SHAm test, was developed to determine it. Furthermore, the influences of various parameters on the SHAm value determination of anaerobic mixed cultures were evaluated. Additionally, the SHAm assay was tested for different types of substrates and bacterial inocula. Our results demonstrate that this novel SHAm assay is a rapid, accurate and simple methodology for determining the hydrogen-producing activity of anaerobic mixed cultures. Thus, application of this approach is beneficial for establishing a stable anaerobic hydrogen-producing system. PMID:24912488
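The abstract does not specify how SHAm is computed from a batch test; one plausible reading, sketched below, takes the steepest slope of the cumulative hydrogen curve and normalizes it by the biomass in the bottle. The logistic curve and all values are invented for illustration, not the paper's protocol.

```python
import numpy as np

# Invented batch-test data: a logistic cumulative H2 curve over 48 h.
t = np.linspace(0, 48, 97)                      # time, h
h2 = 120 / (1 + np.exp(-(t - 20) / 3))          # cumulative H2, mL
biomass_g_vss = 0.5                             # biomass in the bottle, g VSS

rate = np.gradient(h2, t)                       # instantaneous rate, mL H2 / h
sham = rate.max() / biomass_g_vss               # max specific activity
print(f"SHAm ~ {sham:.1f} mL H2 / (g VSS * h)")
```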
Irrigation water sources and irrigation application methods used by U.S. plant nursery producers
NASA Astrophysics Data System (ADS)
Paudel, Krishna P.; Pandit, Mahesh; Hinson, Roger
2016-02-01
We examine irrigation water sources and irrigation methods used by U.S. nursery plant producers using nested multinomial fractional regression models. We use data collected from the National Nursery Survey (2009) to identify effects of different firm and sales characteristics on the fraction of water sources and irrigation methods used. We find that region, sales of plant types, farm income, and farm age play significant roles in which water source is used. Given the fraction of alternative water sources used, results indicated that computer use, annual sales, region, and the number of IPM practices adopted play an important role in the choice of irrigation method. Based on the findings from this study, government can provide subsidies to nursery producers in water-deficit regions to adopt the drip irrigation method, use recycled water, or a combination of both. Additionally, encouraging farmers to adopt IPM may enhance the use of drip irrigation and recycled water in nursery plant production.
Method for removal of beryllium contamination from an article
Simandl, Ronald F.; Hollenbeck, Scott M.
2012-12-25
A method of removal of beryllium contamination from an article is disclosed. The method typically involves dissolving polyisobutylene in a solvent such as hexane to form a tackifier solution, soaking the substrate in the tackifier solution to produce a preform, and then drying the preform to produce the cleaning medium. The cleaning media are typically used dry, without any liquid cleaning agent, to rub the surface of the article and remove the beryllium contamination to below a non-detect level. In some embodiments no detectable residue is transferred from the cleaning wipe to the article as a result of the cleaning process.
Lunar dust simulant containing nanophase iron and method for making the same
NASA Technical Reports Server (NTRS)
Hung, Chin-cheh (Inventor); McNatt, Jeremiah (Inventor)
2012-01-01
A lunar dust simulant containing nanophase iron and a method for making the same. Process (1) comprises preparing a mixture of ferric chloride, fluorinated carbon powder, and glass beads, then treating the mixture to produce nanophase iron; the resulting lunar dust simulant contains α-iron nanoparticles, Fe₂O₃, and Fe₃O₄. Process (2) comprises preparing a mixture of a mixed-metal-oxide material that contains iron and carbon black, then treating the mixture to produce nanophase iron; the resulting lunar dust simulant contains α-iron nanoparticles and Fe₃O₄.
Leung, Chung-Chu
2006-03-01
Digital subtraction radiography requires close matching of the contrast in each pair of X-ray images to be subtracted. Previous studies have shown that nonparametric contrast/brightness correction methods using the cumulative density function (CDF) and its improvements, which are based on gray-level transformations associated with the pixel histogram, perform well under uniform contrast/brightness differences. However, for radiographs with nonuniform contrast/brightness, the CDF produces unsatisfactory results. In this paper, we propose a new approach to contrast correction based on the generalized fuzzy operator with a least-squares method. The results show that 50% of the contrast/brightness error can be corrected using this approach when the contrast/brightness difference between a radiographic pair is 10 U. A comparison of our approach with the CDF is presented; the modified GFO method produces better contrast normalization results than the CDF approach.
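For context, the CDF baseline that the paper improves on is ordinary histogram matching: map each gray level through the follow-up image's CDF onto the reference image's inverse CDF. The sketch below is a generic implementation with simulated data, not the paper's GFO method; a single global mapping like this is exactly what breaks down under nonuniform contrast differences.

```python
import numpy as np

def cdf_match(follow_up, reference):
    """Histogram-match follow_up to reference via CDF / inverse-CDF lookup."""
    f_vals, f_counts = np.unique(follow_up, return_counts=True)
    r_vals, r_counts = np.unique(reference, return_counts=True)
    f_cdf = np.cumsum(f_counts) / follow_up.size
    r_cdf = np.cumsum(r_counts) / reference.size
    mapped = np.interp(f_cdf, r_cdf, r_vals)        # inverse-CDF lookup
    return mapped[np.searchsorted(f_vals, follow_up)]

# Simulated uniform contrast/brightness shift between a radiographic pair.
rng = np.random.default_rng(9)
ref = rng.integers(0, 256, (64, 64))
fol = np.clip(ref * 0.8 + 20, 0, 255).astype(int)
print(np.abs(cdf_match(fol, ref) - ref).mean())     # residual mismatch
```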
Kaufmann, A; Maden, K; Leisser, W; Matera, M; Gude, T
2005-11-01
Inorganic polyphosphates (di-, tri- and higher polyphosphates) can be used to treat fish, fish fillets and shrimps in order to improve their water-binding capacity. The practical relevance of this treatment is a significant gain in weight caused by the retention/uptake of water and natural juice into the fish tissues. This practice is legal; however, the use of phosphates has to be declared. The routine control testing of fish for the presence of polyphosphates produced some results that were difficult to explain. One of the two analytical methods used determined low diphosphate concentrations in a number of untreated samples, while the other ion chromatography (IC) method did not detect them. This initiated a number of investigations: the results showed that polyphosphates in fish and shrimp tissue undergo rapid enzymatic degradation, producing the ubiquitous orthophosphate. This led to the conclusion that sensitive analytical methods are required in order to detect previous polyphosphate treatment of a sample. The polyphosphate concentrations detected by one of the analytical methods could not be explained by the degradation of endogenous high-energy nucleotides like ATP into diphosphate, but by a coeluting compound. Further investigation by LC-MS-MS proved that the substance responsible for the observed peak was inosine monophosphate (IMP) and not, as thought, the inorganic diphosphate. The method producing the false-positive result was modified, and both methods were ultimately able to detect polyphosphates well separated from the natural nucleotides. Polyphosphates could no longer be detected (<0.5 mg kg⁻¹) after modification of the analytical methodology. The relevance of these findings lies in the fact that similar analytical methods are employed in various control laboratories, which might lead to false interpretation of measurements.
Practical considerations for measuring hydrogen concentrations in groundwater
Chapelle, F.H.; Vroblesky, D.A.; Woodward, J.C.; Lovley, D.R.
1997-01-01
Several practical considerations for measuring concentrations of dissolved molecular hydrogen (H2) in groundwater, including (1) sampling methods, (2) pumping methods, and (3) effects of well casing materials, were evaluated. Three different sampling methodologies (a downhole sampler, a gas-stripping method, and a diffusion sampler) were compared. The downhole sampler and gas-stripping methods gave similar results when applied to the same wells. The diffusion sampler, on the other hand, appeared to overestimate H2 concentrations relative to the downhole sampler. Of these methods, the gas-stripping method is better suited to field conditions because it is faster (~30 min for a single analysis as opposed to 2 h for the downhole sampler or 8 h for the diffusion sampler), the analysis is easier (less sample manipulation is required), and the data computations are more straightforward (H2 concentrations need not be corrected for water sample volume). Measurement of H2 using the gas-stripping method can be affected by different pumping equipment. Peristaltic, piston, and bladder pumps all gave similar results when applied to water produced from the same well. It was observed, however, that peristaltic-pumped water (which is drawn under negative pressure) enhanced the gas-stripping process and equilibrated slightly faster than water from piston or bladder pumps (which push water under positive pressure). A direct-current (dc) electrically driven submersible pump was observed to produce H2 and was not suitable for measuring H2 in groundwater. Measurements from two field sites indicate that iron or steel well casings produce H2, which masks H2 concentrations in groundwater. PVC-cased wells, or wells cased with other materials that do not produce H2, are necessary for measuring H2 concentrations in groundwater.
First report of a mycolactone-producing Mycobacterium infection in fish agriculture in Belgium.
Stragier, Pieter; Hermans, Katleen; Stinear, Tim; Portaels, Françoise
2008-09-01
In the past few years, a mycolactone-producing subgroup of the Mycobacterium marinum complex has been identified and analyzed. These IS2404-positive species cause pathology in frogs and fish. A recently isolated mycobacterial strain from a fish in Belgium was analyzed using a variety of molecular methods and the results were identical to those obtained from a mycolactone-producing M. marinum from Israel.
Real-Time PCR Method for Detection of Salmonella spp. in Environmental Samples.
Kasturi, Kuppuswamy N; Drgon, Tomas
2017-07-15
The methods currently used for detecting Salmonella in environmental samples require 2 days to produce results and have limited sensitivity. Here, we describe the development and validation of a real-time PCR Salmonella screening method that produces results in 18 to 24 h. Primers and probes specific to the gene invA, group D, and Salmonella enterica serovar Enteritidis organisms were designed and evaluated for inclusivity and exclusivity using a panel of 329 Salmonella isolates representing 126 serovars and 22 non-Salmonella organisms. The invA- and group D-specific sets identified all the isolates accurately. The PCR method had 100% inclusivity and detected 1 to 2 copies of Salmonella DNA per reaction. Primers specific for Salmonella-differentiating fragment 1 (Sdf-1) in conjunction with the group D set had 100% inclusivity for 32 S. Enteritidis isolates and 100% exclusivity for the 297 non-Enteritidis Salmonella isolates. Single-laboratory validation performed on 1,741 environmental samples demonstrated that the PCR method detected 55% more positives than the Vitek immunodiagnostic assay system (VIDAS) method. The PCR results correlated well with the culture results, and the method did not report any false-negative results. The receiver operating characteristic (ROC) analysis documented excellent agreement between the results from the culture and PCR methods (area under the curve, 0.90; 95% confidence interval of 0.76 to 1.0), confirming the validity of the PCR method. IMPORTANCE This validated PCR method detects 55% more positives for Salmonella in half the time required for the reference method, VIDAS. The validated PCR method will help to strengthen public health efforts through rapid screening of Salmonella spp. in environmental samples.
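For readers less familiar with the ROC summary quoted above, the area under the curve can be computed directly from paired reference (culture) and screening results. A minimal sketch with invented data, assuming scikit-learn is available; the numbers below are not from the 1,741-sample validation.

from sklearn.metrics import roc_auc_score

culture_positive = [1, 1, 1, 0, 0, 1, 0, 0, 0, 1]                  # binary reference method
pcr_score = [0.9, 0.8, 0.95, 0.2, 0.4, 0.7, 0.1, 0.3, 0.2, 0.85]   # higher = more likely positive

auc = roc_auc_score(culture_positive, pcr_score)
print(f"AUC = {auc:.2f}")  # values near 1.0 indicate excellent agreement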
Real-Time PCR Method for Detection of Salmonella spp. in Environmental Samples
Drgon, Tomas
2017-01-01
ABSTRACT The methods currently used for detecting Salmonella in environmental samples require 2 days to produce results and have limited sensitivity. Here, we describe the development and validation of a real-time PCR Salmonella screening method that produces results in 18 to 24 h. Primers and probes specific to the gene invA, group D, and Salmonella enterica serovar Enteritidis organisms were designed and evaluated for inclusivity and exclusivity using a panel of 329 Salmonella isolates representing 126 serovars and 22 non-Salmonella organisms. The invA- and group D-specific sets identified all the isolates accurately. The PCR method had 100% inclusivity and detected 1 to 2 copies of Salmonella DNA per reaction. Primers specific for Salmonella-differentiating fragment 1 (Sdf-1) in conjunction with the group D set had 100% inclusivity for 32 S. Enteritidis isolates and 100% exclusivity for the 297 non-Enteritidis Salmonella isolates. Single-laboratory validation performed on 1,741 environmental samples demonstrated that the PCR method detected 55% more positives than the Vitek immunodiagnostic assay system (VIDAS) method. The PCR results correlated well with the culture results, and the method did not report any false-negative results. The receiver operating characteristic (ROC) analysis documented excellent agreement between the results from the culture and PCR methods (area under the curve, 0.90; 95% confidence interval of 0.76 to 1.0) confirming the validity of the PCR method. IMPORTANCE This validated PCR method detects 55% more positives for Salmonella in half the time required for the reference method, VIDAS. The validated PCR method will help to strengthen public health efforts through rapid screening of Salmonella spp. in environmental samples. PMID:28500041
Modeling a color-rendering operator for high dynamic range images using a cone-response function
NASA Astrophysics Data System (ADS)
Choi, Ho-Hyoung; Kim, Gi-Seok; Yun, Byoung-Ju
2015-09-01
Tone-mapping operators are the typical algorithms designed to reproduce visibility and the overall impression of brightness, contrast, and color of high dynamic range (HDR) images on low dynamic range (LDR) display devices. Although several new tone-mapping operators have been proposed in recent years, their results have not matched those of the psychophysical experiments based on the human visual system. A color-rendering model that combines tone-mapping and cone-response functions in an XYZ tristimulus color space is presented. In the proposed method, the tone-mapping operator reproduces visibility and the overall impression of brightness, contrast, and color when HDR images are mapped onto relatively LDR devices. The tone-mapped image is obtained using chromatic and achromatic colors to avoid the well-known color distortions of conventional methods. The resulting image is then processed with a cone-response function wherein emphasis is placed on human visual perception (HVP). The proposed method addresses the mismatch between the actual scene and the rendered image based on HVP. The experimental results show that the proposed method yields an improved color-rendering performance compared to conventional methods.
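As context for what a tone-mapping stage does, the sketch below implements the well-known Reinhard global operator (Reinhard et al. 2002) on a synthetic luminance map. It is a stand-in for illustration only; the paper's operator differs, adding a cone-response function in XYZ space.

import numpy as np

def reinhard_tonemap(luminance: np.ndarray, key: float = 0.18) -> np.ndarray:
    """Map HDR luminance into the [0, 1) display range."""
    eps = 1e-6
    log_avg = np.exp(np.mean(np.log(luminance + eps)))  # log-average scene luminance
    scaled = key / log_avg * luminance                  # scale to the 'key' of the scene
    return scaled / (1.0 + scaled)                      # compressive response

hdr = np.random.lognormal(mean=0.0, sigma=2.0, size=(4, 4))  # synthetic HDR luminance
print(reinhard_tonemap(hdr))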
Development of Technology and Installation for Biohydrogen Production
NASA Astrophysics Data System (ADS)
Pridvizhkin, S. V.; Vyguzova, M. A.; Bazhenov, O. V.
2017-11-01
The article discusses a method for hydrogen production and a device for applying this method. The relevance of renewable fuels and the positive impact of renewable energy on the environment and the economy are also considered. The presented technology relates to a method for producing hydrogen from organic materials subject to anaerobic fermentation, such as components of municipal solid waste, sewage sludge, wastes from agricultural enterprises, and sewage waste. The aim of the research is to develop an effective, eco-friendly technology for producing hydrogen within an industrial project. To achieve this goal, the following issues were addressed in the course of the study: development of process schemes for producing hydrogen from organic materials; development of the hydrogen-production technology; optimization of a biogas plant with the aim of producing hydrogen at one of the fermentation stages; and approbation of the research results. The article is recommended for engineers and innovators working on renewable energy development issues.
A Numerical Comparison of Lagrange and Kane's Methods of an Arm Segment
NASA Astrophysics Data System (ADS)
Rambely, Azmin Sham; Halim, Norhafiza Ab.; Ahmad, Rokiah Rozita
A 2-D model of a two-link kinematic chain is developed using two dynamics formulations, namely Kane's and Lagrange's methods. The dynamics equations are reduced to first-order differential equations and solved using the modified Euler and fourth-order Runge-Kutta methods to approximate the shoulder and elbow joint angles during a smash performance in badminton. Results showed that Runge-Kutta produced a better, more exact approximation than modified Euler, and both dynamic formulations produced similar absolute errors.
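The accuracy gap between the two integrators can be demonstrated on any test equation with a known solution. The sketch below uses a scalar decay ODE rather than the arm-segment equations, which the abstract does not reproduce; "modified Euler" is taken here in its common Heun (predictor-corrector) form.

import math

def f(t, y):          # test problem y' = -2y, exact solution y = exp(-2t)
    return -2.0 * y

def modified_euler_step(t, y, h):
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + 0.5 * h * (k1 + k2)

def rk4_step(t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

h, t, y_me, y_rk = 0.1, 0.0, 1.0, 1.0
for _ in range(10):                       # integrate to t = 1
    y_me = modified_euler_step(t, y_me, h)
    y_rk = rk4_step(t, y_rk, h)
    t += h

exact = math.exp(-2.0)
print(f"modified Euler error: {abs(y_me - exact):.2e}")  # on the order of 1e-3
print(f"RK4 error:            {abs(y_rk - exact):.2e}")  # on the order of 1e-7

With h = 0.1, the errors reflect the methods' second- and fourth-order convergence, consistent with the abstract's conclusion.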
Speech-Enabled Interfaces for Travel Information Systems with Large Grammars
NASA Astrophysics Data System (ADS)
Zhao, Baoli; Allen, Tony; Bargiela, Andrzej
This paper introduces three grammar-segmentation methods capable of handling the large grammar issues associated with producing a real-time speech-enabled VXML bus travel application for London. Large grammars tend to produce relatively slow recognition interfaces and this work shows how this limitation can be successfully addressed. Comparative experimental results show that the novel last-word recognition based grammar segmentation method described here achieves an optimal balance between recognition rate, speed of processing and naturalness of interaction.
Atmospheric Blocking and Intercomparison of Objective Detection Methods: Flow Field Characteristics
NASA Astrophysics Data System (ADS)
Pinheiro, M. C.; Ullrich, P. A.; Grotjahn, R.
2017-12-01
A number of objective methods for identifying and quantifying atmospheric blocking have been developed over the last couple of decades, but there is variable consensus on the resultant blocking climatology. This project examines blocking climatologies as produced by three different methods: two anomaly-based methods, and the geopotential height gradient method of Tibaldi and Molteni (1990). The results highlight the differences in blocking that arise from the choice of detection method, with emphasis on the physical characteristics of the flow field and the subsequent effects on the blocking patterns that emerge.
Direct Discrete Method for Neutronic Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vosoughi, Naser; Akbar Salehi, Ali; Shahriari, Majid
The objective of this paper is to introduce a new direct method for neutronic calculations. This method, named the Direct Discrete Method, is simpler than solving the neutron transport equation and more compatible with the physical meaning of the problem. It is based on the physics of the problem: by meshing the desired geometry, writing the balance equation for each mesh interval, and accounting for the coupling between mesh intervals, the final series of discrete equations is produced without deriving the neutron transport differential equation or passing through that differential-equation bridge. We have produced neutron discrete equations for a cylindrical shape with two boundary conditions in one energy group. The correctness of the results from this method is tested against MCNP-4B code execution. (authors)
Riley, Paul W.; Gallea, Benoit; Valcour, Andre
2017-01-01
Background: Testing coagulation factor activities requires that multiple dilutions be assayed and analyzed to produce a single result. The slope of the line created by plotting measured factor concentration against sample dilution is evaluated to discern the presence of inhibitors giving rise to nonparallelism. Moreover, samples producing results on initial dilution falling outside the analytic measurement range of the assay must be tested at additional dilutions to produce reportable results. Methods: The complexity of this process has motivated a large clinical reference laboratory to develop advanced computer algorithms with automated reflex testing rules to complete coagulation factor analysis. A method was developed for autoverification of coagulation factor activity using expert rules built on an off-the-shelf, commercially available data manager system integrated into an automated coagulation platform. Results: Here, we present an approach allowing for the autoverification and reporting of factor activity results with greatly diminished technologist effort. Conclusions: To the best of our knowledge, this is the first report of its kind providing a detailed procedure for the implementation of autoverification expert rules as applied to coagulation factor activity testing. Advantages of this system include ease of training for new operators, minimization of technologist time spent, reduction of staff fatigue, minimization of unnecessary reflex tests, optimization of turnaround time, and assurance of the consistency of the testing and reporting process. PMID:28706751
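A toy version of the kind of reflex rule described might compare dilution-corrected activities for parallelism and flag a systematic rise with dilution as an inhibitor pattern. The thresholds and rule texts below are invented for illustration; they are not the laboratory's validated expert rules.

def review_factor_dilutions(activities: list[float], tolerance: float = 0.15) -> str:
    """activities: dilution-corrected factor activities, lowest dilution first."""
    mean = sum(activities) / len(activities)
    spread = (max(activities) - min(activities)) / mean   # relative disagreement
    rising = all(a < b for a, b in zip(activities, activities[1:]))
    if spread <= tolerance:
        return "autoverify: report mean activity"
    if rising:
        return "flag: nonparallelism, possible inhibitor -> reflex testing"
    return "flag: discordant dilutions -> technologist review"

print(review_factor_dilutions([52.0, 54.0, 51.0]))   # parallel -> autoverify
print(review_factor_dilutions([30.0, 42.0, 60.0]))   # rising -> inhibitor pattern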
NASA Astrophysics Data System (ADS)
Fitriana, R.; Saragih, J.; Luthfiana, N.
2017-12-01
R Bakery is a company that produces bread every day, in many different types. Products are made in the form of sweet bread and wheat bread, with different tastes for each type. During the making process, there were defects that turned products into rejects. The types of defects produced include burnt, sodden and shapeless bread. To obtain information about the defects that have been produced, a business intelligence system model was designed, with a database and a data warehouse. Using this business intelligence system model generates useful information, such as how many defects are produced by each of the bakery products. To make such information easier to obtain, a data mining method is used to explore the data in depth. The data mining method applied is k-means clustering. The results of this business intelligence model are cluster 1 with a small number of defects, cluster 2 with a medium number of defects, and cluster 3 with a high number of defects. The OLAP cube method shows that the defects generated during the 7-month period totaled 96,744 pieces.
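A minimal sketch of the clustering step described, using k-means with k = 3 on synthetic per-product defect counts (in the real system these would be drawn from the data warehouse via OLAP):

import numpy as np
from sklearn.cluster import KMeans

# monthly defect counts per product (rows = products), illustrative only
defects = np.array([[12], [15], [140], [160], [610], [580], [20], [150]])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(defects)
order = np.argsort(km.cluster_centers_.ravel())          # rank centers low -> high
names = {order[0]: "low", order[1]: "medium", order[2]: "high"}
for count, lab in zip(defects.ravel(), km.labels_):
    print(count, "->", names[lab])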
Impact of different post-harvest processing methods on the chemical compositions of peony root.
Zhu, Shu; Shirakawa, Aimi; Shi, Yanhong; Yu, Xiaoli; Tamura, Takayuki; Shibahara, Naotoshi; Yoshimatsu, Kayo; Komatsu, Katsuko
2018-06-01
The impact of key processing steps such as boiling, peeling, drying and storing on chemical compositions and morphologic features of the produced peony root was investigated in detail by applying 15 processing methods to fresh roots of Paeonia lactiflora and then monitoring contents of eight main components, as well as internal root color. The results showed that low-temperature (4 °C) storage of fresh roots for approximately 1 month after harvest resulted in slightly increased and stable content of paeoniflorin, which might be due to suppression of enzymatic degradation. This storage also prevented roots from discoloring, facilitating production of favorable bright-colored roots. The boiling process triggered decomposition of polygalloylglucoses, thereby leading to a significant increase in the contents of pentagalloylglucose and gallic acid. The peeling process resulted in a decrease in albiflorin and catechin contents. As a result, an optimized and practicable processing method ensuring high contents of the main active components in the produced root was developed.
Method for producing hard-surfaced tools and machine components
McHargue, Carl J.
1985-01-01
In one aspect, the invention comprises a method for producing tools and machine components having superhard crystalline-ceramic work surfaces. Broadly, the method comprises two steps: A tool or machine component having a ceramic near-surface region is mounted in ion-implantation apparatus. The region then is implanted with metal ions to form, in the region, a metastable alloy of the ions and said ceramic. The region containing the alloy is characterized by a significant increase in hardness properties, such as microhardness, fracture-toughness, and/or scratch-resistance. The resulting improved article has good thermal stability at temperatures characteristic of typical tool and machine-component uses. The method is relatively simple and reproducible.
Method for producing hard-surfaced tools and machine components
McHargue, C.J.
1981-10-21
In one aspect, the invention comprises a method for producing tools and machine components having superhard crystalline-ceramic work surfaces. Broadly, the method comprises two steps: a tool or machine component having a ceramic near-surface region is mounted in ion-implantation apparatus. The region then is implanted with metal ions to form, in the region, a metastable alloy of the ions and said ceramic. The region containing the alloy is characterized by a significant increase in hardness properties, such as microhardness, fracture-toughness, and/or scratch-resistance. The resulting improved article has good thermal stability at temperatures characteristic of typical tool and machine-component uses. The method is relatively simple and reproducible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Der Agobian, R.
1964-10-31
The shock waves produced by condenser discharge in a gas tube were investigated. The study was limited to wave velocities less than five times the speed of sound, propagated in gas at low pressure (several mm Hg). A method was designed and perfected for the detection of shock waves that are insufficiently rapid to produce gas ionization. This method consisted of the creation of an autonomous plasma, before the arrival of the wave, which was then modified by the wave's passage. Two methods were used for the detection of phenomena accompanying the passage of the shock waves: an optical method and a radioelectric method. The qualitative study of the modifications produced on the wave passage showed the remarkable correlation existing between the results obtained by the two methods. The experimental results on the propagation laws for shock waves in a low-diameter tube agreed with theory. The variations of the coefficient of recombination were determined as a function of the electron temperature, and the results were in good agreement with the Bates theory. It was shown that the electron gas of the plasma had the same increase of density as a neutral gas during the passage of a shock wave. The variations of the frequency of electron collisions on passage of the shock wave could be explained by considering electron-ion collisions with respect to electron-atom collisions. (J.S.R.)
Maritime Search and Rescue via Multiple Coordinated UAS
2017-06-12
performed by a set of UAS. Our investigation covers the detection of multiple mobile objects by a heterogeneous collection of UAS. Three methods (two... account for contingencies such as airspace deconfliction. Results are produced using simulation to verify the capability of the proposed method and to... compare the various partitioning methods. Results from this simulation show that great gains in search efficiency can be made when the search space is
Multivariate spline methods in surface fitting
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr. (Principal Investigator); Schumaker, L. L.
1984-01-01
The use of spline functions in the development of classification algorithms is examined. In particular, a method is formulated for producing spline approximations to bivariate density functions, where the density function is described by a histogram of measurements. The resulting approximations are then incorporated into a Bayesian classification procedure for which the Bayes decision regions and the probability of misclassification are readily computed. Some preliminary numerical results are presented to illustrate the method.
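A one-dimensional analogue of the approach can be sketched by fitting a smoothing spline to each class histogram and applying the Bayes rule to the fitted densities. The paper's bivariate construction and misclassification-probability computation are not reproduced here, and the data are synthetic.

import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
x1 = rng.normal(0.0, 1.0, 500)   # measurements from class 1
x2 = rng.normal(2.0, 1.0, 500)   # measurements from class 2

def spline_density(samples):
    hist, edges = np.histogram(samples, bins=30, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return UnivariateSpline(centers, hist, s=0.5)   # smoothing spline through histogram

d1, d2 = spline_density(x1), spline_density(x2)

def bayes_classify(x0, prior1=0.5):
    # assign the class with the larger prior-weighted spline density
    p1 = prior1 * max(float(d1(x0)), 0.0)
    p2 = (1 - prior1) * max(float(d2(x0)), 0.0)
    return 1 if p1 >= p2 else 2

print([bayes_classify(v) for v in (-1.0, 0.5, 1.5, 3.0)])  # expected: [1, 1, 2, 2]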
Sekar, Ramanujam R.; Hoppie, Lyle O.
1996-01-01
A method of reducing oxides of nitrogen (NOx) in the exhaust of an internal combustion engine includes producing oxygen enriched air and nitrogen enriched air by an oxygen enrichment device. The oxygen enriched air may be provided to the intake of the internal combustion engine for mixing with fuel. In order to reduce the amount of NOx in the exhaust of the internal combustion engine, the molecular nitrogen in the nitrogen enriched air produced by the oxygen enrichment device is subjected to a corona or arc discharge so as to create a plasma and, as a result, atomic nitrogen. The resulting atomic nitrogen then is injected into the exhaust of the internal combustion engine causing the oxides of nitrogen in the exhaust to be reduced into nitrogen and oxygen. In one embodiment of the present invention, the oxygen enrichment device that produces both the oxygen and nitrogen enriched air can include a selectively permeable membrane.
Production of extracellular chitinase Beauveria bassiana under submerged fermentation conditions
NASA Astrophysics Data System (ADS)
Elawati, N. E.; Pujiyanto, S.; Kusdiyantini, E.
2018-05-01
Chitinase-producing microbes have attracted attention as potential agents for the control of phytopathogenic fungi and insect pests. A fungus that potentially produces chitinase is Beauveria bassiana. This study aims to determine the growth curve and chitinase activities of B. bassiana isolated from Helopeltis antonii insects after application. The growth curve was measured by the dry cell weight method, while enzyme activity was measured by absorbance on a spectrophotometer. The results showed the optimum growth time of B. bassiana, with the highest dry cell mass of 0.031 g on day 4, which was the log phase, while the highest enzyme activity was 0.585 U/mL on day 4 of a 7-day incubation. When growth is correlated with enzyme production, these results indicate that the chitinase enzyme is produced in the log phase and can be categorized as a primary metabolite.
You, Qiushi; Li, Qingqing; Zheng, Hailing; Hu, Zhiwen; Zhou, Yang; Wang, Bing
2017-09-06
Recently, much interest has been paid to the separation of silk produced by Bombyx mori from silk produced by other species and tracing the beginnings of silk cultivation from wild silk exploitation. In this paper, significant differences between silks from Bombyx mori and other species were found by microscopy and spectroscopy, such as morphology, secondary structure, and amino acid composition. For further accurate identification, a diagnostic antibody was designed by comparing the peptide sequences of silks produced by Bombyx mori and other species. The results of the noncompetitive indirect enzyme-linked immunosorbent assay (ELISA) indicated that the antibody that showed good sensitivity and high specificity can definitely discern silk produced by Bombyx mori from silk produced by wild species. Thus, the antibody-based immunoassay has the potential to be a powerful tool for tracing the beginnings of silk cultivation. In addition, combining the sensitive, specific, and convenient ELISA technology with other conventional methods can provide more in-depth and accurate information for species identification.
Allnutt, Thomas F.; McClanahan, Timothy R.; Andréfouët, Serge; Baker, Merrill; Lagabrielle, Erwann; McClennen, Caleb; Rakotomanjaka, Andry J. M.; Tianarisoa, Tantely F.; Watson, Reg; Kremen, Claire
2012-01-01
The Government of Madagascar plans to increase marine protected area coverage by over one million hectares. To assist this process, we compare four methods for marine spatial planning of Madagascar's west coast. Input data for each method were drawn from the same variables: fishing pressure, exposure to climate change, and biodiversity (habitats, species distributions, biological richness, and biodiversity value). The first method compares visual color classifications of primary variables, the second uses binary combinations of these variables to produce a categorical classification of management actions, the third is a target-based optimization using Marxan, and the fourth is conservation ranking with Zonation. We present results from each method, and compare the latter three approaches for spatial coverage, biodiversity representation, fishing cost and persistence probability. All results included large areas in the north, central, and southern parts of western Madagascar. Achieving 30% representation targets with Marxan required twice the fish catch loss of the categorical method. The categorical classification and Zonation do not consider targets for conservation features. However, when we reduced Marxan targets to 16.3%, matching the representation level of the “strict protection” class of the categorical result, the methods show similar catch losses. The management category portfolio has complete coverage, and presents several management recommendations including strict protection. Zonation produces rapid conservation rankings across large, diverse datasets. Marxan is useful for identifying strict protected areas that meet representation targets, and minimize exposure probabilities for conservation features at low economic cost. We show that methods based on Zonation and a simple combination of variables can produce results comparable to Marxan for species representation and catch losses, demonstrating the value of comparing alternative approaches during initial stages of the planning process. Choosing an appropriate approach ultimately depends on scientific and political factors including representation targets, likelihood of adoption, and persistence goals. PMID:22359534
Allnutt, Thomas F; McClanahan, Timothy R; Andréfouët, Serge; Baker, Merrill; Lagabrielle, Erwann; McClennen, Caleb; Rakotomanjaka, Andry J M; Tianarisoa, Tantely F; Watson, Reg; Kremen, Claire
2012-01-01
The Government of Madagascar plans to increase marine protected area coverage by over one million hectares. To assist this process, we compare four methods for marine spatial planning of Madagascar's west coast. Input data for each method were drawn from the same variables: fishing pressure, exposure to climate change, and biodiversity (habitats, species distributions, biological richness, and biodiversity value). The first method compares visual color classifications of primary variables, the second uses binary combinations of these variables to produce a categorical classification of management actions, the third is a target-based optimization using Marxan, and the fourth is conservation ranking with Zonation. We present results from each method, and compare the latter three approaches for spatial coverage, biodiversity representation, fishing cost and persistence probability. All results included large areas in the north, central, and southern parts of western Madagascar. Achieving 30% representation targets with Marxan required twice the fish catch loss of the categorical method. The categorical classification and Zonation do not consider targets for conservation features. However, when we reduced Marxan targets to 16.3%, matching the representation level of the "strict protection" class of the categorical result, the methods show similar catch losses. The management category portfolio has complete coverage, and presents several management recommendations including strict protection. Zonation produces rapid conservation rankings across large, diverse datasets. Marxan is useful for identifying strict protected areas that meet representation targets, and minimize exposure probabilities for conservation features at low economic cost. We show that methods based on Zonation and a simple combination of variables can produce results comparable to Marxan for species representation and catch losses, demonstrating the value of comparing alternative approaches during initial stages of the planning process. Choosing an appropriate approach ultimately depends on scientific and political factors including representation targets, likelihood of adoption, and persistence goals.
Using Vision Metrology System for Quality Control in Automotive Industries
NASA Astrophysics Data System (ADS)
Mostofi, N.; Samadzadegan, F.; Roohy, Sh.; Nozari, M.
2012-07-01
The need for more accurate measurements at different stages of industrial applications, such as design, production, and installation, is the main reason industry has been encouraged to use industrial photogrammetry (vision metrology systems). Given the main advantages of photogrammetric methods, such as greater economy, a high level of automation, the capability of non-contact measurement, more flexibility and high accuracy, this method competes well with traditional industrial methods. For industries that make objects from a main reference model without any mathematical model of it, the main problem for producers is the evaluation of the production line. This problem becomes more complicated when both the reference and the product are available only as physical objects, and comparing them is possible only by direct measurement. In such cases, producers make fixtures fitting the reference with limited accuracy; in practical reports, the available precision is sometimes no better than millimetres. We used a non-metric, high-resolution digital camera for this investigation, and the case study examined in this paper is an automobile chassis. In this research, a stable photogrammetric network was designed for measuring the industrial object (both reference and product), and then, using bundle adjustment and self-calibration methods, the differences between the reference and product objects were obtained. These differences are useful for the producer to improve the production workflow and deliver more accurate products. The results of this research demonstrate the high potential of the proposed method in industrial fields and prove its efficiency and reliability using RMSE criteria. The RMSE achieved for this case study is smaller than 200 microns, which demonstrates the high capability of the implemented approach.
Sengüven, Burcu; Baris, Emre; Oygur, Tulin; Berktas, Mehmet
2014-01-01
Aim: To discuss a protocol involving xylene-ethanol deparaffinization on slides followed by a kit-based extraction that allows for the extraction of high-quality DNA from FFPE tissues. Methods: DNA was extracted from the FFPE tissues of 16 randomly selected blocks. Methods involving deparaffinization on slides or in tubes, enzyme digestion overnight or for 72 hours, and isolation using the phenol-chloroform method or a silica-based commercial kit were compared in terms of yield, concentration and amplifiability. Results: The highest yield of DNA was produced from the samples that were deparaffinized on slides, digested for 72 hours and isolated with a commercial kit. Samples isolated with the phenol-chloroform method produced DNA of lower purity than the samples purified with the kit. The samples isolated with the commercial kit resulted in better PCR amplification. Conclusion: Silica-based commercial kits and deparaffinization on slides should be considered for DNA extraction from FFPE tissues. PMID:24688314
Scanning electron microscope image signal-to-noise ratio monitoring for micro-nanomanipulation.
Marturi, Naresh; Dembélé, Sounkalo; Piat, Nadine
2014-01-01
As an imaging system, the scanning electron microscope (SEM) plays an important role in autonomous micro-nanomanipulation applications. At the sub-micrometer range and at high scanning speeds, the images produced by the SEM are noisy and need to be evaluated or corrected beforehand. In this article, the quality of images produced by a tungsten-gun SEM has been evaluated by quantifying the image signal-to-noise ratio (SNR). In order to determine the SNR, an efficient online monitoring method is developed based on nonlinear filtering using a single image. Using this method, the quality of images produced by a tungsten-gun SEM is monitored under different experimental conditions. The derived results demonstrate the developed method's efficiency in SNR quantification and illustrate the evolution of imaging quality in the SEM. © 2014 Wiley Periodicals, Inc.
Evaluation of several methods of applying sewage effluent to forested soils in the winter.
Alfred Ray Harris
1978-01-01
Surface application methods result in heat loss, deep soil frost, and surface ice accumulations; subsurface methods decrease heat loss and produce shallower frost. Distribution of effluent within the frozen soil is a function of surface application methods, piping due to macropores and biopores, and water movement due to temperature gradients. Nitrate is not...
Roman sophisticated surface modification methods to manufacture silver counterfeited coins
NASA Astrophysics Data System (ADS)
Ingo, G. M.; Riccucci, C.; Faraldi, F.; Pascucci, M.; Messina, E.; Fierro, G.; Di Carlo, G.
2017-11-01
By means of the combined use of X-ray photoelectron spectroscopy (XPS), optical microscopy (OM) and scanning electron microscopy (SEM) coupled with energy dispersive X-ray spectroscopy (EDS), the surface and subsurface chemical and metallurgical features of counterfeited silver Roman Republican coins are investigated to decipher some aspects of the manufacturing methods and to evaluate the technological ability of the Roman metallurgists to produce thin silver coatings. The results demonstrate that over 2000 years ago important advances in the technology of thin-layer deposition on metal substrates were attained by the Romans. The ancient metallurgists produced counterfeited coins by combining sophisticated micro-plating methods and tailored surface chemical modification based on the mercury-silvering process. The results reveal that the Romans were able systematically to manipulate alloys chemically and metallurgically at a micro scale to produce adherent precious-metal layers with a uniform thickness of up to a few micrometers. The results converge to reveal that the production of forgeries was aimed firstly at saving expensive metals as much as possible, allowing profitable large-scale production at a lower cost. The driving forces could have been a lack of precious metals, an unexpected need to circulate coins for trade, and/or a combination of social, political and economic factors that required a change in the money supply. Finally, some information on corrosion products has been obtained that is useful for selecting materials and methods for the conservation of these important witnesses of technology and economy.
Multilaboratory evaluation of methods for detecting enteric viruses in soils.
Hurst, C J; Schaub, S A; Sobsey, M D; Farrah, S R; Gerba, C P; Rose, J B; Goyal, S M; Larkin, E P; Sullivan, R; Tierney, J T
1991-01-01
Two candidate methods for the recovery and detection of viruses in soil were subjected to round robin comparative testing by members of the American Society for Testing and Materials D19:24:04:04 Subcommittee Task Group. Selection of the methods, designated "Berg" and "Goyal," was based on results of an initial screening which indicated that both met basic criteria considered essential by the task group. Both methods utilized beef extract solutions to achieve desorption and recovery of viruses from representative soils: a fine sand soil, an organic muck soil, a sandy loam soil, and a clay loam soil. One of the two methods, Goyal, also used a secondary concentration of resulting soil eluants via low-pH organic flocculation to achieve a smaller final assay volume. Evaluation of the two methods was simultaneously performed in replicate by nine different laboratories. Each of the produced samples was divided into portions, and these were respectively subjected to quantitative viral plaque assay by both the individual, termed independent, laboratory which had done the soil processing and a single common reference laboratory, using a single cell line and passage level. The Berg method seemed to produce slightly higher virus recovery values; however, the differences in virus assay titers for samples produced by the two methods were not statistically significant (P less than or equal to 0.05) for any one of the four soils. Despite this lack of a method effect, there was a statistically significant laboratory effect exhibited by assay titers from the independent versus reference laboratories for two of the soils, sandy loam and clay loam. PMID:1849712
Multilaboratory evaluation of methods for detecting enteric viruses in soils.
Hurst, C J; Schaub, S A; Sobsey, M D; Farrah, S R; Gerba, C P; Rose, J B; Goyal, S M; Larkin, E P; Sullivan, R; Tierney, J T
1991-02-01
Two candidate methods for the recovery and detection of viruses in soil were subjected to round robin comparative testing by members of the American Society for Testing and Materials D19:24:04:04 Subcommittee Task Group. Selection of the methods, designated "Berg" and "Goyal," was based on results of an initial screening which indicated that both met basic criteria considered essential by the task group. Both methods utilized beef extract solutions to achieve desorption and recovery of viruses from representative soils: a fine sand soil, an organic muck soil, a sandy loam soil, and a clay loam soil. One of the two methods, Goyal, also used a secondary concentration of resulting soil eluants via low-pH organic flocculation to achieve a smaller final assay volume. Evaluation of the two methods was simultaneously performed in replicate by nine different laboratories. Each of the produced samples was divided into portions, and these were respectively subjected to quantitative viral plaque assay by both the individual, termed independent, laboratory which had done the soil processing and a single common reference laboratory, using a single cell line and passage level. The Berg method seemed to produce slightly higher virus recovery values; however, the differences in virus assay titers for samples produced by the two methods were not statistically significant (P less than or equal to 0.05) for any one of the four soils. Despite this lack of a method effect, there was a statistically significant laboratory effect exhibited by assay titers from the independent versus reference laboratories for two of the soils, sandy loam and clay loam.
Hageman, Philip L.; Seal, Robert R.; Diehl, Sharon F.; Piatak, Nadine M.; Lowers, Heather
2015-01-01
A comparison study of selected static leaching and acid–base accounting (ABA) methods using a mineralogically diverse set of 12 modern-style, metal mine waste samples was undertaken to understand the relative performance of the various tests. To complement this study, in-depth mineralogical studies were conducted in order to elucidate the relationships between sample mineralogy, weathering features, and leachate and ABA characteristics. In part one of the study, splits of the samples were leached using six commonly used leaching tests including paste pH, the U.S. Geological Survey (USGS) Field Leach Test (FLT) (both 5-min and 18-h agitation), the U.S. Environmental Protection Agency (USEPA) Method 1312 SPLP (both leachate pH 4.2 and leachate pH 5.0), and the USEPA Method 1311 TCLP (leachate pH 4.9). Leachate geochemical trends were compared in order to assess differences, if any, produced by the various leaching procedures. Results showed that the FLT (5-min agitation) was just as effective as the 18-h leaching tests in revealing the leachate geochemical characteristics of the samples. Leaching results also showed that the TCLP leaching test produces inconsistent results when compared to results produced from the other leaching tests. In part two of the study, the ABA was determined on splits of the samples using both well-established traditional static testing methods and a relatively quick, simplified net acid–base accounting (NABA) procedure. Results showed that the traditional methods, while time consuming, provide the most in-depth data on both the acid-generating and acid-neutralizing tendencies of the samples. However, the simplified NABA method provided a relatively fast, effective estimation of the net acid–base account of the samples. Overall, this study showed that while most of the well-established methods are useful and effective, the use of a simplified leaching test and the NABA acid–base accounting method provides investigators with fast, quantitative tools that can be used to provide rapid, reliable information about the leachability of metals and other constituents of concern, and the acid-generating potential of metal mining waste.
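At its core, acid-base accounting is simple arithmetic: the acid potential (AP) is conventionally taken as 31.25 times the sulfide-sulfur weight percent (in kg CaCO3 equivalent per tonne), and the net value is the neutralization potential (NP) minus AP. A minimal sketch; the flagging cutoffs and sample values are illustrative assumptions, not the USGS criteria or data.

def net_acid_base_account(sulfide_s_wt_pct: float, npot_kg_caco3_per_t: float) -> float:
    ap = 31.25 * sulfide_s_wt_pct        # acid potential, kg CaCO3/t (Sobek convention)
    return npot_kg_caco3_per_t - ap      # net neutralization potential (NNP)

samples = {"waste A": (1.2, 15.0), "waste B": (0.2, 40.0)}   # (%S, NP), invented
for name, (s_pct, npot) in samples.items():
    nnp = net_acid_base_account(s_pct, npot)
    if nnp < -20:
        verdict = "likely acid generating"
    elif nnp > 20:
        verdict = "likely non-acid generating"
    else:
        verdict = "uncertain"
    print(f"{name}: NNP = {nnp:.1f} kg CaCO3/t -> {verdict}")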
Moon, Jordan R; Hull, Holly R; Tobkin, Sarah E; Teramoto, Masaru; Karabulut, Murat; Roberts, Michael D; Ryan, Eric D; Kim, So Jung; Dalbo, Vincent J; Walter, Ashley A; Smith, Abbie T; Cramer, Joel T; Stout, Jeffrey R
2007-01-01
Background Methods used to estimate percent body fat can be classified as a laboratory or field technique. However, the validity of these methods compared to multiple-compartment models has not been fully established. This investigation sought to determine the validity of field and laboratory methods for estimating percent fat (%fat) in healthy college-age women compared to the Siri three-compartment model (3C). Methods Thirty Caucasian women (21.1 ± 1.5 yrs; 164.8 ± 4.7 cm; 61.2 ± 6.8 kg) had their %fat estimated by BIA using the BodyGram™ computer program (BIA-AK) and population-specific equation (BIA-Lohman), NIR (Futrex® 6100/XL), a quadratic (SF3JPW) and linear (SF3WB) skinfold equation, air-displacement plethysmography (BP), and hydrostatic weighing (HW). Results All methods produced acceptable total error (TE) values compared to the 3C model. Both laboratory methods produced similar TE values (HW, TE = 2.4%fat; BP, TE = 2.3%fat) when compared to the 3C model, though a significant constant error (CE) was detected for HW (1.5%fat, p ≤ 0.006). The field methods produced acceptable TE values ranging from 1.8 – 3.8 %fat. BIA-AK (TE = 1.8%fat) yielded the lowest TE among the field methods, while BIA-Lohman (TE = 2.1%fat) and NIR (TE = 2.7%fat) produced lower TE values than both skinfold equations (TE > 2.7%fat) compared to the 3C model. Additionally, the SF3JPW %fat estimation equation resulted in a significant CE (2.6%fat, p ≤ 0.007). Conclusion Data suggest that the BP and HW are valid laboratory methods when compared to the 3C model to estimate %fat in college-age Caucasian women. When the use of a laboratory method is not feasible, NIR, BIA-AK, BIA-Lohman, SF3JPW, and SF3WB are acceptable field methods to estimate %fat in this population. PMID:17988393
Shear velocity estimates on the inner shelf off Grays Harbor, Washington, USA
Sherwood, C.R.; Lacy, J.R.; Voulgaris, G.
2006-01-01
Shear velocity was estimated from current measurements near the bottom off Grays Harbor, Washington between May 4 and June 6, 2001 under mostly wave-dominated conditions. A downward-looking pulse-coherent acoustic Doppler profiler (PCADP) and two acoustic-Doppler velocimeters (field version; ADVFs) were deployed on a tripod at 9-m water depth. Measurements from these instruments were used to estimate shear velocity with (1) a modified eddy-correlation (EC) technique, (2) the log-profile (LP) method, and (3) a dissipation-rate method. Although values produced by the three methods agreed reasonably well (within their broad ranges of uncertainty), there were important systematic differences. Estimates from the EC method were generally lowest, followed by those from the inertial-dissipation method. The LP method produced the highest values and the greatest scatter. We show that these results are consistent with boundary-layer theory when sediment-induced stratification is present. The EC method provides the most fundamental estimate of kinematic stress near the bottom, and stratification causes the LP method to overestimate bottom stress. These results remind us that the methods are not equivalent and that comparison among sites and with models should be made carefully. ?? 2006 Elsevier Ltd. All rights reserved.
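For reference, the LP method reduces to a linear regression of velocity against the logarithm of height above the bed. The sketch below fits the law of the wall u(z) = (u*/kappa) ln(z/z0) to synthetic values; the study's PCADP bin geometry, wave corrections, and stratification effects are not modeled.

import numpy as np

KAPPA = 0.41                                   # von Karman constant
z = np.array([0.15, 0.25, 0.40, 0.60, 0.90])   # heights above bed (m), invented
u = np.array([0.18, 0.21, 0.24, 0.26, 0.29])   # mean current speed (m/s), invented

slope, intercept = np.polyfit(np.log(z), u, 1)
u_star = KAPPA * slope                         # shear velocity (m/s)
z0 = np.exp(-intercept / slope)                # hydraulic roughness length (m)
print(f"u* = {u_star * 100:.2f} cm/s, z0 = {z0 * 1000:.2f} mm")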
Methods for producing nanoparticles using palladium salt and uses thereof
Chan, Siu-Wai; Liang, Hongying
2015-12-01
The disclosed subject matter is directed to a method for producing nanoparticles, as well as the nanoparticles produced by this method. In one embodiment, the nanoparticles produced by the disclosed method have a high defect density.
Rodríguez, Alicia; Werning, María L; Rodríguez, Mar; Bermúdez, Elena; Córdoba, Juan J
2012-12-01
A quantitative TaqMan real-time PCR (qPCR) method that includes an internal amplification control (IAC) to quantify cyclopiazonic acid (CPA)-producing molds in foods has been developed. A specific primer pair (dmaTF/dmaTR) and a TaqMan probe (dmaTp) were designed on the basis of the dmaT gene, which encodes the enzyme dimethylallyl tryptophan synthase involved in the biosynthesis of CPA. The IAC consisted of a 105 bp chimeric DNA fragment containing a region of the hly gene of Listeria monocytogenes. Thirty-two mold reference strains representing CPA producers and non-producers of different mold species were used in this study. All strains were tested for CPA production by high-performance liquid chromatography-mass spectrometry (HPLC-MS). The functionality of the designed qPCR method was demonstrated by the high linear relationship of the standard curves relating the dmaT gene copy numbers and the Ct values obtained from the different CPA producers tested. The ability of the qPCR protocol to quantify CPA-producing molds was evaluated in different artificially inoculated foods. A good linear correlation was obtained over the range 1-4 log cfu/g in the different food matrices. The detection limit in all inoculated foods ranged from 1 to 2 log cfu/g. This qPCR protocol including an IAC showed good efficiency in quantifying CPA-producing molds in naturally contaminated foods while avoiding false-negative results. This method could be used to monitor the CPA producers in HACCP programs to prevent the risk of CPA formation throughout the food chain. Copyright © 2012 Elsevier Ltd. All rights reserved.
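The quantification step in a qPCR protocol like this one rests on a standard curve relating Ct to log copy number. A minimal sketch with invented standards (not the paper's calibration data):

import numpy as np

log_copies = np.array([6, 5, 4, 3, 2])             # log10 copies per reaction (standards)
ct = np.array([18.1, 21.5, 24.8, 28.2, 31.6])      # measured Ct for those standards

slope, intercept = np.polyfit(log_copies, ct, 1)   # Ct = slope * log10(N) + intercept
efficiency = 10 ** (-1.0 / slope) - 1.0            # ~1.0 corresponds to 100% efficiency
unknown_ct = 26.0
unknown_log_copies = (unknown_ct - intercept) / slope
print(f"efficiency = {efficiency:.2f}, unknown ~ 10^{unknown_log_copies:.2f} copies")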
Process for synthesis of beryllium chloride dietherate
Bergeron, Charles; Bullard, John E.; Morgan, Evan
1991-01-01
A low-temperature method of producing beryllium chloride dietherate through the addition of hydrogen chloride gas to a mixture of beryllium metal in ether in a reaction vessel is described. A reflux condenser provides an exit for hydrogen produced from the reaction. A distillation condenser later replaces the reflux condenser for purifying the resultant product.
The research from this REMAP project produced results that demonstrate various stages of an assessment strategy and produced tools including an inventory classification, field methods and multimetric biotic indices that are now available for use by environmental resource managers...
Accurate modeling and evaluation of microstructures in complex materials
NASA Astrophysics Data System (ADS)
Tahmasebi, Pejman
2018-02-01
Accurate characterization of heterogeneous materials is of great importance for different fields of science and engineering. Such a goal can be achieved through imaging. Acquiring three- or two-dimensional images under different conditions is not, however, always feasible. On the other hand, accurate characterization of complex and multiphase materials requires various digital images (I) under different conditions. An ensemble method is presented that can take a single I (or a set of Is) and stochastically produce several similar models of the given disordered material. The method is based on successive calculation of a conditional probability by which the initial stochastic models are produced. Then, a graph formulation is utilized for removing unrealistic structures. For Is with highly connected microstructure and long-range features, a distance transform function is considered, which results in a new I that is more informative. Reproduction of the I is also considered through a histogram-matching approach in an iterative framework. Such an iterative algorithm avoids reproduction of unrealistic structures. Furthermore, a multiscale approach, based on pyramid representation of the large Is, is presented that can produce materials with millions of pixels in a matter of seconds. Finally, the nonstationary systems, those for which the distribution of data varies spatially, are studied using two different methods. The method is tested on several complex and large examples of microstructures. The produced results are all in excellent agreement with the utilized Is, and the similarities are quantified using various correlation functions.
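The histogram-matching step mentioned above can be sketched in its standard CDF-interpolation form; this is the generic technique, not necessarily the paper's exact iterative implementation.

import numpy as np

def match_histogram(candidate: np.ndarray, reference: np.ndarray) -> np.ndarray:
    c_vals, c_idx, c_counts = np.unique(candidate.ravel(),
                                        return_inverse=True, return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    c_cdf = np.cumsum(c_counts) / candidate.size
    r_cdf = np.cumsum(r_counts) / reference.size
    matched_vals = np.interp(c_cdf, r_cdf, r_vals)   # invert the reference CDF
    return matched_vals[c_idx].reshape(candidate.shape)

rng = np.random.default_rng(1)
cand = rng.normal(100, 10, (64, 64))   # synthetic stochastic model
ref = rng.normal(140, 25, (64, 64))    # synthetic reference image
out = match_histogram(cand, ref)
print(out.mean(), out.std())           # approaches the reference statistics (~140, ~25)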
Sim, K S; Norhisham, S
2016-11-01
A new method based on nonlinear least squares regression (NLLSR) is formulated to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. The estimation of the SNR value based on the NLLSR method is compared with three existing methods: nearest neighbourhood, first-order interpolation, and a combination of both. Samples of SEM images with different textures, contrasts and edges were used to test the performance of the NLLSR method in estimating the SNR values of the SEM images. It is shown that the NLLSR method is able to produce better estimation accuracy than the three existing methods. According to the SNR results obtained from the experiment, the NLLSR method produces an SNR error difference of less than approximately 1% compared with the other three existing methods. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
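As background, the simplest of the benchmark estimators named above, the nearest-neighbourhood method, can be sketched from the image autocorrelation: white noise decorrelates at lag 1, so the lag-1 autocovariance approximates the noise-free variance and the drop from lag 0 to lag 1 approximates the noise variance. NLLSR itself is not reproduced here, and the image is synthetic.

import numpy as np

def snr_nearest_neighbour(img: np.ndarray) -> float:
    x = img.astype(float) - img.mean()
    acf0 = np.mean(x * x)                     # total variance (signal + noise)
    acf1 = np.mean(x[:, :-1] * x[:, 1:])      # lag-1 autocovariance along rows
    noise_var = max(acf0 - acf1, 1e-12)       # white noise vanishes at lag 1
    return acf1 / noise_var

rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:128, 0:128]
signal = np.sin(xx / 8.0) + np.cos(yy / 11.0)     # smooth synthetic "specimen"
noisy = signal + rng.normal(0.0, 0.5, signal.shape)
print(f"estimated SNR ~ {snr_nearest_neighbour(noisy):.2f}")  # true value is 4.0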
Rharmitt, Sanae; Hafidi, Majida; Hajjaj, Hassan; Scordino, Fabio; Giosa, Domenico; Giuffrè, Letterio; Barreca, Davide; Criseo, Giuseppe; Romeo, Orazio
2016-01-18
The isolation of patulin-producing Penicillia in apples collected in different markets in four localities in Morocco is reported. Fungi were identified by β-tubulin sequencing and further characterized using a specific PCR-based method targeting the isoepoxydon dehydrogenase (IDH) gene to discriminate between patulin-producing and non-producing strains. Production of patulin was also evaluated using standard cultural and biochemical methods. Results showed that 79.5% of contaminant fungi belonged to the genus Penicillium and that Penicillium expansum was the most isolated species (83.9%) followed by Penicillium chrysogenum (~9.7%) and Penicillium crustosum (~6.4%). Molecular analysis revealed that 64.5% of the Penicillium species produced the expected IDH-amplicon denoting patulin production in these strains. However, patulin production was not chemically confirmed in all P. expansum strains. The isolation of IDH(-)/patulin(+) strains poses the hypothesis that gentisylaldehyde is not a direct patulin precursor, supporting previous observations that highlighted the importance of the gentisyl alcohol in the production of this mycotoxin. Total agreement between IDH-gene detection and cultural/chemical methods employed was observed in 58% of P. expansum strains and for 100% of the other species isolated. Overall the data reported here showed a substantial genetic variability within P. expansum population from Morocco. Copyright © 2015 Elsevier B.V. All rights reserved.
Zarei, Omid; Dastmalchi, Siavoush; Hamzeh-Mivehroud, Maryam
2016-01-01
Yeasts, especially Saccharomyces cerevisiae, are among the oldest organisms with a broad spectrum of applications, owing to their unique genetics and physiology. Yeast extract, i.e. the product of yeast cells, is extensively used as a nutritional resource in bacterial culture media. The aim of this study was to develop a simple, rapid and cost-effective process to produce yeast extract. In this procedure, mechanical methods such as high temperature and pressure were utilized to produce the yeast extract. The growth of bacteria fed with the produced yeast extract was monitored in order to assess the quality of the product. The results showed that the quality of the produced yeast extract was very promising, as concluded from the growth pattern of bacterial cells in media prepared from this product, and was comparable with that of three commercial yeast extracts in terms of bacterial growth properties. One of the main advantages of the current method is that no chemicals or enzymes were used, leading to a reduced production cost. The method is very simple and cost-effective, and can be performed in a reasonable time, making it suitable for adoption by research laboratories. Furthermore, it can be scaled up to produce large quantities for industrial applications. PMID:28243289
Vertical intensity modulation for improved radiographic penetration and reduced exclusion zone
NASA Astrophysics Data System (ADS)
Bendahan, J.; Langeveld, W. G. J.; Bharadwaj, V.; Amann, J.; Limborg, C.; Nosochkov, Y.
2016-09-01
In the present work, a method to direct the X-ray beam in real time to desired locations in the cargo, to increase penetration and reduce the exclusion zone, is presented. Cargo scanners employ high-energy X-rays to produce radiographic images of the cargo. Most new scanners employ dual energy to produce, in addition to attenuation maps, atomic number information in order to facilitate the detection of contraband. The electron beam producing the bremsstrahlung X-ray beam is usually directed approximately at the center of the container, concentrating the highest X-ray intensity in that area. Other parts of the container are exposed to lower radiation levels due to the large drop-off of the bremsstrahlung radiation intensity as a function of angle, especially at high energies (>6 MV). This results in lower penetration in these areas, requiring higher-power sources that increase the dose and the exclusion zone. The capability to modulate the X-ray source intensity on a pulse-by-pulse basis, to deliver only as much radiation as required to the cargo, has been reported previously. This method is, however, controlled by the most attenuating part of the inspected slice, resulting in excessive radiation to other areas of the cargo. A method to direct a dual-energy beam has been developed to provide a more precisely controlled level of required radiation to highly attenuating areas. The present method is based on steering the dual-energy electron beam using magnetic components on a pulse-to-pulse basis to a fixed location on the X-ray production target, but incident at different angles so as to direct the maximum intensity of the produced bremsstrahlung to the desired locations. Details of the technique, the subsystem, and simulation results are presented.
Production of High-Purity Anhydrous Nickel(II) Perrhenate for Tungsten-Based Sintered Heavy Alloys
Leszczyńska-Sejda, Katarzyna; Benke, Grzegorz; Kopyto, Dorota; Majewski, Tomasz; Drzazga, Michał
2017-01-01
This paper presents a method for the production of high-purity anhydrous nickel(II) perrhenate. The method comprises sorption of nickel(II) ions from aqueous nickel(II) nitrate solutions, using strongly acidic C160 cation exchange resin, and subsequent elution of the sorbed nickel(II) ions using concentrated perrhenic acid solutions. After neutralization of the resulting rhenium-nickel solutions, hydrated nickel(II) perrhenate is separated and then dried at 160 °C to obtain the anhydrous form. The resulting compound is reduced in an atmosphere of dissociated ammonia in order to produce a Re-Ni alloy powder. This study provides information on selected properties of the resulting Re-Ni powder. This powder was used as a starting material for the production of 77W-20Re-3Ni heavy alloys. Microstructure examination results and selected properties of the produced sintered heavy alloys were compared to sintered alloys produced using elemental W, Re, and Ni powders. This study showed that the application of anhydrous nickel(II) perrhenate in the production of 77W-20Re-3Ni results in better properties of the sintered alloys compared to those made from elemental powders. PMID:28772808
Preventing collapse of external auditory meatus during audiometry.
Pearlman, R C
1975-11-01
Occlusion of the external auditory meatus resulting from earphone pressure can produce a pseudoconductive hearing loss. I describe a method for detecting ear canal collapse by otoscopy and I suggest a method of correcting the problem with a polyethylene tube prosthesis.
Friend, Julie; Elander, Richard T.; Tucker, III; Melvin P.; Lyons, Robert C.
2010-10-26
A method for treating biomass was developed that uses an apparatus which moves a biomass and dilute aqueous ammonia mixture through reaction chambers without compaction. The apparatus moves the biomass using a non-compressing piston. The resulting treated biomass is saccharified to produce fermentable sugars.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strom, Daniel J.; Joyce, Kevin E.; Maclellan, Jay A.
2012-04-17
In making low-level radioactivity measurements of populations, it is commonly observed that a substantial portion of net results are negative. Furthermore, the observed variance of the measurement results arises from a combination of measurement uncertainty and population variability. This paper presents a method for disaggregating measurement uncertainty from population variability to produce a probability density function (PDF) of possibly true results. To do this, simple, justifiable, and reasonable assumptions are made about the relationship of the measurements to the measurands (the 'true values'). The measurements are assumed to be unbiased, that is, that their average value is the average of the measurands. Using traditional estimates of each measurement's uncertainty to disaggregate population variability from measurement uncertainty, a PDF of measurands for the population is produced. Then, using Bayes's theorem, the same assumptions, and all the data from the population of individuals, a posterior PDF is computed for each individual's measurand. These PDFs are non-negative, and their average is equal to the average of the measurement results for the population. The uncertainty in these Bayesian posterior PDFs is all Berkson with no remaining classical component. The methods are applied to baseline bioassay data from the Hanford site. The data include 90Sr urinalysis measurements on 128 people, 137Cs in vivo measurements on 5,337 people, and 239Pu urinalysis measurements on 3,270 people. The method produces excellent results for the 90Sr and 137Cs measurements, since there are nonzero concentrations of these global fallout radionuclides in people who have not been occupationally exposed. The method does not work for the 239Pu measurements in non-occupationally exposed people because the population average is essentially zero.
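A much-simplified sketch of the disaggregation idea, using a normal-normal empirical Bayes model: the population variance is estimated as the observed variance minus the mean squared measurement uncertainty, and each individual's posterior shrinks the measurement toward the population mean. The paper's actual prior is constrained to non-negative measurands, so this toy model is illustrative only, with invented data.

import numpy as np

def eb_posteriors(x: np.ndarray, s: np.ndarray):
    """x: net measurement results; s: their reported 1-sigma uncertainties."""
    mu = x.mean()
    tau2 = max(x.var(ddof=1) - np.mean(s**2), 0.0)  # disaggregated population variability
    w = tau2 / (tau2 + s**2)                        # shrinkage weights
    post_mean = w * x + (1 - w) * mu                # posterior mean per individual
    post_sd = np.sqrt(w * s**2)                     # posterior standard deviation
    return post_mean, post_sd

rng = np.random.default_rng(2)
true_vals = np.abs(rng.normal(0.5, 0.3, 200))       # non-negative measurands
s = np.full(200, 0.8)                               # measurement sigma >> variability
x = true_vals + rng.normal(0, s)                    # many net results come out negative
pm, psd = eb_posteriors(x, s)
print(f"{(x < 0).mean():.0%} of raw results negative; "
      f"{(pm < 0).mean():.0%} of posterior means negative")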
Chemical method for producing smooth surfaces on silicon wafers
Yu, Conrad
2003-01-01
An improved method for producing optically smooth surfaces in silicon wafers during wet chemical etching involves a pre-treatment rinse of the wafers before etching and a post-etching rinse. The pre-treatment with an organic solvent provides a well-wetted surface that ensures uniform mass transfer during etching, which results in optically smooth surfaces. The post-etching treatment with an acetic acid solution stops the etching instantly, preventing any uneven etching that leads to surface roughness. This method can be used to etch silicon surfaces to a depth of 200 µm or more, while the finished surfaces have a surface roughness of only 15-50 Å (RMS).
Methods for the synthesis of deuterated vinyl pyridine monomers
Hong, Kunlun; Yang, Jun; Bonnesen, Peter V
2014-02-25
Methods for synthesizing deuterated vinylpyridine compounds of the Formula (1), wherein the method includes: (i) deuterating an acyl pyridine of the Formula (2) in the presence of a metal catalyst and D₂O, wherein the metal catalyst is active for hydrogen exchange in water, to produce a deuterated acyl compound of Formula (3); (ii) reducing the compound of Formula (3) with a deuterated reducing agent to convert the acyl group to an alcohol group; and (iii) dehydrating the compound produced in step (ii) with a dehydrating agent to afford the vinylpyridine compound of Formula (1). The resulting deuterated vinylpyridine compounds are also described.
Methods for the synthesis of deuterated vinyl pyridine monomers
Hong, Kunlun; Yang, Jun; Bonnesen, Peter V
2015-01-13
Methods for synthesizing deuterated vinylpyridine compounds of the Formula (1), wherein the method includes: (i) deuterating an acyl pyridine of the Formula (2) in the presence of a metal catalyst and D₂O, wherein the metal catalyst is active for hydrogen exchange in water, to produce a deuterated acyl compound of Formula (3); (ii) reducing the compound of Formula (3) with a deuterated reducing agent to convert the acyl group to an alcohol group; and (iii) dehydrating the compound produced in step (ii) with a dehydrating agent to afford the vinylpyridine compound of Formula (1). The resulting deuterated vinylpyridine compounds are also described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blue, C.A.; Sikka, V.K.; Chun, Jung-Hoon
1997-04-01
The uniform-droplet process is a new method of liquid-metal atomization that results in single droplets that can be used to produce mono-size powders or sprayed onto substrates to produce near-net shapes with tailored microstructure. The mono-sized powder-production capability of the uniform-droplet process also has the potential of permitting engineered powder blends to produce components of controlled porosity. Metal and alloy powders are commercially produced by at least three different methods: gas atomization, water atomization, and rotating disk. All three methods produce powders of a broad range in size with a very small yield of fine powders. The economic analysis has shown the process to have the potential of reducing capital cost by 50% and operating cost by 37.5% when applied to powder making. For the spray-forming process, a 25% savings is expected in both the capital and operating costs. The project is jointly carried out at the Massachusetts Institute of Technology (MIT), Tufts University, and Oak Ridge National Laboratory (ORNL). Preliminary interactions with both finished-parts and powder producers have shown a strong interest in the uniform-droplet process. Systematic studies are being conducted to optimize the process parameters, understand the solidification of droplets and spray deposits, and develop a uniform-droplet-system (UDS) apparatus appropriate for processing engineering alloys.
Boehm, A.B.; Griffith, J.; McGee, C.; Edge, T.A.; Solo-Gabriele, H. M.; Whitman, R.; Cao, Y.; Getrich, M.; Jay, J.A.; Ferguson, D.; Goodwin, K.D.; Lee, C.M.; Madison, M.; Weisberg, S.B.
2009-01-01
Aims: The absence of standardized methods for quantifying faecal indicator bacteria (FIB) in sand hinders comparison of results across studies. The purpose of the study was to compare methods for extraction of faecal bacteria from sands and recommend a standardized extraction technique. Methods and Results: Twenty-two methods of extracting enterococci and Escherichia coli from sand were evaluated, including multiple permutations of hand shaking, mechanical shaking, blending, sonication, number of rinses, settling time, eluant-to-sand ratio, eluant composition, prefiltration and type of decantation. Tests were performed on sands from California, Florida and Lake Michigan. Most extraction parameters did not significantly affect bacterial enumeration. ANOVA revealed significant effects of eluant composition and blending, with both sodium metaphosphate buffer and blending producing reduced counts. Conclusions: The simplest extraction method that produced the highest FIB recoveries consisted of 2 min of hand shaking in phosphate-buffered saline or deionized water, a 30-s settling time, a one-rinse step, and a 10:1 eluant volume to sand weight ratio. This result was consistent across the sand compositions tested in this study but could vary for other sand types. Significance and Impact of the Study: Method standardization will improve the understanding of how sands affect surface water quality. © 2009 The Society for Applied Microbiology.
Mehl, S.; Hill, M.C.
2002-01-01
A new method of local grid refinement for two-dimensional block-centered finite-difference meshes is presented in the context of steady-state groundwater-flow modeling. The method uses an iteration-based feedback with shared nodes to couple two separate grids. The new method is evaluated by comparison with results using a uniform fine mesh, a variably spaced mesh, and a traditional method of local grid refinement without a feedback. Results indicate: (1) The new method exhibits quadratic convergence for homogeneous systems and convergence equivalent to uniform-grid refinement for heterogeneous systems. (2) Coupling the coarse grid with the refined grid in a numerically rigorous way allowed for improvement in the coarse-grid results. (3) For heterogeneous systems, commonly used linear interpolation of heads from the large model onto the boundary of the refined model produced heads that are inconsistent with the physics of the flow field. (4) The traditional method works well in situations where the better resolution of the locally refined grid has little influence on the overall flow-system dynamics, but if this is not true, lack of a feedback mechanism produced errors in head up to 3.6% and errors in cell-to-cell flows up to 25%. © 2002 Elsevier Science Ltd. All rights reserved.
Sohaib, Ali; Farooq, Abdul R; Atkinson, Gary A; Smith, Lyndon N; Smith, Melvyn L; Warr, Robert
2013-03-01
This paper proposes and describes an implementation of a photometric stereo-based technique for in vivo assessment of three-dimensional (3D) skin topography in the presence of interreflections. The proposed method illuminates skin with red, green, and blue colored lights and uses the resulting variation in surface gradients to mitigate the effects of interreflections. Experiments were carried out on Caucasian, Asian, and African American subjects to demonstrate the accuracy of our method and to validate the measurements produced by our system. Our method produced significant improvement in 3D surface reconstruction for all Caucasian, Asian, and African American skin types. The results also illustrate the differences in recovered skin topography due to the nondiffuse bidirectional reflectance distribution function (BRDF) for each color illumination used, which also concur with the existing multispectral BRDF data available for skin.
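For readers unfamiliar with the underlying technique, the following is a minimal three-source photometric stereo sketch: one grayscale intensity per light direction, a Lambertian surface, and no interreflection handling (the paper's colored-light interreflection mitigation is not reproduced here).

```python
import numpy as np

def photometric_stereo(images, lights):
    """images: (3, H, W) intensities; lights: (3, 3), rows are unit light directions."""
    h, w = images.shape[1:]
    I = images.reshape(3, -1)                 # stack pixels as columns
    G = np.linalg.solve(lights, I)            # G = albedo * normal, per pixel
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-12)   # unit surface normals
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```

Surface gradients (and hence depth, by integration) follow from the recovered normals; the paper's contribution is in how the per-channel variation of these gradients is used to reduce interreflection error.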
An effective hair detection algorithm for dermoscopic melanoma images of skin lesions
NASA Astrophysics Data System (ADS)
Chakraborti, Damayanti; Kaur, Ravneet; Umbaugh, Scott; LeAnder, Robert
2016-09-01
Dermoscopic images are obtained using the method of skin surface microscopy. Pigmented skin lesions are evaluated in terms of texture features such as color and structure. Artifacts, such as hairs, bubbles, black frames, ruler-marks, etc., create obstacles that prevent accurate detection of skin lesions by both clinicians and computer-aided diagnosis. In this article, we propose a new algorithm for the automated detection of hairs, using an adaptive Canny edge-detection method, followed by morphological filtering and an arithmetic addition operation. The algorithm was applied to 50 dermoscopic melanoma images. In order to ascertain this method's relative detection accuracy, it was compared to the Razmjooy hair-detection method [1], using segmentation error (SE), true detection rate (TDR) and false positive rate (FPR). The new method produced 6.57% SE, 96.28% TDR and 3.47% FPR, compared to 15.751% SE, 86.29% TDR and 11.74% FPR produced by the Razmjooy method [1]. Because of the 7.27-9.99% improvement in those parameters, we conclude that the new algorithm produces much better results for detecting thick, thin, dark and light hairs. The new method proposed here shows an appreciable difference in the rate of detecting bubbles, as well.
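A rough sketch of the pipeline described above, using OpenCV: adaptive Canny edge detection, morphological filtering, and an arithmetic addition step. The thresholds and kernel sizes are illustrative guesses, not the paper's parameters.

```python
import cv2
import numpy as np

def detect_hairs(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    med = np.median(gray)                 # adapt Canny thresholds to the image
    lo = int(max(0, 0.66 * med))
    hi = int(min(255, 1.33 * med))
    edges = cv2.Canny(gray, lo, hi)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    # Close gaps along thin hair edges, then dilate to cover the full hair width.
    mask = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
    mask = cv2.dilate(mask, kernel, iterations=1)
    # Arithmetic addition brightens masked pixels so hairs stand out from the lesion.
    highlighted = cv2.add(gray, mask)
    return mask, highlighted
```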
2013-01-01
Background Many proteins and peptides have been used in therapeutic or industrial applications. They are often produced in microbial production hosts by fermentation. Robust protein production in the hosts and efficient downstream purification are two critical factors that could significantly reduce the cost of microbial protein production by fermentation. Producing proteins/peptides as inclusion bodies in the hosts has the potential to achieve both high titers in fermentation and cost-effective downstream purification. Manipulation of the host cells, such as overexpression or deletion of certain genes, can lead to producing more and/or denser inclusion bodies. However, there are few screening methods available to identify beneficial genetic changes that yield more protein and/or denser inclusion bodies. Results We report the development and optimization of a simple density gradient method that can be used for distinguishing and sorting E. coli cells with different buoyant densities. We demonstrate utilization of the method to screen genetic libraries to identify (a) expression of the glyQS loci on a plasmid that increased expression of a peptide of interest as well as the buoyant density of inclusion-body-producing E. coli cells; and (b) deletion of the host gltA gene that increased the buoyant density of the inclusion body produced in the E. coli cells. Conclusion A novel density gradient sorting method was developed to screen genetic libraries. Beneficial host genetic changes could be exploited to improve recombinant protein expression as well as downstream protein purification. PMID:23638724
Coggins, Christopher R E; Merski, Jerome A; Oldham, Michael J
2013-01-01
Recent technological advances allow ventilation holes in (or adjacent to) cigarette filters to be produced using lasers instead of using the mechanical procedures of earlier techniques. Analytical chemistry can be used to compare the composition of mainstream smoke from experimental cigarettes having filters with mechanically produced ventilation holes to that of cigarettes with ventilation holes that were produced using laser technology. Established procedures were used to analyze the smoke composition of 38 constituents of mainstream smoke generated using standard conditions. There were no differences between the smoke composition of cigarettes with filter ventilation holes that were produced mechanically or through use of laser technology. The two methods for producing ventilation holes in cigarette filters are equivalent in terms of resulting mainstream smoke chemistry, at two quite different filter ventilation percentages.
Remote sensing imagery classification using multi-objective gravitational search algorithm
NASA Astrophysics Data System (ADS)
Zhang, Aizhu; Sun, Genyun; Wang, Zhenjie
2016-10-01
Simultaneous optimization of different validity measures can capture different data characteristics of remote sensing imagery (RSI) and thereby achieve high-quality classification results. In this paper, two conflicting cluster validity indices, the Xie-Beni (XB) index and the fuzzy C-means (FCM) (Jm) measure, are integrated with a diversity-enhanced and memory-based multi-objective gravitational search algorithm (DMMOGSA) to present a novel multi-objective optimization based RSI classification method. In this method, the Gabor filter method is first implemented to extract texture features of the RSI. Then, the texture features are combined with the spectral features to construct the spatial-spectral feature set of the RSI. Afterwards, clustering of the spectral-spatial feature set is carried out: cluster centers are randomly generated initially and then updated and optimized adaptively by the DMMOGSA. Accordingly, a set of non-dominated cluster centers is obtained, producing a number of classification results from which users can pick the most promising one according to their problem requirements. To quantitatively and qualitatively validate its effectiveness, the proposed method was applied to classify two aerial high-resolution remote sensing images. The obtained classification results were compared with those produced by two single-validity-index-based methods and by two state-of-the-art multi-objective optimization based classification methods. The comparison shows that the proposed method can achieve more accurate RSI classification.
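The two validity measures named above have standard definitions, sketched below for a fuzzy partition with memberships U (shape c × n), centers V (c × d), and data X (n × d); this is a generic illustration, not the paper's code.

```python
import numpy as np

def fcm_jm(X, V, U, m=2.0):
    """FCM objective Jm: membership-weighted sum of squared distances."""
    d2 = ((X[None, :, :] - V[:, None, :]) ** 2).sum(axis=2)  # (c, n) squared distances
    return float((U ** m * d2).sum())

def xie_beni(X, V, U):
    """Xie-Beni index: compactness (Jm with m=2) over n times minimum center separation."""
    sep = ((V[None, :, :] - V[:, None, :]) ** 2).sum(axis=2)
    np.fill_diagonal(sep, np.inf)          # ignore each center's distance to itself
    return fcm_jm(X, V, U, m=2.0) / (X.shape[0] * sep.min())
```

Minimizing Jm favors compact clusters, while minimizing XB additionally rewards well-separated centers; the two pull in different directions, which is why the paper treats them as conflicting objectives.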
Measurement of "total" microcystins using the MMPB/LC/MS ...
The detection and quantification of microcystins, a family of toxins associated with harmful algal blooms, is complicated by their structural diversity and a lack of commercially available analytical standards for method development. As a result, most detection methods have focused on either a subset of microcystin congeners, as in US EPA Method 544, or on techniques which are sensitive to structural features common to most microcystins, as in the anti-ADDA ELISA method. A recent development has been the use of 2-methyl-3-methoxy-4-phenylbutyric acid (MMPB), which is produced by chemical oxidation of the ADDA moiety in most microcystin congeners, as a proxy for the sum of congeners present. Conditions for the MMPB derivatization were evaluated and applied to water samples obtained from various HAB-impacted surface waters, and results were compared with congener-based LC/MS/MS and ELISA methods.
Models of convection-driven tectonic plates - A comparison of methods and results
NASA Technical Reports Server (NTRS)
King, Scott D.; Gable, Carl W.; Weinstein, Stuart A.
1992-01-01
Recent numerical studies of convection in the earth's mantle have included various features of plate tectonics. This paper describes three methods of modeling plates: through material properties, through force balance, and through a thin power-law sheet approximation. Results obtained using each method are compared on a series of simple calculations, and from these results scaling relations between the different parameterizations are developed. While each method produces different degrees of deformation within the surface plate, the surface heat flux and average plate velocity agree to within a few percent. The main results are not dependent upon the plate modeling method and therefore are representative of the physical system modeled.
Moon, Jordan R; Tobkin, Sarah E; Smith, Abbie E; Roberts, Michael D; Ryan, Eric D; Dalbo, Vincent J; Lockwood, Chris M; Walter, Ashley A; Cramer, Joel T; Beck, Travis W; Stout, Jeffrey R
2008-01-01
Background Methods used to estimate percent body fat can be classified as laboratory or field techniques. However, the validity of these methods compared to multiple-compartment models has not been fully established. The purpose of this study was to determine the validity of field and laboratory methods for estimating percent fat (%fat) in healthy college-age men compared to the Siri three-compartment model (3C). Methods Thirty-one Caucasian men (22.5 ± 2.7 yrs; 175.6 ± 6.3 cm; 76.4 ± 10.3 kg) had their %fat estimated by bioelectrical impedance analysis (BIA) using the BodyGram™ computer program (BIA-AK) and a population-specific equation (BIA-Lohman), near-infrared interactance (NIR) (Futrex® 6100/XL), four circumference-based military equations [Marine Corps (MC), Navy and Air Force (NAF), Army (A), and Friedl], air-displacement plethysmography (BP), and hydrostatic weighing (HW). Results All circumference-based military equations (MC = 4.7% fat, NAF = 5.2% fat, A = 4.7% fat, Friedl = 4.7% fat) along with NIR (NIR = 5.1% fat) produced an unacceptable total error (TE). Both laboratory methods produced acceptable TE values (HW = 2.5% fat; BP = 2.7% fat). The BIA-AK and BIA-Lohman field methods also produced acceptable TE values (2.1% fat). A significant difference was observed for the MC and NAF equations compared to both the 3C model and HW (p < 0.006). Conclusion Results indicate that BP and HW are valid laboratory methods when compared to the 3C model to estimate %fat in college-age Caucasian men. When the use of a laboratory method is not feasible, BIA-AK and BIA-Lohman are acceptable field methods to estimate %fat in this population. PMID:18426582
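For reference, total error as commonly defined in body-composition validation studies is the RMS deviation of a method's estimates from the criterion values (here, the 3C model); a minimal sketch:

```python
import numpy as np

def total_error(estimates, criterion):
    """RMS deviation of method estimates from criterion (e.g., 3C model) values."""
    e = np.asarray(estimates) - np.asarray(criterion)
    return float(np.sqrt(np.mean(e ** 2)))
```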
Fracture toughness of advanced ceramics at room temperature
NASA Technical Reports Server (NTRS)
Quinn, George D.; Salem, Jonathan; Bar-On, Isa; Cho, Kyu; Foley, Michael; Fang, HO
1992-01-01
Results of round-robin fracture toughness tests on advanced ceramics are reported. A gas-pressure silicon nitride and a zirconia-toughened alumina were tested using three test methods: indentation fracture, indentation strength, and single-edge precracked beam. The latter two methods have produced consistent results. The interpretation of fracture toughness test results for the zirconia alumina composite is shown to be complicated by R-curve and environmentally assisted crack growth phenomena.
DHAD variants and methods of screening
Kelly, Kristen J.; Ye, Rick W.
2017-02-28
Methods of screening for dihydroxy-acid dehydratase (DHAD) variants that display increased DHAD activity are disclosed, along with DHAD variants identified by these methods. Such enzymes can result in increased production of compounds from DHAD-requiring biosynthetic pathways. Also disclosed are isolated nucleic acids encoding the DHAD variants, recombinant host cells comprising the isolated nucleic acid molecules, and methods of producing butanol.
Liu, Rui-Sang; Jin, Guang-Huai; Xiao, Deng-Rong; Li, Hong-Mei; Bai, Feng-Wu; Tang, Ya-Jie
2015-01-01
Aroma results from the interplay of volatile organic compounds (VOCs), and the attributes of microbially produced aromas are significantly affected by fermentation conditions. Among the VOCs, only a few contribute to aroma, so screening and identification of the key VOCs is critical for microbial aroma production. The traditional method is based on gas chromatography-olfactometry (GC-O), which is time-consuming and laborious. Taking the Tuber melanosporum fermentation system as an example, a new method to screen and identify the key VOCs by combining an aroma evaluation method with principal component analysis (PCA) was developed in this work. First, an aroma sensory evaluation method was developed to screen 34 potential favorite aroma samples from 504 fermentation samples. Second, PCA was employed to screen nine common key VOCs from these 34 samples. Third, seven key VOCs were identified by the traditional method. Finally, all seven key VOCs identified by the traditional method were also identified, along with four others, by the new strategy. These results indicate the reliability of the new method and demonstrate it to be a viable alternative to the traditional method. PMID:26655663
Yang, Dan-Dan; Li, Qian; Huang, Jing-Jing; Chen, Min
2012-11-01
Soil and saline water samples were collected from the Daishan Saltern of East China, and halophilic bacteria were isolated and cultured using selective media, with the aim of investigating the diversity and enzyme-producing activity of culturable halophilic bacteria in a saltern environment. A total of 181 strains were isolated by the culture-dependent method. Specific primers were used to amplify the 16S rRNA genes of bacteria and archaea. Operational taxonomic units (OTUs) were determined by the ARDRA method, and the representative strain of each OTU was sequenced. The phylogenetic position of all isolated strains was determined by 16S rRNA sequencing. The results showed that the 181 isolated strains fell into 21 OTUs, of which 12 belonged to halophilic bacteria and the others to halophilic archaea. Phylogenetic analysis indicated that 7 genera were present in the halophilic bacteria group and 4 genera in the halophilic archaea group. The dominant strains were of Halomonas and Haloarcula, accounting for 46.8% of the halophilic bacteria and 49.1% of the halophilic archaea, respectively. Enzyme-producing analysis indicated that most strains displayed enzyme-producing activity, including amylase, proteinase and lipase activities, and the dominant enzyme-producing strains were of Haloarcula. Our results show that the Daishan Saltern harbors a high diversity of halophilic bacteria and is a promising source for screening enzyme-producing bacterial strains.
Calculation of forces on magnetized bodies using COSMIC NASTRAN
NASA Technical Reports Server (NTRS)
Sheerer, John
1987-01-01
The methods described may be used with a high degree of confidence for calculations of magnetic traction forces normal to a surface. In this circumstance all models agree, and test cases have resulted in theoretically correct results. It is shown that the tangential forces are in practice negligible. The surface pole method is preferable to the virtual work method because of the necessity for more than one NASTRAN run in the latter case, and because distributed forces are obtained. The derivation of local forces from the Maxwell stress method involves an undesirable degree of manipulation of the problem and produces a result in contradiction with the surface pole method.
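For reference, the normal traction in such calculations is commonly written via the Maxwell stress tensor as below (SI units); this is the standard textbook relation for the pull on a surface with normal flux density B_n, added here for context rather than quoted from the abstract.

```latex
% Normal magnetic traction (force per unit area) on a surface
% carrying normal flux density B_n, from the Maxwell stress tensor:
P_n = \frac{B_n^2}{2\mu_0}
```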
A compendium of controlled diffusion blades generated by an automated inverse design procedure
NASA Technical Reports Server (NTRS)
Sanz, Jose M.
1989-01-01
A set of sample cases was produced to test an automated design procedure developed at the NASA Lewis Research Center for the design of controlled diffusion blades. The range of application of the automated design procedure is documented. The results presented include characteristic compressor and turbine blade sections produced with the automated design code as well as various other airfoils produced with the base design method prior to the incorporation of the automated procedure.
Coherent Lienard-Wiechert fields produced by free electron lasers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elias, L.R.; Gallardo, J.C.
1981-12-01
Results are presented here of a three-dimensional numerical analysis of the radiation fields produced in a free electron laser. The method used here to obtain the spatial and temporal behavior of the radiated fields is based on the coherent superposition of the exact Lienard-Wiechert fields produced by each electron in the beam. Interference effects are responsible for the narrow angular radiation patterns obtained and for the high degree of monochromaticity of the radiated field.
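For context, the standard textbook form of the Lienard-Wiechert fields whose coherent superposition is described above is given below (SI units); this block is added for reference and is not quoted from the abstract.

```latex
% Lienard-Wiechert fields of a point charge q, evaluated at the retarded time;
% n is the unit vector from the charge to the observer, beta = v/c,
% R is the retarded distance:
\mathbf{E}(\mathbf{r},t) = \frac{q}{4\pi\varepsilon_0}
  \left[ \frac{\mathbf{n}-\boldsymbol{\beta}}
              {\gamma^{2}\,(1-\mathbf{n}\cdot\boldsymbol{\beta})^{3} R^{2}}
       + \frac{\mathbf{n}\times\big((\mathbf{n}-\boldsymbol{\beta})
              \times\dot{\boldsymbol{\beta}}\big)}
              {c\,(1-\mathbf{n}\cdot\boldsymbol{\beta})^{3} R} \right]_{\mathrm{ret}},
\qquad
\mathbf{B}(\mathbf{r},t) = \frac{\mathbf{n}\times\mathbf{E}}{c}
```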
The effect of the blackout method on acquisition and generalization
Wildemann, Donald G.; Holland, James G.
1973-01-01
In discrimination training with the Lyons' blackout method, pecks to the negative stimulus are prevented by darkening the chamber each time the subject approaches the negative stimulus. Stimulus generalization along a stimulus dimension was measured after training with this method. For comparison, generalization was also measured after reinforced responding to the positive stimulus without discrimination training, and after discrimination training by extinction of pecks to the negative stimulus. The blackout procedure and the extinction of pecks to the negative stimulus both produced a peak shift in the generalization gradients. The results suggest that after discrimination training in which the positive and negative stimulus are on the same continuum, the blackout method produces extinction-like effects on generalization tests. PMID:16811655
Fast Construction of Near Parsimonious Hybridization Networks for Multiple Phylogenetic Trees.
Mirzaei, Sajad; Wu, Yufeng
2016-01-01
Hybridization networks represent plausible evolutionary histories of species that are affected by reticulate evolutionary processes. An established computational problem on hybridization networks is constructing the most parsimonious hybridization network such that each of the given phylogenetic trees (called gene trees) is "displayed" in the network. There have been several previous approaches to this NP-hard problem, including an exact method and several heuristics. However, the exact method is only applicable to a limited range of data, and the heuristic methods can be less accurate and sometimes slow. In this paper, we develop a new algorithm for constructing near-parsimonious networks for multiple binary gene trees. The method is more efficient for large numbers of gene trees than previous heuristics, and it produces more parsimonious results than a previous method on many simulated datasets as well as a real biological dataset. We also show that our method produces topologically more accurate networks for many datasets.
Comparative method of protein expression and isolation of EBV epitope in E.coli DH5α
NASA Astrophysics Data System (ADS)
Anyndita, Nadya V. M.; Dluha, Nurul; Himmah, Karimatul; Rifa'i, Muhaimin; Widodo
2017-11-01
Epstein-Barr virus (EBV), or human herpesvirus 4 (HHV-4), is a virus that infects human B cells and can lead to nasopharyngeal carcinoma (NPC). Prevention of this disease remains unsuccessful because no vaccine has yet been developed. The objective of this study was to over-produce the EBV gp350/220 epitope in E. coli DH5α using several methods. EBV epitope sequences were inserted into the pMAL-p5x vector, transformed into E. coli DH5α, and over-produced using 0.3, 1 and 2 mM IPTG. Plasmid transformation was validated using the AflIII restriction enzyme in 0.8% agarose. Periplasmic protein was isolated using two comparative methods and then analyzed by SDS-PAGE. Method A produced a protein band around 50 kDa that appeared only in the transformant. Method B failed to isolate the protein, as indicated by the absence of a protein band. In addition, varying the IPTG concentration did not change the result; thus, even the lowest IPTG concentration was able to induce protein expression.
Henry, David; Dymnicki, Allison B.; Mohatt, Nathaniel; Allen, James; Kelly, James G.
2016-01-01
Qualitative methods potentially add depth to prevention research, but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data, but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-Means clustering, and latent class analysis produced similar levels of accuracy with binary data, and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a “real-world” example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities. PMID:25946969
Henry, David; Dymnicki, Allison B; Mohatt, Nathaniel; Allen, James; Kelly, James G
2015-10-01
Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed-methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities.
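A toy version of the simulation described above: generate binary "code profiles" for two latent participant groups, cluster them with two of the methods named (latent class analysis is omitted, as it is not in scikit-learn), and score recovery of the true groups. The group sizes and code probabilities below are invented.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(1)
# Per-group probabilities that each of four qualitative codes is present.
p = np.array([[0.8, 0.7, 0.2, 0.1],
              [0.2, 0.1, 0.8, 0.7]])
truth = rng.integers(0, 2, size=50)                  # n = 50 participants
X = (rng.random((50, 4)) < p[truth]).astype(int)     # binary coded interviews

for name, model in [("hierarchical", AgglomerativeClustering(n_clusters=2)),
                    ("k-means", KMeans(n_clusters=2, n_init=10, random_state=1))]:
    labels = model.fit_predict(X)
    print(name, adjusted_rand_score(truth, labels))  # 1.0 = perfect recovery
```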
Iterative CT reconstruction using coordinate descent with ordered subsets of data
NASA Astrophysics Data System (ADS)
Noo, F.; Hahn, K.; Schöndube, H.; Stierstorfer, K.
2016-04-01
Image reconstruction based on iterative minimization of a penalized weighted least-squares criterion has become an important topic of research in X-ray computed tomography. This topic is motivated by increasing evidence that such a formalism may enable a significant reduction in dose imparted to the patient while maintaining or improving image quality. One important issue associated with this iterative image reconstruction concept is slow convergence and the associated computational effort. For this reason, there is interest in finding methods that produce approximate versions of the targeted image with a small number of iterations and an acceptable level of discrepancy. We introduce here a novel method to produce such approximations: ordered subsets in combination with iterative coordinate descent. Preliminary results demonstrate that this method can produce, within 10 iterations and using only a constant image as initial condition, satisfactory reconstructions that retain the noise properties of the targeted image.
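A minimal sketch of coordinate descent with ordered subsets on a generic, unpenalized and unweighted linear least-squares problem; the paper's penalized weighted criterion and CT system model are not reproduced, and the subset scheme here (strided rows) is just one simple choice.

```python
import numpy as np

def os_icd(A, y, n_subsets=4, n_iters=10):
    """Cycle over ordered subsets of rows, updating one coordinate at a time."""
    n, m = A.shape
    x = np.zeros(m)
    subsets = [np.arange(s, n, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iters):
        for S in subsets:                   # each pass sees only one subset of data
            As = A[S]
            rs = y[S] - As @ x              # residual on the subset
            for j in range(m):              # coordinate-by-coordinate update
                aj = As[:, j]
                denom = aj @ aj
                if denom > 0:
                    step = (aj @ rs) / denom
                    x[j] += step
                    rs -= step * aj
    return x

# Consistent test system: the error should shrink toward zero with iterations.
rng = np.random.default_rng(0)
A = rng.random((100, 8))
x_true = rng.random(8)
print(np.abs(os_icd(A, A @ x_true, n_iters=50) - x_true).max())
```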
Method and apparatus for fault tolerance
NASA Technical Reports Server (NTRS)
Masson, Gerald M. (Inventor); Sullivan, Gregory F. (Inventor)
1993-01-01
A method and apparatus for achieving fault tolerance in a computer system having at least a first central processing unit and a second central processing unit. The method comprises the steps of first executing a first algorithm in the first central processing unit on input which produces a first output as well as a certification trail. Next, executing a second algorithm in the second central processing unit on the input and on at least a portion of the certification trail which produces a second output. The second algorithm has a faster execution time than the first algorithm for a given input. Then, comparing the first and second outputs such that an error result is produced if the first and second outputs are not the same. The step of executing a first algorithm and the step of executing a second algorithm preferably takes place over essentially the same time period.
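An illustrative sketch of the certification-trail idea using sorting, a commonly used example for this technique (this is not the patent's text): the first algorithm sorts and emits a trail (the permutation it applied); the second reproduces the output faster by applying and checking the trail; the two outputs are then compared.

```python
def first_algorithm(data):
    """Slower algorithm: sorts in O(n log n) and emits a certification trail."""
    trail = sorted(range(len(data)), key=data.__getitem__)
    return [data[i] for i in trail], trail

def second_algorithm(data, trail):
    """Faster algorithm: uses the trail in O(n), verifying it along the way."""
    out = [data[i] for i in trail]
    assert sorted(trail) == list(range(len(data))), "trail is not a permutation"
    assert all(out[k] <= out[k + 1] for k in range(len(out) - 1)), "trail gives unsorted output"
    return out

data = [5, 3, 9, 1]
out1, trail = first_algorithm(data)
out2 = second_algorithm(data, trail)
print("error detected" if out1 != out2 else "outputs agree")
```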
Analysis of Wien filter spectra from Hall thruster plumes.
Huang, Wensheng; Shastry, Rohit
2015-07-01
A method for analyzing the Wien filter spectra obtained from the plumes of Hall thrusters is derived and presented. The new method extends upon prior work by deriving the integration equations for the current and species fractions. Wien filter spectra from the plume of the NASA-300M Hall thruster are analyzed with the presented method and the results are used to examine key trends. The new integration method is found to produce results slightly different from the traditional area-under-the-curve method. The use of different velocity distribution forms when performing curve-fits to the peaks in the spectra is compared. Additional comparison is made with the scenario where the current fractions are assumed to be proportional to the heights of peaks. The comparison suggests that the calculated current fractions are not sensitive to the choice of form as long as both the height and width of the peaks are accounted for. Conversely, forms that only account for the height of the peaks produce inaccurate results. Also presented are the equations for estimating the uncertainty associated with applying curve fits and charge-exchange corrections. These uncertainty equations can be used to plan the geometry of the experimental setup.
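The point about height versus width can be illustrated with a simple peak-fitting sketch: each species peak in a hypothetical two-peak spectrum is modeled as a Gaussian, and current fractions are taken from integrated areas rather than peak heights. The velocity axis and amplitudes below are invented, and the velocity distribution form is just one of the choices discussed above.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(v, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((v - mu) / sigma) ** 2)

def two_gaussians(v, a1, m1, s1, a2, m2, s2):
    return gaussian(v, a1, m1, s1) + gaussian(v, a2, m2, s2)

def peak_area(amp, sigma):
    return amp * abs(sigma) * np.sqrt(2.0 * np.pi)   # analytic Gaussian integral

v = np.linspace(0.0, 10.0, 500)                      # hypothetical velocity axis
spec = gaussian(v, 1.0, 4.0, 0.3) + gaussian(v, 0.4, 6.0, 0.6)

popt, _ = curve_fit(two_gaussians, v, spec, p0=[0.8, 3.8, 0.4, 0.3, 6.2, 0.5])
areas = np.array([peak_area(popt[0], popt[2]), peak_area(popt[3], popt[5])])
print(areas / areas.sum())   # ~[0.56, 0.44]; heights alone (1.0 vs 0.4) would overstate peak 1
```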
Chen, How-Ji; Chang, Sheng-Nan; Tang, Chao-Wei
2017-01-01
This study aimed to apply the Taguchi optimization technique to determine the process conditions for producing synthetic lightweight aggregate (LWA) by incorporating tile grinding sludge powder with reservoir sediments. An orthogonal array L16(4⁵) was adopted, which consisted of five controllable four-level factors (i.e., sludge content, preheat temperature, preheat time, sintering temperature, and sintering time). Moreover, the analysis of variance method was used to explore the effects of the experimental factors on the particle density, water absorption, bloating ratio, and loss on ignition of the produced LWA. Overall, the produced aggregates had particle densities ranging from 0.43 to 2.1 g/cm³ and water absorption ranging from 0.6% to 13.4%. These values are comparable to the requirements for ordinary and high-performance LWAs. The results indicated that it is considerably feasible to produce high-performance LWA by incorporating tile grinding sludge with reservoir sediments. PMID:29125576
Chen, How-Ji; Chang, Sheng-Nan; Tang, Chao-Wei
2017-11-10
This study aimed to apply the Taguchi optimization technique to determine the process conditions for producing synthetic lightweight aggregate (LWA) by incorporating tile grinding sludge powder with reservoir sediments. An orthogonal array L16(4⁵) was adopted, which consisted of five controllable four-level factors (i.e., sludge content, preheat temperature, preheat time, sintering temperature, and sintering time). Moreover, the analysis of variance method was used to explore the effects of the experimental factors on the particle density, water absorption, bloating ratio, and loss on ignition of the produced LWA. Overall, the produced aggregates had particle densities ranging from 0.43 to 2.1 g/cm³ and water absorption ranging from 0.6% to 13.4%. These values are comparable to the requirements for ordinary and high-performance LWAs. The results indicated that it is considerably feasible to produce high-performance LWA by incorporating tile grinding sludge with reservoir sediments.
Method for producing rapid pH changes
Clark, John H.; Campillo, Anthony J.; Shapiro, Stanley L.; Winn, Kenneth R.
1981-01-01
A method of initiating a rapid pH change in a solution by irradiating the solution with an intense flux of electromagnetic radiation of a frequency which produces a substantial pK change to a compound in solution. To optimize the resulting pH change, the compound being irradiated in solution should have an excited state lifetime substantially longer than the time required to establish an excited state acid-base equilibrium in the solution. Desired pH changes can be accomplished in nanoseconds or less by means of picosecond pulses of laser radiation.
Method for producing rapid pH changes
Clark, J.H.; Campillo, A.J.; Shapiro, S.L.; Winn, K.R.
A method of initiating a rapid pH change in a solution comprises irradiating the solution with an intense flux of electromagnetic radiation of a frequency which produces a substantial pK change to a compound in solution. To optimize the resulting pH change, the compound being irradiated in solution should have an excited state lifetime substantially longer than the time required to establish an excited state acid-base equilibrium in the solution. Desired pH changes can be accomplished in nanoseconds or less by means of picosecond pulses of laser radiation.
Conventionally cast and forged copper alloy for high-heat-flux thrust chambers
NASA Technical Reports Server (NTRS)
Kazaroff, John M.; Repas, George A.
1987-01-01
The combustion chamber liner of the space shuttle main engine is made of NARloy-Z, a copper-silver-zirconium alloy. This alloy was produced by vacuum melting and vacuum centrifugal casting, a production method that is not currently available. Using conventional melting, casting, and forging methods, NASA has produced an alloy of the same composition, called NASA-Z. This report compares the composition, microstructure, tensile properties, low-cycle fatigue life, and hot-firing life of these two materials. The results show that the materials have similar characteristics.
Monte Carlo simulation of the radiant field produced by a multiple-lamp quartz heating system
NASA Technical Reports Server (NTRS)
Turner, Travis L.
1991-01-01
A method is developed for predicting the radiant heat flux distribution produced by a reflected bank of tungsten-filament tubular-quartz radiant heaters. The method is correlated with experimental results from two cases, one consisting of a single lamp and a flat reflector and the other consisting of a single lamp and a parabolic reflector. The simulation methodology, computer implementation, and experimental procedures are discussed. Analytical refinements necessary for comparison with experiment are discussed and applied to a multilamp, common reflector heating system.
Fabrication of titanium thermal protection system panels by the NOR-Ti-bond process
NASA Technical Reports Server (NTRS)
Wells, R. R.
1971-01-01
A method for fabricating titanium thermal protection system panels is described. The method has the potential for producing wide faying surface bonds to minimize temperature gradients and thermal stresses resulting during service at elevated temperatures. Results of nondestructive tests of the panels are presented. Concepts for improving the panel quality and for improved economy in production are discussed.
NASA Astrophysics Data System (ADS)
Muir, B. R.; McEwen, M. R.; Rogers, D. W. O.
2014-10-01
A method is presented to obtain ion chamber calibration coefficients relative to secondary standard reference chambers in electron beams using depth-ionization measurements. Results are obtained as a function of depth and average electron energy at depth in 4, 8, 12 and 18 MeV electron beams from the NRC Elekta Precise linac. The PTW Roos, Scanditronix NACP-02, PTW Advanced Markus and NE 2571 ion chambers are investigated. The challenges and limitations of the method are discussed. The proposed method produces useful data at shallow depths. At depths past the reference depth, small shifts in positioning or drifts in the incident beam energy affect the results, thereby providing a built-in test of incident electron energy drifts and/or chamber set-up. Polarity corrections for ion chambers as a function of average electron energy at depth agree with literature data. The proposed method produces results consistent with those obtained using the conventional calibration procedure while gaining much more information about the behavior of the ion chamber with similar data acquisition time. Measurement uncertainties in calibration coefficients obtained with this method are estimated to be less than 0.5%. These results open up the possibility of using depth-ionization measurements to yield chamber ratios which may be suitable for primary standards-level dissemination.
Martin-StPaul, N K; Longepierre, D; Huc, R; Delzon, S; Burlett, R; Joffre, R; Rambal, S; Cochard, H
2014-08-01
Three methods are in widespread use to build vulnerability curves (VCs) to cavitation. The bench drying (BD) method is considered as a reference because embolism and xylem pressure are measured on large branches dehydrating in the air, in conditions similar to what happens in nature. Two other methods of embolism induction have been increasingly used. While the Cavitron (CA) uses centrifugal force to induce embolism, in the air injection (AI) method embolism is induced by forcing pressurized air to enter a stem segment. Recent studies have suggested that the AI and CA methods are inappropriate in long-vesselled species because they produce a very high-threshold xylem pressure for embolism (e.g., P50) compared with what is expected from (i) their ecophysiology in the field (native embolism, water potential and stomatal response to xylem pressure) and (ii) the P50 obtained with the BD method. However, other authors have argued that the CA and AI methods may be valid because they produce VCs similar to the BD method. In order to clarify this issue, we assessed VCs with the three above-mentioned methods on the long-vesselled Quercus ilex L. We showed that the BD VC yielded threshold xylem pressure for embolism consistent with in situ measurements of native embolism, minimal water potential and stomatal conductance. We therefore concluded that the BD method provides a reliable estimate of the VC for this species. The CA method produced a very high P50 (i.e., less negative) compared with the BD method, which is consistent with an artifact related to the vessel length. The VCs obtained with the AI method were highly variable, producing P50 ranging from -2 to -8.2 MPa. This wide variability was more related to differences in base diameter among samples than to differences in the length of samples. We concluded that this method is probably subject to an artifact linked to the distribution of vessel lengths within the sample. Overall, our results indicate that the CA and the AI should be used with extreme caution on long-vesselled species. Our results also highlight that several criteria may be helpful to assess the validity of a VC. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Peng, Linda X; Wallace, Morgan; Andaloro, Bridget; Fallon, Dawn; Fleck, Lois; Delduco, Dan; Tice, George
2011-01-01
The BAX System PCR assay for Salmonella detection in foods was previously validated as AOAC Research Institute (RI) Performance Tested Method (PTM) 100201. New studies were conducted on beef and produce using the same media and protocol currently approved for the BAX System PCR assay for E. coli O157:H7 multiplex (MP). Additionally, soy protein isolate was tested for matrix extension using the U.S. Food and Drug Administration-Bacteriological Analytical Manual (FDA-BAM) enrichment protocols. The studies compared the BAX System method to the U.S. Department of Agriculture culture method for detecting Salmonella in beef and the FDA-BAM culture method for detecting Salmonella in produce and soy protein isolate. Method comparison studies on low-level inoculates showed that the BAX System assay for Salmonella performed as well as or better than the reference method for detecting Salmonella in beef and produce in 8-24 h enrichment when the BAX System E. coli O157:H7 MP media was used, and soy protein isolate in 20 h enrichment with lactose broth followed by 3 h regrowth in brain heart infusion broth. An inclusivity panel of 104 Salmonella strains with diverse serotypes was tested by the BAX System using the proprietary BAX System media and returned all positive results. Ruggedness factors involved in the enrichment phase were also evaluated by testing outside the specified parameters, and none of the factors examined affected the performance of the assay.
Exact Electromagnetic Fields Produced by a Finite Wire with Constant Current
ERIC Educational Resources Information Center
Jimenez, J. L.; Campos, I.; Aquino, N.
2008-01-01
We solve exactly the problem of calculating the electromagnetic fields produced by a finite wire with a constant current, by using two methods: retarded potentials and Jefimenko's formalism. One result in this particular case is that the usual Biot-Savart law of magnetostatics gives the correct magnetic field of the problem. We also show…
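For reference, the standard Biot-Savart result for a finite straight wire, which the paper confirms remains valid in this case, is given below (SI units; added for context, not quoted from the abstract).

```latex
% Magnetic field of a finite straight wire carrying current I, at
% perpendicular distance s from the wire; theta_1 and theta_2 are the
% angles subtended by the two ends, measured from the perpendicular:
B = \frac{\mu_0 I}{4\pi s}\left(\sin\theta_2 - \sin\theta_1\right)
```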
Synthesis and reactivity of ultra-fine coal liquefaction catalysts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linehan, J.C.; Matson, D.W.; Fulton, J.L.
1992-10-01
The Pacific Northwest Laboratory is currently developing ultra-fine iron-based coal liquefaction catalysts using two new particle production technologies: (1) modified reverse micelles (MRM) and (2) rapid thermal decomposition of solutes (RTDS). These methodologies have been shown to allow control over both particle size (from 1 nm to 60 nm) and composition when used to produce ultra-fine iron-based materials. Powders produced using these methods are found to be selective catalysts for carbon-carbon bond scission using the naphthyl bibenzylmethane model compound, and to promote the production of THF soluble coal products during liquefaction studies. This report describes the materials produced by both the MRM and RTDS methods and summarizes the results of preliminary catalysis studies using these materials.
Water surface modeling from a single viewpoint video.
Li, Chuan; Pickup, David; Saunders, Thomas; Cosker, Darren; Marshall, David; Hall, Peter; Willis, Philip
2013-07-01
We introduce a video-based approach for producing water surface models. Recent advances in this field output high-quality results but require dedicated capturing devices and only work in limited conditions. In contrast, our method achieves a good tradeoff between the visual quality and the production cost: It automatically produces a visually plausible animation using a single viewpoint video as the input. Our approach is based on two discoveries: first, shape from shading (SFS) is adequate to capture the appearance and dynamic behavior of the example water; second, shallow water model can be used to estimate a velocity field that produces complex surface dynamics. We will provide qualitative evaluation of our method and demonstrate its good performance across a wide range of scenes.
Ahmed, Farooq; Ayoub Arbab, Alvira; Jatoi, Abdul Wahab; Khatri, Muzamil; Memon, Najma; Khatri, Zeeshan; Kim, Ick Soo
2017-05-01
Herein we report a rapid method for deacetylation of cellulose acetate (CA) nanofibers in order to produce cellulose nanofibers using ultrasonic energy. The CA nanofibers were fabricated via electrospinning and then treated with NaOH and NaOH/EtOH solutions at various pH levels for 30, 60 and 90 min assisted by ultrasonic energy. The nanofiber webs were characterized by degree of deacetylation (DD%) and wicking behavior, and further analyzed by FTIR, SEM, WAXD and DSC. The DD% and FTIR results confirmed complete conversion of the CA nanofibers to cellulose nanofibers within 1 h, with a substantial increase in wicking height. SEM showed slight swelling of the nanofibers and no damage from the ultrasonic treatment. The results of ultrasonic-assisted deacetylation are comparable with those of conventional deacetylation, while our rapid method reduces the deacetylation time substantially, from 30 h to just 1 h, thanks to the ultrasonic energy. Copyright © 2016 Elsevier B.V. All rights reserved.
The comparison of rapid bioassays for the assessment of urban groundwater quality.
Dewhurst, R E; Wheeler, J R; Chummun, K S; Mather, J D; Callaghan, A; Crane, M
2002-05-01
Groundwater is a complex mixture of chemicals that is naturally variable. Current legislation in the UK requires that groundwater quality and the degree of contamination are assessed using chemical methods. Such methods do not consider the synergistic or antagonistic interactions that may affect the bioavailability and toxicity of pollutants in the environment. Bioassays are a method for assessing the toxic impact of whole groundwater samples on the environment. Three rapid bioassays, Eclox, Microtox and ToxAlert, and a Daphnia magna 48-h immobilisation test were used to assess groundwater quality from sites with a wide range of historical uses. Eclox responses indicated that the test was very sensitive to changes in groundwater chemistry; 77% of the results had a percentage inhibition greater than 90%. ToxAlert, although suitable for monitoring changes in water quality under laboratory conditions, produced highly variable results due to fluctuations in temperature and the chemical composition of the samples. Microtox produced replicable results that correlated with those from D. magna tests.
Alavi, Shiva; Kachuie, Marzie
2017-01-01
Background: This study was conducted to assess the hardness of orthodontic brackets produced by metal injection molding (MIM) and conventional methods, and of different orthodontic wires (stainless steel, nickel-titanium [Ni-Ti], and beta-titanium alloys), for better clinical results. Materials and Methods: A total of 15 specimens from each brand of orthodontic brackets and wires were examined. The brackets (Elite Opti-Mim, produced by the MIM process, and Ultratrimm, produced by a conventional brazing method) and the wires (stainless steel, Ni-Ti, and beta-titanium) were embedded in epoxy resin, followed by grinding, polishing, and coating. X-ray energy dispersive spectroscopy (EDS) microanalysis was then applied to assess their elemental composition. The same specimen surfaces were repolished and used for Vickers microhardness assessment. Hardness was statistically analyzed with the Kruskal–Wallis test, followed by the Mann–Whitney test at the 0.05 level of significance. Results: The X-ray EDS analysis revealed different ferrous or Co-based alloys in each bracket. The maximum mean hardness value among the wires was achieved for stainless steel (SS) (529.85 Vickers hardness [VHN]) versus the minimum for beta-titanium (334.65 VHN). Among the brackets, Elite Opti-Mim exhibited significantly higher VHN values (262.66 VHN) compared to Ultratrimm (206.59 VHN). VHN values of the wire alloys were significantly higher than those of the brackets. Conclusion: MIM orthodontic brackets exhibited hardness values much lower than those of SS orthodontic archwires and were more compatible with Ni-Ti and beta-titanium archwires. A wide range of microhardness values has been reported for conventional orthodontic brackets, and it should be considered that the manufacturing method might be only one of the factors affecting the mechanical properties of orthodontic brackets, including hardness. PMID:28928783
Talari, Roya; Varshosaz, Jaleh; Mostafavi, Seyed Abolfazl; Nokhodchi, Ali
2009-01-01
Micronization by milling to enhance dissolution rate is extremely inefficient due to its high energy input and the disruptions it causes in the crystal lattice, which can lead to physical or chemical instability. Therefore, the aim of the present study was to use an in situ micronization process, via a pH-change method, to produce micron-size gliclazide particles for faster dissolution and hence better bioavailability. Gliclazide was recrystallized in the presence of 12 different stabilizers, and the effects of each stabilizer on the micromeritic behavior, morphology, dissolution rate and solid state of the recrystallized drug particles were investigated. The results showed that recrystallized samples had faster dissolution rates than untreated gliclazide particles, with the fastest dissolution rate observed for the samples recrystallized in the presence of PEG 1500. Some of the drug samples recrystallized in the presence of stabilizers dissolved 100% within the first 5 min, showing at least 10 times greater dissolution rate than untreated gliclazide powders. Micromeritic studies showed that in situ micronization via the pH-change method is able to produce smaller particles with a high surface area. The type of stabilizer also had a significant impact on the morphology of the recrystallized drug particles: untreated gliclazide is rod- or rectangular-shaped, whereas the crystals produced in the presence of stabilizers, depending on the stabilizer type, were very fine particles with irregular, cubic, rectangular, granular and spherical/modular shapes. Finally, crystallization of gliclazide in the presence of stabilizers reduced the crystallinity of the samples, as confirmed by XRPD and DSC results.
NASA Astrophysics Data System (ADS)
Hashemi Shahraki, Zahra; Sharififard, Hakimeh; Lashanizadegan, Asghar
2018-05-01
In order to produce activated carbon from grape stalks, this biomass was activated chemically with KOH. Characterization methods including FTIR, BET, SEM, Boehm titration and pHzpc measurement were applied to characterize the produced carbon. The ability of the produced activated carbon to remove cadmium from aqueous solution was evaluated using central composite design methodology; the effects of the process parameters were analysed, and the optimum processing conditions were determined statistically. To characterize the equilibrium behaviour of the adsorption process, the equilibrium data were analysed with the Langmuir, Freundlich, and R-D isotherm models. Results indicated that adsorption is a monolayer process, and the adsorption capacity of the prepared activated carbon was 140.84 mg g⁻¹. Analysis of the kinetics data showed that the pseudo-second-order and Elovich models fitted the kinetics results well, suggesting that chemical adsorption dominates. The regenerability results showed that the prepared activated carbon retains a reasonable adsorption capacity toward cadmium after five adsorption/desorption cycles.
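A sketch of fitting equilibrium data to the two main isotherms named above (Ce in mg/L, qe in mg/g); the sample data points below are invented, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qm, KL):
    """Monolayer isotherm: qe = qm*KL*Ce / (1 + KL*Ce); qm is the capacity."""
    return qm * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    """Empirical multilayer isotherm: qe = KF * Ce^(1/n)."""
    return KF * Ce ** (1.0 / n)

Ce = np.array([5.0, 10.0, 25.0, 50.0, 100.0])    # hypothetical equilibrium concentrations
qe = np.array([45.0, 70.0, 105.0, 125.0, 135.0]) # hypothetical uptakes

(qm, KL), _ = curve_fit(langmuir, Ce, qe, p0=[140.0, 0.05])
(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=[20.0, 2.0])
print(f"Langmuir: qm={qm:.1f} mg/g, KL={KL:.3f} L/mg")
print(f"Freundlich: KF={KF:.1f}, n={n:.2f}")
```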
Báez, Daniela F.; Pardo, Helena; Laborda, Ignacio; Marco, José F.; Yáñez, Claudia; Bollo, Soledad
2017-01-01
For the first time, a critical analysis of the influence that four different graphene oxide reduction methods have on the electrochemical properties of the resulting reduced graphene oxides (RGOs) is reported. Starting from the same graphene oxide, chemically (CRGO), hydrothermally (hTRGO), electrochemically (ERGO), and thermally (TRGO) reduced graphene oxides were produced. The materials were fully characterized, and the topography and electroactivity of the resulting modified glassy carbon electrodes were also evaluated. An oligonucleotide molecule was used as a model for electrochemical DNA biosensing. The results lead to the conclusion that thermal reduction produced the RGO with the best electrochemical performance for oligonucleotide electroanalysis: a clear shift in the guanine oxidation peak potential to lower values (~0.100 V) and an almost two-fold increase in current intensity were observed compared with the other RGOs. The electrocatalytic effect has a multifactorial explanation, because TRGO was the material with the highest polydispersity and smallest sheet size, thus exposing a larger quantity of defects at the electrode surface, which produces larger physical and electrochemical areas. PMID:28677654
Lasfargues, Mathieu; Stead, Graham; Amjad, Muhammad; Ding, Yulong; Wen, Dongsheng
2017-01-01
Seeding nanoparticles in molten salts has recently been shown to be a promising way to improve their thermo-physical properties. Such technology is of interest to both academic and industrial sectors as a means of enhancing the specific heat capacity of molten salt, which is used in concentrated solar power plants as both heat transfer fluid and sensible heat storage. This work explores the feasibility of producing and dispersing nanoparticles with a novel one-pot synthesis method, in which CuO nanoparticles were produced in situ via the decomposition of copper sulphate pentahydrate in a KNO3-NaNO3 binary salt. Analyses of the results suggested preferential disposition of atoms around the produced nanoparticles in the molten salt. Thermal characterization of the produced nano-salt suspension indicated that the specific heat enhancement depends on particle morphology and distribution within the salts. PMID:28772910
Nguyen, Huy Truong; Min, Jung-Eun; Long, Nguyen Phuoc; Thanh, Ma Chi; Le, Thi Hong Van; Lee, Jeongmi; Park, Jeong Hill; Kwon, Sung Won
2017-08-05
Agarwood, the resinous heartwood produced by some Aquilaria species such as Aquilaria crassna, Aquilaria malaccensis, and Aquilaria sinensis, has traditionally been widely used in medicines, incense, and especially perfumes. Until now, however, the authentication of agarwood has been based largely on morphological characteristics, a method that is prone to errors and lacks reproducibility. In this study we therefore applied metabolomics and a genetic approach to the authentication of two common agarwood chips, those produced by Aquilaria crassna and Aquilaria malaccensis. Primary metabolites, secondary metabolites, and DNA markers of agarwood were examined by 1H NMR metabolomics, GC-MS metabolomics, and DNA-based techniques, respectively. The results indicated that agarwood chips could be classified accurately by all the methods illustrated in this study, and the pros and cons of each method are also discussed. To the best of our knowledge, this is the first study detailing all the differences in primary and secondary metabolites, as well as DNA markers, between the agarwood produced by these two species. Copyright © 2017 Elsevier B.V. All rights reserved.
Robust Airfoil Optimization in High Resolution Design Space
NASA Technical Reports Server (NTRS)
Li, Wu; Padula, Sharon L.
2003-01-01
Robust airfoil shape optimization is a direct method for drag reduction over a given range of operating conditions and has three advantages: (1) it prevents severe degradation in off-design performance by using a smart descent direction in each optimization iteration, (2) it uses a large number of B-spline control points as design variables yet yields a fairly smooth airfoil shape, and (3) it allows the user to trade off the level of optimization against the amount of computing time consumed. The method is demonstrated by solving a lift-constrained drag minimization problem for a two-dimensional airfoil in viscous flow with a large number of geometric design variables. Our experience with robust optimization indicates that the strategy produces reasonable airfoil shapes, similar to the original airfoils, that provide drag reduction over the specified range of Mach numbers. We have tested this strategy on a number of advanced airfoil models produced by knowledgeable aerodynamic design team members and found that it produces airfoils better than or equal to any designs produced by traditional design methods.
A Method for Localizing Energy Dissipation in Blazars Using Fermi Variability
NASA Technical Reports Server (NTRS)
Dotson, Amanda; Georganopoulos, Markos; Kazanas, Demosthenes; Perlman, Eric S.
2013-01-01
The distance of the Fermi-detected blazar gamma-ray emission site from the supermassive black hole is a matter of active debate. Here we present a method for testing whether the GeV emission of powerful blazars is produced within the sub-pc-scale broad line region (BLR) or farther out in the pc-scale molecular torus (MT) environment. If the GeV emission takes place within the BLR, the inverse Compton (IC) scattering of BLR ultraviolet (UV) seed photons that produces the gamma-rays occurs at the onset of the Klein-Nishina regime. This causes the electron cooling time to become practically energy independent, so the variation of the gamma-ray emission is almost achromatic. If, on the other hand, the gamma-ray emission is produced farther out in the pc-scale MT, the IC scattering of the infrared (IR) MT seed photons takes place in the Thomson regime, resulting in energy-dependent electron cooling times, manifested as faster cooling at higher Fermi energies. We demonstrate these characteristics and discuss the applicability and limitations of our method.
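The energy dependence invoked here follows from standard inverse-Compton scaling arguments; a hedged sketch using textbook Thomson-regime expressions (not equations reproduced from the paper itself) is:

```latex
% Thomson-regime IC cooling of an electron with Lorentz factor \gamma
% in a seed photon field of energy density u_{\rm seed}:
t_{\rm cool} \simeq \frac{3 m_e c}{4 \sigma_T \, \gamma \, u_{\rm seed}} \propto \gamma^{-1},
\qquad
E_\gamma \sim \gamma^2 \epsilon_{\rm seed} \;\Rightarrow\; \gamma \propto E_\gamma^{1/2},
\qquad
t_{\rm cool} \propto E_\gamma^{-1/2}.
```

In the Klein-Nishina regime, by contrast, an electron loses a large fraction of its energy in a single scattering, and this simple inverse scaling breaks down, which is why the BLR scenario predicts nearly achromatic variability.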
DendroBLAST: approximate phylogenetic trees in the absence of multiple sequence alignments.
Kelly, Steven; Maini, Philip K
2013-01-01
The rapidly growing availability of genome information has created considerable demand for fast and accurate phylogenetic inference algorithms. We present a novel method called DendroBLAST for reconstructing phylogenetic dendrograms/trees from protein sequences using BLAST. It differs from other methods by incorporating a simple model of sequence evolution to test the effect of introducing sequence changes on the reliability of the bipartitions in the inferred tree. Using realistic simulated sequence data, we demonstrate that DendroBLAST produces phylogenetic trees that are more accurate than other commonly used distance-based methods, though not as accurate as maximum-likelihood methods applied to good-quality multiple sequence alignments. In addition to tests on simulated data, we use DendroBLAST to generate input trees for a supertree reconstruction of the phylogeny of the Archaea. This independent analysis produces an approximate phylogeny of the Archaea that has both high precision and high recall when compared with a previously published analysis of the same dataset using conventional methods. Taken together, these results demonstrate that approximate phylogenetic trees can be produced in the absence of multiple sequence alignments, and we propose that such trees can provide a platform for improving and informing downstream bioinformatic analysis. A web implementation of the DendroBLAST method is freely available at http://www.dendroblast.com/.
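As a rough illustration of the alignment-free idea (not DendroBLAST's actual algorithm — its distance transform and tree-building details are not given in this abstract), pairwise BLAST scores can be normalized, converted to distances, and clustered. The score matrix and the -log normalization below are assumptions for the sketch, and scipy's UPGMA-style average linkage stands in for a proper distance-based tree method:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, to_tree
from scipy.spatial.distance import squareform

# Hypothetical symmetric matrix of pairwise BLAST bit scores (self-scores on diagonal).
scores = np.array([
    [500., 320., 180., 150.],
    [320., 480., 170., 140.],
    [180., 170., 450., 300.],
    [150., 140., 300., 430.],
])

# Normalize each pair by the smaller self-score, then turn similarity into distance.
norm = scores / np.minimum.outer(np.diag(scores), np.diag(scores))
dist = -np.log(np.clip(norm, 1e-9, 1.0))
np.fill_diagonal(dist, 0.0)

# UPGMA-style dendrogram from the condensed distance matrix.
tree = to_tree(linkage(squareform(dist), method="average"))
```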
Graphene oxide and H2 production from bioelectrochemical graphite oxidation.
Lu, Lu; Zeng, Cuiping; Wang, Luda; Yin, Xiaobo; Jin, Song; Lu, Anhuai; Jason Ren, Zhiyong
2015-11-17
Graphene oxide (GO) is an emerging material for energy and environmental applications, but it has primarily been produced using chemical processes with high energy consumption and hazardous chemicals. In this study, we report a new bioelectrochemical method for producing GO from graphite under ambient conditions without chemical amendments; value-added organic compounds and H2 were also produced at high rates. Compared with an abiotic electrochemical electrolysis control, microbially assisted graphite oxidation produced graphite oxide and graphene oxide (BEGO) sheets, CO2, and current at higher rates and a lower applied voltage. The resultant electrons are transferred to a biocathode, where H2 and organic compounds are produced by microbial reduction of protons and CO2, respectively, a process known as microbial electrosynthesis (MES). Pseudomonas is the dominant population on the anode, while the abundant anaerobic solvent-producing bacterium Clostridium carboxidivorans is likely responsible for electrosynthesis on the cathode. Oxygen production through water electrolysis was not detected on the anode, owing to the presence of facultative and aerobic bacteria acting as O2 sinks. This new method provides a sustainable route for producing graphene materials and renewable H2 at low cost, and it may stimulate a new area of research in MES.
RAPID HEALTH-BASED METHOD FOR MEASURING MICROBIAL INDICATORS OF RECREATIONAL WATER QUALITY
Because the currently approved cultural methods for monitoring indicator bacteria in recreational water require 24 hours to produce results, the public may be exposed to potentially contaminated water before the water has been identified as hazardous. This project was initiated t...
A Rapid and Efficient Screening Method for Antibacterial Compound-Producing Bacteria.
Hettiarachchi, Sachithra; Lee, Su-Jin; Lee, Youngdeuk; Kwon, Young-Kyung; De Zoysa, Mahanama; Moon, Song; Jo, Eunyoung; Kim, Taeho; Kang, Do-Hyung; Heo, Soo-Jin; Oh, Chulhong
2017-08-28
Antibacterial compounds are widely used in the treatment of human and animal diseases. The overuse of antibiotics has led to a rapid rise in the prevalence of drug-resistant bacteria, making the development of new antibacterial compounds essential. This study focused on developing a fast and easy method for identifying marine bacteria that produce antibiotic compounds. Eight randomly selected marine target bacterial species (Agrococcus terreus, Bacillus algicola, Mesoflavibacter zeaxanthinifaciens, Pseudoalteromonas flavipulchra, P. peptidolytica, P. piscicida, P. rubra, and Zunongwangia atlantica) were tested for production of antibacterial compounds against four strains of test bacteria (B. cereus, B. subtilis, Halomonas smyrnensis, and Vibrio alginolyticus). Colony picking was used as the primary screening method. Clear zones were observed around colonies of P. flavipulchra, P. peptidolytica, P. piscicida, and P. rubra tested against B. cereus, B. subtilis, and H. smyrnensis. The efficiency of colony scraping and broth culture methods for antimicrobial compound extraction was also compared using a disk diffusion assay. P. peptidolytica, P. piscicida, and P. rubra showed antagonistic activity against H. smyrnensis, B. cereus, and B. subtilis, respectively, only in the colony scraping method. Our results show that colony picking and colony scraping are effective, quick, and easy methods of screening for antibacterial compound-producing bacteria.
A modular modulation method for achieving increases in metabolite production.
Acerenza, Luis; Monzon, Pablo; Ortega, Fernando
2015-01-01
Increasing the output of overproducing strains represents a great challenge. Here, we develop a modular modulation method to determine the key steps to manipulate genetically in order to increase metabolite production. The method consists of three steps: (i) modularization of the metabolic network into two modules connected by linking metabolites, (ii) change in the activity of the modules using auxiliary rates producing or consuming the linking metabolites in appropriate proportions, and (iii) determination of the key modules and steps to increase production. The mathematical formulation of the method in matrix form shows that it may be applied to metabolic networks of any structure and size, with reactions governed by any kind of rate law. The results are valid for any type of conservation relationship among the metabolite concentrations or interactions between modules. The activity of a module may, in principle, be changed by any large factor. The method may be applied recursively or combined with other methods devised to perform fine searches in smaller regions. In practice, it is implemented by integrating into the producer strain heterologous reactions or synthetic pathways that produce or consume the linking metabolites. The new procedure may help develop metabolic engineering into a more systematic practice. © 2015 American Institute of Chemical Engineers.
A new sampling method for fibre length measurement
NASA Astrophysics Data System (ADS)
Wu, Hongyan; Li, Xianghong; Zhang, Junying
2018-06-01
This paper presents a new sampling method for fibre length measurement. The new method meets the three requirements of an effective sampling method, and it produces a beard with two symmetrical ends that can be scanned from the holding line to obtain two full fibrograms per sample. The methodology is introduced, and experiments were performed to investigate the effectiveness of the new method. The results show that the new sampling method is effective.
Parks, Sean; Holsinger, Lisa M.; Voss, Morgan; Loehman, Rachel A.; Robinson, Nathaniel P.
2018-01-01
Landsat-based fire severity datasets are an invaluable resource for monitoring and research purposes. These gridded datasets are generally produced from pre- and post-fire imagery to estimate the degree of fire-induced ecological change. Here, we introduce methods to produce three Landsat-based fire severity metrics using the Google Earth Engine (GEE) platform: the delta normalized burn ratio (dNBR), the relativized delta normalized burn ratio (RdNBR), and the relativized burn ratio (RBR). Our methods do not rely on time-consuming a priori scene selection; instead, we use a mean compositing approach in which all valid (e.g. cloud-free) pixels over a pre-specified date range (pre- and post-fire) are stacked, and the mean value of each pixel over each stack is used to produce the resulting fire severity datasets. This approach demonstrates that fire severity datasets can be produced with relative ease and speed compared with the standard approach, in which one pre-fire and one post-fire scene are judiciously identified and used. We also validate the GEE-derived fire severity metrics using field-based fire severity plots for 18 fires in the western US. These validations are compared with Landsat-based fire severity datasets produced using only one pre-fire and one post-fire scene, the standard approach since the inception of such datasets. Results indicate that the GEE-derived fire severity datasets show improved validation statistics compared with parallel versions based on single pre- and post-fire scenes. We provide code and a sample geospatial fire history layer to produce dNBR, RdNBR, and RBR for the 18 fires we evaluated. Although our approach requires that a geospatial fire history layer (i.e. fire perimeters) be produced independently and prior to applying our methods, the GEE methodology can reasonably be implemented on hundreds to thousands of fires, thereby increasing opportunities for fire severity monitoring and research across the globe.
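A minimal numpy sketch of the compositing-then-indexing idea follows. The NBR, dNBR, RdNBR, and RBR formulas are the published definitions (unscaled NBR in [-1, 1]), but the array layout and the choice to composite bands before computing indices are assumptions of this sketch rather than details taken from the paper, which uses the GEE API:

```python
import numpy as np

def nbr(nir, swir2):
    # Normalized Burn Ratio from near-infrared and shortwave-infrared reflectance.
    return (nir - swir2) / (nir + swir2)

def severity_metrics(pre_nir, pre_swir2, post_nir, post_swir2):
    """Inputs are 3-D stacks (scene, row, col) of valid-pixel reflectance."""
    # Mean compositing: average every pixel across all valid scenes in the window.
    nbr_pre = nbr(np.nanmean(pre_nir, 0), np.nanmean(pre_swir2, 0))
    nbr_post = nbr(np.nanmean(post_nir, 0), np.nanmean(post_swir2, 0))
    dnbr = nbr_pre - nbr_post
    rdnbr = dnbr / np.sqrt(np.abs(nbr_pre))   # relativized dNBR
    rbr = dnbr / (nbr_pre + 1.001)            # relativized burn ratio
    return dnbr, rdnbr, rbr
```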
Multiratio fusion change detection with adaptive thresholding
NASA Astrophysics Data System (ADS)
Hytla, Patrick C.; Balster, Eric J.; Vasquez, Juan R.; Neuroth, Robert M.
2017-04-01
A ratio-based change detection method known as multiratio fusion (MRF) is proposed and tested. The MRF framework builds on other change detection components proposed in this work: dual ratio (DR) and multiratio (MR). The DR method involves two ratios coupled with adaptive thresholds to maximize detected changes and minimize false alarms. The use of two ratios is shown to outperform the single ratio case when the means of the image pairs are not equal. MR change detection builds on the DR method by including negative imagery to produce four total ratios with adaptive thresholds. Inclusion of negative imagery is shown to improve detection sensitivity and to boost detection performance in certain target and background cases. MRF further expands this concept by fusing together the ratio outputs using a routine in which detections must be verified by two or more ratios to be classified as a true changed pixel. The proposed method is tested with synthetically generated test imagery and real datasets with results compared to other methods found in the literature. DR is shown to significantly outperform the standard single ratio method. MRF produces excellent change detection results that exhibit up to a 22% performance improvement over other methods from the literature at low false-alarm rates.
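A hedged sketch of the dual-ratio idea follows. The paper's actual adaptive-threshold rule is not specified in this abstract, so the mean-plus-k-standard-deviations threshold below is a stand-in assumption:

```python
import numpy as np

def dual_ratio_change(img_a, img_b, k=1.5):
    """Binary change mask from two co-registered grayscale images."""
    a = img_a.astype(float) + 1e-6   # small offset avoids division by zero
    b = img_b.astype(float) + 1e-6
    # Two reciprocal ratios catch changes in both directions (bright->dark, dark->bright),
    # which is why two ratios outperform one when the image means differ.
    r1, r2 = a / b, b / a
    # Hypothetical adaptive thresholds derived from each ratio image's statistics.
    t1 = r1.mean() + k * r1.std()
    t2 = r2.mean() + k * r2.std()
    return (r1 > t1) | (r2 > t2)
```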
Method for synthesis of titanium dioxide nanotubes using ionic liquids
Qu, Jun; Luo, Huimin; Dai, Sheng
2013-11-19
The invention is directed to a method for producing titanium dioxide nanotubes, the method comprising anodizing titanium metal in contact with an electrolytic medium containing an ionic liquid. The invention is also directed to the resulting titanium dioxide nanotubes, as well as devices incorporating the nanotubes, such as photovoltaic devices, hydrogen generation devices, and hydrogen detection devices.
Methodological comparison of alpine meadow evapotranspiration on the Tibetan Plateau, China.
Chang, Yaping; Wang, Jie; Qin, Dahe; Ding, Yongjian; Zhao, Qiudong; Liu, Fengjing; Zhang, Shiqiang
2017-01-01
Estimation of evapotranspiration (ET) for alpine meadow areas of the Tibetan Plateau (TP) is essential for water resource management, but observational data are limited owing to the extreme climate and complex terrain of this region. To address these issues, four representative methods, the Penman-Monteith (PM), Priestley-Taylor (PT), Hargreaves-Samani (HS), and Mahringer (MG) methods, were used to estimate ET, which was then compared with ET measured using eddy covariance (EC) at five alpine meadow sites during the growing seasons from 2010 to 2014; each site was measured for one growing season during this period. The results demonstrate that the PT method performed best at all sites, with a coefficient of determination (R2) ranging from 0.76 to 0.94 and root mean square error (RMSE) ranging from 0.41 to 0.62 mm d-1. The PM method performed better than the HS and MG methods, and the HS method produced relatively acceptable results, with a higher R2 (0.46) and lower RMSE (0.89 mm d-1) than the MG method (R2 of 0.16 and RMSE of 1.62 mm d-1), while the MG method underestimated ET at all alpine meadow sites. The PT method, being the simpler and less data-dependent approach, is therefore recommended for estimating ET in alpine meadow areas of the Tibetan Plateau. The PM method produced reliable results when sufficient data were available, and the HS method proved to be a complementary method when variables were insufficient. The MG method, by contrast, always underestimated ET and is thus not suitable for alpine meadows. These results provide a basis for estimating ET on the Tibetan Plateau for annual data collection, analysis, and future studies.
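For reference, a minimal sketch of the recommended Priestley-Taylor estimate, using the standard FAO-56 form for the saturation vapour pressure slope; the constants and the daily formulation are textbook assumptions, not the paper's exact implementation:

```python
import math

def priestley_taylor_et(rn, g, t_air, alpha=1.26):
    """Daily ET (mm/day) from net radiation rn and soil heat flux g (MJ m-2 day-1)."""
    # Slope of the saturation vapour pressure curve at air temperature t_air (degC).
    es = 0.6108 * math.exp(17.27 * t_air / (t_air + 237.3))   # kPa
    delta = 4098.0 * es / (t_air + 237.3) ** 2                # kPa/degC
    gamma = 0.066   # psychrometric constant (kPa/degC) near sea level; lower at TP altitudes
    lam = 2.45      # latent heat of vaporization (MJ/kg)
    return alpha * (delta / (delta + gamma)) * (rn - g) / lam

# Example: rn = 12, g = 1 (MJ m-2 day-1), air temperature 10 degC.
print(round(priestley_taylor_et(12.0, 1.0, 10.0), 2), "mm/day")
```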
Development of higher-order modal methods for transient thermal and structural analysis
NASA Technical Reports Server (NTRS)
Camarda, Charles J.; Haftka, Raphael T.
1989-01-01
A force-derivative method which produces higher-order modal solutions to transient problems is evaluated. These higher-order solutions converge to an accurate response using fewer degrees-of-freedom (eigenmodes) than lower-order methods such as the mode-displacement or mode-acceleration methods. Results are presented for non-proportionally damped structural problems as well as thermal problems modeled by finite elements.
Coincident site lattice-matched growth of semiconductors on substrates using compliant buffer layers
Norman, Andrew
2016-08-23
A method of producing semiconductor materials, and devices that incorporate those materials, is provided. In particular, a method is provided for producing a semiconductor material, such as a III-V semiconductor, on a silicon substrate using a compliant buffer layer, together with devices such as photovoltaic cells that incorporate the semiconductor materials. The compliant buffer material and semiconductor materials may be deposited using coincident site lattice-matching epitaxy, resulting in a close degree of lattice matching between the substrate material and the deposited material for a wide variety of material compositions. The coincident site lattice-matching epitaxial process, together with the use of a ductile buffer material, reduces the internal stresses and associated crystal defects within the semiconductor materials fabricated using the disclosed method. As a result, the semiconductor devices provided herein possess enhanced performance characteristics owing to a relatively low density of crystal defects.
Method of producing buried porous silicon-germanium layers in monocrystalline silicon lattices
NASA Technical Reports Server (NTRS)
Fathauer, Robert W. (Inventor); George, Thomas (Inventor); Jones, Eric W. (Inventor)
1997-01-01
Lattices of alternating layers of monocrystalline silicon and porous silicon-germanium have been produced. These single-crystal lattices were fabricated by epitaxial growth of Si and Si-Ge layers followed by patterning into mesa structures. The mesa structures are stain-etched, resulting in porosification of the Si-Ge layers with only minor porosification of the monocrystalline Si layers. Thicker Si-Ge layers produced in a similar manner emitted visible light at room temperature.
Preparation and characterization of silk fibroin as a biomaterial with potential for drug delivery
2012-01-01
Background: Degummed silk fibroin from Bombyx mori (silkworm) has potential as a drug-delivery carrier in humans; however, the processing methods have yet to be comparatively analyzed to determine their differential effects on the silk protein's properties, including crystalline structure and activity. Methods: We treated degummed silk with four kinds of calcium-alcohol solutions, and performed secondary-structure measurements and an enzyme activity test to distinguish the differences between the regenerated fibroins and degummed silk fibroin. Results: Gel electrophoresis analysis revealed that Ca(NO3)2-methanol, Ca(NO3)2-ethanol, and CaCl2-methanol treatments produced silk fibroin of lower molecular weight than CaCl2-ethanol. X-ray diffraction and Fourier-transform infrared spectroscopy showed that CaCl2-ethanol produced a crystalline structure with more silk I (α-form, type II β-turn), while the other treatments produced more silk II (β-form, anti-parallel β-pleated sheet). Solid-state 13C cross-polarization and magic-angle-spinning nuclear magnetic resonance measurements suggested that regenerated fibroins from CaCl2-ethanol were nearly identical to degummed silk fibroin, while the other treatments produced fibroins with significantly different chemical shifts. Finally, an enzyme activity test indicated that silk fibroins from CaCl2-ethanol had higher activity when linked to a known chemotherapeutic drug, L-asparaginase, than the fibroins from the other treatments. Conclusions: Collectively, these results suggest that the CaCl2-ethanol processing method produces silk fibroin with biomaterial properties appropriate for drug delivery. PMID:22676291
NASA Astrophysics Data System (ADS)
Sharudin, R. W.; AbdulBari Ali, S.; Zulkarnain, M.; Shukri, M. A.
2018-05-01
This study reports on the integration of artificial neural networks (ANNs) with experimental data to predict the solubility of carbon dioxide (CO2) blowing agent in SEBS, with the goal of generating the highest possible regression coefficient (R2). Foaming of a thermoplastic elastomer with CO2 is strongly affected by the CO2 solubility. The ability of the ANN to predict interpolated CO2 solubility data was investigated by comparing training results across different network training methods. The final predicted trend of CO2 solubility (the generated output) was corroborated by the experimental results. Of the training methods compared, gradient descent with momentum and adaptive learning rate (traingdx) required a longer training time and more accurate input to produce good output, with a final regression value of 0.88, whereas the Levenberg-Marquardt technique (trainlm) produced better output in a shorter training time, with a final regression value of 0.91.
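The training comparison above uses MATLAB's traingdx and trainlm routines. A loosely analogous comparison can be sketched in Python with scikit-learn, where momentum SGD with an adaptive learning rate stands in for traingdx and the quasi-Newton "lbfgs" solver stands in (imperfectly) for Levenberg-Marquardt; the data below are synthetic, not the study's measurements:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# Hypothetical inputs (temperature degC, pressure MPa) and CO2 solubility targets.
X = rng.uniform([40.0, 5.0], [100.0, 20.0], size=(60, 2))
y = 0.05 * X[:, 1] - 0.002 * X[:, 0] + rng.normal(0, 0.005, 60)

for solver in ("sgd", "lbfgs"):   # momentum SGD vs quasi-Newton, loosely analogous
    model = MLPRegressor(hidden_layer_sizes=(10,), solver=solver,
                         learning_rate="adaptive", momentum=0.9,
                         max_iter=5000, random_state=0).fit(X, y)
    print(solver, round(r2_score(y, model.predict(X)), 3))
```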
Comparison of Two Methods for Detecting Alternative Splice Variants Using GeneChip® Exon Arrays
Fan, Wenhong; Stirewalt, Derek L.; Radich, Jerald P.; Zhao, Lueping
2011-01-01
The Affymetrix GeneChip Exon Array can be used to detect alternative splice variants. Microarray Detection of Alternative Splicing (MIDAS) and Partek® Genomics Suite (Partek® GS) are among the most popular analytical methods used to analyze exon array data. While both methods utilize statistical significance for testing, MIDAS and Partek® GS could produce somewhat different results due to different underlying assumptions. Comparing MIDAS and Partek® GS is quite difficult due to their substantially different mathematical formulations and assumptions regarding alternative splice variants. For meaningful comparison, we have used the previously published generalized probe model (GPM) which encompasses both MIDAS and Partek® GS under different assumptions. We analyzed a colon cancer exon array data set using MIDAS, Partek® GS and GPM. MIDAS and Partek® GS produced quite different sets of genes that are considered to have alternative splice variants. Further, we found that GPM produced results similar to MIDAS as well as to Partek® GS under their respective assumptions. Within the GPM, we show how discoveries relating to alternative variants can be quite different due to different assumptions. MIDAS focuses on relative changes in expression values across different exons within genes and tends to be robust but less efficient. Partek® GS, however, uses absolute expression values of individual exons within genes and tends to be more efficient but more sensitive to the presence of outliers. From our observations, we conclude that MIDAS and Partek® GS produce complementary results, and discoveries from both analyses should be considered. PMID:23675234
Characterization of Tubing from Advanced ODS alloy (FCRD-NFA1)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maloy, Stuart Andrew; Aydogan, Eda; Anderoglu, Osman
2016-09-20
Fabrication methods are being developed and tested for producing fuel clad tubing of the advanced ODS 14YWT and FCRD-NFA1 ferritic alloys. Three fabrication methods were based on plastically deforming a machined thick-wall tube sample of the ODS alloys by pilgering, hydrostatic extrusion, or drawing to decrease the outer diameter and wall thickness and increase the length of the final tube. The fourth fabrication method was an additive manufacturing approach involving solid-state spray deposition (SSSD) of ball-milled and annealed 14YWT powder to produce thin-wall tubes. Of the four fabrication methods, two were successful at producing tubing for further characterization: high-velocity oxy-fuel spray forming and high-temperature hydrostatic extrusion. The characterization described here shows, through neutron diffraction, the texture produced during extrusion while the beneficial oxide dispersion is maintained. In this research, the parameters of the innovative thermal spray deposition and hot extrusion processing methods were developed to produce final nanostructured ferritic alloy (NFA) tubes with approximately 0.5 mm wall thickness, and the effect of different processing routes on texture and grain boundary characteristics was investigated. It was found that hydrostatic extrusion results in a combination of plane-strain and shear deformation, which generates rolling textures of α- and γ-fibers on {001}<110> and {111}<110> together with a shear texture of ζ-fiber on {011}<211> and {011}<011>. On the other hand, multi-step plane-strain deformation in cross directions leads to strong rolling textures of θ- and ε-fibers on {001}<110> together with a weak γ-fiber on {111}<112>. Even though the amount of equivalent strain is similar, shear deformation leads to much lower texture indexes than the plane-strain deformations. Moreover, while 50% hot rolling produces a large number of high-angle grain boundaries (HABs), 44% shear deformation results in a large number of low-angle boundaries (LABs), indicating incomplete recrystallization.
NASA Astrophysics Data System (ADS)
Dyar, M. Darby; Giguere, Stephen; Carey, CJ; Boucher, Thomas
2016-12-01
This project examines the causes, effects, and optimization of continuum removal in laser-induced breakdown spectroscopy (LIBS) to produce the best possible prediction accuracy of elemental composition in geological samples. We compare the prediction accuracy resulting from several different baseline-removal techniques, including asymmetric least squares (ALS), adaptive iteratively reweighted penalized least squares (airPLS), fully automatic baseline correction (FABC), continuous wavelet transformation, median filtering, polynomial fitting, the iterative-thresholding Dietrich method, convex hull/rubber band techniques, and a newly developed custom baseline removal (BLR) technique. We assess the predictive performance of these methods using partial least-squares analysis for 13 elements of geological interest, expressed as the weight percentages of SiO2, Al2O3, TiO2, FeO, MgO, CaO, Na2O, and K2O, and the parts-per-million concentrations of Ni, Cr, Zn, Mn, and Co. We find that previously published methods for baseline subtraction generally produce equivalent prediction accuracies for major elements. When those pre-existing methods are used, automated optimization of their adjustable parameters is always necessary to wring the best predictive accuracy out of a data set; ideally, it should be done for each individual variable. The new custom BLR technique produces significant improvements in prediction accuracy over existing methods across varying geological data sets, instruments, and analytical conditions. These results also highlight the dual objectives of the continuum-removal problem: removing a smooth underlying signal to fit individual peaks (univariate analysis) versus selecting only those channels that contribute to the best prediction accuracy (multivariate analysis). Overall, the current practice of generalized, one-method-fits-all-spectra baseline removal results in poorer predictive performance for all methods. The extra steps needed to optimize baseline removal for each predicted variable, and thereby empower multivariate techniques with the best possible input data, are well worth the slight increase in computation and complexity.
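Of the techniques compared, asymmetric least squares is perhaps the most widely reimplemented. A minimal sketch of the classic Eilers-Boelens ALS smoother follows; the parameter values are typical defaults, not the per-variable optimized settings the study argues for:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def als_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Asymmetric least squares baseline estimate (Eilers & Boelens)."""
    n = len(y)
    # Second-difference penalty matrix enforcing a smooth baseline.
    D = sparse.diags([1.0, -2.0, 1.0], [0, -1, -2], shape=(n, n - 2))
    w = np.ones(n)
    for _ in range(n_iter):
        W = sparse.spdiags(w, 0, n, n)
        z = spsolve(W + lam * D @ D.T, w * y)
        # Asymmetry: points above the fit (peaks) get weight p, points below get 1-p,
        # so the baseline hugs the lower envelope of the spectrum.
        w = p * (y > z) + (1 - p) * (y < z)
    return z

# Usage on a synthetic spectrum: subtract the estimated continuum before regression.
x = np.linspace(0, 100, 500)
spectrum = 0.02 * x + np.exp(-0.5 * ((x - 40) / 1.5) ** 2)
corrected = spectrum - als_baseline(spectrum)
```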
A comparison of analysis methods to estimate contingency strength.
Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T
2018-05-09
To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.
Vegetative propagation of Cecropia obtusifolia (Cecropiaceae).
LaPierre, L M
2001-01-01
Cecropia is a relatively well-known and well-studied genus in the Neotropics. Methods for the successful propagation of C. obtusifolia Bertoloni, 1840 from cuttings and by air layering are described, and the results of an experiment testing the effect of two auxins, naphthalene acetic acid (NAA) and indole butyric acid (IBA), on adventitious root production in cuttings are presented. In general, C. obtusifolia cuttings respond well to adventitious root production (58.3% of cuttings survived to root), but air layering was the better method (93% survived to root). The auxin concentrations used resulted in significantly lower overall root quality compared with cuttings that received no auxin treatment. Future experiments using Cecropia could benefit from the use of isogenic plants produced by vegetative propagation.
Visualizing geoelectric - Hydrogeological parameters of Fadak farm at Najaf Ashraf by using 2D spa
NASA Astrophysics Data System (ADS)
Al-Khafaji, Wadhah Mahmood Shakir; Al-Dabbagh, Hayder Abdul Zahra
2016-12-01
A geophysical survey was carried out to produce gridded electrical resistivity data from 23 Schlumberger vertical electrical sounding (VES) points distributed across the Fadak farm area in Najaf Ashraf province, Iraq. The current research deals with the application of six interpolation methods to delineate subsurface groundwater aquifer properties, one example being the delineation of zones of high and low groundwater hydraulic conductivity (K). Such methods can be useful for predicting high-K zones and groundwater flow directions within the studied aquifer. The interpolation methods were helpful in predicting several hydrogeological and structural characteristics of the aquifer, and the results yielded some important conclusions for future groundwater development.
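As an illustration of gridding scattered VES-derived values (the study's six specific interpolation methods are not named in this abstract), scipy's griddata offers three of the common choices; the coordinates and K values below are synthetic placeholders:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
# Hypothetical VES-derived hydraulic conductivity at 23 scattered sounding points.
x = rng.uniform(0, 1000, 23)       # easting (m)
y = rng.uniform(0, 1000, 23)       # northing (m)
k = rng.lognormal(0.0, 1.0, 23)    # hydraulic conductivity (m/day)

# 200 x 200 target grid; linear/cubic return NaN outside the convex hull of the points.
gx, gy = np.mgrid[0:1000:200j, 0:1000:200j]
grids = {m: griddata((x, y), k, (gx, gy), method=m)
         for m in ("nearest", "linear", "cubic")}
```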
Critical Evaluation of Soil Pore Water Extraction Methods on a Natural Soil
NASA Astrophysics Data System (ADS)
Orlowski, Natalie; Pratt, Dyan; Breuer, Lutz; McDonnell, Jeffrey
2017-04-01
Soil pore water extraction is an important component of ecohydrological studies for the measurement of δ2H and δ18O, yet the effect of the extraction technique on the resulting isotopic signature is poorly understood. Here we present results of an intercomparison of commonly applied lab-based soil water extraction techniques on a natural soil: high-pressure mechanical squeezing, centrifugation, direct vapor equilibration, microwave extraction, and two types of cryogenic extraction systems. We applied these extraction methods to a natural summer-dry (gravimetric water contents ranging from 8% to 15%), glacio-lacustrine, moderately fine-textured clayey soil, excavated in 10 cm sampling increments to a depth of 1 meter. Isotope results were analyzed via OA-ICOS and compared for each extraction technique that produced liquid water. From our previous intercomparison of the same extraction techniques on standard soils, we discovered that the extraction methods are not comparable. We therefore tested the null hypothesis that all extraction techniques would replicate, in a comparable manner, the natural evaporation front occurring in a summer-dry soil. Our results showed that the extraction technique had a significant effect on the soil water isotopic composition. High-pressure mechanical squeezing and vapor equilibration produced similar results with similarly sloped evaporation lines. Owing to the soil properties and dryness, centrifugation was unsuccessful in obtaining pore water for isotopic analysis. The two cryogenic extraction techniques produced results similar to each other, on a similarly sloped evaporation line, but dissimilar with depth.
Performance of CarbaNP and CIM tests in OXA-48 carbapenemase-producing Enterobacteriaceae.
Yıldız, Serap Süzük; Kaşkatepe, Banu; Avcıküçük, Havva; Öztürk, Şükran
2017-03-01
This study applied two phenotypic tests, the Carbapenemase Nordmann-Poirel (CarbaNP) test and the carbapenem inactivation method (CIM), to isolates carrying carbapenem resistance genes. The study included 83 carbapenem-resistant Enterobacteriaceae isolates producing oxacillinase-48 (OXA-48) and 30 carbapenem-sensitive Enterobacteriaceae isolates. Of the isolates studied, 77 (92.77%) were identified as Klebsiella pneumoniae and six (7.23%) as Escherichia coli by matrix-assisted laser desorption ionization-time of flight mass spectrometry. Polymerase chain reaction (PCR) detection of resistance genes found that 74 isolates (89.16%) produced OXA-48 carbapenemase, whereas nine isolates (10.84%) produced both OXA-48 and New Delhi metallo-beta-lactamase-1 (NDM-1). The isolates producing both OXA-48 and NDM-1 were positive by both phenotypic tests. Among isolates carrying the blaOXA-48 gene alone, nine isolates (13.04%) gave false-negative results in the CarbaNP test and two isolates (2.90%) in the CIM test. The sensitivity of the CarbaNP and CIM tests was 89.16% and 97.59%, respectively, whereas the specificity was 100% for both tests. These findings suggest that the CarbaNP and CIM tests are useful tools for identifying carbapenemase producers. Molecular methods such as PCR are recommended to verify false-negative tests predicted to have OXA-48 activity.
Prospect of stem cell conditioned medium in regenerative medicine.
Pawitan, Jeanne Adiwinata
2014-01-01
Stem cell-derived conditioned medium (CM) has a promising prospect of being produced as a pharmaceutical for regenerative medicine. Objective: To investigate the various methods used to obtain stem cell-derived CM, to gain insight into their prospective applications in various diseases. Methods: Systematic review using the keywords "stem cell" and "conditioned medium" or "secretome" and "therapy." Data concerning the treated conditions/diseases, the type of cell cultured, the medium and supplements used to culture the cells, the culture conditions, CM processing, the growth factors and other secreted factors analyzed, the method of application, and the outcome were noted, grouped, tabulated, and analyzed. Results: Most studies using CM showed good results. However, the various CM preparations, even when derived from the same kind of cells, were produced under different conditions, that is, from different passages and with different culture media and culture conditions. The growth factor yields of the various cell types were available in some studies, and the cell number needed to produce the CM for one application could be computed. Conclusion: Various stem cell-derived conditioned media have been tested on various diseases, mostly with good results; however, standardized production methods and validations of their use still need to be established.
Dynamically Evolving Sectors for Convective Weather Impact
NASA Technical Reports Server (NTRS)
Drew, Michael C.
2010-01-01
A new strategy for altering existing sector boundaries in response to blocking convective weather is presented. This method seeks to restore the reduced capacity of sectors directly affected by weather by moving boundaries in the direction that offers the greatest capacity improvement. The boundary deformations are shared by neighboring sectors within the region in a manner that preserves their shapes and sizes as much as possible, reducing the controller workload involved in learning new sector designs. The algorithm that produces the altered sectors is based on a force-deflection mesh model that needs only nominal traffic patterns and the shape of the blocking weather as input; it does not require weather-affected traffic patterns that would have to be predicted by simulation. Compared with an existing optimal sector design method, the sectors produced by the new algorithm are more similar to the original sector shapes and may therefore be more suitable for operational use, because the change is less drastic. Preliminary results also show that this method produces sectors that can equitably distribute the workload of rerouted weather-affected traffic throughout the region where inclement weather is present, as demonstrated by sector aircraft count distributions of simulated traffic in weather-affected regions.
Nabok, Alexei; Davis, Frank; Higson, Séamus P J
2016-01-01
In this paper we detail a novel semi-automated method for the production of graphene by sonochemical exfoliation of graphite in the presence of ionic surfactants, e.g., sodium dodecyl sulfate (SDS) and cetyltrimethylammonium bromide (CTAB). The formation of individual graphene flakes was confirmed by Raman spectroscopy, while the interaction of graphene with surfactants was proven by NMR spectroscopy. The resulting graphene–surfactant composite material formed a stable suspension in water and some organic solvents, such as chloroform. Graphene thin films were then produced using Langmuir–Blodgett (LB) or electrostatic layer-by-layer (LbL) deposition techniques. The composition and morphology of the films produced were studied with SEM/EDX and AFM. The best results in terms of adhesion and surface coverage were achieved using LbL deposition of graphene(−)SDS alternated with polyethyleneimine (PEI). The optical study of graphene thin films deposited on different substrates was carried out using UV–vis absorption spectroscopy and spectroscopic ellipsometry. A particular focus was on graphene layers deposited on gold-coated glass, studied using total internal reflection ellipsometry (TIRE), which revealed the enhancement of the surface plasmon resonance in thin gold films upon deposition of graphene layers. PMID:26977378
Batchwise dyeing of bamboo cellulose fabric with reactive dye using ultrasonic energy.
Larik, Safdar Ali; Khatri, Awais; Ali, Shamshad; Kim, Seong Hun
2015-05-01
Bamboo is a regenerated cellulose fiber usually dyed with reactive dyes. This paper presents results of the batchwise dyeing of bamboo fabric with reactive dyes by ultrasonic (US) and conventional (CN) dyeing methods. The study focused on comparing the two methods in terms of dyeing results, chemicals, temperature and time, and effluent quality. Two widely used dyes, CI Reactive Black 5 (bis-sulphatoethylsulphone) and CI Reactive Red 147 (difluorochloropyrimidine), were used. The US dyeing method produced around 5-6% higher color yield (K/S) than the CN method. Significant savings were realized in fixation temperature (10°C) and time (15 min) and in the amounts of salt (10 g/L) and alkali (0.5-1% on mass of fiber). Moreover, the dyeing effluent showed considerable reductions in total dissolved solids content (minimum around 29%) and chemical oxygen demand (minimum around 13%) for the US dyebath compared with the CN dyebath. Colorfastness tests gave similar results for the US and CN dyeing methods, and examination under a field-emission scanning electron microscope revealed that the US energy did not alter the surface morphology of the bamboo fibers. It was concluded that US dyeing of bamboo fabric produces better dyeing results and is more economical and environmentally sustainable than CN dyeing. Copyright © 2014 Elsevier B.V. All rights reserved.
Fermentation and chemical treatment of pulp and paper mill sludge
Lee, Yoon Y; Wang, Wei; Kang, Li
2014-12-02
A method of chemically treating partially de-ashed pulp and/or paper mill sludge to obtain products of value comprising taking a sample of primary sludge from a Kraft paper mill process, partially de-ashing the primary sludge by physical means, and further treating the primary sludge to obtain the products of value, including further treating the resulting sludge and using the resulting sludge as a substrate to produce cellulase in an efficient manner using the resulting sludge as the only carbon source and mixtures of inorganic salts as the primary nitrogen source, and including further treating the resulting sludge and using the resulting sludge to produce ethanol.
Balance Contrast Enhancement using piecewise linear stretching
NASA Astrophysics Data System (ADS)
Rahavan, R. V.; Govil, R. C.
1993-04-01
Balance contrast enhancement is one of the techniques employed to produce color composites with increased color contrast; it equalizes the range and mean of the three images used for color composition. Here, it is shown that piecewise linear stretching can be used to perform the balance contrast enhancement. In comparison with the balance contrast enhancement technique that uses a parabolic segment as the transfer function (BCETP), the method presented here is algorithmically simple and constraint-free, and it produces comparable results.
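A minimal sketch of the piecewise linear idea: each band's minimum, mean, and maximum are pinned to common targets with two linear segments, which approximately equalizes the range and mean across the three bands. The target values and the handling of degenerate bands are assumptions of this sketch, not details from the paper:

```python
import numpy as np

def bce_piecewise(band, lo=0.0, mean_t=127.5, hi=255.0):
    """Two-segment linear stretch: [min, mean] -> [lo, mean_t], [mean, max] -> [mean_t, hi].
    Assumes a non-degenerate band (min < mean < max)."""
    b = band.astype(float)
    bmin, bmean, bmax = b.min(), b.mean(), b.max()
    return np.where(
        b <= bmean,
        lo + (b - bmin) * (mean_t - lo) / (bmean - bmin),
        mean_t + (b - bmean) * (hi - mean_t) / (bmax - bmean),
    )

# Apply the same targets to each of the three bands before color composition.
rng = np.random.default_rng(1)
composite = np.dstack([bce_piecewise(rng.normal(100 + 20 * i, 15, (64, 64)))
                       for i in range(3)])
```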
METHOD FOR PRODUCING THORIUM TETRACHLORIDE
Mason, E.A.; Cobb, C.M.
1960-03-15
A process for producing thorium tetrachloride from thorium concentrate comprises reacting the thorium concentrate with a carbonaceous reducing agent in excess of 0.05 part by weight per part of thoriferous concentrate at a temperature in excess of 1300 deg C, cooling and comminuting the mass, chlorinating the resulting comminuted mass by suspending it in a gaseous chlorinating agent in a fluidized reactor at a temperature maintained between about 185 deg C and 770 deg C, and removing the resulting solid ThCl4 from the reaction zone.
Process for preparing energetic materials
Simpson, Randall L [Livermore, CA; Lee, Ronald S [Livermore, CA; Tillotson, Thomas M [Tracy, CA; Hrubesh, Lawrence W [Pleasanton, CA; Swansiger, Rosalind W [Livermore, CA; Fox, Glenn A [Livermore, CA
2011-12-13
Sol-gel chemistry is used for the preparation of energetic materials (explosives, propellants and pyrotechnics) with improved homogeneity, and/or which can be cast to near-net shape, and/or made into precision molding powders. The sol-gel method is a synthetic chemical process where reactive monomers are mixed into a solution, polymerization occurs leading to a highly cross-linked three dimensional solid network resulting in a gel. The energetic materials can be incorporated during the formation of the solution or during the gel stage of the process. The composition, pore, and primary particle sizes, gel time, surface areas, and density may be tailored and controlled by the solution chemistry. The gel is then dried using supercritical extraction to produce a highly porous low density aerogel or by controlled slow evaporation to produce a xerogel. Applying stress during the extraction phase can result in high density materials. Thus, the sol-gel method can be used for precision detonator explosive manufacturing as well as producing precision explosives, propellants, and pyrotechnics, along with high power composite energetic materials.
Sol-Gel Manufactured Energetic Materials
Simpson, Randall L.; Lee, Ronald S.; Tillotson, Thomas M.; Hrubesh, Lawrence W.; Swansiger, Rosalind W.; Fox, Glenn A.
2005-05-17
Sol-gel chemistry is used for the preparation of energetic materials (explosives, propellants and pyrotechnics) with improved homogeneity, and/or which can be cast to near-net shape, and/or made into precision molding powders. The sol-gel method is a synthetic chemical process where reactive monomers are mixed into a solution, polymerization occurs leading to a highly cross-linked three dimensional solid network resulting in a gel. The energetic materials can be incorporated during the formation of the solution or during the gel stage of the process. The composition, pore, and primary particle sizes, gel time, surface areas, and density may be tailored and controlled by the solution chemistry. The gel is then dried using supercritical extraction to produce a highly porous low density aerogel or by controlled slow evaporation to produce a xerogel. Applying stress during the extraction phase can result in high density materials. Thus, the sol-gel method can be used for precision detonator explosive manufacturing as well as producing precision explosives, propellants, and pyrotechnics, along with high power composite energetic materials.
Tang, P; Brouwers, H J H
2017-04-01
The cold-bonding pelletizing technique is applied in this study as an integrated method to recycle municipal solid waste incineration (MSWI) bottom ash fines (BAF, 0-2 mm) and several other industrial powder wastes. Artificial lightweight aggregates were produced successfully from combinations of these solid wastes, and their properties were investigated and compared with results reported in the literature. Additionally, methods for improving the aggregate properties are suggested; the corresponding experimental results show that increasing the BAF amount, a higher binder content, and the addition of polypropylene fibres can improve the pellet properties (bulk density, crushing resistance, etc.). The mechanisms behind the improvement of the pellet properties are discussed. Furthermore, the leaching behaviour of contaminants from the produced aggregates was investigated and compared with Dutch environmental legislation, and applications of the produced artificial lightweight aggregates are proposed according to their properties. Copyright © 2017 Elsevier Ltd. All rights reserved.
Reverberation Modelling Using a Parabolic Equation Method
2012-10-01
the limits of their applicability. Results: Transmission loss estimates produced by the PECan parabolic equation acoustic model were used in... environments is possible when used in concert with a parabolic equation passive acoustic model. Future plans: The authors of this report recommend further... technique using other types of acoustic models should be undertaken. Furthermore, as the current method when applied as-is results in estimates that reflect
The Effects of Three Methods of Observation on Couples in Interactional Research.
ERIC Educational Resources Information Center
Carpenter, Linda J.; Merkel, William T.
1988-01-01
Assessed the effects of three different methods of observation of couples (one-way mirror, audio recording, and video recording) on 30 volunteer, nonclinical married couples. Results suggest that types of observation do not produce significantly different effects on nonclinical couples. (Author/ABL)
Sohail, Muhammad; Latif, Zakia
2016-01-01
Background: Keeping in mind the commercial application of polygalacturonase (PG) in the juice and beverage industry, bacterial strains were isolated from rotten fruits and vegetables to screen for competent producers of PG. Objectives: In this study, the plate method was used for preliminary screening of polygalacturonase-producing bacteria, while the dinitrosalicylic acid (DNS) method was used to quantify PG. Materials and Methods: Biochemically identified polygalacturonase-producing Bacillus and Pseudomonas species were further characterized by molecular markers, and the genetic diversity among the selected strains was analyzed by investigating the distribution of microsatellites in their genomes. Out of 110 strains, 17 competent strains of Bacillus and eight strains of Pseudomonas were selected, identified, and confirmed biochemically. Selected strains were characterized by 16S rRNA sequencing, and the data were submitted to the National Center for Biotechnology Information (NCBI) website for accession numbers. Results: Among the Bacillus strains, Bacillus vallismortis (JQ990307), isolated from mango, was the most competent producer of PG, producing up to 4.4 U/µL. Among the Pseudomonas strains, Pseudomonas aeruginosa (JQ990314), isolated from oranges, was the most competent PG producer, equivalent to B. vallismortis (JQ990307). To determine the genetic diversity of Pseudomonas and Bacillus strains varying in PG production, fingerprinting was performed on the basis of simple sequence repeats (SSRs, or microsatellites). The data were analyzed and a phylogenetic tree was constructed using the Minitab 3 software to compare bacterial isolates producing different concentrations of PG. Fingerprinting showed that the presence or absence of certain microsatellites correlated with PG production ability. Conclusions: Bacteria from biological waste are competent producers of PG and could be used on an industrial scale to meet the demand for PG in the food industry. PMID:27099686
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daleu, C. L.; Plant, R. S.; Woolnough, S. J.
Here, as part of an international intercomparison project, a set of single-column models (SCMs) and cloud-resolving models (CRMs) are run under the weak-temperature gradient (WTG) method and the damped gravity wave (DGW) method. For each model, the implementation of the WTG or DGW method involves a simulated column which is coupled to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. The simulated column has the same surface conditions as the reference state and is initialized with profiles from the reference state. We performed a systematic comparison of the behavior of different models under a consistent implementation of the WTG and DGW methods, and a systematic comparison of the WTG and DGW methods in models with different physics and numerics. CRMs and SCMs produce a variety of behaviors under both WTG and DGW methods. Some of the models reproduce the reference state while others sustain a large-scale circulation which results in either substantially lower or higher precipitation compared to the value of the reference state. CRMs show a fairly linear relationship between precipitation and circulation strength. SCMs display a wider range of behaviors than CRMs. Some SCMs under the WTG method produce zero precipitation. Within an individual SCM, a DGW simulation and a corresponding WTG simulation can produce circulations of different sign. When initialized with a dry troposphere, DGW simulations always result in a precipitating equilibrium state. The greatest sensitivities to the initial moisture conditions occur for multiple stable equilibria in some WTG simulations, corresponding to either a dry equilibrium state when initialized as dry or a precipitating equilibrium state when initialized as moist. Multiple equilibria are seen in more WTG simulations for higher SST. In some models, the existence of multiple equilibria is sensitive to some parameters in the WTG calculations.
Lee, Mi Hee; Lee, Soo Bong; Eo, Yang Dam; Kim, Sun Woong; Woo, Jung-Hun; Han, Soo Hee
2017-07-01
Landsat optical images have sufficient spatial and spectral resolution to analyze vegetation growth characteristics. However, clouds and water vapor often degrade image quality, which limits the availability of usable images for time-series vegetation vitality measurements. To overcome this shortcoming, simulated images are used as an alternative. In this study, the weighted average method, the spatial and temporal adaptive reflectance fusion model (STARFM) method, and the multilinear regression analysis method were tested to produce simulated Landsat normalized difference vegetation index (NDVI) images of the Korean Peninsula. The test results showed that the weighted average method produced the images most similar to the actual images, provided that input images were available within one month before and after the target date. The STARFM method gives good results when the input image date is close to the target date, and careful regional and seasonal consideration is required in selecting input images. During the summer season, clouds make it very difficult to obtain images close enough to the target date. Multilinear regression analysis gives meaningful results even when the input image date is not close to the target date. Average R² values for the weighted average method, STARFM, and multilinear regression analysis were 0.741, 0.70, and 0.61, respectively.
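A minimal sketch of the weighted-average compositing idea in Python/NumPy; the inverse-temporal-distance weighting, function name, and array shapes are our assumptions, since the abstract does not specify the weighting function:

```python
import numpy as np

def simulate_ndvi(images, offsets_days):
    """Weighted-average simulation of an NDVI image for a target date.

    images:       list of 2D NDVI arrays acquired near the target date
    offsets_days: |acquisition date - target date| in days, one per image

    Scenes closer to the target date receive larger weights.
    """
    w = 1.0 / (np.asarray(offsets_days, dtype=float) + 1.0)  # +1 avoids /0
    w /= w.sum()
    return sum(wi * img for wi, img in zip(w, images))

# Example: two scenes acquired 10 and 25 days from the target date
before = np.random.rand(100, 100)
after = np.random.rand(100, 100)
simulated = simulate_ndvi([before, after], [10, 25])
```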
2011-01-01
was greater than 1 or less than 0. The second was a Normalized Difference Vegetation Index (NDVI) band ratio between a near-infrared band (738 nm) and... separation methods worked well, neither produced perfect results. Ultimately, the NDVI method was chosen because it could also be used to further... In addition, it is a broadly tested method often used to identify and measure vegetation (Tucker, 1979). The NDVI result was also used to separate
USDA-ARS?s Scientific Manuscript database
Non-O157 Shiga toxin-producing Escherichia coli (STEC) strains such as O26, O45, O103, O111, O121 and O145 are recognized as serious causes of outbreaks of human illness due to their toxicity. A conventional microbiological method for cell counting is laborious and requires a long time to produce results. Since ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhattacharya, Raghu N.
An electroplating solution, and a method for producing an electroplating solution, containing a gallium salt, an ionic compound and a solvent, which results in a gallium thin film that can be deposited on a substrate.
A new powder production route for transparent spinel windows: powder synthesis and window properties
NASA Astrophysics Data System (ADS)
Cook, Ronald; Kochis, Michael; Reimanis, Ivar; Kleebe, Hans-Joachim
2005-05-01
Spinel powders for the production of transparent polycrystalline ceramic windows have been produced using a number of traditional ceramic and sol-gel methods. We have demonstrated that magnesium aluminate spinel powders produced from the reaction of organo-magnesium compounds with surface modified boehmite precursors can be used to produce high quality transparent spinel parts. The new powder production method allows fine control over the starting particle size, size distribution, purity and stoichiometry. The new process involves formation of a boehmite sol-gel from the hydrolysis of aluminum alkoxides followed by surface modification of the boehmite nanoparticles using carboxylic acids. The resulting surface modified boehmite nanoparticles can then be metal exchanged at room temperature with magnesium acetylacetonate to make a precursor powder that is readily transformed into pure phase spinel.
Seikh, Asiful H; Sherif, El-Sayed M; Khan Mohammed, Sohail M A; Baig, Muneer; Alam, Mohammad Asif; Alharthi, Nabeel
2018-01-01
The aim of this study is to determine the microstructure, hardness, and corrosion resistance of a Pb-5%Sb spine alloy. The alloy was produced by high pressure die casting (HPDC), medium pressure die casting (AS) and low pressure die casting (GS) methods. The microstructure was characterized using optical microscopy and scanning electron microscopy (SEM), and the hardness was also measured. The corrosion resistance of the spines in 0.5 M H2SO4 solution was analyzed by weight loss measurements, impedance spectroscopy and potentiodynamic polarization techniques. It was found that the spine produced by HPDC has a defect-free fine-grained structure, resulting in improved hardness and excellent corrosion resistance. PMID:29668709
NASA Astrophysics Data System (ADS)
Moghim, S.; Hsu, K.; Bras, R. L.
2013-12-01
General Circulation Models (GCMs) are used to predict circulation and energy transfers between the atmosphere and the land. It is known that these models produce biased results, which affects their uses. This work proposes a new method for bias correction: the equidistant cumulative distribution function-artificial neural network (EDCDFANN) procedure. The method uses artificial neural networks (ANNs) as a surrogate model to estimate bias-corrected temperature, given an identification of the system derived from GCM output variables. A two-layer feed-forward neural network is trained with observations during a historical period, and the adjusted network can then be used to predict bias-corrected temperature for future periods. To capture extreme values, this method is combined with the equidistant CDF matching method (EDCDF, Li et al. 2010). The proposed method is tested with the Community Climate System Model (CCSM3) outputs using air and skin temperature, specific humidity, and shortwave and longwave radiation as inputs to the ANN. This method decreases the mean square error and increases the spatial correlation between the modeled and observed temperatures. The results indicate that EDCDFANN has the potential to remove biases from the model outputs.
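A toy illustration of the ANN surrogate step only (the EDCDF extreme-value correction is omitted); the synthetic data, network size, and variable names are assumptions, not the authors' configuration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical historical predictors: GCM air temperature, skin temperature,
# specific humidity, shortwave and longwave radiation (the inputs named above)
X_hist = rng.normal(size=(1000, 5))
t_obs = 0.9 * X_hist[:, 0] + 0.3 + rng.normal(scale=0.1, size=1000)  # toy obs

# Two-layer feed-forward network trained on the historical period
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(X_hist, t_obs)

# Apply the adjusted network to future-period GCM output
X_future = rng.normal(size=(100, 5))
t_corrected = ann.predict(X_future)
```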
Hashimoto, Haruo; Mizushima, Tomoko; Chijiwa, Tsuyoshi; Nakamura, Masato; Suemizu, Hiroshi
2017-06-15
The purpose of this study was to establish an efficient method for the preparation of an adeno-associated viral (AAV) vector, serotype DJ/8, carrying the GFP gene (AAV-DJ/8-GFP). We compared the yields of AAV-DJ/8 vector produced by four different combination methods, consisting of two plasmid DNA transfection methods (lipofectamine and calcium phosphate co-precipitation; CaPi) and two virus DNA purification methods (iodixanol and cesium chloride; CsCl). The results showed that the highest yield of AAV-DJ/8-GFP vector was accomplished with the combination of lipofectamine transfection and iodixanol purification. The viral protein expression levels and the transduction efficacy in HEK293 and CHO cells did not differ among the four combination methods. We confirmed that the AAV-DJ/8-GFP vector could transduce human and murine hepatocyte-derived cell lines. These results show that AAV-DJ/8-GFP, purified by the combination of lipofectamine and iodixanol, produces an efficient yield without altering the characteristics of protein expression and AAV gene transduction. Copyright © 2017 Elsevier B.V. All rights reserved.
Boehm, A B; Griffith, J; McGee, C; Edge, T A; Solo-Gabriele, H M; Whitman, R; Cao, Y; Getrich, M; Jay, J A; Ferguson, D; Goodwin, K D; Lee, C M; Madison, M; Weisberg, S B
2009-11-01
The absence of standardized methods for quantifying faecal indicator bacteria (FIB) in sand hinders comparison of results across studies. The purpose of the study was to compare methods for extraction of faecal bacteria from sands and recommend a standardized extraction technique. Twenty-two methods of extracting enterococci and Escherichia coli from sand were evaluated, including multiple permutations of hand shaking, mechanical shaking, blending, sonication, number of rinses, settling time, eluant-to-sand ratio, eluant composition, prefiltration and type of decantation. Tests were performed on sands from California, Florida and Lake Michigan. Most extraction parameters did not significantly affect bacterial enumeration. ANOVA revealed significant effects of eluant composition and blending, with both sodium metaphosphate buffer and blending producing reduced counts. The simplest extraction method that produced the highest FIB recoveries consisted of 2 min of hand shaking in phosphate-buffered saline or deionized water, a 30-s settling time, a one-rinse step and a 10:1 eluant volume to sand weight ratio. This result was consistent across the sand compositions tested in this study but could vary for other sand types. Method standardization will improve the understanding of how sands affect surface water quality.
MODELS AND MODELING METHODS FOR ASSESSING HUMAN EXPOSURE AND DOSE TO TOXIC CHEMICALS AND POLLUTANTS
This project aims to strengthen the general scientific foundation of EPA's exposure and risk assessment, management, and policy processes by developing state-of-the-art exposure to dose mathematical models and solution methods. The results of this research will be to produce a mo...
Because the current approved cultural methods for monitoring indicator bacteria in recreational water require 24 hours to produce results, the public may be exposed to potentially contaminated water before the water has been identified as hazardous. This project was initiated to...
EVALUATION OF A TEST METHOD FOR MEASURING INDOOR AIR EMISSIONS FROM DRY-PROCESS PHOTOCOPIERS
A large chamber test method for measuring indoor air emissions from office equipment was developed, evaluated, and revised based on the initial testing of four dry-process photocopiers. Because all chambers may not necessarily produce similar results (e.g., due to differences in ...
A new method for testing the scale-factor performance of fiber optical gyroscope
NASA Astrophysics Data System (ADS)
Zhao, Zhengxin; Yu, Haicheng; Li, Jing; Li, Chao; Shi, Haiyang; Zhang, Bingxin
2015-10-01
The fiber optic gyroscope (FOG) is a solid-state optical gyroscope with good environmental adaptability, which has been widely used in national defense, aviation, aerospace and other civilian areas. In some applications, a FOG experiences environmental conditions such as vacuum, radiation and vibration, and its scale-factor performance is an important accuracy indicator. However, the scale-factor performance of a FOG under these environmental conditions is difficult to test using conventional methods, because a turntable cannot operate under such conditions. Based on the observation that the physical effect produced in a FOG by a sawtooth voltage signal under static conditions is consistent with the effect produced by a turntable in uniform rotation, a new method for testing the scale-factor performance of a FOG without a turntable is proposed in this paper. In this method, the test system consists of an external operational amplifier circuit and a FOG in which the modulation signal and the Y-waveguide are disconnected. The external operational amplifier circuit superimposes the externally generated sawtooth voltage signal on the modulation signal of the FOG and applies the superimposed signal to the Y-waveguide. The test system can produce different equivalent angular velocities by changing the period of the sawtooth signal. The system model of a FOG superimposed with an externally generated sawtooth is analyzed, leading to the conclusion that the equivalent input angular velocity produced by the sawtooth voltage signal is consistent with the input angular velocity produced by a turntable. The relationship between the equivalent angular velocity and parameters such as the sawtooth period is presented, and a correction method for the equivalent angular velocity is derived by analyzing the influence of each parameter error. A comparative experiment between the proposed method and turntable calibration was conducted, and the scale-factor performance results for the same FOG were consistent between the two methods. With the proposed method, the input angular velocity is the equivalent effect produced by a sawtooth voltage signal, so no turntable is needed to produce mechanical rotation, and the method can be used to test FOG performance under ambient conditions in which a turntable cannot operate.
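The claimed equivalence follows from standard FOG relations; a hedged sketch of the derivation, in our own notation rather than the paper's:

```latex
% Sagnac phase for rotation rate \Omega (fibre length L, coil diameter D,
% wavelength \lambda, vacuum light speed c):
\Delta\phi_s = \frac{2\pi L D}{\lambda c}\,\Omega .
% A phase ramp of slope r (rad/s) applied at the Y-waveguide is seen by the
% counter-propagating waves with relative delay \tau = nL/c (n = fibre index):
\Delta\phi_{\mathrm{ramp}} = r\,\tau = \frac{r\,n L}{c} .
% Equating the two phase differences gives the equivalent rotation rate;
% for a sawtooth of amplitude 2\pi and period T, r = 2\pi/T:
\Omega_{\mathrm{eq}} = \frac{n\lambda}{2\pi D}\,r = \frac{n\lambda}{D\,T} .
```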
MEM application to IRAS CPC images
NASA Technical Reports Server (NTRS)
Marston, A. P.
1994-01-01
A method for applying the Maximum Entropy Method (MEM) to Chopped Photometric Channel (CPC) IRAS additional observations is illustrated. The original CPC data suffered from problems with repeatability, which MEM is able to cope with by use of a noise image produced from the results of separate data scans of the objects. The process produces images of small areas of sky with circular Gaussian beams of approximately 30 arcsec full width at half maximum resolution at 50 and 100 microns. Comparison is made to previous reconstructions made in the far-infrared, as well as to morphologies of the objects at other wavelengths. Some projects with this dataset are discussed.
Metals purification by improved vacuum arc remelting
Zanner, Frank J.; Williamson, Rodney L.; Smith, Mark F.
1994-12-13
The invention relates to improved apparatuses and methods for remelting metal alloys in furnaces, particularly consumable electrode vacuum arc furnaces. Excited reactive gas is injected into a stationary furnace arc zone, thus accelerating the reduction reactions which purify the metal being melted. Additionally, a cooled condensation surface is disposed within the furnace to reduce the partial pressure of water in the furnace, which also fosters the reduction reactions which result in a purer produced ingot. Methods and means are provided for maintaining the stationary arc zone, thereby reducing the opportunity for contaminants evaporated from the arc zone to be reintroduced into the produced ingot.
Methods of pretreating comminuted cellulosic material with carbonate-containing solutions
Francis, Raymond
2012-11-06
Methods of pretreating comminuted cellulosic material with an acidic solution and then a carbonate-containing solution to produce a pretreated cellulosic material are provided. The pretreated material may then be further treated in a pulping process, for example, a soda-anthraquinone pulping process, to produce a cellulose pulp. The pretreatment solutions may be extracted from the pretreated cellulose material and selectively re-used, for example, with acid or alkali addition, for the pretreatment solutions. The resulting cellulose pulp is characterized by having reduced lignin content and increased yield compared to prior art treatment processes.
Titanium aluminide intermetallic alloys with improved wear resistance
Qu, Jun; Lin, Hua-Tay; Blau, Peter J.; Sikka, Vinod K.
2014-07-08
The invention is directed to a method for producing a titanium aluminide intermetallic alloy composition having an improved wear resistance, the method comprising heating a titanium aluminide intermetallic alloy material in an oxygen-containing environment at a temperature and for a time sufficient to produce a top oxide layer and underlying oxygen-diffused layer, followed by removal of the top oxide layer such that the oxygen-diffused layer is exposed. The invention is also directed to the resulting oxygen-diffused titanium aluminide intermetallic alloy, as well as mechanical components or devices containing the improved alloy composition.
NASA Astrophysics Data System (ADS)
Burton-Johnson, Alex; Halpin, Jacqueline; Whittaker, Joanne; Watson, Sally
2017-04-01
Seismic and magnetic geophysical methods have both been employed to produce estimates of heat flux beneath the Antarctic ice sheet. However, both methods use a homogeneous upper crustal model despite the variable concentration of heat-producing elements within its composite lithologies. Using geological and geochemical datasets from the Antarctic Peninsula, we have developed a new methodology for incorporating upper crustal heat production in heat flux models and have shown the greater variability this introduces into estimates of crustal heat flux, with implications for glaciological modelling.
A temperature match based optimization method for daily load prediction considering DLC effect
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Z.
This paper presents a unique optimization method for short-term load forecasting. The new method is based on the optimal template temperature match between future and past temperatures. The optimal error reduction technique is a new concept introduced in this paper. Two case studies show that for hourly load forecasting, this method can yield results as good as the rather complicated Box-Jenkins transfer function method, and better than the Box-Jenkins method; for peak load prediction, this method is comparable in accuracy to the neural network method with back propagation, and can produce more accurate results than the multi-linear regression method. The DLC effect on system load is also considered in this method.
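A compact sketch of the template-match idea (the paper's optimal error-reduction step and DLC adjustment are not reproduced; the names and the L2 matching criterion are assumptions):

```python
import numpy as np

def forecast_load(temp_forecast, past_temps, past_loads):
    """Pick the historical day whose hourly temperature profile best
    matches the forecast profile and use its load as the base prediction.

    temp_forecast: (24,) hourly temperature forecast for the target day
    past_temps:    (n_days, 24) historical hourly temperatures
    past_loads:    (n_days, 24) historical hourly loads
    """
    errors = np.linalg.norm(past_temps - temp_forecast, axis=1)
    return past_loads[np.argmin(errors)]
```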
NASA Astrophysics Data System (ADS)
Oliveira, Sérgio C.; Zêzere, José L.; Lajas, Sara; Melo, Raquel
2017-07-01
Approaches used to assess shallow slide susceptibility at the basin scale are conceptually different depending on the use of statistical or physically based methods. The former are based on the assumption that the same causes are more likely to produce the same effects, whereas the latter are based on the comparison between forces which tend to promote movement along the slope and the counteracting forces that are resistant to motion. Within this general framework, this work tests two hypotheses: (i) although conceptually and methodologically distinct, the statistical and deterministic methods generate similar shallow slide susceptibility results regarding the model's predictive capacity and spatial agreement; and (ii) the combination of shallow slide susceptibility maps obtained with statistical and physically based methods, for the same study area, generate a more reliable susceptibility model for shallow slide occurrence. These hypotheses were tested at a small test site (13.9 km²) located north of Lisbon (Portugal), using a statistical method (the information value method, IV) and a physically based method (the infinite slope method, IS). The landslide susceptibility maps produced with the statistical and deterministic methods were combined into a new landslide susceptibility map. The latter was based on a set of integration rules defined by the cross tabulation of the susceptibility classes of both maps and analysis of the corresponding contingency tables. The results demonstrate a higher predictive capacity of the new shallow slide susceptibility map, which combines the independent results obtained with statistical and physically based models. Moreover, the combination of the two models allowed the identification of areas where the results of the information value and the infinite slope methods are contradictory. Thus, these areas were classified as uncertain and deserve additional investigation at a more detailed scale.
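For reference, the information value (IV) statistic underlying the statistical model has a standard closed form; a small sketch (variable names are ours):

```python
import numpy as np

def information_value(slide_cells, class_cells, total_slides, total_cells):
    """IV_i = ln((S_i/N_i) / (S/N)): the log-ratio of landslide density
    within factor class i to the density over the whole study area.
    Positive scores mark classes favourable to shallow sliding."""
    return np.log((slide_cells / class_cells) / (total_slides / total_cells))

# Example: a slope class with 120 slide cells out of 5,000, in a study area
# with 800 slide cells out of 100,000 -> IV = ln(3) ~ 1.10 (favourable)
iv = information_value(120, 5_000, 800, 100_000)
```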
NASA Astrophysics Data System (ADS)
Tingberg, Anders Martin
Optimisation in diagnostic radiology requires accurate methods for determination of patient absorbed dose and clinical image quality. Simple methods for evaluation of clinical image quality are at present scarce, and this project aims at developing such methods. Two methods are used and further developed: fulfillment of image criteria (IC) and visual grading analysis (VGA). Clinical image quality descriptors are defined based on these two methods: the image criteria score (ICS) and the visual grading analysis score (VGAS), respectively. For both methods the basis is the Image Criteria of the "European Guidelines on Quality Criteria for Diagnostic Radiographic Images". Both methods have proved to be useful for evaluation of clinical image quality. The two methods complement each other: IC is an absolute method, which means that the quality of images of different patients and produced with different radiographic techniques can be compared with each other. The separating power of IC is, however, weaker than that of VGA. VGA is the best method for comparing images produced with different radiographic techniques and has strong separating power, but the results are relative, since the quality of an image is compared to the quality of a reference image. The usefulness of the two methods has been verified by comparing the results from both of them with results from a generally accepted method for evaluation of clinical image quality, receiver operating characteristics (ROC). The results of the comparison between the two methods based on visibility of anatomical structures and the method based on detection of pathological structures (free-response forced error) indicate that the former two methods can be used for evaluation of clinical image quality as efficiently as the method based on ROC. More studies are, however, needed to draw a general conclusion, including studies of other organs, using other radiographic techniques, etc. The results of the experimental evaluation of clinical image quality are compared with physical quantities calculated with a theoretical model based on a voxel phantom, and correlations are found. The results demonstrate that the computer model can be a useful tool in planning further experimental studies.
Ngo, Tuan Anh; Lu, Zhi; Carneiro, Gustavo
2017-01-01
We introduce a new methodology that combines deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance (MR) data. This combination is relevant for segmentation problems, where the visual object of interest presents large shape and appearance variations, but the annotated training set is small, which is the case for various medical image analysis applications, including the one considered in this paper. In particular, level set methods are based on shape and appearance terms that use small training sets, but present limitations for modelling the visual object variations. Deep learning methods can model such variations using relatively small amounts of annotated training, but they often need to be regularised to produce good generalisation. Therefore, the combination of these methods brings together the advantages of both approaches, producing a methodology that needs small training sets and produces accurate segmentation results. We test our methodology on the MICCAI 2009 left ventricle segmentation challenge database (containing 15 sequences for training, 15 for validation and 15 for testing), where our approach achieves the most accurate results in the semi-automated problem and state-of-the-art results for the fully automated challenge. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
Prevalence and Characterization of High Histamine-Producing Bacteria in Gulf of Mexico Fish Species.
Bjornsdottir-Butler, Kristin; Bowers, John C; Benner, Ronald A
2015-07-01
Recent developments in detection and enumeration of histamine-producing bacteria (HPB) have created powerful molecular-based tools to better understand the presence of spoilage bacteria and conditions resulting in increased risk of scombrotoxin fish poisoning. We examined 235 scombrotoxin-forming fish from the Gulf of Mexico for the presence of high HPB. Photobacterium damselae subsp. damselae was the most prevalent HPB (49%), followed by Morganella morganii (14%), Enterobacter aerogenes (4%), and Raoultella planticola (3%). The growth characteristics and histamine production capabilities of the two most prevalent HPB were further examined. M. morganii and P. damselae had optimum growth at 35°C and 30 to 35°C and at 0 to 2% and 1 to 3% NaCl, respectively. P. damselae produced significantly higher histamine (P < 0.001) than M. morganii in inoculated mahimahi and Spanish mackerel incubated at 30°C for 24 h, but histamine production was not significantly different between the two HPB in inoculated tuna, possibly due to differences in muscle composition and salt content. Results in this study showed that P. damselae was the most prevalent high HPB in Gulf of Mexico fish. In addition, previously reported results using the traditional Niven's method may underreport the prevalence of P. damselae. Molecular-based methods should be used in addition to culture-based methods to enhance detection and enumeration of HPB.
NASA Astrophysics Data System (ADS)
Karlita, Tita; Yuniarno, Eko Mulyanto; Purnama, I. Ketut Eddy; Purnomo, Mauridhi Hery
2017-06-01
Analyzing ultrasound (US) images to get the shapes and structures of particular anatomical regions is an interesting field of study, since US imaging is a non-invasive method to capture internal structures of a human body. However, bone segmentation of US images is still challenging because it is strongly influenced by speckle noise and the images have poor quality. This paper proposes a combination of local phase symmetry and quadratic polynomial fitting methods to extract the bone outer contour (BOC) from two-dimensional (2D) B-mode US images as an initial step towards three-dimensional (3D) bone surface reconstruction. Using local phase symmetry, the bone is initially extracted from the US images. The BOC is then extracted by scanning one pixel on the bone boundary in each column of the US images using a first-phase-feature searching method. Quadratic polynomial fitting is utilized to refine and estimate the pixel locations that fail to be detected during the extraction process. A hole-filling method is then applied, utilizing the polynomial coefficients to fill the gaps with new pixels. The proposed method is able to estimate the new pixel positions and ensures smoothness and continuity of the contour path. Evaluations are done using cow and goat bones by comparing the resulting BOCs with contours produced by manual segmentation and by Canny edge detection. The evaluation shows that our proposed method produces excellent results, with an average MSE of 0.65 before and after hole filling.
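The quadratic-fit gap filling can be sketched directly with NumPy; the NaN convention for failed detections is our assumption:

```python
import numpy as np

def fill_contour_gaps(cols, rows):
    """Fit a quadratic to the detected contour points (one row coordinate
    per image column; NaN where extraction failed) and use the polynomial
    to estimate the missing pixel positions."""
    valid = ~np.isnan(rows)
    coeffs = np.polyfit(cols[valid], rows[valid], 2)   # quadratic fit
    filled = rows.copy()
    filled[~valid] = np.polyval(coeffs, cols[~valid])  # fill the gaps
    return filled

cols = np.arange(10, dtype=float)
rows = np.array([4.0, 4.4, np.nan, 5.4, 5.6, np.nan, 5.6, 5.3, 4.9, 4.3])
contour = fill_contour_gaps(cols, rows)
```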
Microorganism Identification Based On MALDI-TOF-MS Fingerprints
NASA Astrophysics Data System (ADS)
Elssner, Thomas; Kostrzewa, Markus; Maier, Thomas; Kruppa, Gary
Advances in MALDI-TOF mass spectrometry have enabled the development of a rapid, accurate and specific method for the identification of bacteria directly from colonies picked from culture plates, which we have named the MALDI Biotyper. The picked colonies are placed on a target plate, a drop of matrix solution is added, and a pattern of protein molecular weights and intensities, "the protein fingerprint" of the bacteria, is produced by the MALDI-TOF mass spectrometer. The obtained protein mass fingerprint, representing a molecular signature of the microorganism, is then matched against a database containing a library of previously measured protein mass fingerprints, and scores for the match to every library entry are produced. An ID is obtained if a score is returned over a pre-set threshold. The sensitivity of the technique is such that only approximately 10⁴ bacterial cells are needed, meaning that an overnight culture is sufficient, and the results are obtained in minutes after culture. The improvement in time to result over biochemical methods, and the capability to perform a non-targeted identification of bacteria and spores, potentially make this method suitable for use in the detect-to-treat timeframe in a bioterrorism event. In the case of white-powder samples, the infectious spore is present in sufficient quantity in the powder so that the MALDI Biotyper result can be obtained directly from the white powder, without the need for culture. While spores produce very different patterns from the vegetative colonies of the corresponding bacteria, this problem is overcome by simply including protein fingerprints of the spores in the library. Results on spores can be returned within minutes, making the method suitable for use in the "detect-to-protect" timeframe.
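The library-matching step can be pictured as scoring a binned spectrum against reference fingerprints; the cosine similarity below is a stand-in for the proprietary Biotyper score, and all names are illustrative:

```python
import numpy as np

def match_fingerprint(spectrum, library, threshold=0.8):
    """Score a measured fingerprint (intensity vector on a common m/z grid)
    against reference vectors; report an ID only above the threshold."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {name: cosine(spectrum, ref) for name, ref in library.items()}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] >= threshold else (None, scores[best])
```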
NASA Astrophysics Data System (ADS)
Bellos, Vasilis; Tsakiris, George
2016-09-01
The study presents a new hybrid method for the simulation of flood events in small catchments. It combines a physically-based two-dimensional hydrodynamic model and the hydrological unit hydrograph theory. Unit hydrographs are derived using the FLOW-R2D model, which is based on the full form of the two-dimensional Shallow Water Equations, solved by a modified McCormack numerical scheme. The method is tested on a small catchment in a suburb of Athens, Greece, for a storm event which occurred in February 2013. The catchment is divided into three friction zones, and unit hydrographs of 15 and 30 min are produced. The infiltration process is simulated by the empirical Kostiakov equation and the Green-Ampt model. The results from the implementation of the proposed hybrid method are compared with recorded data at the hydrometric station at the outlet of the catchment and with the results derived from the fully hydrodynamic model FLOW-R2D. It is concluded that, for the case studied, the proposed hybrid method produces results close to those of the fully hydrodynamic simulation at substantially shorter computational time. This finding, if further verified in a variety of case studies, can be useful in devising effective hybrid tools for two-dimensional flood simulations, which lead to accurate and considerably faster results than those achieved by fully hydrodynamic simulations.
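Once a unit hydrograph is derived, direct runoff follows from discrete convolution with the effective rainfall; the ordinates below are illustrative, not the paper's:

```python
import numpy as np

uh_30min = np.array([0.0, 0.8, 2.1, 1.4, 0.6, 0.2])  # m^3/s per mm of rain
effective_rain = np.array([1.5, 4.0, 2.5, 0.5])      # mm per 30-min step

runoff = np.convolve(effective_rain, uh_30min)       # direct runoff hydrograph
```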
Building a composite score of general practitioners' intrinsic motivation: a comparison of methods.
Sicsic, Jonathan; Le Vaillant, Marc; Franc, Carine
2014-04-01
Pay-for-performance programmes have been widely implemented in primary care, but few studies have investigated their potential adverse effects on the intrinsic motivation of general practitioners (GPs), even though intrinsic motivation may be a key determinant of quality in health care. Our aim was to compare methods for developing a composite score of GPs' intrinsic motivation and to select the one most consistent with self-reported data. Design: a postal survey of French GPs in private practice. Using a set of variables selected to characterize the dimensions of intrinsic motivation, three alternative composite scores were calculated, based on a multiple correspondence analysis (MCA), a confirmatory factor analysis (CFA) and a two-parameter logistic model (2-PLM). Weighted kappa coefficients were used to evaluate variation in GPs' ranks according to each method. The three methods produced similar results on both the estimation of the indicators' weights and the order of GP rank lists. All weighted kappa coefficients were >0.80. The CFA and 2-PLM produced the most similar results. There was little difference among the three methods' results, validating our measure of GPs' intrinsic motivation. The 2-PLM appeared theoretically and empirically more robust for establishing the intrinsic motivation score. JEL codes: C38, C43, I18.
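The rank-agreement check can be reproduced with a weighted kappa; a sketch using scikit-learn, with hypothetical decile ranks:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical decile ranks of the same GPs under two scoring methods
ranks_mca = [1, 2, 2, 3, 5, 5, 6, 7, 9, 10]
ranks_cfa = [1, 2, 3, 3, 4, 5, 6, 8, 9, 10]

# Quadratic weighting penalises large rank disagreements more heavily
kappa = cohen_kappa_score(ranks_mca, ranks_cfa, weights="quadratic")
print(kappa)  # values > 0.80 are read as strong agreement above
```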
Xanthopoulou, Panagiota; Valakos, Efstratios; Youlatos, Dionisios; Nikita, Efthymia
2018-05-01
The present study tests the accuracy of commonly adopted ageing methods based on the morphology of the pubic symphysis, auricular surface and cranial sutures. These methods are examined both in their traditional form and in the context of transition analysis using the ADBOU software, in a modern documented Greek collection consisting of 140 individuals who lived mainly in the second half of the twentieth century and come from cemeteries in the area of Athens. The auricular surface overall produced the most accurate age estimates in our material, with different methods based on this anatomical area showing varying degrees of success for different age groups. The pubic symphysis produced accurate results primarily for young adults, and the same applied to cranial sutures, but the latter appeared completely inappropriate for older individuals. The use of transition analysis through the ADBOU software provided less accurate results than the corresponding traditional ageing methods in our sample. Our results are in agreement with those obtained from validation studies based on material from across the world, but certain differences identified with other studies on Greek material highlight the importance of taking into account intra- and inter-population variability in age estimation. Copyright © 2018 Elsevier B.V. All rights reserved.
On High-Order Upwind Methods for Advection
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
2017-01-01
Schemes III (piecewise linear) and V (piecewise parabolic) of Van Leer are shown to yield identical solutions provided the initial conditions are chosen in an appropriate manner. This result is counterintuitive, since it is generally believed that piecewise linear and piecewise parabolic methods cannot produce the same solutions due to their different degrees of approximation. The result also shows a key connection between the approaches of discontinuous and continuous representations.
NASA Technical Reports Server (NTRS)
Phillips, Edward P.
1989-01-01
An experimental Round Robin on crack closure measurement and analysis was conducted to measure the opening load in fatigue crack growth tests. The Round Robin evaluated the current level of consistency of opening-load measurements among laboratories and sought to identify causes for observed inconsistency. Eleven laboratories participated in the testing of compact and middle-crack specimens. Opening-load measurements were made for crack growth at two stress-intensity-factor levels, three crack lengths, and following an overload. All opening-load measurements were based on the analysis of specimen compliance data. When all of the results reported (from all participants, all measurement methods, and all data analysis methods) for a given test condition were pooled, the range of opening loads was very large, typically spanning the lower half of the fatigue loading cycle. Part of the large scatter in the reported opening-load results was ascribed to consistent differences in results produced by the various methods used to measure specimen compliance and to evaluate the opening load from the compliance data. Another significant portion of the scatter was ascribed to lab-to-lab differences in producing the compliance data when using nominally the same method of measurement.
The effect of sampling techniques used in the multiconfigurational Ehrenfest method
NASA Astrophysics Data System (ADS)
Symonds, C.; Kattirtzi, J. A.; Shalashilin, D. V.
2018-05-01
In this paper, we compare and contrast basis set sampling techniques recently developed for use in the ab initio multiple cloning method, a direct dynamics extension to the multiconfigurational Ehrenfest approach, used recently for the quantum simulation of ultrafast photochemistry. We demonstrate that simultaneous use of basis set cloning and basis function trains can produce results which are converged to the exact quantum result. To demonstrate this, we employ these sampling methods in simulations of quantum dynamics in the spin boson model with a broad range of parameters and compare the results to accurate benchmarks.
Sample preparation of metal alloys by electric discharge machining
NASA Technical Reports Server (NTRS)
Chapman, G. B., II; Gordon, W. A.
1976-01-01
Electric discharge machining was investigated as a noncontaminating method of comminuting alloys for subsequent chemical analysis. Particulate dispersions in water were produced from bulk alloys at a rate of about 5 mg/min by using a commercially available machining instrument. The utility of this approach was demonstrated by results obtained when acidified dispersions were substituted for true acid solutions in an established spectrochemical method. The analysis results were not significantly different for the two sample forms. Particle size measurements and preliminary results from other spectrochemical methods which require direct aspiration of liquid into flame or plasma sources are reported.
Evaluation of Piloted Inputs for Onboard Frequency Response Estimation
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Martos, Borja
2013-01-01
Frequency response estimation results are presented using piloted inputs and a real-time estimation method recently developed for multisine inputs. A nonlinear simulation of the F-16 and a Piper Saratoga research aircraft were subjected to different piloted test inputs while the short period stabilator/elevator to pitch rate frequency response was estimated. Results show that the method can produce accurate results using wide-band piloted inputs instead of multisines. A new metric is introduced for evaluating which data points to include in the analysis and recommendations are provided for applying this method with piloted inputs.
A method for data handling numerical results in parallel OpenFOAM simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anton, Alin; Muntean, Sebastian
Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM® toolkit [1]. The space savings obtained with classic algorithms remain constant for more than 60 GB of floating-point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large-scale simulation results than the regular algorithms.
Interactive visual exploration and refinement of cluster assignments.
Kern, Michael; Lex, Alexander; Gehlenborg, Nils; Johnson, Chris R
2017-09-12
With ever-increasing amounts of data produced in biology research, scientists are in need of efficient data analysis methods. Cluster analysis, combined with visualization of the results, is one such method that can be used to make sense of large data volumes. At the same time, cluster analysis is known to be imperfect and depends on the choice of algorithms, parameters, and distance measures. Most clustering algorithms do not properly account for ambiguity in the source data, as records are often assigned to discrete clusters even if an assignment is unclear. While there are metrics and visualization techniques that allow analysts to compare clusterings or to judge cluster quality, there is no comprehensive method that allows analysts to evaluate, compare, and refine cluster assignments based on the source data, derived scores, and contextual data. In this paper, we introduce a method that explicitly visualizes the quality of cluster assignments, allows comparisons of clustering results and enables analysts to manually curate and refine cluster assignments. Our methods are applicable to matrix data clustered with partitional, hierarchical, and fuzzy clustering algorithms. Furthermore, we enable analysts to explore clustering results in context of other data, for example, to observe whether a clustering of genomic data results in a meaningful differentiation in phenotypes. Our methods are integrated into Caleydo StratomeX, a popular, web-based, disease subtype analysis tool. We show in a usage scenario that our approach can reveal ambiguities in cluster assignments and produce improved clusterings that better differentiate genotypes and phenotypes.
Current advances on polynomial resultant formulations
NASA Astrophysics Data System (ADS)
Sulaiman, Surajo; Aris, Nor'aini; Ahmad, Shamsatun Nahar
2017-08-01
The availability of computer algebra systems (CAS) has led to the resurrection of the resultant method for eliminating one or more variables from a polynomial system. The resultant matrix method has advantages over the Groebner basis and Ritt-Wu methods, whose complexity and storage requirements are high. This paper focuses on current resultant matrix formulations and investigates whether they are able to produce optimal resultant matrices. A determinantal formula that gives the exact resultant, or a formulation that minimizes the presence of extraneous factors, is sought whenever the conditions for its existence can be determined. We present some applications of elimination theory via resultant formulations, and examples are given to explain each of the presented settings.
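As a concrete instance of elimination via resultants, SymPy computes the Sylvester resultant directly; the polynomials below are our example, not the paper's:

```python
from sympy import symbols, resultant

x, y = symbols("x y")

f = x**2 + y**2 - 5   # circle
g = x*y - 2           # hyperbola

# Eliminate x: the resultant is a polynomial in y alone whose roots give
# the y-coordinates of the intersection points of the two curves.
r = resultant(f, g, x)
print(r)  # y**4 - 5*y**2 + 4, i.e. y in {-2, -1, 1, 2}
```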
Do enteric neurons make hypocretin?
Baumann, Christian R; Clark, Erika L; Pedersen, Nigel P; Hecht, Jonathan L; Scammell, Thomas E
2008-04-10
Hypocretins (orexins) are wake-promoting neuropeptides produced by hypothalamic neurons. These hypocretin-producing cells are lost in people with narcolepsy, possibly due to an autoimmune attack. Prior studies described hypocretin neurons in the enteric nervous system, and these cells could be an additional target of an autoimmune process. We sought to determine whether enteric hypocretin neurons are lost in narcoleptic subjects. Even though we tried several methods (including whole mounts, sectioned tissue, pre-treatment of mice with colchicine, and the use of various primary antisera), we could not identify hypocretin-producing cells in enteric nervous tissue collected from mice or normal human subjects. These results raise doubts about whether enteric neurons produce hypocretin.
Gambarini, Gianluca; Grande, Nicola Maria; Plotino, Gianluca; Somma, Francesco; Garala, Manish; De Luca, Massimo; Testarelli, Luca
2008-08-01
The aim of the present study was to investigate whether cyclic fatigue resistance is increased for nickel-titanium instruments manufactured by using new processes. This was evaluated by comparing instruments produced by using the twisted method (TF; SybronEndo, Orange, CA) and those using the M-wire alloy (GTX; Dentsply Tulsa-Dental Specialties, Tulsa, OK) with instruments produced by a traditional NiTi grinding process (K3, SybronEndo). Tests were performed with a specific cyclic fatigue device that evaluated cycles to failure of rotary instruments inside curved artificial canals. Results indicated that size 06-25 TF instruments showed a significant increase (p < 0.05) in the mean number of cycles to failure when compared with size 06-25 K3 files. Size 06-20 K3 instruments showed no significant increase (p > 0.05) in the mean number of cycles to failure when compared with size 06-20 GT series X instruments. The new manufacturing process produced nickel-titanium rotary files (TF) significantly more resistant to fatigue than instruments produced with the traditional NiTi grinding process. Instruments produced with M-wire (GTX) were not found to be more resistant to fatigue than instruments produced with the traditional NiTi grinding process.
Apparatus and method for producing fragment-free openings
Cherry, Christopher R.
2001-01-01
An apparatus and method for explosively penetrating hardened containers such as steel drums without producing metal fragmentation is disclosed. The apparatus can be used singly or in combination with water disrupters and other disablement tools. The apparatus is mounted in close proximity to the target and features a main sheet explosive that is initiated at a minimum of three equidistant points along the sheet's periphery. A buffer material is placed between the sheet explosive and the target. As a result, the metallic fragments generated by the detonation of the detonator are attenuated so that no fragments from the detonator are transferred to the target. An opening can thus be created in containers such as steel drums, through which access to the IED is obtained to defuse it with projectiles or fluids.
Laboratory Reactor for Processing Carbon-Containing Sludge
NASA Astrophysics Data System (ADS)
Korovin, I. O.; Medvedev, A. V.
2016-10-01
The paper describes a reactor for high-temperature pyrolysis of carbon-containing sludge, with the possibility of further development of an environmentally safe technology for hydrocarbon waste disposal to produce secondary products. The reactor addresses an urgent problem: preventing the environmental pollution that results from oil contamination of soils, by using the pyrolysis process as a method of disposing of hydrocarbon waste to produce secondary products.
USDA-ARS?s Scientific Manuscript database
RATIONALE: Analysis of bacteria by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) often relies upon sample preparation methods that result in cell lysis, e.g. bead-beating. However, Shiga toxin-producing Escherichia coli (STEC) can undergo bacteriophage...
SU-F-J-200: An Improved Method for Event Selection in Compton Camera Imaging for Particle Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mackin, D; Beddar, S; Polf, J
2016-06-15
Purpose: The uncertainty in the beam range in particle therapy limits the conformality of the dose distributions. Compton scatter cameras (CC), which measure the prompt gamma rays produced by nuclear interactions in the patient tissue, can reduce this uncertainty by producing 3D images confirming the particle beam range and dose delivery. However, the high intensity and short time windows of the particle beams limit the number of gammas detected. We attempt to address this problem by developing a method for filtering gamma ray scattering events from the background by applying the known gamma ray spectrum. Methods: We used a 4-stage Compton camera to record in list mode the energy deposition and scatter positions of gammas from a Co-60 source. Each CC stage contained a 4×4 array of CdZnTe crystals. To produce images, we used a back-projection algorithm and four filtering methods: basic, energy windowing, delta energy (ΔE), or delta scattering angle (Δθ). Basic filtering requires events to be physically consistent. Energy windowing requires event energy to fall within a defined range. ΔE filtering selects events with the minimum difference between the measured and a known gamma energy (1.17 and 1.33 MeV for Co-60). Δθ filtering selects events with the minimum difference between the measured scattering angle and the angle corresponding to a known gamma energy. Results: Energy window filtering reduced the FWHM from 197.8 mm for basic filtering to 78.3 mm. ΔE and Δθ filtering achieved the best results, FWHMs of 64.3 and 55.6 mm, respectively. In general, Δθ filtering selected events with scattering angles < 40°, while ΔE filtering selected events with angles > 60°. Conclusion: Filtering CC events improved the quality and resolution of the corresponding images. ΔE and Δθ filtering produced similar results, but each favored different events.
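A sketch of the two selection statistics as we read them from the description above (variable names and per-event bookkeeping are assumptions):

```python
import numpy as np

ME_C2 = 0.511               # electron rest energy [MeV]
CO60_LINES = (1.17, 1.33)   # known Co-60 gamma energies [MeV]

def delta_e(total_deposited):
    """Delta-E statistic: distance of the summed deposited energy from
    the nearest known line; events minimizing it are kept."""
    return min(abs(total_deposited - e0) for e0 in CO60_LINES)

def compton_angle_deg(e_in, e_out):
    """Scattering angle from Compton kinematics:
    cos(theta) = 1 - me*c^2 * (1/E_out - 1/E_in)."""
    c = 1.0 - ME_C2 * (1.0 / e_out - 1.0 / e_in)
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def delta_theta(theta_geom_deg, e_dep_first):
    """Delta-theta statistic: difference between the geometrically measured
    scattering angle and the angle implied by the first-stage deposit if the
    incident gamma carried a known line energy (minimum over both lines)."""
    return min(abs(theta_geom_deg - compton_angle_deg(e0, e0 - e_dep_first))
               for e0 in CO60_LINES if e0 > e_dep_first)
```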
A thesis on the Development of an Automated SWIFT Edge Detection Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trujillo, Christopher J.
Throughout the world, scientists and engineers, such as those at Los Alamos National Laboratory, perform research and testing unique to applications aimed at advancing technology and understanding the nature of materials. With this testing comes a need for advanced methods of data acquisition and, most importantly, a means of analyzing and extracting the necessary information from the acquired data. In this thesis, I aim to produce an automated method implementing advanced image processing techniques and tools to analyze SWIFT image datasets for Detonator Technology at Los Alamos National Laboratory. Such an effective method for edge detection and point extraction can prove advantageous in analyzing these unique datasets and provide consistency in producing results.
NASA Astrophysics Data System (ADS)
Kijko, V. V.; Ofitserov, Evgenii N.
2006-05-01
Thermooptic distortions of the active element of an axially diode-pumped Nd:YVO4 solid-state laser are studied for different methods of mounting the element. The study was performed by the Hartmann method. A mathematical model for calculating the optical power of the thermal lens produced in the crystal upon pumping is developed and verified experimentally. It is shown that the optical power of the thermal lens produced upon axial pumping of the convectively cooled active element sealed in a copper heat sink is half the optical power observed upon convective cooling of the active element without a heat sink. The experimental and theoretical results are in good agreement.
Simulations of Coulomb systems with slab geometry using an efficient 3D Ewald summation method
NASA Astrophysics Data System (ADS)
dos Santos, Alexandre P.; Girotto, Matheus; Levin, Yan
2016-04-01
We present a new approach to efficiently simulate electrolytes confined between infinite charged walls using a 3D Ewald summation method. The optimal performance is achieved by separating the electrostatic potential produced by the charged walls from the electrostatic potential of the electrolyte. The electric field produced by the 3D periodic images of the walls is constant inside the simulation cell, with the field produced by the transverse images of the charged plates canceling out. The non-neutral confined electrolyte in an external potential can be simulated using 3D Ewald summation with a suitable renormalization of the electrostatic energy, to remove a divergence, and a correction that accounts for the conditional convergence of the resulting lattice sum. The new algorithm is at least an order of magnitude faster than the usual simulation methods for the slab geometry and can be further sped up by adopting a particle-particle particle-mesh approach.
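For context, the most widely used correction when applying 3D Ewald sums to slab systems is the Yeh-Berkowitz term; the paper's renormalization for non-neutral systems goes beyond it, so the formula below is background, not the authors' result (Gaussian units):

```latex
% Slab correction added to the standard 3D Ewald energy for a cell of
% volume V, where M_z is the net dipole moment along the non-periodic axis:
E_{\mathrm{corr}} = \frac{2\pi}{V}\, M_z^{2},
\qquad M_z = \sum_i q_i z_i .
```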
Isolation, characterization, and diversity of novel radiotolerant carotenoid-producing bacteria.
Asker, Dalal; Awad, Tarek S; Beppu, Teruhiko; Ueda, Kenji
2012-01-01
Carotenoids are natural pigments that exhibit many biological functions, acting for example as antioxidants (i.e., promoting oxidative stress resistance), membrane stabilizers, and precursors of vitamin A. The link between these biological activities and many health benefits (e.g., anticarcinogenic activity, prevention of chronic diseases, etc.) has raised the interest of several industrial sectors, especially the cosmetics and pharmaceutical industries. The use of microorganisms in biotechnology to produce carotenoids is favored by consumers and can help meet the growing demand for these bioactive compounds in the food, feed, and pharmaceutical industries. This methodological chapter details the development of a rapid and selective screening method for the isolation and identification of carotenoid-producing microorganisms based on UV treatment, sequencing analysis of 16S rRNA genes, and carotenoid analysis using rapid and effective high-performance liquid chromatography-diode array-MS methods. The results of a comprehensive 16S rRNA gene-based phylogenetic analysis revealed a diversity of carotenoid-producing microorganisms (104 isolates) that were isolated at high frequency from water samples collected at Misasa (Tottori, Japan), a region known for its high natural radioactivity. These carotenoid-producing isolates were classified into 38 different species belonging to 7 bacterial classes (Flavobacteria, Sphingobacteria, α-Proteobacteria, γ-Proteobacteria, Deinococci, Actinobacteria, and Bacilli). The carotenoids produced by the isolates were zeaxanthin (6 strains), dihydroxyastaxanthin (24 strains), astaxanthin (27 strains), canthaxanthin (10 strains), and unidentified molecular species produced by the isolates related to Deinococcus, Exiguobacterium, and Flectobacillus. Here, we describe the methods used to isolate and classify these microorganisms.
Modelling the distribution of chickens, ducks, and geese in China
Prosser, Diann J.; Wu, Junxi; Ellis, Erle C.; Gale, Fred; Van Boeckel, Thomas P.; Wint, William; Robinson, Tim; Xiao, Xiangming; Gilbert, Marius
2011-01-01
Global concerns over the emergence of zoonotic pandemics emphasize the need for high-resolution population distribution mapping and spatial modelling. Ongoing efforts to model disease risk in China have been hindered by a lack of available species-level distribution maps for poultry. The goal of this study was to develop 1 km resolution population density models for China's chickens, ducks, and geese. We used an information theoretic approach to predict poultry densities based on statistical relationships between poultry census data and high-resolution agro-ecological predictor variables. Model predictions were validated by comparing goodness of fit measures (root mean square error and correlation coefficient) for observed and predicted values for 1/4 of the sample data which were not used for model training. Final output included mean and coefficient of variation maps for each species. We tested the quality of models produced using three predictor datasets and four regional stratification methods. For predictor variables, a combination of traditional predictors for livestock mapping and land use predictors produced the best goodness of fit scores. Comparison of regional stratifications indicated that for chickens and ducks, a stratification based on livestock production systems produced the best results; for geese, an agro-ecological stratification produced the best results. However, for all species, each method of regional stratification produced significantly better goodness of fit scores than the global model. Here we provide descriptive methods, analytical comparisons, and model output for China's first high-resolution, species-level poultry distribution maps. Output will be made available to the scientific and public community for use in a wide range of applications from epidemiological studies to livestock policy and management initiatives. PMID:21765567
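The validation step reduces to two standard measures; a trivial sketch (the function name is ours):

```python
import numpy as np

def validate(predicted, observed):
    """Goodness-of-fit measures used for the held-out quarter of the data:
    root mean square error and Pearson correlation coefficient."""
    rmse = np.sqrt(np.mean((predicted - observed) ** 2))
    r = np.corrcoef(predicted, observed)[0, 1]
    return rmse, r
```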
Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A
2015-03-01
Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the sound speed distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Both computer simulation and experimental phantom studies are conducted to demonstrate the use of the WISE method. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
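The WISE iteration can be sketched schematically as follows; `forward` (wave-equation solver) and `grad` (adjoint-based gradient) are user-supplied placeholders, and the Rademacher encoding is one common choice, so this illustrates the idea rather than the authors' exact implementation:

```python
import numpy as np

def wise_reconstruct(sources, data, forward, grad, c0, n_iter=200, step=1e-3, seed=0):
    """Waveform inversion with source encoding (schematic): each iteration
    draws a random encoding vector, collapses all sources/measurements
    into one encoded pair, and takes a stochastic gradient step on the
    encoded data misfit, avoiding one wave solve per source per step."""
    rng = np.random.default_rng(seed)
    c = c0.copy()                                        # sound-speed estimate
    for _ in range(n_iter):
        w = rng.choice([-1.0, 1.0], size=len(sources))   # random encoding vector
        s_enc = sum(wi * s for wi, s in zip(w, sources)) # encoded source
        d_enc = sum(wi * d for wi, d in zip(w, data))    # encoded measurements
        residual = forward(c, s_enc) - d_enc             # encoded data misfit
        c -= step * grad(c, s_enc, residual)             # one SGD update
    return c
```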
Continuous Production of Discrete Plasmid DNA-Polycation Nanoparticles Using Flash Nanocomplexation.
Santos, Jose Luis; Ren, Yong; Vandermark, John; Archang, Maani M; Williford, John-Michael; Liu, Heng-Wen; Lee, Jason; Wang, Tza-Huei; Mao, Hai-Quan
2016-12-01
Despite the successful demonstration of linear polyethyleneimine (lPEI) as an effective carrier for a wide range of gene medicines, including DNA plasmids, small interfering RNAs, mRNAs, etc., and continuous improvement of the physical properties and biological performance of the polyelectrolyte complex nanoparticles prepared from lPEI and nucleic acids, there still exist major challenges in producing these nanocomplexes in a scalable manner, particularly for lPEI/DNA nanoparticles. This has significantly hindered progress toward clinical translation of these nanoparticle-based gene medicines. Here the authors report a flash nanocomplexation (FNC) method that achieves continuous production of lPEI/plasmid DNA nanoparticles with narrow size distribution using a confined impinging jet device. The method involves the complex coacervation of negatively charged DNA plasmid and positively charged lPEI under rapid, highly dynamic, and homogeneous mixing conditions, producing polyelectrolyte complex nanoparticles with a narrow distribution of particle size and shape. The average number of plasmid DNA copies packaged per nanoparticle and its distribution are similar between the FNC method and the small-scale batch mixing method. In addition, the nanoparticles prepared by these two methods exhibit similar cell transfection efficiency. These results confirm that FNC is an effective and scalable method that can produce well-controlled lPEI/plasmid DNA nanoparticles. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Effects of cooking method, cooking oil, and food type on aldehyde emissions in cooking oil fumes.
Peng, Chiung-Yu; Lan, Cheng-Hang; Lin, Pei-Chen; Kuo, Yi-Chun
2017-02-15
Cooking oil fumes (COFs) contain a mixture of chemicals. Of all these chemicals, aldehydes draw great attention since several of them are considered carcinogenic and the formation of long-chain aldehydes is related to the fatty acids in cooking oils. The objectives of this research were to compare aldehyde compositions and concentrations in COFs produced by different cooking oils, cooking methods, and food types and to suggest better cooking practices. This study compared aldehydes in COFs produced using four cooking oils (palm oil, rapeseed oil, sunflower oil, and soybean oil), three cooking methods (stir frying, pan frying, and deep frying), and two foods (potato and pork loin) in a typical kitchen. Results showed that among the cooking methods, the highest total aldehyde emissions were produced by deep frying, followed by pan frying and then stir frying. Sunflower oil had the highest emissions of total aldehydes, regardless of cooking method and food type, whereas rapeseed oil and palm oil had relatively lower emissions. This study suggests that using gentle cooking methods (e.g., stir frying) and using oils low in unsaturated fatty acids (e.g., palm oil or rapeseed oil) can reduce the production of aldehydes in COFs, especially long-chain aldehydes such as hexanal and t,t-2,4-DDE. Copyright © 2016 Elsevier B.V. All rights reserved.
A radial basis function Galerkin method for inhomogeneous nonlocal diffusion
Lehoucq, Richard B.; Rowe, Stephen T.
2016-02-01
We introduce a discretization for a nonlocal diffusion problem using a localized basis of radial basis functions. The stiffness matrix entries are assembled by a special quadrature routine unique to the localized basis. Combining the quadrature method with the localized basis produces a well-conditioned, sparse, symmetric positive definite stiffness matrix. We demonstrate that both the continuum and discrete problems are well-posed and present numerical results for the convergence behavior of the radial basis function method. As a result, we explore approximating the solution to anisotropic differential equations by solving anisotropic nonlocal integral equations using the radial basis function method.
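For orientation, the nonlocal diffusion problem and its Galerkin discretization can be written in a standard form; this is a sketch with assumed notation (kernel γ, horizon δ), and the paper's exact kernel and scaling conventions may differ:

```latex
% Nonlocal diffusion operator (sketch; kernel and scaling assumed):
\mathcal{L}u(x) \;=\; 2\int_{B_\delta(x)} \bigl(u(y)-u(x)\bigr)\,\gamma(x,y)\,dy,
\qquad -\mathcal{L}u \;=\; f \quad \text{in } \Omega.
% Galerkin discretization with localized RBF basis \{\phi_i\}, u_h = \sum_j c_j \phi_j:
A_{ij} \;=\; \int_{\Omega}\int_{B_\delta(x)}
\bigl(\phi_i(y)-\phi_i(x)\bigr)\bigl(\phi_j(y)-\phi_j(x)\bigr)\,\gamma(x,y)\,dy\,dx,
\qquad A\,\mathbf{c} \;=\; \mathbf{f}.
```

The symmetry of A is manifest in this form, consistent with the symmetric positive definite stiffness matrix reported above.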
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elias, L.R.
1981-12-01
Results are presented of a three-dimensional numerical analysis of the radiation fields produced in a free-electron laser. The method used here to obtain the spatial and temporal behavior of the radiated fields is based on the coherent superposition of the exact Liénard-Wiechert fields produced by each electron in the beam. Interference effects are responsible for the narrow angular radiation patterns obtained and for the high degree of monochromaticity of the radiated fields.
Ellis, Sam; Reader, Andrew J
2018-04-26
Many clinical contexts require the acquisition of multiple positron emission tomography (PET) scans of a single subject, for example, to observe and quantitate changes in functional behaviour in tumors after treatment in oncology. Typically, the datasets from each of these scans are reconstructed individually, without exploiting the similarities between them. We have recently shown that sharing information between longitudinal PET datasets by penalizing voxel-wise differences during image reconstruction can improve reconstructed images by reducing background noise and increasing the contrast-to-noise ratio of high-activity lesions. Here, we present two additional novel longitudinal difference-image priors and evaluate their performance using two-dimensional (2D) simulation studies and a three-dimensional (3D) real dataset case study. We have previously proposed a simultaneous difference-image-based penalized maximum likelihood (PML) longitudinal image reconstruction method that encourages sparse difference images (DS-PML), and in this work we propose two further novel prior terms. The priors are designed to encourage longitudinal images with corresponding differences which have (a) low entropy (DE-PML), and (b) high sparsity in their spatial gradients (DTV-PML). These two new priors and the originally proposed longitudinal prior were applied to 2D-simulated treatment-response [18F]fluorodeoxyglucose (FDG) brain tumor datasets and compared to standard maximum likelihood expectation-maximization (MLEM) reconstructions. These 2D simulation studies explored the effects of penalty strengths, tumor behaviour, and interscan coupling on reconstructed images. Finally, a real two-scan longitudinal data series acquired from a head and neck cancer patient was reconstructed with the proposed methods and the results compared to standard reconstruction methods. Using any of the three priors with an appropriate penalty strength produced images with noise levels equivalent to those seen when using standard reconstructions with increased count levels. In tumor regions, each method produces subtly different results in terms of preservation of tumor quantitation and reconstruction root mean-squared error (RMSE). In particular, in the two-scan simulations, the DE-PML method produced tumor means in close agreement with MLEM reconstructions, while the DTV-PML method produced the lowest errors due to noise reduction within the tumor. Across a range of tumor responses and different numbers of scans, similar results were observed, with DTV-PML producing the lowest errors of the three priors and DE-PML producing the lowest bias. Similar improvements were observed in the reconstructions of the real longitudinal datasets, although imperfect alignment of the two PET images resulted in additional changes in the difference image that affected the performance of the proposed methods. Reconstruction of longitudinal datasets by penalizing difference images between pairs of scans from a data series allows for noise reduction in all reconstructed images. An appropriate choice of penalty term and penalty strength allows for this noise reduction to be achieved while maintaining reconstruction performance in regions of change, either in terms of quantitation of mean intensity via DE-PML, or in terms of tumor RMSE via DTV-PML.
Overall, improving the image quality of longitudinal datasets via simultaneous reconstruction has the potential to improve upon currently used methods, allow dose reduction, or reduce scan time while maintaining image quality at current levels. © 2018 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
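A schematic of the penalized objective for a two-scan series may help fix ideas; the notation (images x_k, data y_k, Poisson log-likelihood L, penalty strength β) is assumed for illustration and not copied from the paper:

```latex
(\hat{x}_1,\hat{x}_2) \;=\; \operatorname*{arg\,max}_{x_1,\,x_2 \ge 0}\;
\sum_{k=1}^{2} L(y_k \mid x_k) \;-\; \beta\, R(x_1 - x_2),
\qquad
R(d) \;=\;
\begin{cases}
\lVert d\rVert_1 & \text{DS-PML (sparse differences)}\\[2pt]
-\sum_j p_j(d)\log p_j(d) & \text{DE-PML (low-entropy differences)}\\[2pt]
\lVert \nabla d\rVert_1 & \text{DTV-PML (TV-sparse differences)}
\end{cases}
```

where p_j(d) denotes a (e.g., histogram-based) probability estimate of the difference-image values.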
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, L; Yin, F; Cai, J
Purpose: To develop a methodology for constructing a physiological-based virtual thorax phantom based on hyperpolarized (HP) gas tagging MRI for evaluating deformable image registration (DIR). Methods: Three healthy subjects were imaged at both the end-of-inhalation (EOI) and the end-of-exhalation (EOE) phases using a high-resolution (2.5mm isovoxel) 3D proton MRI, as well as a hybrid MRI which combines HP gas tagging MRI and a low-resolution (4.5mm isovoxel) proton MRI. A sparse tagging displacement vector field (tDVF) was derived from the HP gas tagging MRI by tracking the displacement of tagging grids between EOI and EOE. Using the tDVF and the high-resolution MR images, we determined the motion model of the entire thorax in the following two steps: 1) the DVF inside the lungs was estimated from the sparse tDVF using a novel multi-step natural neighbor interpolation method; 2) the DVF outside the lungs was estimated from the DIR between the EOI and EOE images (Velocity AI). The derived motion model was then applied to the high-resolution EOI image to create a deformed EOE image, forming the virtual phantom in which the motion model provides the ground truth of deformation. Five DIR methods were evaluated using the developed virtual phantom. Errors in DVF magnitude (Em) and angle (Ea) were determined and compared for each DIR method. Results: Among the five DIR methods, free-form deformation produced DVF results that most closely resembled the ground truth (Em=1.04mm, Ea=6.63°). The two DIR methods based on B-splines produced comparable results (Em=2.04mm, Ea=13.66°; and Em=2.62mm, Ea=17.67°), and the two optical-flow methods produced the least accurate results (Em=7.8mm, Ea=53.04°; and Em=4.45mm, Ea=31.02°). Conclusion: A methodology for constructing a physiological-based virtual thorax phantom based on HP gas tagging MRI has been developed. Initial evaluation demonstrated its potential as an effective tool for robust evaluation of DIR in the lung.
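One plausible reading of the two evaluation metrics in Python (the abstract does not spell out the exact definitions of Em and Ea; here Em is taken as the mean absolute difference of displacement magnitudes and Ea as the mean angle between displacement vectors):

```python
import numpy as np

def dvf_errors(dvf_test, dvf_truth, eps=1e-9):
    """Magnitude error Em (mm) and angular error Ea (degrees) between a
    DIR-produced DVF and the phantom's ground-truth DVF; inputs have
    shape (..., 3), one 3D displacement vector per voxel."""
    mag_t = np.linalg.norm(dvf_test, axis=-1)
    mag_g = np.linalg.norm(dvf_truth, axis=-1)
    em = np.mean(np.abs(mag_t - mag_g))                  # mean |Δ magnitude|
    cos = np.sum(dvf_test * dvf_truth, axis=-1) / (mag_t * mag_g + eps)
    ea = np.mean(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))  # mean angle
    return em, ea
```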
Guan, Wenna; Zhao, Hui; Lu, Xuefeng; Wang, Cong; Yang, Menglong; Bai, Fali
2011-11-11
Simple and rapid quantitative determination of fatty-acid-based biofuels is of great importance for the study of genetic engineering progress in biofuel production by microalgae. Ideal biofuels produced from biological systems should be chemically similar to petroleum, like fatty-acid-based molecules including free fatty acids, fatty acid methyl esters, fatty acid ethyl esters, fatty alcohols, and fatty alkanes. This study established a gas chromatography-mass spectrometry (GC-MS) method for simultaneous quantification of seven free fatty acids, nine fatty acid methyl esters, five fatty acid ethyl esters, five fatty alcohols, and three fatty alkanes produced by wild-type Synechocystis PCC 6803 and its genetically engineered strain. Data obtained from GC-MS analyses were quantified using internal standard peak area comparisons. The linearity, limit of detection (LOD), and precision (RSD) of the method were evaluated. The results demonstrated that fatty-acid-based biofuels can be directly determined by GC-MS without derivatization. Therefore, rapid and reliable quantitative analysis of fatty-acid-based biofuels produced by wild-type and genetically engineered cyanobacteria can be achieved using the GC-MS method established in this work. Copyright © 2011 Elsevier B.V. All rights reserved.
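The internal-standard quantification step reduces to simple peak-area arithmetic; a minimal sketch, where the relative response factor `rrf` is assumed to come from calibration (the function and parameter names are illustrative, not from the paper):

```python
def quantify_by_internal_standard(area_analyte, area_is, conc_is, rrf=1.0):
    """Single-point internal-standard quantification:
    C_analyte = (A_analyte / A_IS) * C_IS / RRF,
    with RRF the analyte/IS relative response factor from calibration."""
    return (area_analyte / area_is) * conc_is / rrf

# e.g., analyte peak twice the IS peak, 10 ug/mL IS, RRF 1.25 -> 16.0 ug/mL
print(quantify_by_internal_standard(2.0e6, 1.0e6, 10.0, rrf=1.25))
```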
Yoshimura, Tomoaki; Kuribara, Hideo; Kodama, Takashi; Yamata, Seiko; Futo, Satoshi; Watanabe, Satoshi; Aoki, Nobutaro; Iizuka, Tayoshi; Akiyama, Hiroshi; Maitani, Tamio; Naito, Shigehiro; Hino, Akihiro
2005-03-23
Seven types of processed foods, namely, cornstarch, cornmeal, corn puffs, corn chips, tofu, soy milk, and boiled beans, were trial-produced from 1 and 5% (w/w) genetically modified (GM) mixed raw materials. In this report, insect-resistant maize (MON810) and herbicide-tolerant soy (Roundup Ready soy, 40-3-2) were used as representatives of GM maize and soy, respectively. Deoxyribonucleic acid (DNA) was extracted from the raw materials and the trial-produced processed foods using two types of methods, i.e., the silica membrane method and the anion exchange method. The GM% values of these samples were quantified, and the significant differences between the raw materials and the trial-produced processed foods were statistically confirmed. There were some significant differences in the comparisons of all processed foods. However, our quantitative methods could be applied as a screening assay to tofu and soy milk because the differences in GM% between the trial-produced processed foods and their raw materials were lower than 13 and 23%, respectively. In addition, when quantitating with two primer pairs (SSIIb 3, 114 bp; SSIIb 4, 83 bp for maize and Le1n02, 118 bp; Le1n03, 89 bp for soy), which were targeted within the same taxon-specific DNA sequence with different amplicon sizes, the ratios of the copy numbers of the two primer pairs (SSIIb 3/4 and Le1n02/03) decreased with time in a heat-treated processing model using an autoclave. In this report, we suggest that the degradation level of DNA in processed foods could be estimated from these ratios, and the probability of GM quantification could be experimentally predicted from the results of the trial production.
Chen, B; Zhao, X; Inoue, S; Ando, Y
2010-06-01
In this work, we produced SWNTs by a hydrogen DC arc discharge with evaporation of a carbon anode containing 1 at% Fe catalyst in H2-Ar mixture gas. This was named the FH-arc discharge method. The as-grown SWNTs synthesized by the FH-arc discharge method have high crystallinity. An oxidation purification process of as-grown SWNTs with H2O2 has been developed to remove the coexisting Fe catalyst nanoparticles. As a result, SWNTs with purity higher than 90 at% have been achieved. To exhibit their remarkable characteristics, CNTs should be separated from the bundles and kept in homogeneous and stable suspensions. For this purpose, the SWNTs prepared by the FH-arc discharge method were also treated by a Nanomizer process with some surfactants. SPM images showed that the SWNT bundles had become thinner and shorter.
Applicability and Limitations of Reliability Allocation Methods
NASA Technical Reports Server (NTRS)
Cruz, Jose A.
2016-01-01
The reliability allocation process may be described as the process of assigning reliability requirements to individual components within a system to attain the specified system reliability. For large systems, the allocation process is often performed at different stages of system design. The allocation process often begins at the conceptual stage. As the system design develops and more information about components and the operating environment becomes available, different allocation methods can be considered. Reliability allocation methods are usually divided into two categories: weighting factors and optimal reliability allocation. When properly applied, these methods can produce reasonable approximations. Reliability allocation techniques have limitations and implied assumptions that need to be understood by system engineers. Applying reliability allocation techniques without understanding their limitations and assumptions can produce unrealistic results. This report addresses weighting factors and optimal reliability allocation techniques, and identifies the applicability and limitations of each reliability allocation technique.
The extinction law from photometric data: linear regression methods
NASA Astrophysics Data System (ADS)
Ascenso, J.; Lombardi, M.; Lada, C. J.; Alves, J.
2012-04-01
Context. The properties of dust grains, in particular their size distribution, are expected to differ from the interstellar medium to the high-density regions within molecular clouds. Since the extinction at near-infrared wavelengths is caused by dust, the extinction law in cores should depart from that found in low-density environments if the dust grains have different properties. Aims: We explore methods to measure the near-infrared extinction law produced by dense material in molecular cloud cores from photometric data. Methods: Using controlled sets of synthetic and semi-synthetic data, we test several methods for linear regression applied to the specific problem of deriving the extinction law from photometric data. We cover the parameter space appropriate to this type of observations. Results: We find that many of the common linear-regression methods produce biased results when used to derive the extinction law from photometric colors. We propose and validate a new method, LinES, as the most reliable for this purpose. We explore the use of this method to detect whether or not the extinction law of a given reddened population has a break at some value of extinction. Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile (ESO programmes 069.C-0426 and 074.C-0728).
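To see why common linear-regression methods are biased here, a minimal errors-in-variables simulation suffices: when both colors carry photometric noise, ordinary least squares attenuates the slope of the reddening vector. The slope, noise levels, and units below are invented for the demonstration; this illustrates the bias LinES is designed to avoid, not the LinES method itself:

```python
import numpy as np

rng = np.random.default_rng(1)
beta_true = 1.8                                # assumed reddening-vector slope
ex_true = rng.uniform(0.0, 2.0, 5000)          # true color excess (arbitrary units)
# photometric noise on BOTH axes, as in real color-color diagrams
x = ex_true + rng.normal(0.0, 0.15, ex_true.size)
y = beta_true * ex_true + rng.normal(0.0, 0.15, ex_true.size)
slope_ols = np.polyfit(x, y, 1)[0]             # OLS assumes x is noise-free
print(f"true slope {beta_true:.2f}, OLS slope {slope_ols:.2f}")  # biased low
```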
NASA Astrophysics Data System (ADS)
Solimun; Fernandes, Adji Achmad Rinaldo; Arisoesilaningsih, Endang
2017-12-01
Research in various fields generally investigates systems and involves latent variables. One method to analyze the model representing such a system is path analysis. Latent variables measured using questionnaires that apply an attitude-scale model yield data in the form of scores, which should be transformed into scale data before analysis. Path coefficients, the parameter estimators, are calculated from scale data obtained using the method of successive intervals (MSI) and the summated rating scale (SRS). This research identifies which data transformation method is better. Path coefficients with smaller variances are said to be more efficient, so the transformation method that produces scale data yielding path coefficients with smaller variances is considered better. The analysis of real data shows that, for the influence of the Attitude variable on Entrepreneurship Intention, the relative efficiency is RE = 1, indicating that analyses using the MSI and SRS data transformations are equally efficient. For the simulation data, on the other hand, at high correlations between items (0.7-0.9) the MSI method is 1.3 times more efficient than the SRS method.
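A minimal sketch of the efficiency comparison, assuming replicated path-coefficient estimates from each transformation and the convention RE = Var(SRS)/Var(MSI), so RE > 1 favors MSI (the function name and convention are illustrative):

```python
import numpy as np

def relative_efficiency(coefs_msi, coefs_srs):
    """Relative efficiency of the MSI transformation over SRS, computed
    from replicated path-coefficient estimates; RE = 1 means equally
    efficient, RE > 1 means MSI yields smaller-variance coefficients."""
    return np.var(coefs_srs, ddof=1) / np.var(coefs_msi, ddof=1)
```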
Smith, Eric G.
2015-01-01
Background: Nonrandomized studies typically cannot account for confounding from unmeasured factors. Method: A method is presented that exploits the recently-identified phenomenon of “confounding amplification” to produce, in principle, a quantitative estimate of total residual confounding resulting from both measured and unmeasured factors. Two nested propensity score models are constructed that differ only in the deliberate introduction of an additional variable(s) that substantially predicts treatment exposure. Residual confounding is then estimated by dividing the change in treatment effect estimate between models by the degree of confounding amplification estimated to occur, adjusting for any association between the additional variable(s) and outcome. Results: Several hypothetical examples are provided to illustrate how the method produces a quantitative estimate of residual confounding if the method’s requirements and assumptions are met. Previously published data is used to illustrate that, whether or not the method routinely provides precise quantitative estimates of residual confounding, the method appears to produce a valuable qualitative estimate of the likely direction and general size of residual confounding. Limitations: Uncertainties exist, including identifying the best approaches for: 1) predicting the amount of confounding amplification, 2) minimizing changes between the nested models unrelated to confounding amplification, 3) adjusting for the association of the introduced variable(s) with outcome, and 4) deriving confidence intervals for the method’s estimates (although bootstrapping is one plausible approach). Conclusions: To this author’s knowledge, it has not been previously suggested that the phenomenon of confounding amplification, if such amplification is as predictable as suggested by a recent simulation, provides a logical basis for estimating total residual confounding. The method's basic approach is straightforward. The method's routine usefulness, however, has not yet been established, nor has the method been fully validated. Rapid further investigation of this novel method is clearly indicated, given the potential value of its quantitative or qualitative output. PMID:25580226
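A heavily simplified sketch of the core arithmetic, under the assumption that amplification acts as a multiplicative factor A on residual confounding bias (so the between-model change equals bias × (A − 1)) and ignoring the adjustment for the introduced variable's association with outcome; this is one plausible reading, not the paper's exact formula:

```python
def residual_confounding(effect_base, effect_amplified, amplification_factor):
    """Estimate residual confounding from two nested propensity-score
    models: the change in treatment-effect estimate divided by the extra
    amplification, assuming bias_amplified = A * bias_base."""
    return (effect_amplified - effect_base) / (amplification_factor - 1.0)

# e.g., estimates of 1.30 vs 1.42 with A = 1.5 imply residual bias ~0.24
print(residual_confounding(1.30, 1.42, 1.5))
```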
"Other" indirect methods for nuclear astrophysics
NASA Astrophysics Data System (ADS)
Trache, Livius
2018-01-01
In the house of the Trojan Horse Method (THM), I will say a few words about "other" indirect methods we use in Nuclear Physics for Astrophysics, in particular those using rare ion beams that can be used to evaluate radiative proton capture reactions. I add words about work done with the Professore we celebrate today, together with a proposal, and some results with TECSA, for a simple method to produce and use an isomeric beam of 26mAl.
Millimeter Wave Holographical Inspection of Honeycomb Composites
NASA Technical Reports Server (NTRS)
Case, J. T.; Kharkovsky, S.; Zoughi, R.; Steffes, G.; Hepburn, Frank L.
2007-01-01
Multi-layered composite structures manufactured with honeycomb, foam or balsa wood cores are finding increasing utility in a variety of aerospace, transportation, and infrastructure applications. Due to the low conductivity and inhomogeneity associated with these composites standard nondestructive testing (NDT) methods are not always capable of inspecting their interior for various defects caused during the manufacturing process or as a result of in-service loading. On the contrary, microwave and millimeter wave NDT methods are well-suited for inspecting these structures since signals at these frequencies readily penetrate through these structures and reflect from different interior boundaries revealing the presence of a wide range of defects such as disbond, delamination, moisture and oil intrusion, impact damage, etc. Millimeter wave frequency spectrum spans 30 GHz - 300 GHz with corresponding wavelengths of 10 - 1 mm. Due to the inherent short wavelengths at these frequencies, one can produce high spatial resolution images of these composites either using real-antenna focused or synthetic-aperture focused methods. In addition, incorporation of swept-frequency in the latter method (i.e., holography) results in high-resolution three-dimensional images. This paper presents the basic steps behind producing such images at millimeter wave frequencies and the results of two honeycomb composite panels are demonstrated at Q-band (33-50 GHz). In addition, these results are compared to previous results using X-ray computed tomography.
Millimeter Wave Holographical Inspection of Honeycomb Composites
NASA Astrophysics Data System (ADS)
Case, J. T.; Kharkovsky, S.; Zoughi, R.; Steffes, G.; Hepburn, F. L.
2008-02-01
Multi-layered composite structures manufactured with honeycomb, foam, or balsa wood cores are finding increasing utility in a variety of aerospace, transportation, and infrastructure applications. Due to the low conductivity and inhomogeneity associated with these composites, standard nondestructive testing (NDT) methods are not always capable of inspecting their interior for various defects caused during the manufacturing process or as a result of in-service loading. On the contrary, microwave and millimeter wave NDT methods are well-suited for inspecting these structures since signals at these frequencies readily penetrate through these structures and reflect from different interior boundaries revealing the presence of a wide range of defects such as disbond, delamination, moisture and oil intrusion, impact damage, etc. Millimeter wave frequency spectrum spans 30 GHz-300 GHz with corresponding wavelengths of 10-1 mm. Due to the inherent short wavelengths at these frequencies, one can produce high spatial resolution images of these composites either using real-antenna focused or synthetic-aperture focused methods. In addition, incorporation of swept-frequency in the latter method (i.e., holography) results in high-resolution three-dimensional images. This paper presents the basic steps behind producing such images at millimeter wave frequencies and the results of two honeycomb composite panels are demonstrated at Q-band (33-50 GHz). In addition, these results are compared to previous results using X-ray computed tomography.
ERIC Educational Resources Information Center
Harnan, Sue Elizabeth; Cooper, Katy; Jones, Sarah Lynne; Jones, Elaine
2015-01-01
Full systematic reviews are time and resource heavy. We describe a method successfully used to produce a rapid review of yoga for health and wellbeing, with limited resources, using mapping methods. Inclusion and exclusion criteria were developed a priori and refined "post hoc," with the review team blind to the study results to minimise…
Improving automated disturbance maps using snow-covered landsat time series stacks
Kirk M. Stueve; Ian W. Housman; Patrick L. Zimmerman; Mark D. Nelson; Jeremy Webb; Charles H. Perry; Robert A. Chastain; Dale D. Gormanson; Chengquan Huang; Sean P. Healey; Warren B. Cohen
2012-01-01
Snow-covered winter Landsat time series stacks are used to develop a nonforest mask to enhance automated disturbance maps produced by the Vegetation Change Tracker (VCT). This method exploits the enhanced spectral separability between forested and nonforested areas that occurs with sufficient snow cover. This method resulted in significant improvements in Vegetation...
Impact of Group Exams in a Graduate Intermediate Accounting Class
ERIC Educational Resources Information Center
Bay, Darlene; Pacharn, Parunchana
2017-01-01
Cooperative learning techniques have been found to be quite successful in a variety of learning environments. However, in university-level accounting courses, investigations of the efficacy of cooperative learning pedagogical methods have produced mixed results at best. To continue the search for a cooperative learning method that is effective in…
Stable, fertile, high polyhydroxyalkanoate producing plants and methods of producing them
Bohmert-Tatarev, Karen; McAvoy, Susan; Peoples, Oliver P.; Snell, Kristi D.
2015-08-04
Transgenic plants that produce high levels of polyhydroxybutyrate and methods of producing them are provided. In a preferred embodiment the transgenic plants are produced using plastid transformation technologies and utilize genes which are codon optimized. Stably transformed plants able to produce greater than 10% dwt PHB in tissues are also provided.
Greenwald, Ralf R.; Quitadamo, Ian J.
2014-01-01
A changing undergraduate demographic and the need to help students develop advanced critical thinking skills in neuroanatomy courses has prompted many faculty to consider new teaching methods including clinical case studies. This study compared primarily conventional and inquiry-based clinical case (IBCC) teaching methods to determine which would produce greater gains in critical thinking and content knowledge. Results showed students in the conventional neuroanatomy course gained less than 3 national percentile ranks while IBCC students gained over 7.5 within one academic term using the valid and reliable California Critical Thinking Skills Test. In addition to 2.5 times greater gains in critical thinking, IBCC teaching methods also produced 12% greater final exam performance and 11% higher grades using common grade performance benchmarks. Classroom observations also indicated that IBCC students were more intellectually engaged and participated to a greater extent in classroom discussions. Through the results of this study, it is hoped that faculty who teach neuroanatomy and desire greater critical thinking and content student learning outcomes will consider using the IBCC method. PMID:24693256
NASA Astrophysics Data System (ADS)
Tugiman; Ariani, F.; Taher, F.; Hasibuan, M. S.; Suprianto
2017-12-01
Palm oil processing industries are very attractive because they offer many products with high economic value. The CPO factory not only produces crude palm oil but also generates fly ash (FA) particle waste in its final process. The purpose of this investigation is to analyze and increase the usefulness of these particles as reinforcement materials for fabricating aluminum matrix composites (AMC's) by different casting routes. Stir, centrifugal, and squeeze casting methods were used in this study. Further, the chemical composition of the FA particles, densities, and mechanical properties were analyzed. The characteristics of the composite material were investigated using an optical microscope, scanning electron microscopy (SEM), hardness (Brinell), and impact strength (Charpy) tests. The pin-on-disc method was used to measure the wear rate. The results show that SiO2, Fe2O3, and Al2O3 are the main compounds of the fly ash particles. These particles enhanced the hardness and reduced the wear resistance of the aluminum matrix composites. The squeeze method gives better results than stir and centrifugal casting.
NASA Technical Reports Server (NTRS)
Kharkovsky, S.; Case, J. T.; Zoughi, R.; Hepburn, F.
2005-01-01
The Space Shuttle Columbia's catastrophic accident emphasizes the growing need for developing and applying effective, robust, and life-cycle oriented nondestructive testing (NDT) methods for inspecting the shuttle external fuel tank spray-on foam insulation (SOFI) and its protective acreage heat tiles. Millimeter wave NDT techniques were one of the methods chosen for evaluating their potential for inspecting these structures. Several panels with embedded anomalies (mainly voids) were produced and tested for this purpose. Near-field and far-field millimeter wave NDT methods were used for producing millimeter wave images of the anomalies in the SOFI panels and heat tiles. This paper presents the results of an investigation aimed at detecting localized anomalies in two SOFI panels and a set of heat tiles. To this end, reflectometers at a relatively wide range of frequencies (Ka-band (26.5-40 GHz) to W-band (75-110 GHz)) and utilizing different types of radiators were employed. The results clearly illustrate the utility of these methods for this purpose.
Mangolim, Camila Sampaio; da Silva, Thamara Thaiane; Fenelon, Vanderson Carvalho; Koga, Luciana Numata; Ferreira, Sabrina Barbosa de Souza; Bruschi, Marcos Luciano; Matioli, Graciette
2017-01-01
Curdlan is a linear polysaccharide considered a dietary fiber, with gelation properties. This study evaluated the structure, morphology, and the physicochemical and technological properties of curdlan produced by Agrobacterium sp. IFO 13140 recovered by pre-gelation and precipitation methods. Commercial curdlan, submitted or not to the pre-gelation process, was also evaluated. The data obtained from structural analysis revealed a similarity between the curdlan produced by Agrobacterium sp. IFO 13140 (recovered by both methods) and the commercial curdlans. The results showed that the curdlans evaluated differed significantly in terms of dispersibility and gelation, and only the pre-gelled ones had significant potential for food application, because this method influences the size of the particles and their behavior in the presence of NaCl. In terms of technological properties, the curdlan produced by Agrobacterium sp. IFO 13140 (pre-gelation method) had a greater water and oil holding capacity (64% and 98% greater, respectively) and a greater thickening capacity than the pre-gelled commercial curdlan. The pre-gelled commercial curdlan displayed a greater gelling capacity at 95°C than the others. When applied to food, only the pre-gelled curdlans improved the texture parameters of yogurts and reduced syneresis. The curdlan gels, which are rigid and stable in structure, demonstrated potential for improving the texture of food products, with potential industrial use. PMID:28245244
Zheng, Lu; Gao, Naiyun; Deng, Yang
2012-01-01
It is difficult to isolate DNA from biological activated carbon (BAC) samples used in water treatment plants, owing to the scarcity of microorganisms in BAC samples. The aim of this study was to identify DNA extraction methods suitable for a long-term, comprehensive ecological analysis of BAC microbial communities. To identify a procedure that produces high-molecular-weight DNA, maximizes detectable diversity, and is relatively free from contaminants, the microwave extraction method, the cetyltrimethylammonium bromide (CTAB) extraction method, a commercial DNA extraction kit, and the ultrasonic extraction method were used for the extraction of DNA from BAC samples. Spectrophotometry, agarose gel electrophoresis, and polymerase chain reaction (PCR)-restriction fragment length polymorphism (RFLP) analysis were conducted to compare the yield and quality of DNA obtained using these methods. The results showed that the CTAB method produced the highest yield and genetic diversity of DNA from BAC samples, but DNA purity was slightly less than that obtained with the DNA extraction-kit method. This study provides a theoretical basis for establishing and selecting DNA extraction methods for BAC samples.
Methods for Scaling Icing Test Conditions
NASA Technical Reports Server (NTRS)
Anderson, David N.
1995-01-01
This report presents the results of tests at NASA Lewis to evaluate several methods to establish suitable alternative test conditions when the test facility limits the model size or operating conditions. The first method was proposed by Olsen. It can be applied when full-size models are tested and all the desired test conditions except liquid-water content can be obtained in the facility. The other two methods discussed are a modification of the French scaling law and the AEDC scaling method. Icing tests were made with cylinders at both reference and scaled conditions representing mixed and glaze ice in the NASA Lewis Icing Research Tunnel. Reference and scale ice shapes were compared to evaluate each method. The Olsen method was tested with liquid-water content varying from 1.3 to 0.8 g/m³. Over this range, ice shapes produced using the Olsen method were unchanged. The modified French and AEDC methods produced scaled ice shapes which approximated the reference shapes when model size was reduced to half the reference size for the glaze-ice cases tested.
Wellbore stability in oil and gas drilling with chemical-mechanical coupling.
Yan, Chuanliang; Deng, Jingen; Yu, Baohua
2013-01-01
Wellbore instability in oil and gas drilling results from both mechanical and chemical factors. Hydration occurs in shale formations owing to the chemical properties of the drilling fluid. A new experimental method to measure the diffusion coefficient of shale hydration is given, and the calculation method for the experimental results is introduced. The diffusion coefficient of shale hydration is measured under downhole temperature and pressure conditions, and the penetration and migration of drilling fluid filtrate around the wellbore are then calculated. Furthermore, the changes in shale mechanical properties caused by hydration and water absorption are studied through experiments. The relationships between shale mechanical parameters and water content are established. A chemical-mechanical coupled wellbore stability model is obtained based on the experimental results. Under the action of drilling fluid, hydration softens the shale formation and produces swelling strain after drilling. This leads to an increase in collapse pressure after drilling. The results provide a reference for studying the hydration-induced collapse period of shale.
Kroeger, D.M.; Hsu, H.S.; Brynestad, J.
1995-03-07
Metal oxide superconductor powder precursors are prepared in an aerosol pyrolysis process. A solution of the metal cations is introduced into a furnace at 600-1,000 °C for 0.1 to 60 seconds. The process produces micron to submicron size powders without the usual loss of the lead stabilizer. The resulting powders have a narrow particle size distribution, a small grain size, and are readily converted to a superconducting composition upon subsequent heat treatment. The precursors are placed in a metal body deformed to form a wire or tape and heated to form a superconducting article. The fine powders permit a substantial reduction in heat treatment time, thus enabling a continuous processing of the powders into superconducting wire, tape, or multifilamentary articles by the powder-in-tube process.
Fabrication of porous titanium scaffold materials by a fugitive filler method.
Hong, T F; Guo, Z X; Yang, R
2008-12-01
A clean powder metallurgy route was developed here to produce Ti foams, using a fugitive polymeric filler, polypropylene carbonate (PPC), to create porosity in a metal-polymer compact at the pre-processing stage. The as-produced foams were studied by scanning electron microscopy (SEM), LECO combustion analyses, and X-ray diffraction (XRD). Compression tests were performed to assess their mechanical properties. The results show that titanium foams with open pores can be successfully produced by this method. The compressive strength and modulus of the foams decrease with an increasing level of porosity and can be tailored to match those of human bone. After alkali treatment and soaking in a simulated body fluid (SBF) for 3 days, a thin apatite layer was formed along the Ti foam surfaces, which provides favourable bioactive conditions for bone bonding and growth.
Iorgulescu, E; Voicu, V A; Sârbu, C; Tache, F; Albu, F; Medvedovici, A
2016-08-01
The influence of experimental variability (instrumental repeatability, instrumental intermediate precision, and sample preparation variability) and data pre-processing (normalization, peak alignment, background subtraction) on the discrimination power of multivariate data analysis methods (Principal Component Analysis, PCA, and Cluster Analysis, CA) as well as a new algorithm based on linear regression was studied. Data used in the study were obtained through positive or negative ion monitoring electrospray mass spectrometry (+/-ESI/MS) and reversed-phase liquid chromatography with UV spectrometric detection (RPLC/UV) applied to green tea extracts. Extraction in ethanol and heated water infusion were used as sample preparation procedures. The multivariate methods were applied directly to mass spectra and chromatograms, involving strictly a holistic comparison of shapes, without assignment of any structural identity to compounds. An alternative data interpretation based on linear regression analysis mutually applied to data series is also discussed. Slopes, intercepts, and correlation coefficients produced by linear regression analysis applied to pairs of very large experimental data series successfully retain information resulting from high-frequency instrumental acquisition rates, better defining the profiles being compared. Consequently, each type of sample or comparison between samples produces in the Cartesian space an ellipsoidal volume defined by the normal variation intervals of the slope, intercept, and correlation coefficient. Distances between volumes graphically illustrate (dis)similarities between the compared data. The instrumental intermediate precision had the major effect on the discrimination power of the multivariate data analysis methods. Mass spectra produced through ionization from the liquid state under atmospheric pressure conditions of bulk complex mixtures resulting from extracted materials of natural origin provided an excellent data basis for multivariate analysis methods, equivalent to data resulting from chromatographic separations. The alternative evaluation of very large data series based on linear regression analysis produced information equivalent to results obtained through application of PCA and CA. Copyright © 2016 Elsevier B.V. All rights reserved.
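The pairwise regression itself is straightforward; a minimal sketch using SciPy (identical samples should give slope ≈ 1, intercept ≈ 0, and correlation ≈ 1, and the normal variation of these three quantities across replicates defines the ellipsoidal volumes described above):

```python
import numpy as np
from scipy.stats import linregress

def compare_profiles(series_a, series_b):
    """Mutual linear regression of two equally sampled instrumental
    profiles (mass spectra or chromatograms) acquired at high frequency;
    returns the (slope, intercept, r) triple used as coordinates."""
    fit = linregress(np.asarray(series_a), np.asarray(series_b))
    return fit.slope, fit.intercept, fit.rvalue
```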
Examination of a Rotorcraft Noise Prediction Method and Comparison to Flight Test Data
NASA Technical Reports Server (NTRS)
Boyd, D. Douglas, Jr.; Greenwood, Eric; Watts, Michael E.; Lopes, Leonard V.
2017-01-01
With a view that rotorcraft noise should be included in the preliminary design process, a relatively fast noise prediction method is examined in this paper. A comprehensive rotorcraft analysis is combined with a noise prediction method to compute several noise metrics of interest. These predictions are compared to flight test data. Results show that inclusion of only the main rotor noise will produce results that severely underpredict integrated metrics of interest. Inclusion of the tail rotor frequency content is essential for accurately predicting these integrated noise metrics.
Symetrica Measurements at PNNL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kouzes, Richard T.; Mace, Emily K.; Redding, Rebecca L.
2009-01-26
Symetrica is a small company based in Southampton, England, that has developed an algorithm for processing gamma ray spectra obtained from a variety of scintillation detectors. Their analysis method applied to NaI(Tl), BGO, and LaBr spectra results in deconvoluted spectra with the “resolution” improved by about a factor of three to four. This method has also been applied by Symetrica to plastic scintillator with the result that full energy peaks are produced. If this method is valid and operationally viable, it could lead to a significantly improved plastic scintillator based radiation portal monitor system.
A meta-analysis of an implicit measure of personality functioning: the Mutuality of Autonomy Scale.
Graceffo, Robert A; Mihura, Joni L; Meyer, Gregory J
2014-01-01
The Mutuality of Autonomy scale (MA) is a Rorschach variable designed to capture the degree to which individuals mentally represent self and other as mutually autonomous versus pathologically destructive (Urist, 1977). Discussions of the MA's validity found in articles and chapters usually claim good support, which we evaluated by a systematic review and meta-analysis of its construct validity. Overall, in a random effects analysis across 24 samples (N = 1,801) and 91 effect sizes, the MA scale was found to maintain a relationship of r = .20, 95% CI [.16, .25], with relevant validity criteria. We hypothesized that MA summary scores that aggregate more MA response-level data would maintain the strongest relationship with relevant validity criteria. Results supported this hypothesis (aggregated scoring method: r = .24, k = 57, S = 24; nonaggregated scoring methods: r = .15, k = 34, S = 10; p = .039, 2-tailed). Across 7 exploratory moderator analyses, only 1 (criterion method) produced significant results. Criteria derived from the Thematic Apperception Test produced smaller effects than clinician ratings, diagnostic differentiation, and self-attributed characteristics; criteria derived from observer reports produced smaller effects than clinician ratings and self-attributed characteristics. Implications of the study's findings are discussed in terms of both research and clinical work.
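For readers unfamiliar with the aggregation step, a generic random-effects mean of correlations via Fisher's z (DerSimonian-Laird) can be sketched as follows; this is the standard textbook procedure, not necessarily the exact software or weighting the authors used:

```python
import numpy as np

def random_effects_r(rs, ns):
    """DerSimonian-Laird random-effects mean of sample correlations."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)                        # Fisher z transform
    v = 1.0 / (ns - 3.0)                      # within-study variance of z
    w = 1.0 / v
    z_fixed = np.sum(w * z) / np.sum(w)       # fixed-effect mean
    q = np.sum(w * (z - z_fixed) ** 2)        # Cochran's Q heterogeneity
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(rs) - 1)) / c)  # between-study variance
    w_re = 1.0 / (v + tau2)                   # random-effects weights
    return float(np.tanh(np.sum(w_re * z) / np.sum(w_re)))
```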
Eight piece quadrupole magnet, method for aligning quadrupole magnet pole tips
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jaski, Mark S.; Liu, Jie; Donnelly, Aric T.
The invention provides an alternative to the standard 2-piece or 4-piece quadrupole. For example, an 8-piece and a 10-piece quadrupole are provided, whereby the tips of each pole may be adjustable. Also provided is a method for producing a quadrupole using standard machining techniques that nevertheless yields a final tolerance accuracy better than that obtained using standard machining techniques alone.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Serate, Jose; Xie, Dan; Pohlmann, Edward
2015-11-14
Microbial conversion of lignocellulosic feedstocks into biofuels remains an attractive means to produce sustainable energy. It is essential to produce lignocellulosic hydrolysates in a consistent manner in order to study microbial performance in different feedstock hydrolysates. Because of the potential to introduce microbial contamination from the untreated biomass or at various points during the process, it can be difficult to control sterility during hydrolysate production. In this study, we compared hydrolysates produced from AFEX-pretreated corn stover and switchgrass using two different methods to control contamination: either by autoclaving the pretreated feedstocks prior to enzymatic hydrolysis, or by introducing antibiotics during the hydrolysis of non-autoclaved feedstocks. We then performed extensive chemical analysis, chemical genomics, and comparative fermentations to evaluate any differences between these two different methods used for producing corn stover and switchgrass hydrolysates. Autoclaving the pretreated feedstocks could eliminate the contamination for a variety of feedstocks, whereas the antibiotic gentamicin was unable to control contamination consistently during hydrolysis. Compared to the addition of gentamicin, autoclaving of biomass before hydrolysis had a minimal effect on mineral concentrations, and showed no significant effect on the two major sugars (glucose and xylose) found in these hydrolysates. However, autoclaving elevated the concentration of some furanic and phenolic compounds. Chemical genomics analyses using Saccharomyces cerevisiae strains indicated a high correlation between the AFEX-pretreated hydrolysates produced using these two methods within the same feedstock, indicating minimal differences between the autoclaving and antibiotic methods. Comparative fermentations with S. cerevisiae and Zymomonas mobilis also showed that autoclaving the AFEX-pretreated feedstocks had no significant effects on microbial performance in these hydrolysates. In conclusion, our results showed that autoclaving the pretreated feedstocks offered advantages over the addition of antibiotics for hydrolysate production. The autoclaving method produced a more consistent quality of hydrolysate.
Object-based change detection method using refined Markov random field
NASA Astrophysics Data System (ADS)
Peng, Daifeng; Zhang, Yongjun
2017-01-01
In order to fully consider the local spatial constraints between neighboring objects in object-based change detection (OBCD), an OBCD approach is presented by introducing a refined Markov random field (MRF). First, two periods of images are stacked and segmented to produce image objects. Second, object spectral and textural histogram features are extracted, and the G-statistic is used to measure the distance between histogram distributions. Meanwhile, object heterogeneity is calculated by combining spectral and textural histogram distances using an adaptive weight. Third, an expectation-maximization algorithm is applied to determine the change category of each object, and an initial change map is generated. Finally, a refined change map is produced by employing the proposed refined object-based MRF method. Three experiments were conducted, and the approach was compared with state-of-the-art unsupervised OBCD methods to evaluate its effectiveness. Experimental results demonstrate that the proposed method obtains the highest accuracy among the methods used in this paper, which confirms its validity and effectiveness in OBCD.
Yoon, Woong Bae; Kim, Hyunjin; Kim, Kwang Gi; Choi, Yongdoo; Chang, Hee Jin
2016-01-01
Objectives We produced hematoxylin and eosin (H&E) staining-like color images by using confocal laser scanning microscopy (CLSM), which can obtain the same or more information in comparison to conventional tissue staining. Methods We improved the images by using several image-converting techniques, including morphological methods, color space conversion methods, and segmentation methods. Results The image obtained after processing showed coloring very similar to that of images produced by H&E staining, and analysis based on fluorescent dye imaging and microscopy offers advantages over analysis based on a single microscopic image. Conclusions The colors used in CLSM are different from those seen in H&E staining, which is the method most widely used for pathologic diagnosis and is familiar to pathologists. Computer technology can facilitate the conversion of CLSM images into images very similar to H&E-stained images. We believe that the technique used in this study has great potential for application in clinical tissue analysis. PMID:27525165
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu (Inventor)
1997-01-01
A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu (Inventor)
1998-01-01
A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
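The double-difference idea in the two patent abstracts above reduces to two delta passes that commute. A minimal sketch with synthetic arrays (the patents' exact data layout and downstream entropy coder are not reproduced here):

```python
import numpy as np

band_a = np.array([100, 102, 105, 107, 110])   # first data set (M members)
band_b = np.array([ 98, 101, 103, 106, 108])   # correlated second data set

cross = band_b - band_a                 # cross-delta between the two sets
double_diff = np.diff(cross)            # adjacent-delta of the cross-delta

# Equivalent route: adjacent-delta each set first, then take the cross-delta.
alt = np.diff(band_b) - np.diff(band_a)
assert np.array_equal(double_diff, alt)  # the two orderings commute

# The small-magnitude double differences are what would be entropy coded;
# a decoder reverses the deltas to recover band_b given band_a.
print(double_diff)
```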
Screening For Alcohol-Producing Microbes
NASA Technical Reports Server (NTRS)
Schubert, Wayne W.
1988-01-01
Dye reaction rapidly identifies alcohol-producing microbial colonies. Method visually detects alcohol-producing micro-organisms, and distinguishes them from other microbial colonies that do not produce alcohol. Method useful for screening mixed microbial populations in environmental samples.
Richter, Jack; McFarland, Lela; Bredfeldt, Christine
2012-01-01
Background/Aims Integrating data across systems can be a daunting process. The traditional method of moving data to a common location, mapping fields with different formats and meanings, and performing data cleaning activities to ensure valid and reliable integration across systems can be both expensive and extremely time consuming. As the scope of needed research data increases, the traditional methodology may not be sustainable. Data Virtualization provides an alternative to traditional methods that may reduce the effort required to integrate data across disparate systems. Objective Our goal was to survey new methods in data integration, cloud computing, enterprise data management and virtual data management for opportunities to increase the efficiency of producing VDW and similar data sets. Methods Kaiser Permanente Information Technology (KPIT), in collaboration with the Mid-Atlantic Permanente Research Institute (MAPRI), reviewed methodologies in the burgeoning field of Data Virtualization. We identified potential strengths and weaknesses of new approaches to data integration. For each method, we evaluated its potential application for producing effective research data sets. Results Data Virtualization provides opportunities to reduce the amount of data movement required to integrate data sources on different platforms in order to produce research data sets. Additionally, Data Virtualization includes methods for managing “fuzzy” matching used to match fields known to have poor reliability, such as names, addresses and social security numbers. These methods could improve the efficiency of integrating state and federal data such as patient race, death, and tumors with internal electronic health record data. Discussion The emerging field of Data Virtualization has considerable potential for increasing the efficiency of producing research data sets. An important next step will be to develop a proof-of-concept project that will help us understand the benefits and drawbacks of these techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, Albert F., E-mail: wagner@anl.gov; Dawes, Richard; Continetti, Robert E.
The measured H(D)OCO survival fractions of the photoelectron-photofragment coincidence experiments by the Continetti group are qualitatively reproduced by tunneling calculations to H(D) + CO2 on several recent ab initio potential energy surfaces for the HOCO system. The tunneling calculations involve effective one-dimensional barriers based on steepest descent paths computed on each potential energy surface. The resulting tunneling probabilities are converted into H(D)OCO survival fractions using a model developed by the Continetti group in which every oscillation of the H(D)-OCO stretch provides an opportunity to tunnel. Four different potential energy surfaces are examined, with the best qualitative agreement with experiment occurring for the PIP-NN surface based on UCCSD(T)-F12a/AVTZ electronic structure calculations and also a partial surface constructed for this study based on CASPT2/AVDZ electronic structure calculations. These two surfaces differ in barrier height by 1.6 kcal/mol but, when matched at the saddle point, have an almost identical shape along their reaction paths. The PIP surface is a less accurate fit to a smaller ab initio data set than that used for PIP-NN, and its computed survival fractions are somewhat inferior to PIP-NN. The LTSH potential energy surface is the oldest surface examined and is qualitatively incompatible with experiment; this surface also has a small discontinuity that is easily repaired. On each surface, four different approximate tunneling methods are compared, but only the small curvature tunneling method and the improved semiclassical transition state method produce useful results on all four surfaces. The results of these two methods are generally comparable and in qualitative agreement with experiment on the PIP-NN and CASPT2 surfaces. The original semiclassical transition state theory method produces qualitatively incorrect tunneling probabilities on all surfaces except the PIP. The Eckart tunneling method uses the least amount of information about the reaction path and produces too high a tunneling probability on the PIP-NN surface, leading to survival fractions that peak at half their measured values.
Bondi, Mark W.; Edmonds, Emily C.; Jak, Amy J.; Clark, Lindsay R.; Delano-Wood, Lisa; McDonald, Carrie R.; Nation, Daniel A.; Libon, David J.; Au, Rhoda; Galasko, Douglas; Salmon, David P.
2014-01-01
We compared two methods of diagnosing mild cognitive impairment (MCI): conventional Petersen/Winblad criteria as operationalized by the Alzheimer’s Disease Neuroimaging Initiative (ADNI) and an actuarial neuropsychological method put forward by Jak and Bondi designed to balance sensitivity and reliability. 1,150 ADNI participants were diagnosed at baseline as cognitively normal (CN) or MCI via ADNI criteria (MCI: n = 846; CN: n = 304) or Jak/Bondi criteria (MCI: n = 401; CN: n = 749), and the two MCI samples were submitted to cluster and discriminant function analyses. Resulting cluster groups were then compared and further examined for APOE allelic frequencies, cerebrospinal fluid (CSF) Alzheimer’s disease (AD) biomarker levels, and clinical outcomes. Results revealed that both criteria produced a mildly impaired Amnestic subtype and a more severely impaired Dysexecutive/Mixed subtype. The neuropsychological Jak/Bondi criteria uniquely yielded a third Impaired Language subtype, whereas conventional Petersen/Winblad ADNI criteria produced a third subtype comprising nearly one-third of the sample that performed within normal limits across the cognitive measures, suggesting this method’s susceptibility to false positive diagnoses. MCI participants diagnosed via neuropsychological criteria yielded dissociable cognitive phenotypes, significant CSF AD biomarker associations, more stable diagnoses, and identified greater percentages of participants who progressed to dementia than conventional MCI diagnostic criteria. Importantly, the actuarial neuropsychological method did not produce a subtype that performed within normal limits on the cognitive testing, unlike the conventional diagnostic method. Findings support the need for refinement of MCI diagnoses to incorporate more comprehensive neuropsychological methods, with resulting gains in empirical characterization of specific cognitive phenotypes, biomarker associations, stability of diagnoses, and prediction of progression. Refinement of MCI diagnostic methods may also yield gains in biomarker and clinical trial study findings because of improvements in sample compositions of ‘true positive’ cases and removal of ‘false positive’ cases. PMID:24844687
Mapping thunder sources by inverting acoustic and electromagnetic observations
NASA Astrophysics Data System (ADS)
Anderson, J. F.; Johnson, J. B.; Arechiga, R. O.; Thomas, R. J.
2014-12-01
We present a new method of locating current flow in lightning strikes by inversion of thunder recordings constrained by Lightning Mapping Array observations. First, radio frequency (RF) pulses are connected to reconstruct conductive channels created by leaders. Then, acoustic signals that would be produced by current flow through each channel are forward modeled. The recorded thunder is considered to consist of a weighted superposition of these acoustic signals. We calculate the posterior distribution of acoustic source energy for each channel with a Markov Chain Monte Carlo inversion that fits power envelopes of modeled and recorded thunder; these results show which parts of the flash carry current and produce thunder. We examine the effects of RF pulse location imprecision and atmospheric winds on quality of results and apply this method to several lightning flashes over the Magdalena Mountains in New Mexico, USA. This method will enable more detailed study of lightning phenomena by allowing researchers to map current flow in addition to leader propagation.
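A minimal sketch of the inversion step described above: treat the recorded thunder power as a weighted superposition of per-channel acoustic envelopes and sample the posterior over channel energies with a random-walk Metropolis chain. The envelopes, noise model, and nonnegativity prior below are toy assumptions, not the paper's forward model:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 5.0, 400)

# Toy acoustic power envelopes for three reconstructed leader channels.
envelopes = np.stack([np.exp(-((t - c) / 0.6) ** 2) for c in (1.0, 2.2, 3.5)])
true_w = np.array([1.0, 0.2, 0.6])               # "true" source energies
recorded = true_w @ envelopes + rng.normal(0.0, 0.02, t.size)

def log_post(w, sigma=0.02):
    """Gaussian misfit between modeled and recorded power envelopes."""
    if np.any(w < 0.0):                          # nonnegative-energy prior
        return -np.inf
    resid = recorded - w @ envelopes
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

w = np.ones(3)
samples = []
for _ in range(20000):                           # random-walk Metropolis
    prop = w + rng.normal(0.0, 0.02, 3)
    if np.log(rng.uniform()) < log_post(prop) - log_post(w):
        w = prop
    samples.append(w)

print("posterior mean weights:", np.round(np.mean(samples[5000:], axis=0), 2))
```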
USDA-ARS's Scientific Manuscript database
Introduction: Shiga toxin-producing E. coli of serogroups O26, O45, O103, O111, O121 and O145 are a testing concern for the beef industry and regulators. Numerous tests are available that attempt to predict the presence of these pathogens. However, potential positive results require culture confir...
Diazo processing of LANDSAT imagery: A low-cost instructional technique
NASA Technical Reports Server (NTRS)
Lusch, D. P.
1981-01-01
Diazo processing of LANDSAT imagery is a relatively simple and cost-effective method of producing enhanced renditions of the visual LANDSAT products. This technique is capable of producing a variety of image enhancements that have value in a teaching laboratory environment. Additionally, with the appropriate equipment, applications research that relies on accurate and repeatable results is possible. Exposure and development equipment options, diazo materials, and enhancement routines are discussed.
Daleu, C. L.; Plant, R. S.; Woolnough, S. J.; ...
2015-10-24
Here, as part of an international intercomparison project, a set of single-column models (SCMs) and cloud-resolving models (CRMs) are run under the weak-temperature gradient (WTG) method and the damped gravity wave (DGW) method. For each model, the implementation of the WTG or DGW method involves a simulated column which is coupled to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. The simulated column has the same surface conditions as the reference state and is initialized with profiles from the reference state. We performed systematic comparison of the behavior of different models under a consistent implementation of the WTG method and the DGW method, and systematic comparison of the WTG and DGW methods in models with different physics and numerics. CRMs and SCMs produce a variety of behaviors under both WTG and DGW methods. Some of the models reproduce the reference state while others sustain a large-scale circulation which results in either substantially lower or higher precipitation compared to the value of the reference state. CRMs show a fairly linear relationship between precipitation and circulation strength. SCMs display a wider range of behaviors than CRMs. Some SCMs under the WTG method produce zero precipitation. Within an individual SCM, a DGW simulation and a corresponding WTG simulation can produce circulations of opposite sign. When initialized with a dry troposphere, DGW simulations always result in a precipitating equilibrium state. The greatest sensitivities to the initial moisture conditions occur for multiple stable equilibria in some WTG simulations, corresponding to either a dry equilibrium state when initialized as dry or a precipitating equilibrium state when initialized as moist. Multiple equilibria are seen in more WTG simulations for higher SST. In some models, the existence of multiple equilibria is sensitive to some parameters in the WTG calculations.
Runaway electron beam control for longitudinally pumped metal vapor lasers
NASA Astrophysics Data System (ADS)
Kolbychev, G. V.; Kolbycheva, P. D.
1995-08-01
The physics and techniques of producing pulsed runaway electron beams are considered. The main obstacle to increasing electron energies in the beams is shown to be self-breakdown of the e-gun's gas-filled diode. Two methods to suppress the self-breakdown and enhance the volumetric discharge producing the e-beam are offered and examined. Each of them provides a 1.5-fold increase in the ceiling potential on the gun. The methods also provide ways to control several guns simultaneously, making possible powerful longitudinal pumping of metal-vapor lasers on self-terminated transitions of atoms or ions.
NASA Astrophysics Data System (ADS)
Ayala, Alejandro; Hentschinski, Martin; Jalilian-Marian, Jamal; Tejeda-Yeomans, Maria Elena
2017-07-01
We use the spinor helicity formalism to calculate the cross section for production of three partons of a given polarization in Deep Inelastic Scattering (DIS) off proton and nucleus targets at small Bjorken x. The target proton or nucleus is treated as a classical color field (shock wave) from which the produced partons scatter multiple times. We reported the final expression for the production cross section and studied the azimuthal angular correlations of the produced partons in [1]. Here we provide the full details of the calculation of the production cross section using spinor helicity methods.
Methods for improving solar cell open circuit voltage
Jordan, John F.; Singh, Vijay P.
1979-01-01
A method for producing a solar cell having an increased open circuit voltage. A layer of cadmium sulfide (CdS) produced by a chemical spray technique and having residual chlorides is exposed to a flow of hydrogen sulfide (H2S) heated to a temperature of 400-600 °C. The residual chlorides are reduced and any remaining CdCl2 is converted to CdS. A heterojunction is formed over the CdS and electrodes are formed. Application of chromium as the positive electrode results in a further increase in the open circuit voltage available from the H2S-treated solar cell.
Legese, Melese Hailu; Weldearegay, Gebru Mulugeta; Asrat, Daniel
2017-01-01
Background Infections by extended-spectrum beta-lactamase- (ESBL) and carbapenem-resistant Enterobacteriaceae (CRE) are an emerging problem in children. Hence, the aim of this study was to determine the prevalence of ESBL- and carbapenemase-producing Enterobacteriaceae among children suspected of septicemia and urinary tract infections (UTIs). Methods A cross-sectional study was conducted from January to March 2014. A total of 322 study participants suspected of septicemia and UTIs were recruited. All blood and urine samples were cultured on blood and MacConkey agar. All positive cultures were characterized by colony morphology, Gram stain, and standard biochemical tests. Antimicrobial susceptibility testing was performed on Muller-Hinton agar using disk diffusion. ESBL was detected using combination disk and double-disk synergy methods, and the results were compared. Carbapenemase was detected by the modified Hodge method using meropenem. Data were analyzed using SPSS version 20. Results The overall prevalence of ESBL- and carbapenemase-producing Enterobacteriaceae was 78.57% (n=22/28) and 12.12%, respectively. Among the Enterobacteriaceae tested, Klebsiella pneumoniae (84.2%, n=16/19), Escherichia coli (100%, n=5/5), and Klebsiella oxytoca (100%, n=1/1) were positive for ESBL. The double-disk synergy method showed 90.9% sensitivity, 66.7% specificity, 95.2% positive predictive value, and 50% negative predictive value. Carbapenemase-producing Enterobacteriaceae were K. pneumoniae (9.09%, n=3/33) and Morganella morganii (3.03%, n=1/33). Conclusion Screening Enterobacteriaceae for ESBL production is essential for better antibiotic selection and for preventing further emergence and spread. In resource-limited settings, the double-disk synergy method can be implemented for screening and confirming ESBL production. Moreover, the occurrence of CRE in countries where no carbapenems are sold is a concern for microbiologists as well as clinicians. Hence, identifying factors that induce carbapenemase production in the absence of carbapenem prescription is essential for controlling CRE dissemination within the community. PMID:28182124
NASA Astrophysics Data System (ADS)
Purnamayati, L.; Dewi, EN; Kurniasih, R. A.
2018-02-01
Phycocyanin is a natural blue colorant that is easily damaged by heat. The inlet temperature of the spray dryer is an important parameter governing the features of the microcapsules. The aim of this study was to investigate the phycocyanin stability of microcapsules made from Spirulina sp. with maltodextrin and κ-carrageenan as the coating materials, processed by the spray drying method at different inlet temperatures. Microcapsules were processed at three inlet temperatures: 90°C, 110°C, and 130°C. The results indicated that the 90°C inlet temperature produced the highest moisture content, phycocyanin concentration, and encapsulation efficiency, at 3.5%, 1.729%, and 29.623%, respectively. On the other hand, the highest encapsulation yield, 29.48%, was obtained at an inlet temperature of 130°C and was not significantly different from that at 110°C. Scanning electron microscopy (SEM) showed that the 110°C inlet temperature produced the most rounded microcapsules. In sum, 110°C was the best inlet temperature for phycocyanin microencapsulation by spray drying.
Toske, Steven G; McConnell, Jennifer B; Brown, Jaclyn L; Tuten, Jennifer M; Miller, Erin E; Phillips, Monica Z; Vazquez, Etienne R; Lurie, Ira S; Hays, Patrick A; Guest, Elizabeth M
2017-03-01
A trace processing impurity found in certain methamphetamine exhibits was isolated and identified as trans-N-methyl-4-methyl-5-phenyl-4-penten-2-amine hydrochloride (1). It was determined that this impurity was produced via reductive amination of trans-4-methyl-5-phenyl-4-penten-2-one (4), which was one of a cluster of related ketones generated during the synthesis of 1-phenyl-2-propanone (P2P) from phenylacetic acid and lead (II) acetate. This two-step sequence resulted in methamphetamine containing elevated levels of 1. In contrast, methamphetamine produced from P2P made by other methods produced insignificant (ultra-trace or undetectable) amounts of 1. These results confirm that 1 is a synthetic marker compound for the phenylacetic acid and lead (II) acetate method. Analytical data for 1 and 4, and a postulated mechanism for the production of 4, are presented. Copyright © 2015 John Wiley & Sons, Ltd.
Method and system for producing hydrogen using sodium ion separation membranes
Bingham, Dennis N; Klingler, Kerry M; Turner, Terry D; Wilding, Bruce M; Frost, Lyman
2013-05-21
A method of producing hydrogen from sodium hydroxide and water is disclosed. The method comprises separating sodium from a first aqueous sodium hydroxide stream in a sodium ion separator, feeding the sodium produced in the sodium ion separator to a sodium reactor, reacting the sodium in the sodium reactor with water, and producing a second aqueous sodium hydroxide stream and hydrogen. The method may also comprise reusing the second aqueous sodium hydroxide stream by combining the second aqueous sodium hydroxide stream with the first aqueous sodium hydroxide stream. A system of producing hydrogen is also disclosed.
Using high hydraulic conductivity nodes to simulate seepage lakes
Anderson, Mary P.; Hunt, Randall J.; Krohelski, James T.; Chung, Kuopo
2002-01-01
In a typical ground water flow model, lakes are represented by specified head nodes requiring that lake levels be known a priori. To remove this limitation, previous researchers assigned high hydraulic conductivity (K) values to nodes that represent a lake, under the assumption that the simulated head at the nodes in the high-K zone accurately reflects lake level. The solution should also produce a constant water level across the lake. We developed a model of a simple hypothetical ground water/lake system to test whether solutions using high-K lake nodes are sensitive to the value of K selected to represent the lake. Results show that the larger the contrast between the K of the aquifer and the K of the lake nodes, the smaller the error tolerance required for the solution to converge. For our test problem, a contrast of three orders of magnitude produced a head difference across the lake of 0.005 m under a regional gradient of the order of 10⁻³ m/m, while a contrast of four orders of magnitude produced a head difference of 0.001 m. The high-K method was then used to simulate lake levels in Pretty Lake, Wisconsin. Results for both the hypothetical system and the application to Pretty Lake compared favorably with results using a lake package developed for MODFLOW (Merritt and Konikow 2000). While our results demonstrate that the high-K method accurately simulates lake levels, this method has more cumbersome postprocessing and longer run times than the same problem simulated using the lake package.
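The mechanics of the high-K approach can be seen in a toy one-dimensional sketch, assuming fixed heads at both ends of a row of cells and a middle zone of "lake" nodes whose K is four orders of magnitude above the aquifer's; all grid and parameter values are illustrative, not from the paper:

```python
import numpy as np

n = 41
K = np.full(n, 1.0)                 # aquifer K (arbitrary units)
K[15:26] = 1.0e4                    # lake nodes: 4 orders of magnitude higher
h = np.linspace(10.0, 9.0, n)       # initial heads; both ends held fixed

for _ in range(50000):              # simple Gauss-Seidel-style relaxation
    h_old = h.copy()
    for i in range(1, n - 1):
        kw = 2 * K[i-1] * K[i] / (K[i-1] + K[i])   # harmonic-mean K, west face
        ke = 2 * K[i] * K[i+1] / (K[i] + K[i+1])   # harmonic-mean K, east face
        h[i] = (kw * h[i-1] + ke * h[i+1]) / (kw + ke)
    if np.max(np.abs(h - h_old)) < 1e-10:          # tight tolerance, echoing
        break                                      # the paper's convergence point

lake = h[15:26]
print(f"head difference across lake: {lake.max() - lake.min():.2e}")
```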
A Metal-Free Method for Producing MRI Contrast at Amyloid-Beta
Hilt, Silvia; Tang, Tang; Walton, Jeffrey H.; Budamagunta, Madhu; Maezawa, Izumi; Kálai, Tamás; Hideg, Kálmán; Singh, Vikrant; Wulff, Heike; Gong, Qizhi; Jin, Lee-Way; Louie, Angelique; Voss, John C.
2017-01-01
Alzheimer’s disease (AD) is characterized by depositions of the amyloid-β (Aβ) peptide in the brain. The disease process develops over decades, with substantial neurological loss occurring before a clinical diagnosis of dementia can be rendered. It is therefore imperative to develop methods that permit early detection and monitoring of disease progression. In addition, the multifactorial pathogenesis of AD has identified several potential avenues for AD intervention. Thus, evaluation of therapeutic candidates over lengthy trial periods also demands a practical, noninvasive method for measuring Aβ in the brain. Magnetic resonance imaging (MRI) is the obvious choice for such measurements, but contrast enhancement for Aβ has only been achieved using Gd(III)-based agents, and there is great interest in gadolinium-free methods to image the brain. In this study, we provide the first demonstration that a nitroxide-based small molecule produces MRI contrast in brain specimens with elevated levels of Aβ. The molecule is comprised of a fluorene (a molecule with high affinity for Aβ) and a nitroxide spin label (a paramagnetic MRI contrast species). Labeling of brain specimens with the spin-labeled fluorene (SLF) produces negative contrast in samples from AD model mice, whereas no negative contrast is seen in specimens harvested from wild-type mice. Injection of SLF into live mice resulted in good brain penetration, with the compound able to generate contrast 24 hr post-injection. These results provide proof of concept for an early, noninvasive, gadolinium-free method of detecting amyloid plaques by magnetic resonance imaging (MRI). PMID:27911291
Feature Selection for Classification of Polar Regions Using a Fuzzy Expert System
NASA Technical Reports Server (NTRS)
Penaloza, Mauel A.; Welch, Ronald M.
1996-01-01
Labeling, feature selection, and the choice of classifier are critical elements for classification of scenes and for image understanding. This study examines several methods for feature selection in polar regions, including the use of a fuzzy logic-based expert system for further refinement of a set of selected features. Six Advanced Very High Resolution Radiometer (AVHRR) Local Area Coverage (LAC) arctic scenes are classified into nine classes: water, snow/ice, ice cloud, land, thin stratus, stratus over water, cumulus over water, textured snow over water, and snow-covered mountains. Sixty-seven spectral and textural features are computed and analyzed by the feature selection algorithms. The divergence, histogram analysis, and discriminant analysis approaches are intercompared for their effectiveness in feature selection. The fuzzy expert system method is used not only to determine the effectiveness of each approach in classifying polar scenes, but also to further reduce the features to a more optimal set. For each selection method, features are ranked from best to worst, and the best half of the features are selected. Then, rules using these selected features are defined. The results of running the fuzzy expert system with these rules show that the divergence method produces the best set of features: not only does it produce the highest classification accuracy, it also has the lowest computational requirements. A reduction of the set of features produced by the divergence method using the fuzzy expert system results in an overall classification accuracy of over 95%. However, this increase in accuracy has a high computational cost.
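A minimal sketch of divergence-based feature ranking, the selection approach the study found best; it scores each feature by a symmetric Kullback-Leibler divergence between two classes under univariate Gaussian assumptions (the two-class synthetic data below stand in for the AVHRR features):

```python
import numpy as np

rng = np.random.default_rng(0)
X_water = rng.normal([0.2, 0.5, 0.9], [0.05, 0.2, 0.3], size=(200, 3))
X_snow  = rng.normal([0.8, 0.5, 1.0], [0.05, 0.2, 0.3], size=(200, 3))

def gaussian_divergence(a, b):
    """Symmetric KL divergence between two univariate Gaussian fits."""
    m1, v1 = a.mean(), a.var() + 1e-12
    m2, v2 = b.mean(), b.var() + 1e-12
    return 0.5 * ((v1 / v2 + v2 / v1 - 2.0)
                  + (m1 - m2) ** 2 * (1.0 / v1 + 1.0 / v2))

scores = [gaussian_divergence(X_water[:, j], X_snow[:, j]) for j in range(3)]
ranking = np.argsort(scores)[::-1]      # most separable feature first
print("feature ranking:", ranking, "scores:", np.round(scores, 2))
```

Feature 0, whose class means are well separated relative to its variance, ranks first; the other two contribute little separability and would be pruned before rule definition.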
Ren, Yan; Qiu, Yi; Wan, De-Guang; Lu, Xian-Ming; Guo, Jin-Lin
2013-05-01
The content and types of soluble proteins in Cordyceps sinensis from different producing areas and processed with different methods were analyzed using the Bradford method and 2-DE technology, in order to discover significant differences in the soluble proteins of C. sinensis processed with different methods and from different producing areas. The preliminary study indicated that the content and diversity of soluble proteins were related to producing area and processing method to some extent.
M, Jeya
2014-01-01
Introduction: Pseudomonas aeruginosa is a frequent colonizer of hospitalized patients. It is responsible for serious infections such as meningitis, urological infections, septicemia and pneumonia. Carbapenem resistance of Pseudomonas aeruginosa is increasingly reported and is often mediated by production of metallo-β-lactamase (MBL). Multidrug resistance in Pseudomonas aeruginosa isolates may involve reduced cell wall permeability, production of chromosomal and plasmid-mediated β-lactamases, aminoglycoside-modifying enzymes and an active multidrug efflux mechanism. Objective: This study aimed to detect the presence and the nature of plasmids among metallo-β-lactamase-producing Pseudomonas aeruginosa isolates, and to detect the presence of the blaVIM gene in these isolates. Materials and Methods: Clinical isolates of Pseudomonas aeruginosa showing metallo-β-lactamase (MBL) production were collected. MBL production was confirmed by three different methods. From the MBL-producing isolates, plasmid extraction was done by the alkaline lysis method. Plasmid-positive isolates were subjected to blaVIM gene detection by PCR. Results: Two thousand and seventy-six clinical samples yielded 316 (15.22%) Pseudomonas aeruginosa isolates, of which 141 (44.62%) were multidrug resistant. Among them, 25 (17.73%) were metallo-β-lactamase producers. Plasmids were extracted from 18 of the 25 isolates tested. Five of the 18 isolates were positive for the blaVIM gene by PCR amplification. Conclusion: The MBL producers were susceptible to polymyxin/colistin, with MICs ranging from 0.5-2 μg/mL. Molecular detection of the specific blaVIM gene was positive among the carbapenem-resistant isolates. PMID:25120980
Slow-rotation dynamic SPECT with a temporal second derivative constraint.
Humphries, T; Celler, A; Trummer, M
2011-08-01
Dynamic tracer behavior in the human body arises as a result of continuous physiological processes. Hence, the change in tracer concentration within a region of interest (ROI) should follow a smooth curve. The authors propose a modification to an existing slow-rotation dynamic SPECT reconstruction algorithm (dSPECT) with the goal of improving the smoothness of time activity curves (TACs) and other properties of the reconstructed image. The new method, denoted d2EM, imposes a constraint on the second derivative (concavity) of the TAC in every voxel of the reconstructed image, allowing it to change sign at most once. Further constraints are enforced to prevent other nonphysical behaviors from arising. The new method is compared with dSPECT using digital phantom simulations and experimental dynamic 99mTc-DTPA renal SPECT data, to assess any improvement in image quality. In both phantom simulations and healthy volunteer experiments, the d2EM method provides smoother TACs than dSPECT, with more consistent shapes in regions with dynamic behavior. Magnitudes of TACs within an ROI still vary noticeably in both dSPECT and d2EM images, but also in images produced using an OSEM approach that reconstructs each time frame individually, based on much more complete projection data. TACs produced by averaging over a region are similar using either method, even for small ROIs. Results for experimental renal data show expected behavior in images produced by both methods, with d2EM providing somewhat smoother mean TACs and more consistent TAC shapes. The d2EM method is successful in improving the smoothness of time activity curves obtained from the reconstruction, as well as improving consistency of TAC shapes within ROIs.
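The d2EM constraint itself is easy to state in code. A minimal sketch, assuming the constraint is checked on a finished time-activity curve (in the actual method it is enforced inside the iterative reconstruction):

```python
import numpy as np

def concavity_sign_changes(tac):
    """Count sign changes of the second difference of a time-activity curve."""
    d2 = np.diff(np.asarray(tac, float), n=2)
    signs = np.sign(d2[np.abs(d2) > 1e-12])   # ignore numerically flat spots
    return int(np.sum(signs[1:] != signs[:-1]))

smooth_tac = [1.0, 3.0, 4.5, 5.0, 4.8, 4.2, 3.5]   # uptake then washout
noisy_tac  = [1.0, 3.5, 2.0, 4.8, 3.1, 4.0, 2.5]   # oscillating, nonphysical

print(concavity_sign_changes(smooth_tac) <= 1)   # True: admissible under d2EM
print(concavity_sign_changes(noisy_tac) <= 1)    # False: would be disallowed
```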
Comparison of prosthetic models produced by traditional and additive manufacturing methods.
Park, Jin-Young; Kim, Hae-Young; Kim, Ji-Hwan; Kim, Jae-Hong; Kim, Woong-Chul
2015-08-01
The purpose of this study was to verify the clinical feasibility of additive manufacturing by comparing the accuracy of four different manufacturing methods for metal copings: the conventional lost wax technique (CLWT); a subtractive method using wax blank milling (WBM); and two additive methods, multi jet modeling (MJM) and micro-stereolithography (Micro-SLA). Thirty study models were created using an acrylic model with the maxillary upper right canine, first premolar, and first molar teeth. Based on the scan files from a non-contact blue light scanner (Identica; Medit Co. Ltd., Seoul, Korea), thirty cores were produced using the WBM, MJM, and Micro-SLA methods, respectively, and another thirty frameworks were produced using the CLWT method. To measure the marginal and internal gap, the silicone replica method was adopted, and the silicone images obtained were evaluated using a digital microscope (KH-7700; Hirox, Tokyo, Japan) at 140X magnification. Analyses were performed using two-way analysis of variance (ANOVA) and the Tukey post hoc test (α=.05). The mean marginal and internal gaps showed significant differences according to tooth type (P<.001 and P<.001, respectively) and manufacturing method (P<.037 and P<.001, respectively). Unlike the WBM and MJM methods, Micro-SLA did not show a significant difference from CLWT in mean marginal gap. The mean gaps resulting from the four manufacturing methods were within a clinically allowable range, and, thus, the clinical use of additive manufacturing methods is acceptable as an alternative to the traditional lost wax technique and subtractive manufacturing.
Method for production of carbon nanofiber mat or carbon paper
Naskar, Amit K.
2015-08-04
Method for the preparation of a non-woven mat or paper made of carbon fibers, the method comprising carbonizing a non-woven mat or paper preform (precursor) comprised of a plurality of bonded sulfonated polyolefin fibers to produce said non-woven mat or paper made of carbon fibers. The preforms and resulting non-woven mat or paper made of carbon fiber, as well as articles and devices containing them, and methods for their use, are also described.
From Waste to Wealth: Using Produced Water for Agriculture in Colorado
NASA Astrophysics Data System (ADS)
Dolan, F.; Hogue, T. S.
2017-12-01
According to estimates from the Colorado Water Plan, the state's population may double by 2050. Due to increasing demand, as much as 0.8 million irrigated acres may dry up statewide as a result of agricultural-to-municipal and industrial water transfers. To help mitigate this loss, new sources of water are being explored in Colorado. One such source may be produced water. Oil and gas production in 2016 alone generated over 300 million barrels of produced water. Currently, the most common method of disposal of produced water is deep well injection, which is costly and has been shown to cause induced seismicity. Treating this water to agricultural standards eliminates the need for disposal and provides a new source of water. This research explores which counties in Colorado may be best suited to reusing produced water for agriculture, based on a combined index of need, quality of produced water, and quantity of produced water. The volumetric impact of using produced water for agricultural needs is determined for the top six counties. Irrigation demand is obtained using evapotranspiration estimates from a range of methods, including remote sensing products and ground-based observations. The economic feasibility of treating produced water to irrigation standards is also determined using treatment costs found in the literature and disposal costs in each county. Finally, data from the IHS database are used to obtain the ratio between hydraulic fracturing fluid volumes and produced water volumes in each county. The results of this research will aid in the transition from viewing produced water as a waste product to using it as a tool to help secure water for the arid West.
Laboratory Demonstration of Low-Cost Method for Producing Thin Film on Nonconductors.
ERIC Educational Resources Information Center
Ebong, A. U.; And Others
1991-01-01
A low-cost procedure for metallizing a silicon p-n junction diode by electroless nickel plating is reported. The procedure demonstrates that expensive salts can be excluded without affecting the results. The experimental procedure, measurement, results, and discussion are included. (Author/KR)
Comparison of GEOS-5 AGCM planetary boundary layer depths computed with various definitions
NASA Astrophysics Data System (ADS)
McGrath-Spangler, E. L.; Molod, A.
2014-07-01
Accurate models of planetary boundary layer (PBL) processes are important for forecasting weather and climate. The present study compares seven methods of calculating PBL depth in the GEOS-5 atmospheric general circulation model (AGCM) over land. These methods depend on the eddy diffusion coefficients, bulk and local Richardson numbers, and the turbulent kinetic energy. The computed PBL depths are aggregated to the Köppen-Geiger climate classes, and some limited comparisons are made using radiosonde profiles. Most methods produce similar midday PBL depths, although in the warm, moist climate classes the bulk Richardson number method gives midday results that are lower than those given by the eddy diffusion coefficient methods. Additional analysis revealed that methods sensitive to turbulence driven by radiative cooling produce greater PBL depths, this effect being most significant during the evening transition. Nocturnal PBLs based on Richardson number methods are generally shallower than eddy diffusion coefficient based estimates. The bulk Richardson number estimate is recommended as the PBL height to inform the choice of the turbulent length scale, based on the similarity to other methods during the day, and the improved nighttime behavior.
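A minimal sketch of the bulk Richardson number estimate recommended above: scan upward from the surface and take the first level where the bulk Richardson number exceeds a critical value. The critical value of 0.25 and the idealized profiles are illustrative assumptions, not the GEOS-5 implementation:

```python
import numpy as np

G = 9.81  # gravitational acceleration (m s^-2)

def pbl_height_bulk_ri(z, theta_v, u, v, ri_crit=0.25):
    """First height where the bulk Richardson number crosses ri_crit."""
    for k in range(1, len(z)):
        shear2 = u[k] ** 2 + v[k] ** 2 + 1e-6     # avoid divide-by-zero
        ri_b = G * (theta_v[k] - theta_v[0]) * (z[k] - z[0]) / (theta_v[0] * shear2)
        if ri_b > ri_crit:
            return z[k]
    return z[-1]

z       = np.arange(0.0, 3000.0, 50.0)            # heights above ground (m)
theta_v = np.where(z < 1200.0, 300.0 - 5e-4 * z,  # mixed layer, then a
                   300.0 + 4e-3 * (z - 1200.0))   # stable layer aloft
u       = 5.0 + 3e-3 * z                          # toy wind profile (m/s)
v       = np.full_like(z, 1.0)

print(f"PBL depth ~ {pbl_height_bulk_ri(z, theta_v, u, v):.0f} m")
```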
Improved patch-based learning for image deblurring
NASA Astrophysics Data System (ADS)
Dong, Bo; Jiang, Zhiguo; Zhang, Haopeng
2015-05-01
Most recent image deblurring methods use only valid information found in the input image as the clue to fill the deblurred region. These methods usually suffer from insufficient prior information and relatively poor adaptiveness. The patch-based method not only uses the valid information of the input image itself, but also utilizes the prior information of sample images to improve adaptiveness. However, the cost function of this method is quite time-consuming, and the method may also produce ringing artifacts. In this paper, we propose an improved non-blind deblurring algorithm based on learning patch likelihoods. On one hand, we consider the effect of the Gaussian mixture model with different weights and normalize the weight values, which optimizes the cost function and reduces running time. On the other hand, a post-processing method is proposed to remove the ringing artifacts produced by the traditional patch-based method. Extensive experiments were performed. Experimental results verify that our method can effectively reduce the execution time, suppress ringing artifacts, and preserve the quality of the deblurred image.
Cloud-Induced Uncertainty for Visual Navigation
2014-12-26
images at the pixel level. The result is a method that can overlay clouds with various structures on top of any desired image to produce realistic...cloud-shaped structures . The primary contribution of this research, however, is to investigate and quantify the errors in features due to clouds. The...of clouds types, this method does not emulate the true structure of clouds. An alternative popular modern method of creating synthetic clouds is known
Do enteric neurons make hypocretin?
Baumann, Christian R.; Clark, Erika L.; Pedersen, Nigel P.; Hecht, Jonathan L.; Scammell, Thomas E.
2008-01-01
Hypocretins (orexins) are wake-promoting neuropeptides produced by hypothalamic neurons. These hypocretin-producing cells are lost in people with narcolepsy, possibly due to an autoimmune attack. Prior studies described hypocretin neurons in the enteric nervous system, and these cells could be an additional target of an autoimmune process. We sought to determine whether enteric hypocretin neurons are lost in narcoleptic subjects. Even though we tried several methods (including whole mounts, sectioned tissue, pre-treatment of mice with colchicine, and the use of various primary antisera), we could not identify hypocretin-producing cells in enteric nervous tissue collected from mice or normal human subjects. These results raise doubts about whether enteric neurons produce hypocretin. PMID:18191238
Upgrades to the REA method for producing probabilistic climate change projections
NASA Astrophysics Data System (ADS)
Xu, Ying; Gao, Xuejie; Giorgi, Filippo
2010-05-01
We present an augmented version of the Reliability Ensemble Averaging (REA) method designed to generate probabilistic climate change information from ensembles of climate model simulations. Compared to the original version, the augmented one includes consideration of multiple variables and statistics in the calculation of the performance-based weights. In addition, the model convergence criterion previously employed is removed. The method is applied to the calculation of changes in mean and variability for temperature and precipitation over different sub-regions of East Asia based on the recently completed CMIP3 multi-model ensemble. Comparison of the new and old REA methods, along with the simple averaging procedure, and the use of different combinations of performance metrics shows that at fine sub-regional scales the choice of weighting is relevant. This is mostly because the models show a substantial spread in performance for the simulation of precipitation statistics, a result that supports the use of model weighting as a useful option to account for wide ranges of quality of models. The REA method, and in particular the upgraded one, provides a simple and flexible framework for assessing the uncertainty related to the aggregation of results from ensembles of models in order to produce climate change information at the regional scale. KEY WORDS: REA method, Climate change, CMIP3
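The weighting at the core of the approach can be sketched compactly. A minimal illustration, assuming a reliability factor built only from model bias (the convergence criterion is dropped, as in the upgraded method); the per-model change values, biases, and natural-variability scale below are made up:

```python
import numpy as np

delta_t = np.array([2.1, 2.8, 1.9, 3.2, 2.5])   # per-model projected change (K)
bias    = np.array([0.3, 1.5, 0.4, 2.0, 0.8])   # per-model bias vs. observations (K)
epsilon = 0.6                                   # natural-variability scale (K)

# Reliability factor: 1 when |bias| <= epsilon, shrinking as bias grows.
R = np.minimum(1.0, epsilon / np.abs(bias))
w = R / R.sum()                                 # normalized REA-style weights

rea_mean = np.sum(w * delta_t)
spread = np.sqrt(np.sum(w * (delta_t - rea_mean) ** 2))   # weighted spread
print(f"weighted change: {rea_mean:.2f} K +/- {spread:.2f} K")
print("weights:", np.round(w, 2))
```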
Koottathape, Natthavoot; Takahashi, Hidekazu; Finger, Werner J.; Kanehira, Masafumi; Iwasaki, Naohiko; Aoyagi, Yujin
2012-06-01
Although attritive and abrasive wear of recent composite resins has been substantially reduced, in vitro wear testing with reasonably simulating devices and quantitative determination of the resulting wear are still needed. Three-dimensional scanning methods are frequently used for this purpose. The aim of this trial was to compare the maximum depth of wear and volume loss of composite samples evaluated with a contact profilometer and with a non-contact CCD camera imaging system, respectively. Twenty-three random composite specimens with wear traces produced in a ball-on-disc sliding device, using poppy seed slurry and PMMA suspension as third-body media, were evaluated with the contact profilometer (TalyScan 150, Taylor Hobson Ltd., Leicester, UK) and with the digital CCD microscope (VHX1000, KEYENCE, Osaka, Japan). The target parameters were maximum depth of wear and volume loss. Results: The individual measurement time needed with the non-contact CCD method was almost three hours less than that with the contact method. Both maximum depth of wear and volume loss data recorded with the two methods were linearly correlated (r² > 0.97; p < 0.01). The contact scanning method and the non-contact CCD method are equally suitable for determination of maximum depth of wear and volume loss of abraded composite resins.
Boron-carbide-aluminum and boron-carbide-reactive metal cermets
Halverson, Danny C.; Pyzik, Aleksander J.; Aksay, Ilhan A.
1986-01-01
Hard, tough, lightweight boron-carbide-reactive metal composites, particularly boron-carbide-aluminum composites, are produced. These composites have compositions with a plurality of phases. A method is provided, including the steps of wetting and reacting the starting materials, by which the microstructures in the resulting composites can be controllably selected. Starting compositions, reaction temperatures, reaction times, and reaction atmospheres are parameters for controlling the process and resulting compositions. The ceramic phases are homogeneously distributed in the metal phases and adhesive forces at ceramic-metal interfaces are maximized. An initial consolidation step is used to achieve fully dense composites. Microstructures of boron-carbide-aluminum cermets have been produced with modulus of rupture exceeding 110 ksi and fracture toughness exceeding 12 ksi√in. These composites and methods can be used to form a variety of structural elements.
Comparison of Virtual Oscillator and Droop Control: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brian B; Rodriguez, Miguel; Dhople, Sairaj
Virtual oscillator control and droop control are two techniques that can be used to ensure synchronization and power sharing of parallel inverters in islanded operation. VOC relies on the implementation of non-linear Van der Pol oscillator equations in the control system of the inverter, acting upon the time-domain instantaneous inverter current and terminal voltage. On the other hand, DC explicitly computes active and reactive power produced by the inverter and relies on limited bandwidth low-pass filters. Even though both methods can be engineered to produce the same steady-state characteristics, their dynamic performances are significantly different. This paper presents analytical and experimental results that aim to compare both methods. It is shown that VOC is inherently faster and enables minimizing the circulating currents. The results are verified using three 120V, 1kW inverters.
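The Van der Pol dynamics that VOC implements can be sketched in a few lines. This is an illustrative, unitless toy: the oscillator parameters, cubic nonlinearity, and current-feedback gain are assumptions, not a validated inverter controller:

```python
def voc_step(v, iL, i_inv, dt, L=1e-3, C=1e-3, sigma=1.0, alpha=0.5, kappa=0.1):
    """One Euler step of a Van der Pol virtual oscillator with current feedback."""
    dv  = (sigma * v - alpha * v ** 3 - iL - kappa * i_inv) / C  # nonlinear damping
    diL = v / L                                                  # harmonic coupling
    return v + dt * dv, iL + dt * diL

v, iL, dt = 0.1, 0.0, 1e-6      # a small seed voltage lets oscillations grow
amp = 0.0
for k in range(200000):         # ~0.2 s of simulated time, unloaded inverter
    v, iL = voc_step(v, iL, i_inv=0.0, dt=dt)
    if k > 190000:              # track amplitude once the limit cycle settles
        amp = max(amp, abs(v))

# The inverter's measured current would enter through i_inv; the oscillator
# voltage v (suitably scaled) becomes the inverter's voltage command.
print(f"limit-cycle amplitude ~ {amp:.2f} (set by sigma and alpha)")
```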
Locating dayside magnetopause reconnection with exhaust ion distributions
NASA Astrophysics Data System (ADS)
Broll, J. M.; Fuselier, S. A.; Trattner, K. J.
2017-05-01
Magnetic reconnection at Earth's dayside magnetopause is essential to magnetospheric dynamics. Determining where reconnection takes place is important to understanding the processes involved, and many questions about reconnection location remain unanswered. We present a method for locating the magnetic reconnection X line at Earth's dayside magnetopause under southward interplanetary magnetic field conditions using only ion velocity distribution measurements. Particle-in-cell simulations based on Cluster magnetopause crossings produce ion velocity distributions that we propagate through a model magnetosphere, allowing us to calculate the field-aligned distance between an exhaust observation and its associated reconnection line. We demonstrate this procedure for two events and compare our results with those of the Maximum Magnetic Shear Model; we find good agreement with its results and show that when our method is applicable, it produces more precise locations than the Maximum Shear Model.
Boron-carbide-aluminum and boron-carbide-reactive metal cermets. [B4C-Al]
Halverson, D.C.; Pyzik, A.J.; Aksay, I.A.
1985-05-06
Hard, tough, lightweight boron-carbide-reactive metal composites, particularly boron-carbide-aluminum composites, are produced. These composites have compositions with a plurality of phases. A method is provided, including the steps of wetting and reacting the starting materials, by which the microstructures in the resulting composites can be controllably selected. Starting compositions, reaction temperatures, reaction times, and reaction atmospheres are parameters for controlling the process and resulting compositions. The ceramic phases are homogeneously distributed in the metal phases and adhesive forces at ceramic-metal interfaces are maximized. An initial consolidation step is used to achieve fully dense composites. Microstructures of boron-carbide-aluminum cermets have been produced with modulus of rupture exceeding 110 ksi and fracture toughness exceeding 12 ksi√in. These composites and methods can be used to form a variety of structural elements.
Performance of three reflectance calibration methods for airborne hyperspectral spectrometer data.
Miura, Tomoaki; Huete, Alfredo R
2009-01-01
In this study, the performance and accuracy of three methods for converting airborne hyperspectral spectrometer data to reflectance factors were characterized and compared. The "reflectance mode (RM)" method, which calibrates a spectrometer against a white reference panel prior to mounting on an aircraft, resulted in spectral reflectance retrievals that were biased and distorted. The magnitudes of these bias errors and distortions varied significantly, depending on the time of day and the length of the flight campaign. The "linear-interpolation (LI)" method, which converts airborne spectrometer data by taking a ratio of linearly-interpolated reference values from the preflight and post-flight reference panel readings, resulted in precise but inaccurate reflectance retrievals. These reflectance spectra were not distorted, but were subject to bias errors of varying magnitudes depending on the flight duration. The "continuous panel (CP)" method uses a multi-band radiometer to obtain continuous measurements over a reference panel throughout the flight campaign, in order to adjust the magnitudes of the linearly-interpolated reference values from the preflight and post-flight reference panel readings. Airborne hyperspectral reflectance retrievals obtained using this method were the most accurate, making the CP approach the most reliable of the three calibration methods. The performance of the CP method in retrieving accurate reflectance factors was consistent across times of day and flight durations. Based on the dataset analyzed in this study, the uncertainty of the CP method has been estimated to be 0.0025 ± 0.0005 reflectance units for wavelength regions not affected by atmospheric absorption. The RM method can produce reasonable results only for a very short flight (e.g., < 15 minutes) conducted around local solar noon. The flight duration should be kept shorter than 30 minutes for the LI method to produce results with reasonable accuracy. An important advantage of the CP method is that it can be used for long-duration flight campaigns (e.g., 1-2 hours). Although this study focused on reflectance calibration of airborne spectrometer data, the methods evaluated and the results obtained are directly applicable to ground spectrometer measurements.
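A minimal sketch contrasting the LI and CP corrections, assuming made-up panel radiances and timestamps (real data would use the spectrometer's and the multi-band radiometer's actual readings):

```python
import numpy as np

t_pre, t_post = 0.0, 60.0            # panel readings at start/end of flight (min)
panel_pre, panel_post = 1.00, 0.80   # panel radiance drifted during the flight

def li_reference(t):
    """LI method: linearly interpolate the reference panel radiance."""
    frac = (t - t_pre) / (t_post - t_pre)
    return panel_pre + frac * (panel_post - panel_pre)

def cp_reference(t, t_cp, panel_cp):
    """CP method: rescale the LI value with continuous panel measurements."""
    li_at_cp = np.array([li_reference(x) for x in t_cp])
    scale = np.interp(t, t_cp, panel_cp / li_at_cp)
    return li_reference(t) * scale

t_cp     = np.array([0.0, 15.0, 30.0, 45.0, 60.0])    # radiometer sample times
panel_cp = np.array([1.00, 0.97, 0.85, 0.82, 0.80])   # actual panel behavior

target = 0.42        # airborne target radiance observed at t = 30 min
print("LI reflectance:", round(target / li_reference(30.0), 3))
print("CP reflectance:", round(target / cp_reference(30.0, t_cp, panel_cp), 3))
```

Because the toy panel radiance sags nonlinearly mid-flight, the LI divisor is too high at t = 30 and its reflectance is biased low; the CP rescaling absorbs the nonlinearity.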
Chen, Xiaoxia; Zhao, Jing; Chen, Tianshu; Gao, Tao; Zhu, Xiaoli; Li, Genxi
2018-01-01
Comprehensive analysis of the expression level and location of tumor-associated membrane proteins (TMPs) is of vital importance for the profiling of tumor cells. Currently, two independent techniques, i.e. ex situ detection and in situ imaging, are usually required for the quantification and localization of TMPs, respectively, resulting in some inevitable problems. Methods: Herein, based on a well-designed and fluorophore-labeled DNAzyme, we develop an integrated and facile method in which imaging and quantification of TMPs in situ are achieved simultaneously in a single system. The labeled DNAzyme not only produces localized fluorescence for the visualization of TMPs but also catalyzes the cleavage of a substrate to produce quantitative fluorescent signals that can be collected from solution for the sensitive detection of TMPs. Results: Results from the DNAzyme-based in situ imaging and quantification of TMPs match well with traditional immunofluorescence and western blotting. In addition to the advantage of being two-in-one, the DNAzyme-based method is highly sensitive, allowing the detection of TMPs in only 100 cells. Moreover, the method is nondestructive: cells retain their physiological activity after analysis and can be cultured for other applications. Conclusion: The integrated system provides solid results for both imaging and quantification of TMPs, making it a competitive alternative to some traditional techniques for the analysis of TMPs, with potential application as a toolbox in the future.
Spatial Access to Primary Care Providers in Appalachia
Donohoe, Joseph; Marshall, Vince; Tan, Xi; Camacho, Fabian T.; Anderson, Roger T.; Balkrishnan, Rajesh
2016-01-01
Purpose: The goal of this research was to examine spatial access to primary care physicians in Appalachia using both traditional access measures and the 2-step floating catchment area (2SFCA) method. Spatial access to care was compared between urban and rural regions of Appalachia. Methods: The study region included the Appalachian counties of Pennsylvania, Ohio, Kentucky, and North Carolina. Primary care physicians during 2008 and total census block group populations were geocoded into GIS software. Ratios of county physicians to population, driving time to the nearest primary care physician, and various 2SFCA approaches were compared. Results: Urban areas of the study region had shorter travel times to the closest primary care physician. Provider-to-population ratios produced results that varied widely from one county to another because of strict geographic boundaries. The 2SFCA method produced varied results depending on the distance decay weight and variable catchment size techniques chosen. 2SFCA scores showed greater access to care in urban areas of Pennsylvania, Ohio, and North Carolina. Conclusion: The different parameters of the 2SFCA method, distance decay weights and variable catchment sizes, have a large impact on the resulting spatial access to primary care scores. The findings of this study suggest using a relative 2SFCA approach, the spatial access ratio method, when detailed patient travel data are unavailable. The 2SFCA method shows promise for measuring access to care in Appalachia, but more research on patient travel preferences is needed to inform implementation. PMID:26906524
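A minimal sketch of the 2SFCA computation with a Gaussian distance-decay weight; the travel times, populations, and 30-minute catchment below are synthetic stand-ins for road-network travel data:

```python
import numpy as np

travel = np.array([[10.0, 25.0, 45.0],   # travel time (min): rows = block
                   [30.0,  5.0, 20.0],   # groups, cols = physician sites
                   [60.0, 35.0, 15.0]])
pop  = np.array([5000.0, 8000.0, 3000.0])   # block-group populations
docs = np.array([3.0, 10.0, 4.0])           # FTE physicians at each site

def decay(t, t0=30.0):
    """Gaussian decay weight, zero beyond the catchment threshold t0."""
    return np.where(t <= t0, np.exp(-(t / t0) ** 2), 0.0)

W = decay(travel)
# Step 1: provider-to-population ratio within each physician's catchment.
R = docs / (W * pop[:, None]).sum(axis=0)
# Step 2: for each block group, sum the ratios of reachable physicians.
access = (W * R[None, :]).sum(axis=1)
print(np.round(access * 1000, 3))   # physicians per 1,000 residents
```

Swapping the decay function or the threshold t0 changes the scores materially, which is exactly the parameter sensitivity the abstract emphasizes.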
Image processing enhancement of high-resolution TEM micrographs of nanometer-size metal particles
NASA Technical Reports Server (NTRS)
Artal, P.; Avalos-Borja, M.; Soria, F.; Poppa, H.; Heinemann, K.
1989-01-01
The high-resolution TEM detectability of lattice fringes from metal particles supported on substrates is impeded by the substrate itself. Singular value decomposition (SVD) and Fourier filtering (FFT) methods were applied to standard high-resolution micrographs to enhance lattice resolution from particles as well as from crystalline substrates. SVD produced good results for one direction of fringes, and it can be implemented as a real-time process. Fourier methods are independent of azimuthal direction and allow separation of particle lattice planes from those pertaining to the substrate, which makes it feasible to detect possible substrate distortions produced by the supported particle. This method, on the other hand, is more elaborate and requires more computer time than SVD, and is therefore less likely to be used in real-time image processing applications.
Automation of POST Cases via External Optimizer and "Artificial p2" Calculation
NASA Technical Reports Server (NTRS)
Dees, Patrick D.; Zwack, Mathew R.; Michelson, Diane K.
2017-01-01
During conceptual design, speed and accuracy are often at odds. Specifically, in the realm of launch vehicles, optimizing the ascent trajectory requires a large pool of analytical power and expertise. Experienced analysts working on familiar vehicles can produce optimal trajectories in a short time frame; however, whenever either "experienced" or "familiar" is not applicable, the optimization process can become quite lengthy. In order to construct a vehicle-agnostic method, an established global optimization algorithm is needed. In this work the authors develop an "artificial" error term to map arbitrary control vectors to non-zero error, by which a global method can operate. Two global methods are compared alongside Design of Experiments and random sampling, and are shown to produce results comparable to analysis done by a human expert.
Using an interference spectrum as a short-range absolute rangefinder with fiber and wideband source
NASA Astrophysics Data System (ADS)
Hsieh, Tsung-Han; Han, Pin
2018-06-01
Recently, a new type of displacement instrument using spectral interference has been developed, which utilizes fiber and a wideband light source to produce an interference spectrum. In this work, we develop a method that measures the absolute air-gap distance by taking the wavelengths at two interference spectrum minima. The experimental results agree with the theoretical calculations. The method is also utilized to produce and control the spectral switch, which is much easier than previous methods using other control mechanisms. A scanning mode of this scheme for stepped-surface measurement is suggested, and it is verified by a standard thickness gauge test. Our scheme differs from one available on the market, which may use a curve-fitting method, and some comparisons are made between our scheme and that one.
Simultaneous optimization method for absorption spectroscopy postprocessing.
Simms, Jean M; An, Xinliang; Brittelle, Mack S; Ramesh, Varun; Ghandhi, Jaal B; Sanders, Scott T
2015-05-10
A simultaneous optimization method is proposed for absorption spectroscopy postprocessing. This method is particularly useful for thermometry measurements based on congested spectra, as commonly encountered in combustion applications of H2O absorption spectroscopy. A comparison test demonstrated that the simultaneous optimization method had greater accuracy, greater precision, and was more user-independent than the common step-wise postprocessing method previously used by the authors. The simultaneous optimization method was also used to process experimental data from an environmental chamber and a constant volume combustion chamber, producing results with errors on the order of only 1%.
Lebenberg, Jessica; Lalande, Alain; Clarysse, Patrick; Buvat, Irene; Casta, Christopher; Cochet, Alexandre; Constantinidès, Constantin; Cousty, Jean; de Cesare, Alain; Jehan-Besson, Stephanie; Lefort, Muriel; Najman, Laurent; Roullot, Elodie; Sarry, Laurent; Tilmant, Christophe; Frouin, Frederique; Garreau, Mireille
2015-01-01
This work aimed at combining different segmentation approaches to produce a robust and accurate segmentation result. Three to five segmentation results of the left ventricle were combined using the STAPLE algorithm and the reliability of the resulting segmentation was evaluated in comparison with the result of each individual segmentation method. This comparison was performed using a supervised approach based on a reference method. Then, we used an unsupervised statistical evaluation, the extended Regression Without Truth (eRWT) that ranks different methods according to their accuracy in estimating a specific biomarker in a population. The segmentation accuracy was evaluated by estimating six cardiac function parameters resulting from the left ventricle contour delineation using a public cardiac cine MRI database. Eight different segmentation methods, including three expert delineations and five automated methods, were considered, and sixteen combinations of the automated methods using STAPLE were investigated. The supervised and unsupervised evaluations demonstrated that in most cases, STAPLE results provided better estimates than individual automated segmentation methods. Overall, combining different automated segmentation methods improved the reliability of the segmentation result compared to that obtained using an individual method and could achieve the accuracy of an expert.
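STAPLE estimates a consensus segmentation and per-rater performance parameters with an EM algorithm. The sketch below is a deliberately simplified stand-in, a plain majority-vote fusion, shown only to illustrate the idea of combining several binary left-ventricle masks; the masks are hypothetical:

    import numpy as np

    def majority_vote(masks):
        """Fuse binary segmentations by voting.

        masks: array of shape (n_raters, H, W) with values in {0, 1}.
        Returns the consensus mask (1 where more than half the raters agree).
        A simplified stand-in for STAPLE, which instead runs EM to weight
        raters by their estimated sensitivity and specificity.
        """
        masks = np.asarray(masks)
        votes = masks.sum(axis=0)
        return (votes > masks.shape[0] / 2).astype(np.uint8)

    # Three hypothetical 4x4 left-ventricle masks from different methods.
    m = np.zeros((3, 4, 4), dtype=np.uint8)
    m[0, 1:3, 1:3] = 1
    m[1, 1:4, 1:3] = 1
    m[2, 1:3, 1:4] = 1
    consensus = majority_vote(m)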
Implementation of the common phrase index method on the phrase query for information retrieval
NASA Astrophysics Data System (ADS)
Fatmawati, Triyah; Zaman, Badrus; Werdiningsih, Indah
2017-08-01
As technology has developed, finding information in news text has become easy, because news text is distributed not only in print media, such as newspapers, but also in electronic media that can be accessed using a search engine. In the process of finding relevant documents with a search engine, a phrase is often used as a query. The number of words that make up the phrase query and their positions clearly affect the relevance of the documents produced, and consequently the accuracy of the information obtained. Based on this problem, the purpose of this research was to analyze the implementation of the common phrase index method for information retrieval. The research was conducted on English news text and implemented in a prototype to determine the relevance level of the documents produced. The system is built with the stages of pre-processing, indexing, term weighting calculation, and cosine similarity calculation. The system then displays the document search results in a sequence based on cosine similarity. System testing was conducted using 100 documents and 20 queries, and the results were used in the evaluation stage. First, the relevant documents were determined using the kappa statistic. Second, the system success rate was determined using precision, recall, and F-measure. In this research, the kappa statistic was 0.71, so the relevance judgments were eligible for system evaluation. The evaluation produced a precision of 0.37, a recall of 0.50, and an F-measure of 0.43. From these results it can be said that the success rate of the system in producing relevant documents is low.
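The ranking and evaluation steps reported above are standard and easy to sketch. The snippet below ranks documents by cosine similarity of TF-IDF vectors and computes precision, recall, and F-measure against a relevance set; the toy corpus and relevance judgments are hypothetical, not the study's test collection:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = ["stock markets rally after rate cut",
            "central bank announces a rate cut",
            "local team wins the cup final"]
    query = ["rate cut"]

    vec = TfidfVectorizer().fit(docs)
    scores = cosine_similarity(vec.transform(query), vec.transform(docs)).ravel()
    ranking = scores.argsort()[::-1]           # documents ordered by similarity

    retrieved = set(ranking[:2])               # suppose the system returns top 2
    relevant = {0, 1}                          # hypothetical ground truth
    tp = len(retrieved & relevant)
    precision = tp / len(retrieved)
    recall = tp / len(relevant)
    f_measure = 2 * precision * recall / (precision + recall)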
Modeling IrisCode and its variants as convex polyhedral cones and its security implications.
Kong, Adams Wai-Kin
2013-03-01
IrisCode, developed by Daugman in 1993, is the most influential iris recognition algorithm. A thorough understanding of IrisCode is essential because over 100 million persons have been enrolled by this algorithm, and many biometric personal identification and template protection methods have been developed based on IrisCode. This paper shows that a template produced by IrisCode or its variants is a convex polyhedral cone in a hyperspace. Its central ray, being a rough representation of the original biometric signal, can be computed by a simple algorithm, which can often be implemented in one Matlab command line. The central ray is an expected ray and also an optimal ray of an objective function on a group of distributions. This algorithm is derived from the geometric properties of a convex polyhedral cone but does not rely on any prior knowledge (e.g., iris images). The experimental results show that biometric templates, including iris and palmprint templates, produced by different recognition methods can be matched through the central rays in their convex polyhedral cones, and that templates protected by a method extended from IrisCode can be broken into. These experimental results indicate that, without a thorough security analysis, convex polyhedral cone templates cannot be assumed secure. Additionally, the simplicity of the algorithm implies that even junior hackers without knowledge of advanced image processing and biometric databases can still break into protected templates and reveal relationships among templates produced by different recognition methods.
Hansen, Bjoern Oest; Meyer, Etienne H; Ferrari, Camilla; Vaid, Neha; Movahedi, Sara; Vandepoele, Klaas; Nikoloski, Zoran; Mutwil, Marek
2018-03-01
Recent advances in gene function prediction rely on ensemble approaches that integrate results from multiple inference methods to produce superior predictions. Yet, these developments remain largely unexplored in plants. We have explored and compared two methods to integrate 10 gene co-function networks for Arabidopsis thaliana and demonstrate how the integration of these networks produces more accurate gene function predictions for a larger fraction of genes with unknown function. These predictions were used to identify genes involved in mitochondrial complex I formation, and for five of them, we confirmed the predictions experimentally. The ensemble predictions are provided as a user-friendly online database, EnsembleNet. The methods presented here demonstrate that ensemble gene function prediction is a powerful method to boost prediction performance, whereas the EnsembleNet database provides a cutting-edge community tool to guide experimentalists. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.
Sengüven, Burcu; Baris, Emre; Oygur, Tulin; Berktas, Mehmet
2014-01-01
We discuss a protocol involving xylene-ethanol deparaffinization on slides followed by a kit-based extraction that allows high-quality DNA to be extracted from FFPE tissues. DNA was extracted from the FFPE tissues of 16 randomly selected blocks. Methods involving deparaffinization on slides or in tubes, enzyme digestion overnight or for 72 hours, and isolation using the phenol-chloroform method or a silica-based commercial kit were compared in terms of yield, concentration, and amplifiability. The highest yield of DNA was produced from the samples that were deparaffinized on slides, digested for 72 hours, and isolated with a commercial kit. Samples isolated with the phenol-chloroform method produced DNA of lower purity than samples purified with the kit, and the kit-isolated samples gave better PCR amplification. Silica-based commercial kits and deparaffinization on slides should therefore be considered for DNA extraction from FFPE tissues.
Color extended visual cryptography using error diffusion.
Kang, InKoo; Arce, Gonzalo R; Lee, Heung-Kyu
2011-01-01
Color visual cryptography (VC) encrypts a color secret message into n color halftone image shares. Previous methods in the literature show good results for black-and-white or gray-scale VC schemes; however, they cannot be applied directly to color shares due to different color structures. Some methods for color visual cryptography are unsatisfactory, producing either meaningless shares or meaningful shares with low visual quality, which leads to suspicion of encryption. This paper introduces the concepts of visual information pixel (VIP) synchronization and error diffusion to attain a color visual cryptography encryption method that produces meaningful color shares with high visual quality. VIP synchronization retains the positions of pixels carrying visual information of the original images throughout the color channels, and error diffusion generates shares pleasant to human eyes. Comparisons with previous approaches show the superior performance of the new method.
Vessel extraction in retinal images using automatic thresholding and Gabor Wavelet.
Ali, Aziah; Hussain, Aini; Wan Zaki, Wan Mimi Diyana
2017-07-01
Retinal image analysis has been widely used for early detection and diagnosis of multiple systemic diseases. Accurate vessel extraction in retinal images is a crucial step towards a fully automated diagnosis system. This work presents an efficient unsupervised method for extracting blood vessels from retinal images by combining the existing Gabor Wavelet (GW) method with automatic thresholding. The green channel is extracted from the color retinal image and used to produce a Gabor feature image using GW. Both the green channel image and the Gabor feature image undergo a vessel-enhancement step in order to highlight blood vessels. Next, the two vessel-enhanced images are transformed to binary images using automatic thresholding before being combined to produce the final vessel output. Combining the images significantly improves blood vessel extraction performance compared to using either image individually. The effectiveness of the proposed method was proven via comparative analysis with existing methods, validated using the publicly available DRIVE database.
Bioactive lipids in the butter production chain from Parmigiano Reggiano cheese area.
Verardo, Vito; Gómez-Caravaca, Ana M; Gori, Alessandro; Losi, Giuseppe; Caboni, Maria F
2013-11-01
Bovine milk contains hundreds of diverse components, including proteins, peptides, amino acids, lipids, lactose, vitamins and minerals. Specifically, the lipid composition is influenced by different variables such as breed, feed and technological process. In this study the fatty acid and phospholipid compositions of different samples of butter and its by-products from the Parmigiano Reggiano cheese area, produced by industrial and traditional churning processes, were determined. The fatty acid composition of samples manufactured by the traditional method showed higher levels of monounsaturated and polyunsaturated fatty acids compared with industrial samples. In particular, the contents of n-3 fatty acids and conjugated linoleic acids were higher in samples produced by the traditional method than in samples produced industrially. Sample phospholipid composition also varied between the two technological processes. Phosphatidylethanolamine was the major phospholipid in cream, butter and buttermilk samples obtained by the industrial process as well as in cream and buttermilk samples from the traditional process, while phosphatidylcholine was the major phospholipid in traditionally produced butter. This result may be explained by the different churning processes causing different types of membrane disruption. Generally, samples produced traditionally had higher contents of total phospholipids; in particular, butter produced by the traditional method had a total phospholipid content 33% higher than that of industrially produced butter. The samples studied represent the two types of products present in the Parmigiano Reggiano cheese area, where the industrial churning process is widespread compared with the traditional processing of Reggiana cow's milk. This is because Reggiana cow's milk production is lower than that of other breeds and the traditional churning process is time-consuming and economically disadvantageous. However, its products have been demonstrated to contain more bioactive lipids compared with products obtained from other breeds and by the industrial process. © 2013 Society of Chemical Industry.
Chuang, Ya-Hui; Zhang, Yingjie; Zhang, Wei; Boyd, Stephen A; Li, Hui
2015-07-24
Land application of biosolids and irrigation with reclaimed water in agricultural production can result in the accumulation of pharmaceuticals in vegetable produce. To better assess the potential human health impact of long-term consumption of pharmaceutical-contaminated vegetables, it is important to accurately quantify the amount of pharmaceuticals accumulated in vegetables. In this study, a quick, easy, cheap, effective, rugged and safe (QuEChERS) method was developed and optimized to extract multiple classes of pharmaceuticals from vegetables, which were subsequently quantified by liquid chromatography coupled to tandem mass spectrometry. For the eleven target pharmaceuticals in celery and lettuce, the extraction recovery of the QuEChERS method ranged from 70.1 to 118.6% with relative standard deviation <20%, and the method detection limit was at the level of nanograms of pharmaceuticals per gram of vegetables. The results revealed that the performance of the QuEChERS method was comparable to, or better than, that of the accelerated solvent extraction (ASE) method for extraction of pharmaceuticals from plants. The two optimized extraction methods were applied to quantify the uptake of pharmaceuticals by celery and lettuce grown hydroponically. The results showed that all eleven target pharmaceuticals could be absorbed by the vegetables from water. Compared to the ASE method, the QuEChERS method offers the advantages of shorter sample preparation time, reduced costs, and smaller amounts of organic solvent. The established QuEChERS method could be used to determine the accumulation of multiple classes of pharmaceutical residues in vegetables and other plants, which is needed to evaluate the quality and safety of agricultural produce consumed by humans. Copyright © 2015 Elsevier B.V. All rights reserved.
Hydrogen absorption induced metal deposition on palladium and palladium-alloy particles
Wang, Jia X [East Setauket, NY; Adzic, Radoslav R [East Setauket, NY
2009-03-24
The present invention relates to methods for producing metal-coated palladium or palladium-alloy particles. The method includes contacting hydrogen-absorbed palladium or palladium-alloy particles with one or more metal salts to produce a sub-monoatomic or monoatomic metal- or metal-alloy coating on the surface of the hydrogen-absorbed palladium or palladium-alloy particles. The invention also relates to methods for producing catalysts and methods for producing electrical energy using the metal-coated palladium or palladium-alloy particles of the present invention.
NASA Astrophysics Data System (ADS)
Zhang, Wei; Ma, Minyan; Zhang, Xiao-ai; Zhang, Ze-yu; Saleh, Sayed M.; Wang, Xu-dong
2017-06-01
Surface PEGylation is essential for preventing non-specific binding of biomolecules when silica nanoparticles are used for in vivo applications. Methods for installing poly(ethylene glycol) on a silica surface have been widely explored but vary from study to study. Because a satisfactory method for evaluating the properties of the silica surface after PEGylation has been lacking, prepared nanoparticles are often not fully characterized before use. In some cases, even non-PEGylated silica nanoparticles were produced, which unfortunately goes unrecognized by the end user. In this work, a fluorescent protein was employed as a sensitive probe for evaluating the surface protein adsorption properties of silica nanoparticles. Eleven different methods were systematically investigated for their reaction efficiency towards surface PEGylation. Results showed that both the reaction conditions (including pH and catalyst) and the surface functional groups of the parent silica nanoparticles play critical roles in producing fully PEGylated silica nanoparticles. Great care needs to be taken in choosing the proper coupling chemistry for surface PEGylation. The data and method shown here will help guarantee that high-quality PEGylated silica nanoparticles are produced and will guide their applications in biology, chemistry, industry and medicine.
Bedenić, B; Boras, A
2001-01-01
The plasmid-mediated extended-spectrum beta-lactamases (ESBLs) confer resistance to oxyimino-cephalosporins, such as cefotaxime, ceftazidime, and ceftriaxone, and to monobactams such as aztreonam. It is a well-known fact that ESBL-producing bacteria exhibit a pronounced inoculum effect against broad-spectrum cephalosporins like ceftazidime, cefotaxime, ceftriaxone and cefoperazone. The aim of this investigation was to determine the effect of inoculum size on the sensitivity and specificity of the double-disk synergy test (DDST), the test most frequently used for detection of ESBLs, in comparison with two other methods (determination of the ceftazidime MIC with and without clavulanate, and the inhibitor-potentiated disk-diffusion test) that are seldom used in clinical laboratories. The experiments were performed on a set of K. pneumoniae strains with previously characterized beta-lactamases, comprising 10 SHV-5 beta-lactamase-producing K. pneumoniae, 20 SHV-2 and 1 SHV-2a beta-lactamase-producing K. pneumoniae, 7 SHV-12 beta-lactamase-producing K. pneumoniae, 39 putative SHV ESBL-producing K. pneumoniae, and 26 K. pneumoniae isolates highly susceptible to ceftazidime according to the Kirby-Bauer disk-diffusion method and thus considered ESBL-negative. According to the results of this investigation, an increase in inoculum size affected the sensitivity of the DDST more significantly than that of the other two methods. The sensitivity of the DDST was lower when the higher inoculum of 10⁸ CFU/ml was applied, in contrast to the other two methods (MIC determination and the inhibitor-potentiated disk-diffusion test), which retained high sensitivity regardless of the density of the bacterial suspension. On the other hand, the DDST displayed higher specificity than the other two methods regardless of inoculum size. This investigation found that the DDST is a reliable method, but it is important to standardize the inoculum size.
Rong, Nan; Shan, Baoqing; Wang, Chao
2016-01-01
A study coupling sediment core incubation and microelectrode measurement was performed to explore the sediment oxygen demand (SOD) at 16 stations in the Ziya River Watershed, a severely polluted and anoxic river system in northern China. Total oxygen flux values in the range 0.19–1.41 g/(m²·d), with an average of 0.62 g/(m²·d), were obtained by core incubations, and diffusive oxygen flux values in the range 0.15–1.38 g/(m²·d), with an average of 0.51 g/(m²·d), were determined by microelectrodes. Total oxygen flux was clearly correlated with diffusive oxygen flux (R² = 0.842). The microelectrode method produced smaller values than the incubation method at 15 of 16 sites, the diffusive oxygen flux being smaller than the total oxygen flux. Although the two sets of SOD values differed significantly according to the Wilcoxon signed-rank test (p < 0.05), the microelectrode method produced results similar to those from the core incubation method. The microelectrode method, therefore, could be used as an alternative to the traditional core incubation method, or as a method to verify SOD rates measured by other methods. We consider that a high potential sediment oxygen demand would occur in the Ziya River Watershed if the dissolved oxygen (DO) recovered in the overlying water. PMID:26907307
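Diffusive oxygen flux from a microelectrode profile is conventionally obtained from Fick's first law applied to the concentration gradient at the sediment-water interface. A minimal sketch, assuming a hypothetical O2 microprofile and commonly used stand-in values for porosity and the sediment diffusion coefficient (the study's own parameters are not given in the abstract):

    import numpy as np

    # Hypothetical O2 microprofile across the sediment-water interface.
    z = np.array([0.0, 0.5, 1.0, 1.5, 2.0]) * 1e-3   # depth below interface (m)
    C = np.array([9.0, 6.5, 4.2, 2.1, 0.4])          # O2 concentration (g/m^3)

    phi = 0.9                  # sediment porosity (assumed)
    D0 = 2.1e-9                # free-water O2 diffusivity (m^2/s), ~20 degC
    Ds = D0 * phi ** 2         # crude tortuosity correction (one common choice)

    dCdz = np.gradient(C, z)[0]          # gradient at the interface
    J = -phi * Ds * dCdz                 # Fick's first law, g m^-2 s^-1
    J_per_day = J * 86400                # g/(m^2 d), the units reported above
    print(round(J_per_day, 3))           # ~0.66 with these stand-in numbers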
O'Hara, F Patrick; Suaya, Jose A; Ray, G Thomas; Baxter, Roger; Brown, Megan L; Mera, Robertino M; Close, Nicole M; Thomas, Elizabeth; Amrine-Madsen, Heather
2016-01-01
A number of molecular typing methods have been developed for characterization of Staphylococcus aureus isolates. The utility of these systems depends on the nature of the investigation for which they are used. We compared two commonly used methods of molecular typing, multilocus sequence typing (MLST) (and its clustering algorithm, Based Upon Related Sequence Type [BURST]) with the staphylococcal protein A (spa) typing (and its clustering algorithm, Based Upon Repeat Pattern [BURP]), to assess the utility of these methods for macroepidemiology and evolutionary studies of S. aureus in the United States. We typed a total of 366 clinical isolates of S. aureus by these methods and evaluated indices of diversity and concordance values. Our results show that, when combined with the BURP clustering algorithm to delineate clonal lineages, spa typing produces results that are highly comparable with those produced by MLST/BURST. Therefore, spa typing is appropriate for use in macroepidemiology and evolutionary studies and, given its lower implementation cost, this method appears to be more efficient. The findings are robust and are consistent across different settings, patient ages, and specimen sources. Our results also support a model in which the methicillin-resistant S. aureus (MRSA) population in the United States comprises two major lineages (USA300 and USA100), which each consist of closely related variants.
Identifying biologically relevant putative mechanisms in a given phenotype comparison
Hanoudi, Samer; Donato, Michele; Draghici, Sorin
2017-01-01
A major challenge in life science research is understanding the mechanism involved in a given phenotype. The ability to identify the correct mechanisms is needed in order to understand fundamental and very important phenomena such as mechanisms of disease, immune system responses to various challenges, and mechanisms of drug action. Current data analysis methods focus on the identification of differentially expressed (DE) genes using their fold changes and/or p-values. Major shortcomings of this approach are that: i) it does not consider the interactions between genes; ii) its results are sensitive to the selection of the threshold(s) used; and iii) the set of genes produced by this approach is not always conducive to formulating mechanistic hypotheses. Here we present a method that can construct networks of genes that can be considered putative mechanisms. The putative mechanisms constructed by this approach are not limited to the set of DE genes but also consider all known and relevant gene-gene interactions. We analyzed three real datasets for which both the causes of the phenotype and the true mechanisms were known. We show that the method identified the correct mechanisms when applied to microarray datasets from mouse. We compared the results of our method with the results of the classical approach, showing that our method produces more meaningful biological insights. PMID:28486531
Denis Valle; Benjamin Baiser; Christopher W. Woodall; Robin Chazdon; Jerome Chave
2014-01-01
We propose a novel multivariate method to analyse biodiversity data based on the Latent Dirichlet Allocation (LDA) model. LDA, a probabilistic model, reduces assemblages to sets of distinct component communities. It produces easily interpretable results, can represent abrupt and gradual changes in composition, accommodates missing data and allows for coherent estimates...
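As a rough illustration of the idea (not the authors' implementation), LDA can be fit to a site-by-species abundance matrix so that each "topic" plays the role of a component community. The sketch below uses scikit-learn and synthetic counts:

    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation

    rng = np.random.default_rng(1)
    n_sites, n_species = 50, 30
    counts = rng.poisson(2.0, size=(n_sites, n_species))  # synthetic abundances

    lda = LatentDirichletAllocation(n_components=3, random_state=0)
    site_mix = lda.fit_transform(counts)   # site x community proportions
    communities = lda.components_          # community x species composition

    # Each row of site_mix says how much each component community contributes
    # to a site's assemblage; gradual compositional change appears as smoothly
    # varying mixtures, abrupt change as switches between communities.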
Young, David W
2015-11-01
Historically, hospital departments have computed the costs of individual tests or procedures using the ratio of cost to charges (RCC) method, which can produce inaccurate results. To determine a more accurate cost of a test or procedure, the activity-based costing (ABC) method must be used. Accurate cost calculations will ensure reliable information about the profitability of a hospital's DRGs.
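The contrast between the two costing methods is simple arithmetic. The worked example below (all figures hypothetical) shows how RCC scales a charge by a department-wide ratio, while ABC sums the resources a specific test actually consumes:

    # RCC: cost = charge * (department cost / department charges).
    dept_cost, dept_charges = 800_000.0, 2_000_000.0
    test_charge = 150.0
    rcc_cost = test_charge * (dept_cost / dept_charges)      # 60.00

    # ABC: cost = sum over activities of (rate * quantity consumed).
    activities = {
        "technician time (per min)": (0.90, 25),   # rate, quantity
        "analyzer run (per test)":   (12.50, 1),
        "reagents (per test)":       (18.75, 1),
    }
    abc_cost = sum(rate * qty for rate, qty in activities.values())  # 53.75

    # The two estimates differ because RCC assumes every test's cost scales
    # with its charge, which rarely holds in practice.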
Improving Alpha Spectrometry Energy Resolution by Ion Implantation with ICP-MS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dion, Michael P.; Liezers, Martin; Farmer, Orville T.
2015-01-01
We report results of a novel technique using an Inductively Coupled Plasma Mass Spectrometer (ICP-MS) as a method of source preparation for alpha spectrometry. This method produced thin, contaminant-free 241Am samples, which yielded extraordinary energy resolution that appears to be at the lower limit of the detection technology used in this research.
Rossi, Eliandra M.; Beilke, Luniele; Kochhann, Marília; Sarzi, Diana H.; Tondo, Eduardo C.
2016-01-01
Salmonella Enteritidis SE86 is an important foodborne pathogen in Southern Brazil and it is able to produce a biosurfactant. However, the importance of this compound for the microorganism is still unknown. This study aimed to investigate the influence of the biosurfactant produced by S. Enteritidis SE86 on adherence to slices of lettuce leaves and on resistance to sanitizers. First, lettuce leaves were inoculated with S. Enteritidis SE86 in order to determine the amount of biosurfactant produced. Subsequently, lettuce leaves were inoculated with S. Enteritidis SE86 with and without the biosurfactant, and the adherence and bacterial resistance to different sanitization methods were evaluated. S. Enteritidis SE86 produced biosurfactant after 16 h (emulsification index of 11 to 52.15 percent, P < 0.05) and showed greater adherence capability and resistance to sanitization methods when the compound was present. The scanning electron microscopy demonstrated that S. Enteritidis was able to adhere, form lumps, and invade the lettuce leaves’ stomata in the presence of the biosurfactant. Results indicated that the biosurfactant produced by S. Enteritidis SE86 contributed to adherence and increased resistance to sanitizers when the microorganism was present on lettuce leaves. PMID:26834727
Contact Thermocouple Methodology and Evaluation for Temperature Measurement in the Laboratory
NASA Technical Reports Server (NTRS)
Brewer, Ethan J.; Pawlik, Ralph J.; Krause, David L.
2013-01-01
Laboratory testing of advanced aerospace components very often requires highly accurate temperature measurement and control devices, as well as methods to precisely analyze and predict the performance of such components. Analysis of test articles depends on accurate measurements of temperature across the specimen. Where possible, this task is accomplished using many thermocouples welded directly to the test specimen, which can produce results with great precision. However, it is known that thermocouple spot welds can initiate deleterious cracks in some materials, prohibiting the use of welded thermocouples. Such is the case for the nickel-based superalloy MarM-247, which is used in the high temperature, high pressure heater heads for the Advanced Stirling Converter component of the Advanced Stirling Radioisotope Generator space power system. To overcome this limitation, a method was developed that uses small diameter contact thermocouples to measure the temperature of heater head test articles with the same level of accuracy as welded thermocouples. This paper includes a brief introduction and a background describing the circumstances that compelled the development of the contact thermocouple measurement method. Next, the paper describes studies performed on contact thermocouple readings to determine the accuracy of results. It continues on to describe in detail the developed measurement method and the evaluation of results produced. A further study that evaluates the performance of different measurement output devices is also described. Finally, a brief conclusion and summary of results is provided.
Lipinski, B.A.; Sams, J.I.; Smith, B.D.; Harbert, W.
2008-01-01
Production of methane from thick, extensive coal beds in the Powder River Basin of Wyoming has created water management issues. Since development began in 1997, more than 650 billion liters of water have been produced from approximately 22,000 wells. Infiltration impoundments are used widely to dispose of by-product water from coal bed natural gas (CBNG) production, but their hydrogeologic effects are poorly understood. Helicopter electromagnetic surveys (HEM) were completed in July 2003 and July 2004 to characterize the hydrogeology of an alluvial aquifer along the Powder River. The aquifer is receiving CBNG produced-water discharge from infiltration impoundments. HEM data were subjected to Occam's inversion algorithms to determine the aquifer bulk conductivity, which was then correlated to water salinity using site-specific sampling results. The HEM data provided high-resolution images of salinity levels in the aquifer, a result not attainable using traditional sampling methods. Interpretation of these images clearly reveals the influence of produced water on aquifer water quality. Potential shortfalls of this method occur where there is no significant contrast between the salinity of the aquifer and that of the infiltrating produced water, and where there might be significant changes in aquifer lithology. Despite these limitations, airborne geophysical methods can provide a broad-scale (watershed-scale) tool to evaluate CBNG water disposal, especially in areas where field-based investigations are logistically prohibitive. This research has implications for design and location strategies of future CBNG water surface disposal facilities within the Powder River Basin. © 2008 Society of Exploration Geophysicists. All rights reserved.
Feasibility of producing cast-refractory metal-fiber superalloy composites
NASA Technical Reports Server (NTRS)
Mcintyre, R. D.
1973-01-01
A study was conducted to evaluate the feasibility of direct casting as a practical method for producing cast superalloy composites with tungsten or columbium alloy fibers while retaining a high percentage of fiber strength. Fourteen nickel-base, four cobalt-base, and three iron-base matrices were surveyed for their degree of reaction with the metal fibers. Some stress-rupture results were obtained at temperatures of 760, 816, 871, and 1093 °C for a few composite systems. The feasibility of producing acceptable composites of some cast nickel, cobalt, and iron matrix alloys with tungsten or columbium alloy fibers was demonstrated.
Phenotypic variability and selection of lipid-producing microalgae in a microfluidic centrifuge
NASA Astrophysics Data System (ADS)
Estévez-Torres, André.; Mestler, Troy; Austin, Robert H.
2010-03-01
Isogenic cells are known to display varying expression levels that may result in different phenotypes within a population. Here we focus on the phenotypic variability of a species of unicellular algae that produce neutral lipids. Lipid-producing algae are one of the most promising sources of biofuel. We have implemented a simple microfluidic method, relying on density differences, to assess lipid-production variability in a population of algae. We will discuss the reasons for this variability and address the promising avenues this technique opens for directing the evolution of algae towards high lipid productivity.
System for producing chroma signals
NASA Technical Reports Server (NTRS)
Vorhaben, K. H.; Lipoma, P. C. (Inventor)
1977-01-01
A method for obtaining electronic chroma signals with a single scanning-type image device is described. A color multiplexed light signal is produced using an arrangement of dichroic filter stripes. In the particular system described, a two layer filter is used to color modulate external light which is then detected by an image pickup tube. The resulting time division multiplexed electronic signal from the pickup tube is converted by a decoder into a green color signal, and a single red-blue multiplexed signal, which is demultiplexed to produce red and blue color signals. The three primary color signals can be encoded as standard NTSC color signals.
Methods of producing transportation fuel
Nair, Vijay [Katy, TX; Roes, Augustinus Wilhelmus Maria [Houston, TX; Cherrillo, Ralph Anthony [Houston, TX; Bauldreay, Joanna M [Chester, GB
2011-12-27
Systems, methods, and heaters for treating a subsurface formation are described herein. At least one method for producing transportation fuel is described herein. The method for producing transportation fuel may include providing formation fluid having a boiling range distribution between -5 °C and 350 °C from a subsurface in situ heat treatment process to a subsurface treatment facility. A liquid stream may be separated from the formation fluid. The separated liquid stream may be hydrotreated and then distilled to produce a distilled stream having a boiling range distribution between 150 °C and 350 °C. The distilled liquid stream may be combined with one or more additives to produce transportation fuel.
Tigecycline activity against metallo-β-lactamase-producing bacteria.
Kumar, Simit; Bandyopadhyay, Maitreyi; Mondal, Soma; Pal, Nupur; Ghosh, Tapashi; Bandyopadhyay, Manas; Banerjee, Parthajit
2013-10-01
Treatment of serious life-threatening multi-drug-resistant organisms poses a serious problem due to the limited therapeutic options. Tigecycline has recently been marketed as a broad-spectrum antibiotic with activity against both gram-positive and gram-negative bacteria. Even though many studies have demonstrated the activity of tigecycline against ESBL-producing Enterobacteriaceae, its activity is not well defined against micro-organisms producing metallo-β-lactamases (MBLs), as there are only a few reports and the number of isolates tested is limited. The aim of the present study was to evaluate the activity of tigecycline against MBL-producing bacterial isolates. The isolates were tested for MBL production by (i) the combined-disk test, (ii) the double-disc synergy test (DDST), and (iii) susceptibility to an aztreonam (30 μg) disk. The minimum inhibitory concentration of tigecycline was determined by the agar dilution method as per Clinical Laboratory Standards Institute (CLSI) guidelines. Disc diffusion susceptibility testing was also performed for all these isolates using tigecycline (15 μg) discs. Among the 308 isolates included in the study, 99 were found to be MBL producers. MBL production was observed mostly in isolates from pus samples (40.47%), followed by urine (27.4%) and blood (13.09%). MBL production was observed in E. coli (41.48%), K. pneumoniae (26.67%), Proteus mirabilis (27.78%), Citrobacter spp. (41.67%), Enterobacter spp. (25.08%), and Acinetobacter spp. (27.27%). The results showed that tigecycline activity was unaffected by MBL production; it showed almost 100% activity against all MBL-producing isolates, with most isolates exhibiting an MIC ranging from 0.25 to 8 μg/ml, except two MBL-producing E. coli isolates, which had an MIC of 8 μg/ml. To conclude, tigecycline was found to be highly effective against MBL-producing Enterobacteriaceae and Acinetobacter isolates, but the presence of resistance among organisms, even before mass usage of the drug, warrants its use as a reserve drug. The study also found that the interpretative criteria for the disc diffusion method recommended by the FDA correlate well with MIC detection methods, so microbiology laboratories may use the relatively easy disc diffusion method instead of the comparatively tedious method of MIC determination.
Lopez-Haro, S. A.; Leija, L.
2016-01-01
Objectives. To present a quantitative comparison of thermal patterns produced by the piston-in-a-baffle approach with those generated by a physiotherapy ultrasonic device, and to show the dependency between thermal patterns and acoustic intensity distributions. Methods. The finite element (FE) method was used to model an ideal acoustic field and the thermal pattern it produces, to be compared with the experimental acoustic and temperature distributions produced by a real ultrasonic applicator. A thermal model using the measured acoustic profile as input is also presented for comparison. Temperature measurements were carried out with thermocouples inserted in a muscle phantom; the insertion place of the thermocouples was monitored with ultrasound imaging. Results. Modeled and measured thermal profiles were compared within the first 10 cm of depth. The ideal acoustic field did not adequately represent the measured field, yielding different temperature profiles (errors of 10% to 20%). The experimental field was concentrated near the transducer, producing a region with higher temperatures, while the modeled ideal temperature was linearly distributed along the depth. The error was reduced to 7% when the measured acoustic field was introduced as the input variable in the FE temperature modeling. Conclusions. Temperature distributions are strongly related to acoustic field distributions. PMID:27999801
Vaieretti, María Victoria; Díaz, Sandra; Vile, Denis; Garnier, Eric
2007-01-01
Background and Aims Leaf dry matter content (LDMC) is widely used as an indicator of plant resource use in plant functional trait databases. Two main methods have been proposed to measure LDMC, which basically differ in the rehydration procedure to which leaves are subjected after harvesting. These are the ‘complete rehydration’ protocol of Garnier et al. (2001, Functional Ecology 15: 688–695) and the ‘partial rehydration’ protocol of Vendramini et al. (2002, New Phytologist 154: 147–157). Methods To test differences in LDMC due to the use of different methods, LDMC was measured on 51 native and cultivated species representing a wide range of plant families and growth forms from central-western Argentina, following the complete rehydration and partial rehydration protocols. Key Results and Conclusions The LDMC values obtained by both methods were strongly and positively correlated, clearly showing that LDMC is highly conserved between the two procedures. These trends were not altered by the exclusion of plants with non-laminar leaves. Although the complete rehydration method is the safest to measure LDMC, the partial rehydration procedure produces similar results and is faster. It therefore appears as an acceptable option for those situations in which the complete rehydration method cannot be applied. Two notes of caution are given for cases in which different datasets are compared or combined: (1) the discrepancy between the two rehydration protocols is greatest in the case of high-LDMC (succulent or tender) leaves; (2) the results suggest that, when comparing many studies across unrelated datasets, differences in the measurement protocol may be less important than differences among seasons, years and the quality of local habitats. PMID:17353207
NASA Astrophysics Data System (ADS)
Owens, A. R.; Kópházi, J.; Eaton, M. D.
2017-12-01
In this paper, a new method to numerically calculate the trace inequality constants, which arise in the calculation of penalty parameters for interior penalty discretisations of elliptic operators, is presented. These constants are provably optimal for the inequality of interest. As their calculation is based on the solution of a generalised eigenvalue problem involving the volumetric and face stiffness matrices, the method is applicable to any element type for which these matrices can be calculated, including standard finite elements and the non-uniform rational B-splines of isogeometric analysis. In particular, the presented method does not require the Jacobian of the element to be constant, and so can be applied to a much wider variety of element shapes than are currently available in the literature. Numerical results are presented for a variety of finite element and isogeometric cases. When the Jacobian is constant, it is demonstrated that the new method produces lower penalty parameters than existing methods in the literature in all cases, which translates directly into savings in the solution time of the resulting linear system. When the Jacobian is not constant, it is shown that the naive application of existing approaches can result in penalty parameters that do not guarantee coercivity of the bilinear form, and by extension, the stability of the solution. The method of manufactured solutions is applied to a model reaction-diffusion equation with a range of parameters, and it is found that using penalty parameters based on the new trace inequality constants results in better-conditioned linear systems, which can be solved approximately 11% faster than those produced by the methods from the literature.
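The core computation is a generalised eigenvalue problem between the face and volumetric stiffness matrices. A minimal sketch follows, with small random symmetric matrices standing in for matrices assembled on a real element (the matrix construction is purely illustrative):

    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(0)
    n = 8

    # Stand-ins for element matrices: S_face (symmetric positive semidefinite)
    # and S_vol (symmetric positive definite). In practice these would be
    # assembled for the element type at hand (finite element or NURBS).
    A = rng.standard_normal((n, 2))
    S_face = A @ A.T                        # low-rank, like a face term
    B = rng.standard_normal((n, n))
    S_vol = B @ B.T + n * np.eye(n)         # well-conditioned volume term

    # Trace-inequality constant: largest lambda with S_face v = lambda S_vol v.
    eigvals = eigh(S_face, S_vol, eigvals_only=True)
    C_trace = eigvals[-1]
    # The penalty parameter is then chosen proportional to C_trace so that
    # coercivity of the interior-penalty bilinear form is guaranteed.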
Bouton, Joseph H; Wood, Donald T
2012-11-27
A switchgrass cultivar designated EG1101 is disclosed. Also disclosed are seeds of switchgrass cultivar EG1101, plants of switchgrass EG1101, plant parts of switchgrass cultivar EG1101 and methods for producing a switchgrass plant produced by crossing switchgrass cultivar EG1101 with itself or with another switchgrass variety. Methods are also described for producing a switchgrass plant containing in its genetic material one or more transgenes and to the transgenic switchgrass plants and plant parts produced by those methods. Switchgrass cultivars or breeding cultivars and plant parts derived from switchgrass variety EG1101, methods for producing other switchgrass cultivars, lines or plant parts derived from switchgrass cultivar EG1101 and the switchgrass plants, varieties, and their parts derived from use of those methods are described herein. Hybrid switchgrass seeds, plants and plant parts produced by crossing the cultivar EG1101 with another switchgrass cultivar are also described.
Bouton, Joseph H; Wood, Donald T
2012-11-20
A switchgrass cultivar designated EG1102 is disclosed. The invention relates to the seeds of switchgrass cultivar EG1102, to the plants of switchgrass EG1102, to plant parts of switchgrass cultivar EG1102 and to methods for producing a switchgrass plant produced by crossing switchgrass cultivar EG1102 with itself or with another switchgrass variety. The invention also relates to methods for producing a switchgrass plant containing in its genetic material one or more transgenes and to the transgenic switchgrass plants and plant parts produced by those methods. This invention also relates to switchgrass cultivars or breeding cultivars and plant parts derived from switchgrass variety EG1102, to methods for producing other switchgrass cultivars, lines or plant parts derived from switchgrass cultivar EG1102 and to the switchgrass plants, varieties, and their parts derived from use of those methods. The invention further relates to hybrid switchgrass seeds, plants and plant parts produced by crossing the cultivar EG1102 with another switchgrass cultivar.
An algebraic method for constructing stable and consistent autoregressive filters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, University Park, PA 16802; Hong, Hoon, E-mail: hong@ncsu.edu
2015-02-15
In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams–Bashforth methods of order two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set, as opposed to many standard, regression-based parameterization methods. It takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of a stable and consistent AR model and simultaneously produces the parameters for the AR models. In our numerical examples with two chaotic time series with different characteristics of decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictive skill and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces an improved short-time prediction relative to the linear regression-based AR models in forecasting a data set that characterizes the variability of the Madden–Julian Oscillation, a dominant tropical atmospheric wave pattern.
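The classical stability condition referenced above requires all roots of the AR characteristic polynomial to lie inside the unit circle, which is straightforward to check numerically. A minimal sketch for low-order models (the coefficients are hypothetical; the paper's Adams–Bashforth consistency constraints are a separate algebraic condition not reproduced here):

    import numpy as np

    def ar_is_stable(coeffs):
        """Check the classical AR stability condition.

        coeffs: [a1, ..., ap] for x_n = a1*x_{n-1} + ... + ap*x_{n-p} + noise.
        Stable iff all roots of z^p - a1*z^(p-1) - ... - ap lie strictly
        inside the unit circle.
        """
        char_poly = np.concatenate(([1.0], -np.asarray(coeffs, dtype=float)))
        return bool(np.all(np.abs(np.roots(char_poly)) < 1.0))

    print(ar_is_stable([0.5, 0.3]))    # True: both roots inside the unit circle
    print(ar_is_stable([1.2, -0.1]))   # False: one root lies outside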
3D Visualization of Machine Learning Algorithms with Astronomical Data
NASA Astrophysics Data System (ADS)
Kent, Brian R.
2016-01-01
We present innovative machine learning (ML) methods using unsupervised clustering with minimum spanning trees (MSTs) to study 3D astronomical catalogs. Utilizing Python code to build trees based on galaxy catalogs, we can render the results with the visualization suite Blender to produce interactive 360 degree panoramic videos. The catalogs and their ML results can be explored in a 3D space using mobile devices, tablets or desktop browsers. We compare the statistics of the MST results to a number of machine learning methods relating to optimization and efficiency.
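A minimal sketch of the MST step on a 3D catalog, using synthetic positions and scipy in place of whatever tree-building code the authors used, with a simple edge-cut to obtain clusters:

    import numpy as np
    from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
    from scipy.spatial.distance import pdist, squareform

    rng = np.random.default_rng(2)
    xyz = rng.uniform(0, 100, size=(300, 3))   # synthetic galaxy positions (Mpc)

    # Build the MST from the full pairwise-distance graph.
    dist = squareform(pdist(xyz))
    mst = minimum_spanning_tree(dist).toarray()

    # Cut edges longer than a threshold; surviving components are clusters.
    cut = 12.0                                 # Mpc, illustrative choice
    mst[mst > cut] = 0.0
    n_clusters, labels = connected_components(mst != 0, directed=False)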
Multispectral computational ghost imaging with multiplexed illumination
NASA Astrophysics Data System (ADS)
Huang, Jian; Shi, Dongfeng
2017-07-01
Computational ghost imaging has attracted wide attention from researchers in many fields over the last two decades. Multispectral imaging as one application of computational ghost imaging possesses spatial and spectral resolving abilities, and is very useful for surveying scenes and extracting detailed information. Existing multispectral imagers mostly utilize narrow band filters or dispersive optical devices to separate light of different wavelengths, and then use multiple bucket detectors or an array detector to record them separately. Here, we propose a novel multispectral ghost imaging method that uses one single bucket detector with multiplexed illumination to produce a colored image. The multiplexed illumination patterns are produced by three binary encoded matrices (corresponding to the red, green and blue colored information, respectively) and random patterns. The results of the simulation and experiment have verified that our method can be effective in recovering the colored object. Multispectral images are produced simultaneously by one single-pixel detector, which significantly reduces the amount of data acquisition.
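The single-pixel reconstruction underlying this scheme correlates bucket values with the illumination patterns. The grayscale sketch below is a generic computational ghost imaging reconstruction, not the authors' multiplexed color encoding, and uses a synthetic object:

    import numpy as np

    rng = np.random.default_rng(3)
    H = W = 32
    obj = np.zeros((H, W))
    obj[8:24, 12:20] = 1.0                           # synthetic object

    n_patterns = 4000
    patterns = rng.random((n_patterns, H, W))        # random illumination

    # Bucket detector: total light collected for each pattern.
    bucket = (patterns * obj).sum(axis=(1, 2))

    # Correlation reconstruction: G = <B*I> - <B><I>.
    G = (bucket[:, None, None] * patterns).mean(axis=0) \
        - bucket.mean() * patterns.mean(axis=0)
    # G recovers the object up to scale; in the multiplexed scheme the
    # patterns additionally carry binary color-channel codes so that one
    # bucket sequence yields red, green, and blue images simultaneously.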
Method for producing nanowire-polymer composite electrodes
Pei, Qibing; Yu, Zhibin
2017-11-21
A method for producing flexible, nanoparticle-polymer composite electrodes is described. Conductive nanoparticles, preferably metal nanowires or nanotubes, are deposited on a smooth surface of a platform to produce a porous conductive layer. A second application of conductive nanoparticles or a mixture of nanoparticles can also be deposited to form a porous conductive layer. The conductive layer is then coated with at least one coating of monomers that is polymerized to form a conductive layer-polymer composite film. Optionally, a protective coating can be applied to the top of the composite film. In one embodiment, the monomer coating includes light transducing particles to reduce the total internal reflection of light through the composite film or pigments that absorb light at one wavelength and re-emit light at a longer wavelength. The resulting composite film has an active side that is smooth with surface height variations of 100 nm or less.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolowski, J.; Rosinski, M.; Badziak, J.
2008-03-19
This work reports experiments concerning a specific application of laser-produced plasma at IPPLM in Warsaw. A repetitive pulse laser system with the following parameters was employed in these investigations: energy up to 0.8 J in a 3.5 ns pulse, wavelength of 1.06 μm, and repetition rate of up to 10 Hz. The characterisation of the laser-produced plasma was performed with the use of 'time-of-flight' ion diagnostics simultaneously with other diagnostic methods. The results of laser-matter interaction were obtained in dependence on laser pulse parameters, illumination geometry and target material. The modified SiO2 layers and sample surface properties were characterised with the use of different methods at the Middle East Technical University in Ankara and at the Warsaw University of Technology. The production of Ge nanocrystallites has been demonstrated for annealed samples prepared under different experimental conditions.
Method of phase space beam dilution utilizing bounded chaos generated by rf phase modulation
Pham, Alfonse N.; Lee, S. Y.; Ng, K. Y.
2015-12-10
This paper explores the physics of chaos in a localized phase-space region produced by rf phase modulation applied to a double rf system. The study can be exploited to produce rapid particle bunch broadening exhibiting longitudinal particle distribution uniformity. Hamiltonian models and particle-tracking simulations are introduced to understand the mechanism and applicability of controlled particle diffusion. When phase modulation is applied to the double rf system, regions of localized chaos are produced through the disruption and overlapping of parametric resonant islands and configured to be bounded by well-behaved invariant tori to prevent particle loss. The condition of chaoticity and the degree of particle dilution can be controlled by the rf parameters. As a result, the method has applications in alleviating adverse space-charge effects in high-intensity beams, particle bunch distribution uniformization, and industrial radiation-effects experiments.
Journal of Naval Science. Volume 2, Number 1
1976-01-01
has defined a probability distribution function which fits this type of data and forms the basis for statistical analysis of test results (see ... 'Conditions to Assess the Performance of Fire-Resistant Fluids'. Wear, 28 (1974) 29. J.N.S., Vol. 2, No. 1, Appendix A: Analysis of Fatigue Test Data ... used to produce the impulse response, and the equipment required for the analysis is relatively simple. The methods that must be used to produce
Production of mycotoxins by filamentous fungi in untreated surface water.
Oliveira, Beatriz R; Mata, Ana T; Ferreira, João P; Barreto Crespo, Maria T; Pereira, Vanessa J; Bronze, Maria R
2018-04-16
Several research studies reported that mycotoxins and other metabolites can be produced by fungi in certain matrices such as food. In recent years, attention has been drawn to the wide occurrence and identification of fungi in drinking water sources. Due to the large demand for water for drinking, watering, or food production purposes, it is imperative that further research is conducted to investigate whether mycotoxins may be produced in water matrices. This paper describes the results obtained when a validated analytical method was applied to detect and quantify the presence of mycotoxins as a result of fungi inoculation and growth in untreated surface water. Aflatoxins B1 and B2, fumonisin B3, and ochratoxin A were detected at concentrations up to 35 ng/L. These results show that fungi can produce mycotoxins in water matrices in non-negligible quantities and, as such, attention must be given to the presence of fungi in water.
Reznik, Gabriel O.; Sano, Takeshi; Vajda, Sandor; Smith, Cassandra; Cantor, Charles
2002-01-01
Compounds and methods are described for producing streptavidin mutants with changed affinities. In particular, modifications to the sequence of the natural streptavidin gene are described that create amino acid substitutions resulting in greater affinity for biotin substitutes than for biotin.
Breast ultrasound computed tomography using waveform inversion with source encoding
NASA Astrophysics Data System (ADS)
Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A.
2015-03-01
Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the speed-of-sound distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Computer-simulation studies are conducted to demonstrate the use of the WISE method. Using a single graphics processing unit card, each iteration can be completed within 25 seconds for a 128 × 128 mm2 reconstruction region. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
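The computational trick at the heart of WISE, replacing per-source simulations with a single randomly encoded "supershot" inside a stochastic gradient loop, can be illustrated on a toy linear inverse problem. In the sketch below a random matrix stands in for the wave-equation solver, so everything (names, sizes, step size) is an assumption rather than the WISE implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_src, n_rec, n_pix = 64, 32, 100

# Hypothetical linear surrogate for the forward operator: one system
# matrix per transmitting element of the ring array.
A = rng.normal(size=(n_src, n_rec, n_pix))
m_true = rng.normal(size=n_pix)              # "true" slowness perturbation
data = A @ m_true                            # per-source measurement vectors

m = np.zeros(n_pix)
for it in range(2000):
    # Fresh random encoding vector each iteration (normalized Rademacher).
    w = rng.choice([-1.0, 1.0], size=n_src) / np.sqrt(n_src)
    A_w = np.einsum('s,srp->rp', w, A)       # one encoded "supershot" operator
    d_w = w @ data                           # correspondingly encoded data
    resid = A_w @ m - d_w
    m -= 1e-3 * (A_w.T @ resid)              # stochastic gradient descent step
```

The gradient of the encoded misfit is an unbiased estimate of the full-data gradient, which is why a single simulated supershot per iteration suffices in expectation.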
An exploratory survey of methods used to develop measures of performance
NASA Astrophysics Data System (ADS)
Hamner, Kenneth L.; Lafleur, Charles A.
1993-09-01
Nonmanufacturing organizations are being challenged to provide high-quality products and services to their customers, with an emphasis on continuous process improvement. Measures of performance, referred to as metrics, can be used to foster process improvement. The application of performance measurement to nonmanufacturing processes can be very difficult. This research explored methods used to develop metrics in nonmanufacturing organizations. Several methods were formally defined in the literature, and the researchers used a two-step screening process to determine that the OMB Generic Method was the most likely to produce high-quality metrics. The OMB Generic Method was then used to develop metrics. A few other metric development methods were found in use at nonmanufacturing organizations. The researchers interviewed participants in metric development efforts to determine their satisfaction and to have them identify the strengths and weaknesses of, and recommended improvements to, the metric development methods used. Analysis of participants' responses allowed the researchers to identify the key components of a sound metric development method. Those components were incorporated into a proposed metric development method that was based on the OMB Generic Method and that should be more likely to produce high-quality metrics resulting in continuous process improvement.
Improved astigmatic focus error detection method
NASA Technical Reports Server (NTRS)
Bernacki, Bruce E.
1992-01-01
All easy-to-implement focus- and track-error detection methods presently used in magneto-optical (MO) disk drives using pre-grooved media suffer from a side effect known as feedthrough. Feedthrough is the unwanted focus error signal (FES) produced when the optical head is seeking a new track, and light refracted from the pre-grooved disk produces an erroneous FES. Some focus and track-error detection methods are more resistant to feedthrough, but tend to be complicated and/or difficult to keep in alignment as a result of environmental insults. The astigmatic focus/push-pull tracking method is an elegant, easy-to-align focus- and track-error detection method. Unfortunately, it is also highly susceptible to feedthrough when astigmatism is present, with the worst effects caused by astigmatism oriented such that the tangential and sagittal foci are at 45 deg to the track direction. This disclosure outlines a method to nearly completely eliminate the worst-case form of feedthrough due to astigmatism oriented 45 deg to the track direction. Feedthrough due to other primary aberrations is not improved, but performance is identical to the unimproved astigmatic method.
Comparison of GEOS-5 AGCM Planetary Boundary Layer Depths Computed with Various Definitions
NASA Technical Reports Server (NTRS)
Mcgrath-Spangler, E. L.; Molod, A.
2014-01-01
Accurate models of planetary boundary layer (PBL) processes are important for forecasting weather and climate. The present study compares seven methods of calculating PBL depth in the GEOS-5 atmospheric general circulation model (AGCM) over land. These methods depend on the eddy diffusion coefficients, bulk and local Richardson numbers, and the turbulent kinetic energy. The computed PBL depths are aggregated to the Köppen climate classes, and some limited comparisons are made using radiosonde profiles. Most methods produce similar midday PBL depths, although in the warm, moist climate classes, the bulk Richardson number method gives midday results that are lower than those given by the eddy diffusion coefficient methods. Additional analysis revealed that methods sensitive to turbulence driven by radiative cooling produce greater PBL depths, this effect being most significant during the evening transition. Nocturnal PBLs based on Richardson number are generally shallower than eddy diffusion coefficient based estimates. The bulk Richardson number estimate is recommended as the PBL height to inform the choice of the turbulent length scale, based on the similarity to other methods during the day, and the improved nighttime behavior.
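Among the seven definitions, the recommended bulk Richardson number approach has a particularly compact form: scan upward from the surface and take the first level where the bulk Richardson number crosses a critical value. The helper below is a common textbook formulation with an assumed critical value of 0.25; it is not necessarily GEOS-5's exact implementation.

```python
import numpy as np

def pbl_depth_bulk_ri(z, theta_v, u, v, ri_crit=0.25, g=9.81):
    """First height where the bulk Richardson number exceeds ri_crit.

    z, theta_v, u, v: 1-D profiles from the surface upward (m, K, m/s, m/s).
    """
    ri = g * (theta_v - theta_v[0]) * (z - z[0]) / (
        theta_v[0] * np.maximum(u ** 2 + v ** 2, 1e-6))
    above = np.nonzero(ri > ri_crit)[0]
    if above.size == 0:
        return z[-1]                    # PBL top not found within the profile
    k = above[0]
    # Interpolate between the bracketing levels for a smoother estimate.
    f = (ri_crit - ri[k - 1]) / (ri[k] - ri[k - 1])
    return z[k - 1] + f * (z[k] - z[k - 1])
```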
The synthesis method for design of electron flow sources
NASA Astrophysics Data System (ADS)
Alexahin, Yu I.; Molodozhenzev, A. Yu
1997-01-01
The synthesis method for designing a relativistic magnetically focused beam source is described in this paper. It makes it possible to find the shape of the electrodes necessary to produce laminar space-charge flows. Electron guns with shielded cathodes designed with this method were analyzed using the EGUN code. The results showed the agreement of the synthesis and analysis calculations [1]. This method of electron gun calculation may be applied to immersed electron flows, which are of interest for EBIS electron gun design.
A NOISE ADAPTIVE FUZZY EQUALIZATION METHOD FOR PROCESSING SOLAR EXTREME ULTRAVIOLET IMAGES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Druckmueller, M., E-mail: druckmuller@fme.vutbr.cz
A new image enhancement tool ideally suited for the visualization of fine structures in extreme ultraviolet images of the corona is presented in this paper. The Noise Adaptive Fuzzy Equalization method is particularly suited for the exceptionally high dynamic range images from the Atmospheric Imaging Assembly instrument on the Solar Dynamics Observatory. This method produces artifact-free images and gives significantly better results than methods based on convolution or Fourier transform which are often used for that purpose.
Numerical solution of second order ODE directly by two point block backward differentiation formula
NASA Astrophysics Data System (ADS)
Zainuddin, Nooraini; Ibrahim, Zarina Bibi; Othman, Khairil Iskandar; Suleiman, Mohamed; Jamaludin, Noraini
2015-12-01
A direct two-point block backward differentiation formula (BBDF2) for solving second-order ordinary differential equations (ODEs) is presented in this paper. The method is derived by differentiating the interpolating polynomial using three back values. In BBDF2, two approximate solutions are produced simultaneously at each step of integration. The derived method is implemented with a fixed step size, and the numerical results demonstrate the advantage of the direct method over the reduction method.
A low-cost, high-yield fabrication method for producing optimized biomimetic dry adhesives
NASA Astrophysics Data System (ADS)
Sameoto, D.; Menon, C.
2009-11-01
We present a low-cost, large-scale method of fabricating biomimetic dry adhesives. This process is useful because it uses all photosensitive polymers with minimum fabrication costs or complexity to produce molds for silicone-based dry adhesives. A thick-film lift-off process is used to define molds using AZ 9260 photoresist, with a slow acting, deep UV sensitive material, PMGI, used as both an adhesion promoter for the AZ 9260 photoresist and as an undercutting material to produce mushroom-shaped fibers. The benefits to this process are ease of fabrication, wide range of potential layer thicknesses, no special surface treatment requirements to demold silicone adhesives and easy stripping of the full mold if process failure does occur. Sylgard® 184 silicone is used to cast full sheets of biomimetic dry adhesives off 4" diameter wafers, and different fiber geometries are tested for normal adhesion properties. Additionally, failure modes of the adhesive during fabrication are noted and strategies for avoiding these failures are discussed. We use this fabrication method to produce different fiber geometries with varying cap diameters and test them for normal adhesion strengths. The results indicate that the cap diameters relative to post diameters for mushroom-shaped fibers dominate the adhesion properties.
Bjornsdottir-Butler, Kristin; Jones, Jessica L; Benner, Ronald; Burkhardt, William
2011-05-01
Prompt detection of bacteria that contribute to scombrotoxin (histamine) fish poisoning can aid in the detection of potentially toxic fish products and prevent the occurrence of illness. We report development of the first real-time PCR method for rapid detection of Gram-negative histamine-producing bacteria (HPB) in fish. The real-time PCR assay was 100% inclusive for detecting high-histamine producing isolates and did not detect any of the low- or non-histamine producing isolates. The efficiency of the assay with/without internal amplification control ranged from 96-104% and in the presence of background flora and inhibitory matrices was 92/100% and 73-96%, respectively. This assay was used to detect HPB from naturally contaminated yellowfin tuna, bluefish, and false albacore samples. Photobacterium damselae (8), Plesiomonas shigelloides (2), Shewanella sp. (1), and Morganella morganii (1) were subsequently isolated from the real-time PCR positive fish samples. These results indicate that the real-time PCR assay developed in this study is a rapid and sensitive method for detecting high-HPB. The assay may be adapted for quantification of HPB, either directly or with an MPN-PCR method. Copyright © 2010. Published by Elsevier Ltd.
Adapt-Mix: learning local genetic correlation structure improves summary statistics-based analyses
Park, Danny S.; Brown, Brielin; Eng, Celeste; Huntsman, Scott; Hu, Donglei; Torgerson, Dara G.; Burchard, Esteban G.; Zaitlen, Noah
2015-01-01
Motivation: Approaches to identifying new risk loci, training risk prediction models, imputing untyped variants and fine-mapping causal variants from summary statistics of genome-wide association studies are playing an increasingly important role in the human genetics community. Current summary statistics-based methods rely on global ‘best guess’ reference panels to model the genetic correlation structure of the dataset being studied. This approach, especially in admixed populations, has the potential to produce misleading results, ignores variation in local structure and is not feasible when appropriate reference panels are missing or small. Here, we develop a method, Adapt-Mix, that combines information across all available reference panels to produce estimates of local genetic correlation structure for summary statistics-based methods in arbitrary populations. Results: We applied Adapt-Mix to estimate the genetic correlation structure of both admixed and non-admixed individuals using simulated and real data. We evaluated our method by measuring the performance of two summary statistics-based methods: imputation and joint-testing. When using our method as opposed to the current standard of ‘best guess’ reference panels, we observed a 28% decrease in mean-squared error for imputation and a 73.7% decrease in mean-squared error for joint-testing. Availability and implementation: Our method is publicly available in a software package called ADAPT-Mix available at https://github.com/dpark27/adapt_mix. Contact: noah.zaitlen@ucsf.edu PMID:26072481
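The core idea, expressing the study sample's local correlation structure as a data-driven combination of all available panels rather than a single 'best guess' panel, can be sketched as a small least-squares problem. The estimator below (nonnegative, normalized weights fit to a target correlation matrix) is an illustrative assumption, not Adapt-Mix's actual objective or software.

```python
import numpy as np
from scipy.optimize import nnls

def mix_panels(C_panels, C_target):
    """Weight reference-panel LD matrices to approximate a target LD matrix:
    nonnegative weights minimizing ||sum_k a_k C_k - C_target||_F, renormalized."""
    X = np.stack([C.ravel() for C in C_panels], axis=1)
    a, _ = nnls(X, C_target.ravel())
    return a / a.sum()

# Hypothetical usage: two reference panels and an admixed target.
rng = np.random.default_rng(2)
G1 = rng.normal(size=(500, 20))
G2 = rng.normal(size=(500, 20)) + rng.normal(size=(500, 1))   # correlated panel
C1, C2 = np.corrcoef(G1.T), np.corrcoef(G2.T)
C_admixed = 0.3 * C1 + 0.7 * C2
print(mix_panels([C1, C2], C_admixed))        # approximately [0.3, 0.7]
```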
RAYNOR, HOLLIE A.; OSTERHOLT, KATHRIN M.; HART, CHANTELLE N.; JELALIAN, ELISSA; VIVIER, PATRICK; WING, RENA R.
2016-01-01
Objective Evaluate enrollment numbers, randomization rates, costs, and cost-effectiveness of active versus passive recruitment methods for parent-child dyads into two pediatric obesity intervention trials. Methods Recruitment methods were categorized into active (pediatrician referral and targeted mailings, with participants identified by researcher/health care provider) versus passive methods (newspaper, bus, internet, television, and earning statements; fairs/community centers/schools; and word of mouth; with participants self-identified). Numbers of enrolled and randomized families and costs/recruitment method were monitored throughout the 22-month recruitment period. Costs (in USD) per recruitment method included staff time, mileage, and targeted costs of each method. Results A total of 940 families were referred or made contact, with 164 families randomized (child: 7.2±1.6 years, 2.27±0.61 standardized body mass index [zBMI], 86.6% obese, 61.7% female, 83.5% white; parent: 38.0±5.8 years, 32.9±8.4 BMI, 55.2% obese, 92.7% female, 89.6% white). Pediatrician referral, followed by targeted mailings, produced the largest number of enrolled and randomized families (both methods combined producing 87.2% of randomized families). Passive recruitment methods yielded better retention from enrollment to randomization (p < 0.05), but produced few families (21 in total). Approximately $91 000 was spent on recruitment, with cost per randomized family at $554.77. Pediatrician referral was the most cost-effective method, $145.95/randomized family, but yielded only 91 randomized families over 22 months of continuous recruitment. Conclusion Pediatrician referral and targeted mailings, which are active recruitment methods, were the most successful strategies. However, recruitment demanded significant resources. Successful recruitment for pediatric trials should use several strategies. Clinical Trials Registration: NCT00259324, NCT00200265 PMID:19922036
Tuna, Süleyman Hakan; Özçiçek Pekmez, Nuran; Kürkçüoğlu, Işin
2015-11-01
The effects of fabrication methods on the corrosion resistance of frameworks produced with Co-Cr alloys are not clear. The purpose of this in vitro study was to evaluate the electrochemical corrosion resistance of Co-Cr alloy specimens that were fabricated by conventional casting, milling, and laser sintering. The specimens fabricated with the 3 different methods were investigated by potentiodynamic tests and electrochemical impedance spectroscopy in an artificial saliva. Ions released into the artificial saliva were estimated with inductively coupled plasma-mass spectrometry, and the results were statistically analyzed. The specimen surfaces were investigated with scanning electron microscopy before and after the tests. In terms of corrosion current and Rct properties, statistically significant differences were found both among the means of the methods and among the means of the material groups (P<.05). With regard to ions released, a statistically significant difference was found among the material groups (P<.05); however, no difference was found among the methods. Scanning electron microscopic imaging revealed that the specimens produced by conventional casting were affected to a greater extent by etching and electrochemical corrosion than those produced by milling and laser sintering. The corrosion resistance of the Co-Cr alloy specimens fabricated by milling or laser sintering was greater than that of the conventionally cast alloy specimens. Co-Cr specimens produced by the same method also differed from one another in terms of corrosion resistance. These differences may be related to variations in the alloy compositions. Copyright © 2015 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Andryani, Diyah Septi; Bustamam, Alhadi; Lestari, Dian
2017-03-01
Clustering aims to classify different patterns into groups called clusters. In this clustering method, we use n-mer frequencies to calculate the distance matrix, which is considered more accurate than using DNA alignment. The clustering results can be used to discover biologically important sub-sections and groups of genes. Many clustering methods have been developed, and hard clustering methods are considered less accurate than fuzzy clustering methods, especially for outlier data. Among fuzzy clustering methods, fuzzy c-means is one of the best known for its accuracy and simplicity. Fuzzy c-means clustering uses a membership function, which refers to how likely each data point is to belong to a cluster, and works on the principle of minimizing an objective function. The membership-function parameter is used as a weighting factor, also called the fuzzifier. In this study we implement hybrid clustering using fuzzy c-means and a divisive algorithm, which can improve the accuracy of cluster membership compared to the traditional partitional approach alone. Fuzzy c-means is used in the first step to find a partition; a divisive algorithm then runs in the second step to find sub-clusters and the dendrogram of the phylogenetic tree. The best number of clusters is determined by the minimum value of the Davies-Bouldin Index (DBI) of the cluster results. The results show that the methods introduced in this paper perform better than other partitioning methods. We found 3 clusters with a DBI value of 1.126628 in the first step of clustering. Moreover, the second step of clustering always produces smaller DBI values than the first step alone, indicating that the hybrid approach in this study yields better cluster results in terms of DBI values.
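A compact sketch of the two-step pipeline is given below: a minimal hand-rolled fuzzy c-means, cluster count chosen by minimum DBI (scikit-learn's davies_bouldin_score), then a simplified divisive pass that keeps a split only if it lowers the DBI. The splitting rule and the synthetic stand-in for the n-mer frequency matrix are assumptions, not the authors' exact algorithm.

```python
import numpy as np
from sklearn.metrics import davies_bouldin_score

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m                                    # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        U = 1.0 / d ** (2 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

rng = np.random.default_rng(3)
# Stand-in for an n-mer frequency matrix (rows: sequences, cols: n-mer counts).
X = np.vstack([rng.normal(mu, 0.5, size=(50, 8)) for mu in (0, 2, 4)])

# Step 1: fuzzy c-means partition; choose c by minimum Davies-Bouldin Index.
best_c = min(range(2, 6),
             key=lambda c: davies_bouldin_score(X, fcm(X, c)[1].argmax(1)))
labels = fcm(X, best_c)[1].argmax(axis=1)

# Step 2 (divisive): try splitting each cluster in two, keep if DBI drops.
for k in range(best_c):
    mask = labels == k
    if mask.sum() < 4:
        continue
    sub = fcm(X[mask], 2)[1].argmax(axis=1)
    trial = labels.copy()
    trial[mask] = np.where(sub == 1, labels.max() + 1, k)
    if davies_bouldin_score(X, trial) < davies_bouldin_score(X, labels):
        labels = trial
```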
Development of an Improved Mammalian Overexpression Method for Human CD62L
Brown, Haley A.; Roth, Gwynne; Holzapfel, Genevieve; Shen, Sarek; Rahbari, Kate; Ireland, Joanna; Zou, Zhongcheng; Sun, Peter D.
2014-01-01
We have previously developed a glutamine synthetase (GS)-based mammalian recombinant protein expression system that is capable of producing 5 to 30 mg/L of recombinant proteins. The overexpression is based on multiple rounds of target gene amplification driven by methionine sulfoximine (MSX), an inhibitor of glutamine synthetase. However, like other stable mammalian overexpression systems, a major shortcoming of the GS-based expression system is its lengthy turn-around time, typically taking 4-6 months. To shorten the construction time, we replaced the multi-round target gene amplifications with single-round in situ amplifications, thereby shortening the cell line construction to 2 months. The single-round in situ amplification method resulted in recombinant CD62L-expressing CHO cell lines producing ~5 mg/L of soluble CD62L, similar to those derived from the multi-round amplification and selection method. In addition, we developed an MSX resistance assay as an alternative to ELISA for evaluating the expression level of stable recombinant CHO cell lines. PMID:25286402
Parametrization of an Orbital-Based Linear-Scaling Quantum Force Field for Noncovalent Interactions
2015-01-01
We parametrize a linear-scaling quantum mechanical force field called mDC for the accurate reproduction of nonbonded interactions. We provide a new benchmark database of accurate ab initio interactions between sulfur-containing molecules. A variety of nonbonded-interaction databases are used to compare the new mDC method with other semiempirical, molecular mechanical, ab initio, and combined semiempirical quantum mechanical/molecular mechanical methods. It is shown that the molecular mechanical force field reproduces the benchmark results significantly and consistently more accurately than the semiempirical models, and that our mDC model produces errors half as large as those of the molecular mechanical force field. The comparisons between the methods are extended to the docking of drug candidates to the Cyclin-Dependent Kinase 2 protein receptor. We correlate the protein-ligand binding energies with their experimental inhibition constants and find that mDC produces the best correlation. Condensed-phase simulation of mDC water is performed and shown to produce O-O radial distribution functions similar to TIP4P-EW. PMID:24803856
Hydrolysis of biomass material
Schmidt, Andrew J.; Orth, Rick J.; Franz, James A.; Alnajjar, Mikhail
2004-02-17
A method for selective hydrolysis of the hemicellulose component of a biomass material. The selective hydrolysis produces water-soluble small molecules, particularly monosaccharides. One embodiment includes solubilizing at least a portion of the hemicellulose and subsequently hydrolyzing the solubilized hemicellulose to produce at least one monosaccharide. A second embodiment includes solubilizing at least a portion of the hemicellulose and subsequently enzymatically hydrolyzing the solubilized hemicellulose to produce at least one monosaccharide. A third embodiment includes solubilizing at least a portion of the hemicellulose by heating the biomass material to greater than 110 °C, resulting in an aqueous portion that includes the solubilized hemicellulose and a water-insoluble solids portion, and subsequently separating the aqueous portion from the water-insoluble solids portion. A fourth embodiment is a method for making a composition that includes cellulose, at least one protein and less than about 30 weight % hemicellulose, the method including solubilizing at least a portion of the hemicellulose present in a biomass material that also includes cellulose and at least one protein and subsequently separating the solubilized hemicellulose from the cellulose and at least one protein.
Riley, Paul W; Gallea, Benoit; Valcour, Andre
2017-01-01
Testing coagulation factor activities requires that multiple dilutions be assayed and analyzed to produce a single result. The slope of the line created by plotting measured factor concentration against sample dilution is evaluated to discern the presence of inhibitors giving rise to nonparallelism. Moreover, samples producing results on initial dilution falling outside the analytic measurement range of the assay must be tested at additional dilutions to produce reportable results. The complexity of this process has motivated a large clinical reference laboratory to develop advanced computer algorithms with automated reflex testing rules to complete coagulation factor analysis. A method was developed for autoverification of coagulation factor activity using expert rules built on an off-the-shelf, commercially available data manager system integrated into an automated coagulation platform. Here, we present an approach allowing for the autoverification and reporting of factor activity results with greatly diminished technologist effort. To the best of our knowledge, this is the first report of its kind providing a detailed procedure for implementing autoverification expert rules as applied to coagulation factor activity testing. Advantages of this system include ease of training for new operators, minimization of technologist time spent, reduction of staff fatigue, minimization of unnecessary reflex tests, optimization of turnaround time, and assurance of the consistency of the testing and reporting process.
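The flavor of such expert rules can be shown in a few lines: check that dilution-corrected activities fall inside the analytic measurement range, test for a dilution-dependent trend (nonparallelism suggesting an inhibitor), and only then release a mean result. All thresholds and names below are illustrative assumptions, not the published rule set.

```python
import numpy as np

AMR = (1.0, 150.0)      # assumed analytic measurement range, % activity

def autoverify(dilutions, corrected):
    """Toy reflex-testing rules for a factor activity dilution series."""
    corrected = np.asarray(corrected, float)
    if np.any((corrected < AMR[0]) | (corrected > AMR[1])):
        return None, 'reflex: retest at additional dilutions (outside AMR)'
    # Parallelism check: corrected activity should not trend with dilution.
    slope = np.polyfit(np.log2(dilutions), corrected, 1)[0]
    if abs(slope) > 0.1 * corrected.mean():
        return None, 'hold: nonparallelism suggests an inhibitor'
    return corrected.mean(), 'autoverified'

print(autoverify([10, 20, 40], [62, 60, 63]))   # parallel: report the mean
print(autoverify([10, 20, 40], [30, 45, 70]))   # rising: inhibitor flag
```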
Grid adaption for bluff bodies
NASA Technical Reports Server (NTRS)
Abolhassani, Jamshid S.; Tiwari, Surendra N.
1986-01-01
Methods of grid adaptation are reviewed, and a method is developed with the capability of adapting to several flow variables. This method is based on a variational approach and is an algebraic method which does not require the solution of partial differential equations. The method is also formulated in such a way that there is no need for any matrix inversion. It is used in conjunction with the calculation of hypersonic flow over a blunt nose. The equations of motion are the compressible Navier-Stokes equations, where all viscous terms are retained. They are solved by the MacCormack time-splitting method, and a movie was produced which shows simultaneously the transient behavior of the solution and the grid adaptation. The results are compared with experimental and other numerical results.
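The algebraic, inversion-free flavor of this kind of adaptation is easiest to see in one dimension, where equidistributing a gradient-based weight function clusters nodes at steep flow features. The sketch below is a generic textbook analogue under that assumption, not the paper's multidimensional variational formulation.

```python
import numpy as np

def adapt_grid(x, w):
    """Algebraic 1-D grid adaptation by equidistributing a weight function.

    No PDEs are solved and no matrices are inverted: new nodes are placed
    by inverting the cumulative weight with interpolation."""
    cw = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    targets = np.linspace(0.0, cw[-1], len(x))
    return np.interp(targets, cw, x)

x = np.linspace(0.0, 1.0, 41)
flow = np.tanh(20 * (x - 0.5))                 # mock flow variable with a front
w = 1.0 + 10 * np.abs(np.gradient(flow, x))    # weight favors steep gradients
x_new = adapt_grid(x, w)                       # nodes cluster near x = 0.5
```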
Laser processing for manufacturing nanocarbon materials
NASA Astrophysics Data System (ADS)
Van, Hai Hoang
CNTs have been considered an excellent candidate to revolutionize a broad range of applications. Many methods have been developed to manipulate the chemistry and the structure of CNTs. Lasers, with their non-contact treatment capability, exhibit many processing advantages, including solid-state treatment, extremely fast processing rates, and high processing resolution. In addition, the outstandingly monochromatic, coherent, and directional beam generates powerful energy absorption and the resultant extreme processing conditions. In my research, a unique laser scanning method was developed to process CNTs, controlling the oxidation and the graphitization. The achieved controllability of this method was applied to address the important issues of current CNT processing methods for three applications. The controllable oxidation of CNTs by the laser scanning method was applied to cut CNT films to produce high-performance cathodes for FE devices. The production method includes two important self-developed techniques to produce the cold cathodes: the production of highly oriented and uniformly distributed CNT sheets and the precise laser trimming process. Laser cutting is a unique method to produce cathodes with remarkable features, including an ultrathin freestanding structure (~200 nm), greatly high aspect ratio, hybrid CNT-GNR emitter arrays, even emitter separation, and directional emitter alignment. This unique cathode structure was unachievable by other methods. The developed FE devices successfully solved the screening-effect issue encountered by current FE devices. The laser-controlled oxidation method was further developed to sequentially remove graphitic walls of CNTs. The laser oxidation process was directed to occur along the CNT axes by the laser scanning direction. Additionally, the oxidation was further assisted by the curvature stress and the thermal expansion of the graphitic nanotubes, ultimately opening (namely unzipping) the tubular structure to produce GNRs. The developed laser scanning method therefore optimally exploited the thermal laser-CNT interaction, successfully transforming CNTs into 2D GNRs. The solid-state laser unzipping process effectively addressed the issues of contamination and scalability encountered by current unzipping methods. Additionally, the produced GNRs were uniquely featured by their freestanding structure and smooth surfaces. If the scanning process was performed in an inert environment without the presence of oxygen, the oxidation of CNTs would not happen. Instead, the highly mobile carbon atoms of the heated CNTs would reorganize the crystal structure, inducing a graphitization process that improves the crystallinity. Many observations showing the structural improvement of CNTs under laser irradiation have been reported, confirming the capability of lasers to heal graphitic defects. Laser methods were more time-efficient and energy-efficient than other annealing methods because a laser can quickly heat CNTs to generate graphitization in less than one second. This subsecond heating process was also more effective than other heating methods because it avoided the undesired coalescence of CNTs. In my research, the laser scanning method was applied to generate graphitization, healing the structural defects of CNTs. Different from previously reported laser methods, the laser scanning directed the locally annealed areas to move along the CNT axes, migrating and coalescing the graphitic defects to achieve better healing results.
The critical information describing the CNT structural transformation caused by the moving laser irradiation was derived from the successful applications of the developed laser method. This knowledge suggests an important route to modifying general graphitic structures for important applications, such as carbon fiber production, CNT self-assembly, and CNT welding. The method will be effective, facile, versatile, and adaptable for laboratory and industrial facilities.
Laser notching ceramics for reliable fracture toughness testing
Barth, Holly D.; Elmer, John W.; Freeman, Dennis C.; ...
2015-09-19
A new method for notching ceramics was developed using a picosecond laser for fracture toughness testing of alumina samples. The test geometry incorporated a single-edge-V-notch that was notched using picosecond laser micromachining. This method has been used in the past for cutting ceramics, and is known to remove material with little to no thermal effect on the surrounding material matrix. This study showed that laser-assisted machining for fracture toughness testing of ceramics was reliable, quick, and cost effective. In order to assess the laser-notched single-edge-V-notch beam method, fracture toughness results were compared to results from other more traditional methods, specifically the surface-crack-in-flexure and chevron-notch bend tests. Lastly, the results showed that picosecond laser notching produced precise notches in post-failure measurements, and that the measured fracture toughness results showed improved consistency compared to traditional fracture toughness methods.
Ramasamy, Thilagavathi; Selvam, Chelliah
2015-10-15
Virtual screening has become an important tool in the drug discovery process. Structure-based and ligand-based approaches are generally used in virtual screening. To date, several benchmark sets for evaluating the performance of virtual screening tools are available. In this study, our aim was to compare the performance of structure-based and ligand-based virtual screening methods. Ten anti-cancer targets and their corresponding benchmark sets from the 'Demanding Evaluation Kits for Objective In silico Screening' (DEKOIS) library were selected. X-ray crystal structures of protein-ligand complexes were selected based on their resolution. OpenEye tools such as FRED and vROCS were used, and the results were carefully analyzed. At EF1%, vROCS produced better results, but at EF5% and EF10% both FRED and vROCS produced almost similar results. It was noticed that the enrichment factor values decreased going from EF1% to EF5% and EF10% in many cases. Published by Elsevier Ltd.
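For reference, the enrichment factor at a fraction x compares the hit rate among the top-ranked fraction of the library with the hit rate expected from random selection; EF1% uses the top 1%. The helper below is a hypothetical implementation of that standard definition.

```python
def enrichment_factor(scores, is_active, fraction):
    """EF at a given fraction; a higher score is assumed to mean a better hit."""
    n = len(scores)
    top = sorted(range(n), key=lambda i: -scores[i])[:max(1, round(n * fraction))]
    hits = sum(is_active[i] for i in top)
    return (hits / len(top)) / (sum(is_active) / n)

# Toy check: 10 actives in 1000 compounds, 5 of them ranked in the top 1%.
scores = [1.0] * 5 + [0.0] * 995
actives = [1] * 5 + [0] * 5 + [1] * 5 + [0] * 985
print(enrichment_factor(scores, actives, 0.01))   # 50.0
```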
NASA Technical Reports Server (NTRS)
Allamandola, L. J.; Tielens, G. G. M.; Barker, J. R.
1989-01-01
A comprehensive study of the PAH hypothesis is presented, including the interstellar IR spectral features which have been attributed to emission from highly vibrationally excited PAHs. Spectroscopic and IR emission features are discussed in detail. A method for calculating the IR fluorescence spectrum from a vibrationally excited molecule is described. Analysis of the interstellar spectra suggests that the PAHs which dominate the IR spectra contain between 20 and 40 C atoms. The results are compared with results from a thermal approximation. It is found that, for high levels of vibrational excitation and emission from low-frequency modes, the two methods produce similar results. Also, consideration is given to the relationship between PAH molecules and amorphous C particles, the most likely interstellar PAH molecular structures, the spectroscopic structure produced by PAHs and PAH-related materials in the UV portion of the interstellar extinction curve, and the influence of PAH charge on the UV, visible, and IR regions.
Lu, Zhen; McKellop, Harry A
2014-03-01
This study compared the accuracy and sensitivity of several numerical methods employing spherical or plane triangles for calculating the volumetric wear of retrieved metal-on-metal hip joint implants from coordinate measuring machine measurements. Five methods, one using spherical triangles and four using plane triangles to represent the bearing and the best-fit surfaces, were assessed and compared on a perfect hemisphere model and a hemi-ellipsoid model (i.e. unworn models), computer-generated wear models and wear-tested femoral balls, with point spacings of 0.5, 1, 2 and 3 mm. The results showed that the algorithm (Method 1) employing spherical triangles to represent the bearing surface and to scale the mesh to the best-fit surfaces produced adequate accuracy for the wear volume with point spacings of 0.5, 1, 2 and 3 mm. The algorithms (Methods 2-4) using plane triangles to represent the bearing surface and to scale the mesh to the best-fit surface also produced accuracies that were comparable to that with spherical triangles. In contrast, if the bearing surface was represented with a mesh of plane triangles and the best-fit surface was taken as a smooth surface without discretization (Method 5), the algorithm produced much lower accuracy with a point spacing of 0.5 mm than Methods 1-4 with a point spacing of 3 mm.
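The plane-triangle variants essentially reduce to summing signed wedge volumes between the measured mesh and the best-fit sphere. The sketch below is an illustrative reconstruction of that idea (radial projection onto the sphere, signed tetrahedron difference per triangle); it is not the study's code, and sign conventions depend on mesh orientation.

```python
import numpy as np

def volume_between_mesh_and_sphere(vertices, faces, center, radius):
    """Volume enclosed between a triangulated bearing surface and the
    best-fit sphere, summed triangle by triangle."""
    v = vertices - center
    total = 0.0
    for tri in faces:
        a, b, c = v[tri]
        # Project the measured points radially onto the best-fit sphere.
        A, B, C = (p * radius / np.linalg.norm(p) for p in (a, b, c))
        # Difference of signed tetrahedron volumes taken from the center.
        total += (np.dot(np.cross(A, B), C) - np.dot(np.cross(a, b), c)) / 6.0
    return total
```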
Routh, V H; Helke, C J
1997-02-01
Antibody-coated microprobes are used to measure neuropeptide release in the central nervous system. Although they are not quantitative, they provide the most precise spatial resolution of the location of in vivo release of any currently available method. Previous methods of coating antibody microprobes are difficult and time-consuming. Moreover, using these methods we were unable to produce evenly coated antibody microprobes. This paper describes a novel method for the production of antibody microprobes using thiol-terminal silanes and the heterobifunctional crosslinker, 4-(4-N-maleimidophenyl)butyric acid hydrazide HCl 1/2 dioxane (MPBH). Following silation, glass micropipettes are incubated with antibody to substance P (SP) that has been conjugated to MPBH. This method results in a dense, even coating of antibody without decreasing the biological activity of the antibody. Additionally, this method takes considerably less time than previously described methods without sacrificing the use of antibody microprobes as micropipettes. The sensitivity of the microprobes for SP is in the picomolar range, and there is a linear correlation between the log of SP concentration (M) and B/B₀ (r² = 0.98). The microprobes are stable for up to 3 weeks when stored in 0.1 M sodium phosphate buffer with 50 mM NaCl (pH 7.4) at 5 °C. Finally, insertion into the exposed spinal cord of an anesthetized rat for 15 min produces no damage to the antibody coating.
NASA Astrophysics Data System (ADS)
Ghanei, S.; Kashefi, M.; Mazinani, M.
2014-04-01
The magnetic properties of ferrite-martensite dual-phase steels were evaluated using eddy current and Barkhausen noise nondestructive testing methods and correlated with their microstructural changes. Several routes were used to produce different microstructures of dual-phase steels. The first route was different heat treatments in γ region to vary the ferrite grain size (from 9.47 to 11.12 in ASTM number), and the second one was variation in intercritical annealing temperatures (from 750 to 890 °C) in order to produce different percentages of martensite in dual-phase microstructure. The results concerning magnetic Barkhausen noise are discussed in terms of height, position and shape of Barkhausen noise profiles, taking into account two main aspects: ferrite grain size, and different percentages of martensite. Then, eddy current testing was used to study the mentioned microstructural changes by detection of impedance variations. The obtained results show that microstructural changes have a noticeable effect on the magnetic properties of dual-phase steels. The results reveal that both magnetic methods have a high potential to be used as a reliable nondestructive tool to detect and monitor microstructural changes occurring during manufacturing of dual-phase steels.
The Path Resistance Method for Bounding the Smallest Nontrivial Eigenvalue of a Laplacian
NASA Technical Reports Server (NTRS)
Guattery, Stephen; Leighton, Tom; Miller, Gary L.
1997-01-01
We introduce the path resistance method for lower bounds on the smallest nontrivial eigenvalue of the Laplacian matrix of a graph. The method is based on viewing the graph in terms of electrical circuits; it uses clique embeddings to produce lower bounds on λ₂ and star embeddings to produce lower bounds on the smallest Rayleigh quotient when there is a zero Dirichlet boundary condition. The method assigns priorities to the paths in the embedding; we show that, for an unweighted tree T, using uniform priorities for a clique embedding produces a lower bound on λ₂ that is off by at most an O(log diameter(T)) factor. We show that the best bounds this method can produce for clique embeddings are the same as for a related method that uses clique embeddings and edge lengths to produce bounds.
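For orientation, a classical clique-embedding bound of this type (which the paper's path priorities refine) reads as follows for an unweighted graph G on n vertices, with one fixed path p_{uv} in G for every vertex pair; this is the standard congestion form, stated here as background rather than as the paper's sharper result.

```latex
% Embed K_n into G by choosing a path p_{uv} for each pair {u,v}; then
\[
  \lambda_2(G) \;\ge\; \frac{n}{\displaystyle\max_{e \in E(G)}
      \sum_{\{u,v\} \,:\, e \in p_{uv}} |p_{uv}|},
\]
% where |p_{uv}| is the length of the path and the maximum measures the
% length-weighted congestion of the most heavily used edge.
```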
Rhodes, Nathaniel J.; Richardson, Chad L.; Heraty, Ryan; Liu, Jiajun; Malczynski, Michael; Qi, Chao
2014-01-01
While a lack of concordance is known between gold standard MIC determinations and Vitek 2, the magnitude of the discrepancy and its impact on treatment decisions for extended-spectrum-β-lactamase (ESBL)-producing Escherichia coli are not. Clinical isolates of ESBL-producing E. coli were collected from blood, tissue, and body fluid samples from January 2003 to July 2009. Resistance genotypes were identified by PCR. Primary analyses evaluated the discordance between Vitek 2 and gold standard methods using cefepime susceptibility breakpoint cutoff values of 8, 4, and 2 μg/ml. The discrepancies in MICs between the methods were classified per convention as very major, major, and minor errors. Sensitivity, specificity, and positive and negative predictive values for susceptibility classifications were calculated. A total of 304 isolates were identified; 59% (179) of the isolates carried blaCTX-M, 47% (143) carried blaTEM, and 4% (12) carried blaSHV. At a breakpoint MIC of 8 μg/ml, Vitek 2 produced a categorical agreement of 66.8% and exhibited very major, major, and minor error rates of 23% (20/87 isolates), 5.1% (8/157 isolates), and 24% (73/304), respectively. The sensitivity, specificity, and positive and negative predictive values for a susceptibility breakpoint of 8 μg/ml were 94.9%, 61.2%, 72.3%, and 91.8%, respectively. The sensitivity, specificity, and positive and negative predictive values for a susceptibility breakpoint of 2 μg/ml were 83.8%, 65.3%, 41%, and 93.3%, respectively. Vitek 2 results in unacceptably high error rates for cefepime compared to those of agar dilution for ESBL-producing E. coli. Clinicians should be wary of making treatment decisions on the basis of Vitek 2 susceptibility results for ESBL-producing E. coli. PMID:24752253
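The error taxonomy maps onto a simple confusion-matrix computation. The helper below is a hypothetical illustration using binary susceptible/resistant calls at a single breakpoint; the study additionally grades minor errors against an intermediate category, which this sketch omits.

```python
import numpy as np

def agreement_stats(gold_mic, test_mic, breakpoint):
    """Categorical agreement and error rates of a test method vs. gold standard."""
    gold_s = np.asarray(gold_mic) <= breakpoint
    test_s = np.asarray(test_mic) <= breakpoint
    return {
        'agreement': np.mean(gold_s == test_s),
        # Very major: gold-resistant isolates called susceptible by the test.
        'very_major': np.mean(test_s[~gold_s]),
        # Major: gold-susceptible isolates called resistant by the test.
        'major': np.mean(~test_s[gold_s]),
        'sensitivity': np.mean(test_s[gold_s]),    # for the susceptible class
        'specificity': np.mean(~test_s[~gold_s]),
    }
```

Note that the denominators differ by design: very major error rates are computed among resistant isolates and major error rates among susceptible ones, matching the per-category counts quoted above.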
Validation of the DRACO Particle-in-Cell Code using Busek 200W Hall Thruster Experimental Data
2008-07-23
and where sputtered material will be deposited on a spacecraft. ... 1.4: Thesis Overview. The primary objective of this thesis is to use the DRACO ... the second derivative of a function is given in Equation 2.6. These equations are calculated at a node and use the value at the node, f_{i,j,k}, as ... determine how the results differ and which field-solving method produces the best results. 4.3.3: Collision Model. There are two primary methods
Kanamori, Hajime; Rutala, William A; Gergen, Maria F; Sickbert-Bennett, Emily E; Weber, David J
2018-05-07
Susceptibility to germicides for carbapenem/colistin-resistant Enterobacteriaceae is poorly described. We investigated the efficacy of multiple germicides against these emerging antibiotic-resistant pathogens using the disc-based quantitative carrier test method that can produce results more similar to those encountered in healthcare settings than a suspension test. Our study results demonstrated that germicides commonly used in healthcare facilities likely will be effective against carbapenem/colistin-resistant Enterobacteriaceae when used appropriately in healthcare facilities. Copyright © 2018 American Society for Microbiology.
Corrosion prevention of magnesium surfaces via surface conversion treatments using ionic liquids
Qu, Jun; Luo, Huimin
2016-09-06
A method for conversion coating a magnesium-containing surface, the method comprising contacting the magnesium-containing surface with an ionic liquid compound under conditions that result in decomposition of the ionic liquid compound to produce a conversion coated magnesium-containing surface having a substantially improved corrosion resistance relative to the magnesium-containing surface before said conversion coating. Also described are the resulting conversion-coated magnesium-containing surface, as well as mechanical components and devices containing the conversion-coated magnesium-containing surface.
Residual gravimetric method to measure nebulizer output.
Vecellio None, Laurent; Grimbert, Daniel; Bordenave, Joelle; Benoit, Guy; Furet, Yves; Fauroux, Brigitte; Boissinot, Eric; De Monte, Michele; Lemarié, Etienne; Diot, Patrice
2004-01-01
The aim of this study was to assess a residual gravimetric method, based on weighing dry filters, for measuring the aerosol output of nebulizers. This residual gravimetric method was compared to assay methods based on spectrophotometric measurement of terbutaline (Bricanyl, Astra Zeneca, France), high-performance liquid chromatography (HPLC) measurement of tobramycin (Tobi, Chiron, U.S.A.), and electrochemical measurement of NaF (as defined by the European standard). Two breath-enhanced jet nebulizers, one standard jet nebulizer, and one ultrasonic nebulizer were tested. Output by the residual gravimetric method was calculated by weighing the dried filters before and after aerosol collection and correcting the mass gain by the proportion of drug in the total solute mass. Output by the electrochemical, spectrophotometric, and HPLC methods was determined by assaying the drug extracted from the filter. The results demonstrated a strong correlation between the residual gravimetric method (x axis) and the assay methods (y axis) in terms of drug mass output (y = 1.00x − 0.02, r² = 0.99, n = 27). We conclude that a residual gravimetric method based on dry filters, when validated for a particular agent, is an accurate way of measuring aerosol output.
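The arithmetic of the residual method is a one-liner: the dry mass gained by the filter is total solute, and the drug output is that gain scaled by the drug's share of the solute mass. The helper and numbers below are assumed values for illustration only.

```python
def aerosol_drug_output(m_filter_before_g, m_filter_after_dry_g, drug_frac):
    """Residual gravimetric drug output: dry filter mass gain times the
    drug's proportion of the total solute mass (illustrative helper)."""
    return (m_filter_after_dry_g - m_filter_before_g) * drug_frac

# E.g., a drug at 5 mg/mL among 9.5 mg/mL total solutes (assumed values):
print(aerosol_drug_output(0.5021, 0.5190, 5 / 9.5))   # grams of drug on filter
```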
NASA Astrophysics Data System (ADS)
Tański, Tomasz; Matysiak, Wiktor; Krzemiński, Łukasz; Jarka, Paweł; Gołombek, Klaudiusz
2017-12-01
The aim of the research was to create thin, nanofibrous composite mats with a polyvinylpyrrolidone (PVP) matrix and a reinforcing phase in the form of silicon oxide (SiO2) nanoparticles. SiO2 nanopowder was obtained using the sol-gel method with a mixture of tetraethyl orthosilicate (TEOS, Si(OC2H5)4), hydrochloric acid (HCl), ethanol (C2H5OH) and distilled water. The produced colloidal suspension was subjected to drying and calcination at 550 °C, resulting in an amorphous silica nanopowder with an average particle diameter of 20 nm. The morphology and structure of the manufactured SiO2 nanoparticles were examined using transmission electron microscopy (TEM) and X-ray diffraction analysis (XRD). Then, using the electrospinning method with a 15% (by weight) solution of PVP in ethanol and a 15% PVP/EtOH solution containing the produced nanoparticles at a 5% mass concentration relative to the polymer matrix, PVP polymer nanofibres and PVP/SiO2 composite nanofibres were produced. The morphology and chemical composition of the produced polymer and composite nanofibres were examined using a scanning electron microscope (SEM) with an energy-dispersive spectrometer (EDS). The effect of the reinforcing phase on the absorption of electromagnetic radiation was analysed on the basis of UV-vis spectra, from which the band gap values of the produced thin fibrous mats were estimated.
Etching radical controlled gas chopped deep reactive ion etching
Olynick, Deidre; Rangelow, Ivo; Chao, Weilun
2013-10-01
A method for silicon micromachining based on high-aspect-ratio reactive ion etching with gas chopping has been developed, capable of producing essentially scallop-free, smooth sidewall surfaces. The method uses precisely controlled, alternated (or chopped) gas flows of the etching and deposition gas precursors to produce a controllable sidewall passivation capable of high anisotropy. The dynamic control of sidewall passivation is achieved by carefully controlling fluorine radical presence with moderator gases, such as CH4, and controlling the passivation rate and stoichiometry using a CF2 source. In this manner, sidewall polymer deposition thicknesses are very well controlled, reducing sidewall ripples to very small levels. By combining inductively coupled plasmas with controlled fluorocarbon chemistry, good control of vertical structures with very low sidewall roughness can be achieved. Results show silicon features with an aspect ratio of 20:1 for 10 nm features, with applicability to nano-applications in the sub-50 nm regime. By comparison, traditional gas chopping techniques have produced rippled or scalloped sidewalls with roughness in the range of 50 to 100 nm.
Development of a Solid-State Fermentation System for Producing Bioethanol from Food Waste
NASA Astrophysics Data System (ADS)
Honda, Hiroaki; Ohnishi, Akihiro; Fujimoto, Naoshi; Suzuki, Masaharu
Liquid fermentation is the conventional method of producing bioethanol. However, this method results in the formation of highly concentrated waste after distillation, and further treatment requires large amounts of costly energy. Saccharification of dried raw garbage was tested for 12 types of Koji starters under the following optimum culture conditions: temperature of 30 °C and initial moisture content of 50%. Among all the types, Aspergillus oryzae KBN650 had the highest saccharifying power. The ethanol-producing ability on the raw garbage was investigated for 72 strains of yeast, of which Saccharomyces cerevisiae A30 had the highest ethanol yield under the following optimum conditions: a 1:1 ratio of dried garbage to saccharified garbage by weight, and an initial moisture content of 60%. Thus, the solid-state fermentation system consisted of the following 4 processes: moisture control, saccharification, ethanol production and distillation. This system produced 0.6 kg of ethanol from 9.6 kg of garbage. Moreover, the ethanol yield from total sugars was calculated to be 0.37.
JPEG and wavelet compression of ophthalmic images
NASA Astrophysics Data System (ADS)
Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.
1999-05-01
This study was designed to determine the degree and method of digital image compression that produces ophthalmic images of sufficient quality for transmission and diagnosis. The photographs of 15 subjects, which included eyes with normal, subtle and distinct pathologies, were digitized to produce 1.54 MB images and compressed to five different sizes. Image quality was assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, wavelet-compressed images produced less RMS error than JPEG-compressed images. Blood vessel branching could be observed to a greater extent after wavelet compression than after JPEG compression for a given image size. Overall, it was shown that images had to be compressed to below 2.5 percent of original size for JPEG and 1.7 percent for wavelet compression before fine detail was lost or image quality was too poor to make a reliable diagnosis.
Full-scale aircraft cabin flammability tests of improved fire-resistant materials
NASA Technical Reports Server (NTRS)
Stuckey, R. N.; Surpkis, D. E.; Price, L. J.
1974-01-01
Full-scale aircraft cabin flammability tests to evaluate the effectiveness of new fire-resistant materials by comparing their burning characteristics with those of older aircraft materials are described. Three tests were conducted and are detailed. Test 1, using pre-1968 materials, was run to correlate the procedures and to compare the results with previous tests by other organizations. Test 2 included newer, improved fire-resistant materials. Test 3 was essentially a duplicate of test 2, but a smokeless fuel was used. Test objectives, methods, materials, and results are presented and discussed. Results indicate that the pre-1968 materials ignited easily, allowed the fire to spread, produced large amounts of smoke and toxic combustion products, and resulted in a flash fire and major fire damage. The newer fire-resistant materials did not allow the fire to spread. Furthermore, they produced less smoke, lower concentrations of toxic combustion products, and lower temperatures. The newer materials did not produce a flash fire.
Sreenivasa, Manish; Millard, Matthew; Felis, Martin; Mombaur, Katja; Wolf, Sebastian I.
2017-01-01
Predicting the movements, ground reaction forces and neuromuscular activity during gait can be a valuable asset to the clinical rehabilitation community, both to understand pathology, as well as to plan effective intervention. In this work we use an optimal control method to generate predictive simulations of pathological gait in the sagittal plane. We construct a patient-specific model corresponding to a 7-year old child with gait abnormalities and identify the optimal spring characteristics of an ankle-foot orthosis that minimizes muscle effort. Our simulations include the computation of foot-ground reaction forces, as well as the neuromuscular dynamics using computationally efficient muscle torque generators and excitation-activation equations. The optimal control problem (OCP) is solved with a direct multiple shooting method. The solution of this problem is physically consistent synthetic neural excitation commands, muscle activations and whole body motion. Our simulations produced similar changes to the gait characteristics as those recorded on the patient. The orthosis-equipped model was able to walk faster with more extended knees. Notably, our approach can be easily tuned to simulate weakened muscles, produces physiologically realistic ground reaction forces and smooth muscle activations and torques, and can be implemented on a standard workstation to produce results within a few hours. These results are an important contribution toward bridging the gap between research methods in computational neuromechanics and day-to-day clinical rehabilitation. PMID:28450833
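The structure of direct multiple shooting (states and controls discretized at nodes, interval continuity enforced as equality constraints, effort minimized) can be shown on a much smaller problem. The sketch below solves a toy 1-D point-mass OCP with SciPy's SLSQP; it is a structural illustration only, nowhere near the patient-specific musculoskeletal model or the solver used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Toy OCP: a 1-D point mass must travel 1 m in 1 s from rest to rest
# while minimizing squared control effort.
N, dt = 20, 0.05

def integrate(x, u):                      # one shooting interval (Euler step)
    pos, vel = x
    return np.array([pos + dt * vel, vel + dt * u])

def unpack(z):                            # z = [states at N nodes, N controls]
    return z[:2 * N].reshape(N, 2), z[2 * N:]

def objective(z):
    _, u = unpack(z)
    return dt * np.sum(u ** 2)            # control effort

def defects(z):                           # continuity between intervals
    x, u = unpack(z)
    gaps = [x[0] - np.array([0.0, 0.0])]  # start at rest at the origin
    for k in range(N - 1):
        gaps.append(x[k + 1] - integrate(x[k], u[k]))
    gaps.append(integrate(x[-1], u[-1]) - np.array([1.0, 0.0]))  # reach target
    return np.concatenate(gaps)

sol = minimize(objective, np.zeros(3 * N), method='SLSQP',
               constraints={'type': 'eq', 'fun': defects},
               options={'maxiter': 300})
x_opt, u_opt = unpack(sol.x)              # optimal trajectory and controls
```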
Flowfield analysis of modern helicopter rotors in hover by Navier-Stokes method
NASA Technical Reports Server (NTRS)
Srinivasan, G. R.; Raghavan, V.; Duque, E. P. N.
1991-01-01
The viscous, three-dimensional flowfields of the UH60 and BERP rotors are calculated for lifting hover configurations using a Navier-Stokes computational fluid dynamics method, with a view to understanding the importance of planform effects on the airloads. In this method, the induced effects of the wake, including the interaction of tip vortices with successive blades, are captured as a part of the overall flowfield solution without prescribing any wake models. Numerical results in the form of surface pressures, hover performance parameters, surface skin friction and tip vortex patterns, and vortex wake trajectory are presented at two thrust conditions for the UH60 and BERP rotors. Comparison of results for the UH60 model rotor shows good agreement with experiments at moderate thrust conditions. Comparison of results for an equivalent rectangular UH60 blade and the BERP blade indicates that the BERP blade, with an unconventional planform, gives more thrust at the cost of more power and a reduced figure of merit. The high-thrust conditions considered produce severe shock-induced flow separation for the UH60 blade, while the BERP blade develops more thrust with minimal separation. The BERP blade produces a tighter tip vortex structure than the UH60 blade. These results and the accompanying discussion bring out the similarities and differences between the two rotors.
Cloud Computing Techniques for Space Mission Design
NASA Technical Reports Server (NTRS)
Arrieta, Juan; Senent, Juan
2014-01-01
The overarching objective of space mission design is to tackle complex problems and produce better results, faster. In developing the methods and tools to fulfill this objective, the user interacts with the different layers of a computing system.
Process for preparing fine grain silicon carbide powder
Wei, G.C.
Method of producing fine-grain silicon carbide powder comprises combining methyltrimethoxysilane with a solution of phenolic resin, acetone and water or sugar and water, gelling the resulting mixture, and then drying and heating the obtained gel.
Method of producing carbon coated nano- and micron-scale particles
Perry, W. Lee; Weigle, John C; Phillips, Jonathan
2013-12-17
A method of making carbon-coated nano- or micron-scale particles comprising entraining particles in an aerosol gas, providing a carbon-containing gas, providing a plasma gas, mixing the aerosol gas, the carbon-containing gas, and the plasma gas proximate a torch, bombarding the mixed gases with microwaves, and collecting resulting carbon-coated nano- or micron-scale particles.
A Programmatic Description of a Social Skills Group for Young Children with Autism
ERIC Educational Resources Information Center
Leaf, Justin B.; Dotson, Wesley H.; Oppenheim-Leaf, Misty L.; Sherman, James A.; Sheldon, Jan B.
2012-01-01
Deficits in social skills are a common problem for children with autism. One method of developing appropriate social skills in children with autism has been group instruction. To date, however, group instruction has produced mixed results. The purpose of this article is to describe a promising method of teaching social skills to children in small…
Isolation of Nuclei and Nucleoli.
Pendle, Alison F; Shaw, Peter J
2017-01-01
Here we describe methods for producing nuclei from Arabidopsis suspension cultures or root tips of Arabidopsis, wheat, or pea. These methods could be adapted for other species and cell types. The resulting nuclei can be further purified for use in biochemical or proteomic studies, or can be used for microscopy. We also describe how the nuclei can be used to obtain a preparation of nucleoli.