ROI on yield data analysis systems through a business process management strategy
NASA Astrophysics Data System (ADS)
Rehani, Manu; Strader, Nathan; Hanson, Jeff
2005-05-01
The overriding motivation for yield engineering is profitability, achieved through the application of yield management. Its first application is to continually reduce waste in the form of yield loss. New products, new technologies, and the dynamic state of the process and equipment keep introducing new ways to cause yield loss; in response, yield management efforts must continually produce new solutions to minimize it. The second application of yield engineering is to aid accurate product pricing, achieved by predicting future results of the yield engineering effort. The more accurate the yield prediction, the more accurate the wafer start volume, and the more accurate the wafer pricing. Another aspect of yield prediction is gauging the impact of a yield problem and predicting how long it will last; the ability to predict such impacts again feeds into wafer start calculations and wafer pricing. The question, then, is: if the stakes on yield management are so high, why are most yield management efforts run like science and engineering projects rather than like manufacturing? In the eighties, manufacturing put the theory of constraints into practice and placed a premium on stability and predictability in manufacturing activities; why can't the same be done for yield management activities? This line of introspection led us to define and implement a business process to manage yield engineering activities. We analyzed the best known methods (BKM) and deployed a workflow tool to make them the standard operating procedure (SOP) for yield management. We present a case study in deploying a Business Process Management solution for semiconductor yield engineering in a high-mix ASIC environment, including a description of the situation prior to deployment, a window into the development process, and a valuation of the benefits.
NASA Astrophysics Data System (ADS)
Cai, Y.
2017-12-01
Accurately forecasting crop yields has broad implications for economic trading, food production monitoring, and global food security. However, the variation of environmental variables presents challenges to modeling yields accurately, especially when the lack of highly accurate measurements creates difficulties in creating models that can succeed across space and time. In 2016, we developed a sequence of machine-learning based models forecasting end-of-season corn yields for the US at both the county and national levels. We combined machine learning algorithms in a hierarchical way, and used an understanding of physiological processes in temporal feature selection, to achieve high precision in our intra-season forecasts, including in very anomalous seasons. During the live run, we predicted the national corn yield within 1.40% of the final USDA number as early as August. In backtesting over the 2000-2015 period, our model predicts national yield within 2.69% of the actual yield on average by mid-August. At the county level, our model predicts 77% of the variation in final yield using data through the beginning of August and improves to 80% by the beginning of October, with the percentage of counties predicted within 10% of the average yield increasing from 68% to 73%. Further, the lowest errors are in the most significant producing regions, resulting in very high precision national-level forecasts. In addition, we identify changes in the important variables throughout the season, specifically early-season land surface temperature, and mid-season land surface temperature and vegetation index. For the 2017 season, we feed 2016 data to the training set, together with additional geospatial data sources, aiming to make the current model even more precise. We will show how our 2017 US corn yield forecasts converge over time, which factors affect the yield the most, and present our plans for 2018 model adjustments.
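The accuracy figures quoted above (forecast within 1.40% of the final USDA number; share of counties within 10% of actual yield) reduce to simple error metrics. A minimal sketch, with purely illustrative function names and yield values, not the authors' data or code:

```python
# Hypothetical sketch of the scoring used for yield forecasts: percent error
# against the final number, and the share of county forecasts within a
# tolerance. All values below are illustrative.

def percent_error(forecast, actual):
    """Absolute forecast error as a percentage of the actual yield."""
    return abs(forecast - actual) / actual * 100.0

def share_within(forecasts, actuals, tol_pct=10.0):
    """Fraction of county forecasts within tol_pct of the actual yield."""
    hits = sum(
        1 for f, a in zip(forecasts, actuals)
        if percent_error(f, a) <= tol_pct
    )
    return hits / len(forecasts)

# Illustrative national-level check (bushel/acre values are made up):
national_err = percent_error(172.5, 174.6)            # about 1.2% error
county_share = share_within([150, 160, 200, 90], [155, 158, 175, 95])
```

A live forecast would feed model outputs and end-of-season statistics into these metrics rather than hand-typed numbers.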
NASA Technical Reports Server (NTRS)
Haugen, H. K.; Weitz, E.; Leone, S. R.
1985-01-01
Various techniques have been used to study photodissociation dynamics of the halogens and interhalogens. The quantum yields obtained by these techniques differ widely. The present investigation is concerned with a qualitatively new approach for obtaining highly accurate quantum yields for electronically excited states. This approach makes it possible to obtain an accuracy of 1 percent to 3 percent. It is shown that measurement of the initial transient gain/absorption vs the final absorption in a single time-resolved signal is a very accurate technique in the study of absolute branching fractions in photodissociation. The new technique is found to be insensitive to pulse and probe laser characteristics, molecular absorption cross sections, and absolute precursor density.
Minimum number of measurements for evaluating soursop (Annona muricata L.) yield.
Sánchez, C F B; Teodoro, P E; Londoño, S; Silva, L A; Peixoto, L A; Bhering, L L
2017-05-31
Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of soursop (Annona muricata L.) genotypes based on fruit yield. Sixteen measurements of fruit yield from 71 soursop genotypes were carried out between 2000 and 2016. In order to estimate r with the best accuracy, four procedures were used: analysis of variance, principal component analysis based on the correlation matrix, principal component analysis based on the phenotypic variance and covariance matrix, and structural analysis based on the correlation matrix. The minimum number of measurements needed to predict the actual value of individuals was estimated. Principal component analysis using the phenotypic variance and covariance matrix provided the most accurate estimates of both r and the number of measurements required for accurate evaluation of fruit yield in soursop. Our results indicate that selection of soursop genotypes with high fruit yield can be performed based on the third and fourth measurements in the early years and/or based on the eighth and ninth measurements at more advanced stages.
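The ANOVA-based estimate of the repeatability coefficient and the derived minimum number of measurements can be sketched with the standard formulas (e.g., as given by Cruz and Regazzi); the paper's exact procedures, especially the principal-component variants, may differ:

```python
# A minimal sketch, assuming the classical ANOVA estimator of repeatability
# r = sigma2_g / (sigma2_g + sigma2_e) and the usual formula for the number
# of measurements needed to reach a target coefficient of determination.

def repeatability(ms_genotype, ms_error, k):
    """Estimate r from a one-way ANOVA with k measurements per genotype."""
    var_g = (ms_genotype - ms_error) / k   # genotypic variance component
    return var_g / (var_g + ms_error)

def min_measurements(r, target_r2=0.90):
    """Minimum number of measurements n so that the genotype mean reaches
    determination target_r2: n = R2 * (1 - r) / ((1 - R2) * r)."""
    return target_r2 * (1.0 - r) / ((1.0 - target_r2) * r)
```

For example, a trait with r = 0.5 would require nine measurements to reach a determination coefficient of 0.90 under this formula.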
Acoustic Full Waveform Inversion to Characterize Near-surface Chemical Explosions
NASA Astrophysics Data System (ADS)
Kim, K.; Rodgers, A. J.
2015-12-01
Recent high-quality, atmospheric overpressure data from chemical high-explosive experiments provide a unique opportunity to characterize near-surface explosions, specifically estimating yield and source time function. Typically, yield is estimated from measured signal features, such as peak pressure, impulse, duration and/or arrival time of acoustic signals. However, the application of full waveform inversion to acoustic signals for yield estimation has not been fully explored. In this study, we apply a full waveform inversion method to local overpressure data to extract accurate pressure-time histories of acoustic sources during chemical explosions. A robust and accurate inversion technique for acoustic sources is investigated using numerical Green's functions that take into account atmospheric and topographic propagation effects. The inverted pressure-time history represents the pressure fluctuation at the source region associated with the explosion and thus provides valuable information about acoustic source mechanisms and characteristics in greater detail. We compare acoustic source properties (i.e., peak overpressure, duration, and non-isotropic shape) of a series of explosions having different emplacement conditions and investigate the relationship of the acoustic sources to the yields of the explosions. The time histories of acoustic sources may refine our knowledge of sound-generation mechanisms of shallow explosions, and thereby allow for accurate yield estimation based on acoustic measurements. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Identification of saline soils with multi-year remote sensing of crop yields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lobell, D; Ortiz-Monasterio, I; Gurrola, F C
2006-10-17
Soil salinity is an important constraint to agricultural sustainability, but accurate information on its variation across agricultural regions or its impact on regional crop productivity remains sparse. We evaluated the relationships between remotely sensed wheat yields and salinity in an irrigation district in the Colorado River Delta Region. The goals of this study were to (1) document the relative importance of salinity as a constraint to regional wheat production and (2) develop techniques to accurately identify saline fields. Estimates of wheat yield from six years of Landsat data agreed well with ground-based records on individual fields (R² = 0.65). Salinity measurements on 122 randomly selected fields revealed that average 0-60 cm salinity levels > 4 dS m⁻¹ reduced wheat yields, but the relative scarcity of such fields resulted in less than 1% regional yield loss attributable to salinity. Moreover, low yield was not a reliable indicator of high salinity, because many other factors contributed to yield variability in individual years. However, temporal analysis of yield images showed a significant fraction of fields exhibited consistently low yields over the six-year period. A subsequent survey of 60 additional fields, half of which were consistently low yielding, revealed that this targeted subset had significantly higher salinity at 30-60 cm depth than the control group (p = 0.02). These results suggest that high subsurface salinity is associated with consistently low yields in this region, and that multi-year yield maps derived from remote sensing therefore provide an opportunity to map salinity across agricultural regions.
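The multi-year screening step described above, flagging fields that yield poorly in every year rather than in any single year, can be sketched as follows. The quantile threshold and data layout are assumptions for illustration, not the study's actual criteria:

```python
# Hypothetical sketch: flag fields whose remotely sensed yield falls in the
# low tail of the distribution in *all* years, as candidates for saline
# soil. The fraction 0.25 is an illustrative cutoff.

def consistently_low(yield_by_year, frac=0.25):
    """Return indices of fields below the frac-quantile in every year.

    yield_by_year: list of per-year yield lists, one value per field.
    """
    n_fields = len(yield_by_year[0])
    flagged = set(range(n_fields))
    for year in yield_by_year:
        cutoff = sorted(year)[int(frac * len(year))]
        # keep only fields that are low in this year as well
        flagged &= {i for i, y in enumerate(year) if y <= cutoff}
    return sorted(flagged)
```

Fields surviving the intersection across all years would then be targeted for ground-based salinity sampling, as in the survey of 60 additional fields.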
Cultivar evaluation and essential test locations identification for sugarcane breeding in China.
Luo, Jun; Pan, Yong-Bao; Xu, Liping; Zhang, Hua; Yuan, Zhaonian; Deng, Zuhu; Chen, Rukai; Que, Youxiong
2014-01-01
The discrepancies across test sites and years, along with the interaction between cultivar and environment, make it difficult to accurately evaluate the differences of the sugarcane cultivars. Using a genotype main effect plus genotype-environment interaction (GGE) Biplot software, the yield performance data of seven sugarcane cultivars in the 8th Chinese National Sugarcane Regional Tests were analyzed to identify cultivars recommended for commercial release. Fn38 produced a high and stable sugar yield. Gn02-70 had the lowest cane yield with high stability. Yz06-407 was a high cane yield cultivar with poor stability in sugar yield. Yz05-51 and Lc03-1137 had an unstable cane yield but relatively high sugar yield. Fn39 produced stable high sugar yield with low and unstable cane production. Significantly different sugar and cane yields were observed across seasons due to strong cultivar-environment interactions. Three areas, Guangxi Chongzuo, Guangxi Baise, and Guangxi Hechi, showed better representativeness of cane yield and sugar content than the other four areas. On the other hand, the areas Guangxi Chongzuo, Yunnan Lincang, and Yunnan Baoshan showed strong discrimination ability, while the areas Guangxi Hechi and Guangxi Liuzhou showed poor discrimination ability. This study provides a reference for cultivar evaluation and essential test locations identification for sugarcane breeding in China.
Tantau, L J; Chantler, C T; Bourke, J D; Islam, M T; Payne, A T; Rae, N A; Tran, C Q
2015-07-08
We use the x-ray extended range technique (XERT) to experimentally determine the mass attenuation coefficient of silver in the x-ray energy range 11-28 keV, including the silver K absorption edge. The results are accurate to better than 0.1%, permitting critical tests of atomic and solid state theory. This is one of the most accurate demonstrations of cross-platform accuracy in synchrotron studies thus far. We derive the mass absorption coefficients and the imaginary component of the form factor over this range. We apply conventional XAFS analytic techniques, extended to include error propagation and uncertainty, yielding bond lengths accurate to approximately 0.24% and thermal Debye-Waller parameters accurate to 30%. We then introduce the FDMX technique for accurate analysis of such data across the full XAFS spectrum, built on full-potential theory, yielding a bond length accuracy of order 0.1% and the demonstration that a single Debye parameter is inadequate and inconsistent across the XAFS range. Two effective Debye-Waller parameters are determined: a high-energy value based on the highly-correlated motion of bonded atoms (σ(DW) = 0.1413(21) Å), and an uncorrelated bulk value (σ(DW) = 0.1766(9) Å) in good agreement with that derived from (room-temperature) crystallography.
Hrabok, Marianne; Brooks, Brian L; Fay-McClymont, Taryn B; Sherman, Elisabeth M S
2014-01-01
The purpose of this article was to investigate the accuracy of the WISC-IV short forms in estimating Full Scale Intelligence Quotient (FSIQ) and General Ability Index (GAI) in pediatric epilepsy. One hundred and four children with epilepsy completed the WISC-IV as part of a neuropsychological assessment at a tertiary-level children's hospital. The clinical accuracy of eight short forms was assessed in two ways: (a) accuracy within +/- 5 index points of FSIQ and (b) the clinical classification rate according to Wechsler conventions. The sample was further subdivided into low FSIQ (≤ 80) and high FSIQ (> 80). All short forms were significantly correlated with FSIQ. Seven-subtest (Crawford et al. [2010] FSIQ) and 5-subtest (BdSiCdVcLn) short forms yielded the highest clinical accuracy rates (77%-89%). Overall, a 2-subtest (VcMr) short form yielded the lowest clinical classification rates for FSIQ (35%-63%). The short form yielding the most accurate estimate of GAI was VcSiMrBd (73%-84%). Short forms show promise as useful estimates. The 7-subtest (Crawford et al., 2010) and 5-subtest (BdSiVcLnCd) short forms yielded the most accurate estimates of FSIQ. VcSiMrBd yielded the most accurate estimate of GAI. Clinical recommendations are provided for use of short forms in pediatric epilepsy.
X-ray power and yield measurements at the refurbished Z machine
Jones, M. C.; Ampleford, D. J.; Cuneo, M. E.; ...
2014-08-04
Advancements have been made in diagnostic techniques to accurately measure the total radiated x-ray yield and power from z-pinch loads at the Z Machine. The Z accelerator is capable of delivering 2 MJ and 330 TW of x-ray yield and power, and accurately measuring these quantities is imperative. We describe work over the past several years, including the development of new diagnostics, improvements to existing diagnostics, and implementation of automated data analysis routines. A set of experiments was conducted on the Z machine in which the load and machine configuration were held constant. During this shot series, it was observed that the total z-pinch x-ray emission power determined from the two common techniques for inferring x-ray power, the Kimfol-filtered x-ray diode diagnostic and the Total Power and Energy diagnostic, gave 450 TW and 327 TW, respectively. Our analysis shows the latter to be the more accurate interpretation. More broadly, the comparison demonstrates the necessity of considering spectral response and field of view when inferring x-ray powers from z-pinch sources.
NASA Astrophysics Data System (ADS)
Carter, Elizabeth K.; Melkonian, Jeff; Riha, Susan J.; Shaw, Stephen B.
2016-09-01
Several recent studies have indicated that high air temperatures are limiting maize (Zea mays L.) yields in the US Corn Belt and project significant yield losses with expected increases in growing season temperatures. Further work has suggested that high air temperatures are indicative of high evaporative demand, and that decreases in maize yields which correlate with high temperatures and vapor pressure deficits (VPD) likely reflect underlying soil moisture limitations. It remains unclear whether direct high temperature impacts on yields, independent of moisture stress, can be observed under current temperature regimes. Given that projected high temperature and moisture may not co-vary the same way as they have historically, quantitative analyses of direct temperature impacts are critical for accurate yield projections and targeted mitigation strategies under shifting temperature regimes. To evaluate yield response to above-optimum temperatures independent of soil moisture stress, we analyzed climate impacts on irrigated maize yields obtained from the National Corn Growers Association (NCGA) corn yield contests for Nebraska, Kansas and Missouri. In irrigated maize, we found no evidence of a direct negative impact on yield by daytime air temperature, calculated canopy temperature, or VPD when analyzed seasonally. Solar radiation was the primary yield-limiting climate variable. Our analyses suggested that elevated night temperature impacted yield by increasing rates of phenological development. High temperatures during grain-fill significantly interacted with yields, but this effect was often beneficial and included evidence of acquired thermo-tolerance. Furthermore, genetics and management (information uniquely available in the NCGA contest data) explained more yield variability than climate, and significantly modified crop response to climate. Thermo-acclimation, improved genetics and changes to management practices have the potential to partially or completely offset temperature-related yield losses in irrigated maize.
Solving the shrinkage-induced PDMS alignment registration issue in multilayer soft lithography
NASA Astrophysics Data System (ADS)
Moraes, Christopher; Sun, Yu; Simmons, Craig A.
2009-06-01
Shrinkage of polydimethylsiloxane (PDMS) complicates alignment registration between layers during multilayer soft lithography fabrication. This often hinders the development of large-scale microfabricated arrayed devices. Here we report a rapid method to construct large-area, multilayered devices with stringent alignment requirements. This technique, which exploits a previously unrecognized aspect of sandwich mold fabrication, improves device yield, enables highly accurate alignment over large areas of multilayered devices and does not require strict regulation of fabrication conditions or extensive calibration processes. To demonstrate this technique, a microfabricated Braille display was developed and characterized. High device yield and accurate alignment within 15 µm were achieved over three layers for an array of 108 Braille units spread over a 6.5 cm2 area, demonstrating the fabrication of well-aligned devices with greater ease and efficiency than previously possible.
Kelly, Nicola; McGarry, J Patrick
2012-05-01
The inelastic pressure dependent compressive behaviour of bovine trabecular bone is investigated through experimental and computational analysis. Two loading configurations are implemented, uniaxial and confined compression, providing two distinct loading paths in the von Mises-pressure stress plane. Experimental results reveal distinctive yielding followed by a constant nominal stress plateau for both uniaxial and confined compression. Computational simulation of the experimental tests using the Drucker-Prager and Mohr-Coulomb plasticity models fails to capture the confined compression behaviour of trabecular bone. The high pressure developed during confined compression does not result in plastic deformation using these formulations, and a near elastic response is computed. In contrast, the crushable foam plasticity models provide accurate simulation of the confined compression tests, with distinctive yield and plateau behaviour being predicted. The elliptical yield surfaces of the crushable foam formulations in the von Mises-pressure stress plane accurately characterise the plastic behaviour of trabecular bone. Results reveal that the hydrostatic yield stress is equal to the uniaxial yield stress for trabecular bone, demonstrating the importance of accurate characterisation and simulation of the pressure dependent plasticity. It is also demonstrated in this study that a commercially available trabecular bone analogue material, cellular rigid polyurethane foam, exhibits similar pressure dependent yield behaviour, despite having a lower stiffness and strength than trabecular bone. This study provides a novel insight into the pressure dependent yield behaviour of trabecular bone, demonstrating the inadequacy of uniaxial testing alone. For the first time, crushable foam plasticity formulations are implemented for trabecular bone. The enhanced understanding of the inelastic behaviour of trabecular bone established in this study will allow for more realistic simulation of orthopaedic device implantation and failure. Copyright © 2011 Elsevier Ltd. All rights reserved.
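The elliptical yield surface in the von Mises-pressure (q-p) plane can be expressed as a simple yield function. The sketch below is a generic ellipse with placeholder parameters, not the calibrated crushable foam model from the study:

```python
# Schematic check of an elliptical yield surface in the q-p plane, of the
# kind used by crushable foam plasticity models. p_c (hydrostatic
# compression yield), p_t (hydrostatic tension limit) and q_max (shear
# half-height) are illustrative placeholders, not bone-calibrated values.
import math

def foam_yield_function(p, q, p_c, p_t, q_max):
    """Ellipse through (-p_t, 0) and (p_c, 0) with half-height q_max.

    Returns < 0 inside the surface (elastic), 0 on the surface, > 0 outside.
    """
    p0 = 0.5 * (p_c - p_t)   # centre of the ellipse on the pressure axis
    a = 0.5 * (p_c + p_t)    # half-axis length in pressure
    return (q / q_max) ** 2 + ((p - p0) / a) ** 2 - 1.0
```

Under this form, a purely hydrostatic state at p = p_c sits exactly on the yield surface, which is how a confined compression path can reach yield even at low von Mises stress, the behaviour the Drucker-Prager and Mohr-Coulomb models fail to capture.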
Rising temperatures reduce global wheat production
NASA Astrophysics Data System (ADS)
Asseng, S.; Ewert, F.; Martre, P.; Rötter, R. P.; Lobell, D. B.; Cammarano, D.; Kimball, B. A.; Ottman, M. J.; Wall, G. W.; White, J. W.; Reynolds, M. P.; Alderman, P. D.; Prasad, P. V. V.; Aggarwal, P. K.; Anothai, J.; Basso, B.; Biernath, C.; Challinor, A. J.; de Sanctis, G.; Doltra, J.; Fereres, E.; Garcia-Vila, M.; Gayler, S.; Hoogenboom, G.; Hunt, L. A.; Izaurralde, R. C.; Jabloun, M.; Jones, C. D.; Kersebaum, K. C.; Koehler, A.-K.; Müller, C.; Naresh Kumar, S.; Nendel, C.; O'Leary, G.; Olesen, J. E.; Palosuo, T.; Priesack, E.; Eyshi Rezaei, E.; Ruane, A. C.; Semenov, M. A.; Shcherbak, I.; Stöckle, C.; Stratonovitch, P.; Streck, T.; Supit, I.; Tao, F.; Thorburn, P. J.; Waha, K.; Wang, E.; Wallach, D.; Wolf, J.; Zhao, Z.; Zhu, Y.
2015-02-01
Crop models are essential tools for assessing the threat of climate change to local and global food production. Present models used to predict wheat grain yield are highly uncertain when simulating how crops respond to temperature. Here we systematically tested 30 different wheat crop models of the Agricultural Model Intercomparison and Improvement Project against field experiments in which growing season mean temperatures ranged from 15 °C to 32 °C, including experiments with artificial heating. Many models simulated yields well, but were less accurate at higher temperatures. The model ensemble median was consistently more accurate in simulating the crop temperature response than any single model, regardless of the input information used. Extrapolating the model ensemble temperature response indicates that warming is already slowing yield gains at a majority of wheat-growing locations. Global wheat production is estimated to fall by 6% for each °C of further temperature increase and become more variable over space and time.
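The headline sensitivity, roughly a 6% fall in global wheat production per degree Celsius of further warming, can be turned into a back-of-envelope projection. Both scalings below are illustrative simplifications of the ensemble result, not the authors' model:

```python
# Toy projection from the quoted sensitivity: ~6% of production lost per
# degree C of further warming. Linear and compounding variants; production
# units and warming amounts are whatever the caller supplies.

def wheat_production_linear(p0, dt_degc, loss_per_degc=0.06):
    """Linear scaling: each degree removes 6% of baseline production."""
    return p0 * (1.0 - loss_per_degc * dt_degc)

def wheat_production_compound(p0, dt_degc, loss_per_degc=0.06):
    """Compounding variant: 6% of the *remaining* production per degree."""
    return p0 * (1.0 - loss_per_degc) ** dt_degc

# e.g. a hypothetical 700 Mt baseline under 2 degrees of further warming:
linear_2c = wheat_production_linear(700.0, 2.0)      # 616.0 Mt
compound_2c = wheat_production_compound(700.0, 2.0)  # about 618.5 Mt
```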
Atomic Oxygen Erosion Yield Prediction for Spacecraft Polymers in Low Earth Orbit
NASA Technical Reports Server (NTRS)
Banks, Bruce A.; Backus, Jane A.; Manno, Michael V.; Waters, Deborah L.; Cameron, Kevin C.; deGroh, Kim K.
2009-01-01
The ability to predict the atomic oxygen erosion yield of polymers based on their chemistry and physical properties has been only partially successful because of a lack of reliable low Earth orbit (LEO) erosion yield data. Unfortunately, many of the early experiments did not utilize dehydrated mass loss measurements for erosion yield determination, and the resulting mass loss due to atomic oxygen exposure may have been compromised because samples were often not in consistent states of dehydration during the pre-flight and post-flight mass measurements. This is a particular problem for short duration mission exposures or low erosion yield materials. However, as a result of the retrieval of the Polymer Erosion and Contamination Experiment (PEACE) flown as part of the Materials International Space Station Experiment 2 (MISSE 2), the erosion yields of 38 polymers and pyrolytic graphite were accurately measured. The experiment was exposed to the LEO environment for 3.95 years from August 16, 2001 to July 30, 2005 and was successfully retrieved during a space walk on July 30, 2005 during Discovery's STS-114 Return to Flight mission. The 40 different materials tested (including Kapton H fluence witness samples) were selected specifically to represent a variety of polymers used in space as well as a wide variety of polymer chemical structures. The MISSE 2 PEACE Polymers experiment used carefully dehydrated mass measurements, as well as accurate density measurements, to obtain accurate erosion yield data for a high fluence (8.43 × 10²¹ atoms/cm²). The resulting data were used to develop an erosion yield predictive tool with a correlation coefficient of 0.895 and an uncertainty of ±6.3 × 10⁻²⁵ cm³/atom. The predictive tool utilizes the chemical structures and physical properties of polymers to predict in-space atomic oxygen erosion yields. A predictive tool concept (September 2009 version) is presented which represents an improvement over an earlier (December 2008) version.
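The dehydrated-mass-loss bookkeeping behind an erosion yield is a single ratio: mass lost divided by exposed area, density, and atomic oxygen fluence. A minimal sketch with illustrative numbers (the sample geometry and mass loss below are assumptions, not MISSE 2 data):

```python
# Erosion yield Ey (cm^3 per incident atom) from dehydrated mass loss.
# Units are cgs: grams, cm^2, g/cm^3, atoms/cm^2.
import math

def erosion_yield(mass_loss_g, area_cm2, density_g_cm3, fluence_atoms_cm2):
    """Ey = delta_m / (A * rho * F)."""
    return mass_loss_g / (area_cm2 * density_g_cm3 * fluence_atoms_cm2)

# Example with assumed values: a 2.54 cm diameter disc, Kapton-like
# density 1.42 g/cm^3, and the MISSE 2 fluence quoted in the abstract.
area = math.pi * (2.54 / 2.0) ** 2
ey = erosion_yield(0.1, area, 1.42, 8.43e21)
```

This is why consistent dehydration at both weighings matters: any absorbed-water difference goes straight into `mass_loss_g` and therefore directly into the yield.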
Measurements of Electrical and Electron Emission Properties of Highly Insulating Materials
NASA Technical Reports Server (NTRS)
Dennison, J. R.; Brunson, Jerilyn; Hoffman, Ryan; Abbott, Jonathon; Thomson, Clint; Sim, Alec
2005-01-01
Highly insulating materials often acquire significant charge when subjected to fluxes of electrons, ions, or photons. This charge can significantly modify the properties of the materials and have profound effects on their functionality in a variety of applications. These include charging of spacecraft materials due to interactions with the severe space environment, enhanced contamination due to charging in Lunar or Martian environments, high power arcing of cables and sources, modification of tethers and ion thrusters for propulsion, and scanning electron microscopy, to name but a few examples. This paper describes new techniques and measurements of the electron emission properties and resistivity of highly insulating materials. Electron yields are a measure of the number of electrons emitted from a material per incident particle (electron, ion, or photon). Electron yields depend on the incident species, energy, and angle, and on the material; they determine the net charge acquired by a material subject to a given incident flux. New pulsed-beam techniques are described that allow accurate measurement of the yields for uncharged insulators and of how the yields are modified as charge builds up in the insulator. A key parameter in modeling charge dissipation is the resistivity of insulating materials, which determines how charge will accumulate and redistribute across an insulator, as well as the time scale for charge transport and dissipation. New long-term constant-voltage and charge storage methods for measuring the resistivity of highly insulating materials are compared to more commonly used, but less accurate, methods.
Floating shock fitting via Lagrangian adaptive meshes
NASA Technical Reports Server (NTRS)
Vanrosendale, John
1994-01-01
In recent works we have formulated a new approach to compressible flow simulation, combining the advantages of shock-fitting and shock-capturing. Using a cell-centered Roe scheme discretization on unstructured meshes, we warp the mesh while marching to steady state, so that mesh edges align with shocks and other discontinuities. This new algorithm, the Shock-fitting Lagrangian Adaptive Method (SLAM) is, in effect, a reliable shock-capturing algorithm which yields shock-fitted accuracy at convergence. Shock-capturing algorithms like this, which warp the mesh to yield shock-fitted accuracy, are new and relatively untried. However, their potential is clear. In the context of sonic booms, accurate calculation of near-field sonic boom signatures is critical to the design of the High Speed Civil Transport (HSCT). SLAM should allow computation of accurate N-wave pressure signatures on comparatively coarse meshes, significantly enhancing our ability to design low-boom configurations for high-speed aircraft.
Tanger, Paul; Klassen, Stephen; Mojica, Julius P.; Lovell, John T.; Moyers, Brook T.; Baraoidan, Marietta; Naredo, Maria Elizabeth B.; McNally, Kenneth L.; Poland, Jesse; Bush, Daniel R.; Leung, Hei; Leach, Jan E.; McKay, John K.
2017-01-01
To ensure food security in the face of population growth, decreasing water and land for agriculture, and increasing climate variability, crop yields must increase faster than the current rates. Increased yields will require implementing novel approaches in genetic discovery and breeding. Here we demonstrate the potential of field-based high throughput phenotyping (HTP) on a large recombinant population of rice to identify genetic variation underlying important traits. We find that detecting quantitative trait loci (QTL) with HTP phenotyping is as accurate and effective as traditional labor-intensive measures of flowering time, height, biomass, grain yield, and harvest index. Genetic mapping in this population, derived from a cross of a modern cultivar (IR64) with a landrace (Aswina), identified four alleles with negative effects on grain yield that are fixed in IR64, demonstrating the potential for HTP of large populations as a strategy for the second green revolution. PMID:28220807
Computer-Graphics Emulation of Chemical Instrumentation: Absorption Spectrophotometers.
ERIC Educational Resources Information Center
Gilbert, D. D.; And Others
1982-01-01
Describes interactive, computer-graphics program emulating behavior of high resolution, ultraviolet-visible analog recording spectrophotometer. Graphics terminal behaves as recording absorption spectrophotometer. Objective of the emulation is study of optimization of the instrument to yield accurate absorption spectra, including…
Corrêa, A M; Pereira, M I S; de Abreu, H K A; Sharon, T; de Melo, C L P; Ito, M A; Teodoro, P E; Bhering, L L
2016-10-17
The common bean, Phaseolus vulgaris, is predominantly grown on small farms and lacks accurate genotype recommendations for specific micro-regions in Brazil. This contributes to a low national average yield. The aim of this study was to use the methods of the harmonic mean of the relative performance of genetic values (HMRPGV) and the centroid, for selecting common bean genotypes with high yield, adaptability, and stability for the Cerrado/Pantanal ecotone region in Brazil. We evaluated 11 common bean genotypes in three trials carried out in the dry season in Aquidauana in 2013, 2014, and 2015. A likelihood ratio test detected a significant genotype × year interaction, contributing 54% to the total phenotypic variation in grain yield. The three genotypes selected by the joint analysis of genotypic values in all years (Carioca Precoce, BRS Notável, and CNFC 15875) were the same as those recommended by the HMRPGV method. Using the centroid method, genotypes BRS Notável and CNFC 15875 were considered ideal genotypes based on their high stability to unfavorable environments and high responsiveness to environmental improvement. We identified a high association between the methods of adaptability and stability used in this study. However, the use of the centroid method provided a more accurate and precise recommendation of the behavior of the evaluated genotypes.
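The HMRPGV criterion described above can be sketched numerically: each genotype's value in a given trial year is divided by that year's mean genotypic value, and the harmonic mean of these relative performances is taken across years. This is a minimal illustration, not the authors' implementation; the function name and data layout are hypothetical.

```python
def hmrpgv(values_by_env):
    """Harmonic mean of the relative performance of genotypic values.

    values_by_env: dict mapping environment (e.g. trial year) ->
                   dict mapping genotype -> genotypic value.
    Returns a dict mapping genotype -> HMRPGV score.
    """
    genotypes = list(next(iter(values_by_env.values())).keys())
    relative = {g: [] for g in genotypes}
    for env_values in values_by_env.values():
        env_mean = sum(env_values.values()) / len(env_values)
        for g, v in env_values.items():
            # relative performance of genotype g in this environment
            relative[g].append(v / env_mean)
    # harmonic mean across environments penalizes unstable genotypes
    return {g: len(rs) / sum(1.0 / r for r in rs) for g, rs in relative.items()}
```

Because the harmonic mean is dominated by the smallest values, a genotype with uneven relative performance across years scores lower than one with the same average but stable performance, which is what makes the criterion a joint measure of yield and stability.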
2010-01-01
Catalytic graphitization for 14C-accelerator mass spectrometry (14C-AMS) produced various forms of elemental carbon. Our high-throughput Zn reduction method (C/Fe = 1:5, 500 °C, 3 h) produced the AMS target of graphite-coated iron powder (GCIP), a mix of nongraphitic carbon and Fe3C. Crystallinity of the AMS targets of GCIP (nongraphitic carbon) was increased to turbostratic carbon by raising the C/Fe ratio from 1:5 to 1:1 and the graphitization temperature from 500 to 585 °C. The AMS target of GCIP containing turbostratic carbon had a large isotopic fractionation and a low AMS ion current. The AMS target of GCIP containing turbostratic carbon also yielded less accurate/precise 14C-AMS measurements because of the lower graphitization yield and lower thermal conductivity that were caused by the higher C/Fe ratio of 1:1. On the other hand, the AMS target of GCIP containing nongraphitic carbon had higher graphitization yield and better thermal conductivity than the AMS target of GCIP containing turbostratic carbon due to the optimal surface area provided by the iron powder. Finally, graphitization yield and thermal conductivity were stronger determinants (over graphite crystallinity) for accurate/precise/high-throughput biological, biomedical, and environmental 14C-AMS applications such as absorption, distribution, metabolism, elimination (ADME), and physiologically based pharmacokinetics (PBPK) of nutrients, drugs, phytochemicals, and environmental chemicals. PMID:20163100
Progress Toward Accurate Measurements of Power Consumptions of DBD Plasma Actuators
NASA Technical Reports Server (NTRS)
Ashpis, David E.; Laun, Matthew C.; Griebeler, Elmer L.
2012-01-01
The accurate measurement of power consumption by Dielectric Barrier Discharge (DBD) plasma actuators is a challenge due to the characteristics of the actuator current signal. Micro-discharges generate high-amplitude, high-frequency current spike transients superimposed on a low-amplitude, low-frequency current. We have used a high-speed digital oscilloscope to measure the actuator power consumption using the Shunt Resistor method and the Monitor Capacitor method. The measurements were performed simultaneously and compared to each other in a time-accurate manner. It was found that low signal-to-noise ratios of the oscilloscopes used, in combination with the high dynamic range of the current spikes, make the Shunt Resistor method inaccurate. An innovative, nonlinear signal compression circuit was applied to the actuator current signal and yielded excellent agreement between the two methods. The paper describes the issues and challenges associated with performing accurate power measurements. It provides insights into the two methods including new insight into the Lissajous curve of the Monitor Capacitor method. Extension to a broad range of parameters and further development of the compression hardware will be performed in future work.
Absolute quantum yield measurement of powder samples.
Moreno, Luis A
2012-05-12
Measurement of fluorescence quantum yield has become an important tool in the search for new solutions in the development, evaluation, quality control and research of illumination, AV equipment, organic EL material, films, filters and fluorescent probes for bio-industry. Quantum yield is calculated as the ratio of the number of photons emitted by a material to the number of photons it absorbs. The higher the quantum yield, the better the efficiency of the fluorescent material. For the measurements featured in this video, we will use the Hitachi F-7000 fluorescence spectrophotometer equipped with the Quantum Yield measuring accessory and Report Generator program. All the information provided applies to this system. Measurement of quantum yield in powder samples is performed following these steps: 1. Generation of instrument correction factors for the excitation and emission monochromators. This is an important requirement for the correct measurement of quantum yield. It has been performed in advance for the full measurement range of the instrument and will not be shown in this video due to time limitations. 2. Measurement of integrating sphere correction factors. The purpose of this step is to take into consideration the reflectivity characteristics of the integrating sphere used for the measurements. 3. Reference and sample measurement using direct excitation and indirect excitation. 4. Quantum yield calculation using direct and indirect excitation. Direct excitation is when the sample directly faces the excitation beam, which would be the normal measurement setup. However, because we use an integrating sphere, a portion of the emitted photons resulting from the sample fluorescence are reflected by the integrating sphere and re-excite the sample, so we need to take indirect excitation into consideration.
This is accomplished by measuring the sample placed in the port facing the emission monochromator, calculating the indirect quantum yield, and correcting the direct quantum yield calculation. 5. Corrected quantum yield calculation. 6. Chromaticity coordinates calculation using the Report Generator program. The Hitachi F-7000 Quantum Yield Measurement System offers advantages for this application, as follows: High sensitivity (S/N ratio 800 or better RMS; the signal is the Raman band of water measured under the following conditions: Ex wavelength 350 nm, band pass Ex and Em 5 nm, response 2 sec; the noise is measured at the maximum of the Raman peak). High sensitivity allows measurement of samples even with low quantum yield. Using this system we have measured quantum yields as low as 0.1 for a sample of salicylic acid and as high as 0.8 for a sample of magnesium tungstate. Highly accurate measurement with a dynamic range of 6 orders of magnitude allows for measurements of both sharp scattering peaks with high intensity and broad fluorescence peaks of low intensity under the same conditions. High measuring throughput and reduced light exposure to the sample, due to a high scanning speed of up to 60,000 nm/minute and an automatic shutter function. Measurement of quantum yield over a wide wavelength range from 240 to 800 nm. Accurate quantum yield measurements are the result of collecting instrument spectral response and integrating sphere correction factors before measuring the sample. Large selection of calculated parameters provided by dedicated and easy to use software. During this video we will measure sodium salicylate in powder form, which is known to have a quantum yield value of 0.4 to 0.5.
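The core quantum-yield arithmetic described above can be sketched as follows, assuming spectrally corrected, integrated photon counts are already in hand. This is a minimal illustration, not the F-7000 software, which additionally applies instrument and integrating-sphere correction factors and the indirect-excitation correction; the function name and data layout are hypothetical.

```python
def direct_quantum_yield(ref_scatter, sample_scatter, sample_emission):
    """Direct-excitation quantum yield from integrated photon counts.

    ref_scatter:     scatter counts with a non-absorbing reference in the sphere
    sample_scatter:  scatter counts with the sample in the excitation beam
    sample_emission: fluorescence counts emitted by the sample
    """
    # photons absorbed = reference scatter minus sample scatter
    absorbed = sum(ref_scatter) - sum(sample_scatter)
    # photons emitted = integrated fluorescence of the sample
    emitted = sum(sample_emission)
    return emitted / absorbed
```

A higher ratio means a more efficient fluorescent material; sodium salicylate, the sample measured in the video, would be expected to come out around 0.4 to 0.5 on a properly corrected instrument.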
Computation of Calcium Score with Dual Energy CT: A Phantom Study
Kumar, Vidhya; Min, James K.; He, Xin; Raman, Subha V.
2016-01-01
Dual energy computed tomography (DECT) improves material and tissue characterization compared to single energy CT (SECT); we sought to validate coronary calcium quantification in advancing cardiovascular DECT. In an anthropomorphic phantom, agreement between measurements was excellent, and Bland-Altman analysis demonstrated minimal bias. Compared to the known calcium mass for each phantom, calcium mass by DECT was highly accurate. Noncontrast DECT yields accurate calcium measures, and warrants consideration in cardiac protocols for additional tissue characterizations. PMID:27680414
High-speed engine/component performance assessment using exergy and thrust-based methods
NASA Technical Reports Server (NTRS)
Riggins, D. W.
1996-01-01
This investigation summarizes a comparative study of two high-speed engine performance assessment techniques based on exergy (available work) and thrust-potential (thrust availability). Simple flow-fields utilizing Rayleigh heat addition and one-dimensional flow with friction are used to demonstrate the fundamental inability of conventional exergy techniques to predict engine component performance, aid in component design, or accurately assess flow losses. The use of the thrust-based method on these same examples demonstrates its ability to yield useful information in all these categories. Exergy and thrust are related and discussed from the standpoint of their fundamental thermodynamic and fluid dynamic definitions in order to explain the differences in information obtained using the two methods. The conventional definition of exergy is shown to include work which is inherently unavailable to an aerospace Brayton engine. An engine-based exergy is then developed which accurately accounts for this inherently unavailable work; performance parameters based on this quantity are then shown to yield design and loss information equivalent to the thrust-based method.
Tanger, Paul; Klassen, Stephen; Mojica, Julius P.; ...
2017-02-21
In order to ensure food security in the face of population growth, decreasing water and land for agriculture, and increasing climate variability, crop yields must increase faster than current rates. Increased yields will require implementing novel approaches in genetic discovery and breeding. We demonstrate the potential of field-based high throughput phenotyping (HTP) on a large recombinant population of rice to identify genetic variation underlying important traits. We find that detecting quantitative trait loci (QTL) with HTP phenotyping is as accurate and effective as traditional labor-intensive measures of flowering time, height, biomass, grain yield, and harvest index. Furthermore, genetic mapping in this population, derived from a cross of a modern cultivar (IR64) with a landrace (Aswina), identified four alleles with negative effect on grain yield that are fixed in IR64, demonstrating the potential for HTP of large populations as a strategy for the second green revolution.
Random Forests for Global and Regional Crop Yield Predictions.
Jeong, Jig Han; Resop, Jonathan P; Mueller, Nathaniel D; Fleisher, David H; Yun, Kyungdahm; Butler, Ethan E; Timlin, Dennis J; Shim, Kyo-Moon; Gerber, James S; Reddy, Vangimalla R; Kim, Soo-Hyung
2016-01-01
Accurate predictions of crop yield are critical for developing effective agricultural and food policies at the regional and global scales. We evaluated a machine-learning method, Random Forests (RF), for its ability to predict crop yield responses to climate and biophysical variables at global and regional scales in wheat, maize, and potato in comparison with multiple linear regressions (MLR) serving as a benchmark. We used crop yield data from various sources and regions for model training and testing: 1) gridded global wheat grain yield, 2) maize grain yield from US counties over thirty years, and 3) potato tuber and maize silage yield from the northeastern seaboard region. RF was found highly capable of predicting crop yields and outperformed MLR benchmarks in all performance statistics that were compared. For example, the root mean square errors (RMSE) ranged between 6 and 14% of the average observed yield with RF models in all test cases whereas these values ranged from 14% to 49% for MLR models. Our results show that RF is an effective and versatile machine-learning method for crop yield predictions at regional and global scales for its high accuracy and precision, ease of use, and utility in data analysis. RF may result in a loss of accuracy when predicting the extreme ends or responses beyond the boundaries of the training data.
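The headline comparison statistic above, RMSE expressed as a percentage of the average observed yield, is straightforward to reproduce; a minimal sketch (the function name is hypothetical, not the authors' code):

```python
def rmse_percent(observed, predicted):
    """Root mean square error of yield predictions as % of mean observed yield."""
    n = len(observed)
    # standard RMSE over paired observations and predictions
    rmse = (sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n) ** 0.5
    mean_obs = sum(observed) / n
    return 100.0 * rmse / mean_obs
```

On this scale, the Random Forest models in the study fell in the 6-14% range across test cases, while the multiple linear regression benchmarks ranged from 14% to 49%.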
NASA Technical Reports Server (NTRS)
Chatfield, David C.; Truhlar, Donald G.; Schwenke, David W.
1992-01-01
State-to-state reaction probabilities are found to be highly final-state specific at state-selected threshold energies for the reactions O + H2 → OH + H and H + H2 → H2 + H. The study includes initial rotational states with quantum numbers 0-15, and the specificity is especially dramatic for the more highly rotationally excited reactants. The analysis is based on accurate quantum mechanical reactive scattering calculations. Final-state specificity is shown in general to increase with the rotational quantum number of the reactant diatom, and the trends are confirmed for both zero and nonzero values of the total angular momentum.
Xenon Defects in Uranium Dioxide From First Principles and Interatomic Potentials
NASA Astrophysics Data System (ADS)
Thompson, Alexander
In this thesis, we examine the defect energetics and migration energies of xenon atoms in uranium dioxide (UO2) from first principles and interatomic potentials. We also parameterize new, accurate interatomic potentials for xenon and uranium dioxide. To achieve accurate energetics and provide a foundation for subsequent calculations, we address difficulties in finding consistent energetics within Hubbard U corrected density functional theory (DFT+U). We propose a method of slowly ramping the U parameter in order to guide the calculation into low energy orbital occupations. We find that this method is successful for a variety of materials. We then examine the defect energetics of several noble gas atoms in UO2 for several different defect sites. We show that the energy to incorporate large noble gas atoms into interstitial sites is so large that it is energetically favorable for a Schottky defect cluster to be created to relieve the strain. We find that, thermodynamically, xenon will rarely ever be in the interstitial site of UO2. To study larger defects associated with the migration of xenon in UO2, we turn to interatomic potentials. We benchmark several previously published potentials against DFT+U defect energetics and migration barriers. Using a combination of molecular dynamics and nudged elastic band calculations, we find a new, low energy migration pathway for xenon in UO2. We create a new potential for xenon that yields accurate defect energetics. We fit this new potential with a method we call Iterative Potential Refinement that parameterizes potentials to first principles data via a genetic algorithm. The potential finds accurate energetics for defects with relatively low amounts of strain (xenon in defect clusters). It is important to find accurate energetics for these sorts of low-strain defects because they essentially represent small xenon bubbles.
Finally, we parameterize a new UO2 potential that simultaneously yields accurate vibrational properties and defect energetics, properties that are important for UO2 because of the high-temperature, defect-rich reactor environment. Previously published potentials could yield accurate defect energetics or accurate phonons, but never both.
NASA Astrophysics Data System (ADS)
Pahlavani, M. R.; Motevalli, S. M.
2008-03-01
The muon catalyzed fusion cycle in mixtures of deuterium and tritium is of particular interest due to the observation of high fusion yields. In the D-T mixture, the most serious limitation to the efficiency of the fusion chain is the probability of muon sticking to the alpha-particle produced in the nuclear reaction. An accurate kinetic treatment has been applied to the muonic helium atoms formed by a muon sticking to the alpha-particles. In this work, accurate rates for collisions of (αμ)+ ions with hydrogen atoms have been used to calculate the muon stripping probability and the intensities of X-ray transitions by solving a set of coupled differential equations numerically. Our calculated results are in good agreement with experimental data available in the literature.
Lights, camera, action: high-throughput plant phenotyping is ready for a close-up
USDA-ARS?s Scientific Manuscript database
Modern techniques for crop improvement rely on both DNA sequencing and accurate quantification of plant traits to identify genes and germplasm of interest. With rapid advances in DNA sequencing technologies, plant phenotyping is now a bottleneck in advancing crop yields [1,2]. Furthermore, the envir...
Accuracy of quantum sensors measuring yield photon flux and photosynthetic photon flux
NASA Technical Reports Server (NTRS)
Barnes, C.; Tibbitts, T.; Sager, J.; Deitzer, G.; Bubenheim, D.; Koerner, G.; Bugbee, B.; Knott, W. M. (Principal Investigator)
1993-01-01
Photosynthesis is fundamentally driven by photon flux rather than energy flux, but not all absorbed photons yield equal amounts of photosynthesis. Thus, two measures of photosynthetically active radiation have emerged: photosynthetic photon flux (PPF), which values all photons from 400 to 700 nm equally, and yield photon flux (YPF), which weights photons in the range from 360 to 760 nm according to plant photosynthetic response. We selected seven common radiation sources and measured YPF and PPF from each source with a spectroradiometer. We then compared these measurements with measurements from three quantum sensors designed to measure YPF, and from six quantum sensors designed to measure PPF. There were few differences among sensors within a group (usually <5%), but YPF values from sensors were consistently lower (3% to 20%) than YPF values calculated from spectroradiometric measurements. Quantum sensor measurements of PPF also were consistently lower than PPF values calculated from spectroradiometric measurements, but the differences were <7% for all sources, except red-light-emitting diodes. The sensors were most accurate for broad-band sources and least accurate for narrow-band sources. According to spectroradiometric measurements, YPF sensors were significantly less accurate (>9% difference) than PPF sensors under metal halide, high-pressure sodium, and low-pressure sodium lamps. Both sensor types were inaccurate (>18% error) under red-light-emitting diodes. Because both YPF and PPF sensors are imperfect integrators, and because spectroradiometers can measure photosynthetically active radiation much more accurately, researchers should consider developing calibration factors from spectroradiometric data for some specific radiation sources to improve the accuracy of integrating sensors.
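The distinction between the two measures above reduces to the weighting applied per photon: PPF counts every photon from 400 to 700 nm equally, while YPF weights each photon by the plant photosynthetic response over 360 to 760 nm. A minimal sketch of the two integrals over discretized spectroradiometer data (function names hypothetical; the response curve is supplied by the caller, since the standard relative-quantum-yield curve is not reproduced here):

```python
def ppf(wavelengths_nm, photon_flux):
    # photosynthetic photon flux: every photon from 400 to 700 nm counts equally
    return sum(f for w, f in zip(wavelengths_nm, photon_flux) if 400 <= w <= 700)

def ypf(wavelengths_nm, photon_flux, yield_weight):
    # yield photon flux: each photon weighted by the plant photosynthetic
    # response at its wavelength (caller supplies the 360-760 nm weighting curve)
    return sum(f * yield_weight(w) for w, f in zip(wavelengths_nm, photon_flux))
```

This also shows why narrow-band sources such as red LEDs are the hardest case for integrating quantum sensors: all the flux falls where a small error in the sensor's approximation of the weighting (or band edges) has maximum effect, whereas broad-band sources average those errors out.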
Final Report on X-ray Yields from OMEGA II Targets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fournier, K B; May, M J; MacLaren, S A
2007-06-20
We present details about X-ray yields measured with Lawrence Livermore National Laboratory (LLNL) and Sandia National Laboratories (SNL) diagnostics in soft and moderately hard X-ray bands from laser-driven, doped-aerogel targets shot on 07/14/06 during the OMEGA II test series. Yields accurate to ±25% in the 5-15 keV band are measured with Livermore's HENWAY spectrometer. Yields in the sub-keV to 3.2 keV band are measured with LLNL's DANTE diagnostic; the DANTE yields are accurate to 10-15%. SNL ran a PCD-based diagnostic that also measured X-ray yields in the spectral region above 4 keV, and also down to the sub-keV range. The PCD, HENWAY, and DANTE numbers are compared. The time histories of the moderately hard (hν > 4 keV) X-ray signals are measured with LLNL's H11 PCD, and from two SNL PCDs with comparable filtration. There is general agreement between the H11 PCD and SNL PCD measured FWHM except for two of the shorter-laser-pulse shots, which is shown not to be due to analysis techniques. The recommended X-ray waveform is that from the SNL PCD p66k10, which was recorded on a fast, high-bandwidth TDS 6804 oscilloscope. X-ray waveforms from target emission in two softer spectral bands are also shown; the X-ray emissions have increasing duration as the spectral content gets softer.
Added-values of high spatiotemporal remote sensing data in crop yield estimation
NASA Astrophysics Data System (ADS)
Gao, F.; Anderson, M. C.
2017-12-01
Timely and accurate estimation of crop yield before harvest is critical for the food market and administrative planning. Remote sensing derived parameters have been used for estimating crop yield in either empirical or crop growth models. The use of remote sensing vegetation indices (VI) in crop yield modeling has typically been evaluated at regional and country scales using coarse spatial resolution data (a few hundred meters to kilometers), or assessed over a small region at field level using moderate spatial resolution data (10-100 m). Both data sources have shown great potential in capturing spatial and temporal variability in crop yield. However, the added value of data with both high spatial and high temporal resolution has not been evaluated, due to the lack of such a data source with routine, global coverage. In recent years, more moderate resolution data have become freely available, and data fusion approaches that combine data acquired at different spatial and temporal resolutions have been developed. These make monitoring crop condition and estimating crop yield at field scale possible. Here we investigate the added value of high spatial and temporal resolution VI for describing variability in crop yield. The explanatory ability of crop yield based on high spatial and temporal resolution remote sensing data was evaluated in a rain-fed agricultural area in the U.S. Corn Belt. Results show that the fused Landsat-MODIS (high spatial and temporal resolution) VI explains yield variability better than a single data source (Landsat or MODIS alone), with EVI2 performing slightly better than NDVI. The maximum VI describes yield variability better than the cumulative VI. Even though VI is effective in explaining yield variability within a season, the inter-annual variability is more complex and needs additional information (e.g., weather, water use, and management).
Our findings underscore the importance of high spatiotemporal remote sensing data and support new moderate resolution satellite missions for agricultural applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malik, Afshan N., E-mail: afshan.malik@kcl.ac.uk; Shahni, Rojeen; Rodriguez-de-Ledesma, Ana
2011-08-19
Highlights: • Mitochondrial dysfunction is central to many diseases of oxidative stress. • 95% of the mitochondrial genome is duplicated in the nuclear genome. • Dilution of untreated genomic DNA leads to dilution bias. • Unique primers and template pretreatment are needed to accurately measure mitochondrial DNA content. -- Abstract: Circulating mitochondrial DNA (MtDNA) is a potential non-invasive biomarker of cellular mitochondrial dysfunction, the latter known to be central to a wide range of human diseases. Changes in MtDNA are usually determined by quantification of MtDNA relative to nuclear DNA (Mt/N) using real time quantitative PCR. We propose that the methodology for measuring Mt/N needs to be improved and we have identified that current methods have at least one of the following three problems: (1) As much of the mitochondrial genome is duplicated in the nuclear genome, many commonly used MtDNA primers co-amplify homologous pseudogenes found in the nuclear genome; (2) use of regions from genes such as β-actin and 18S rRNA, which are repetitive and/or highly variable, for qPCR of the nuclear genome leads to errors; and (3) the size difference of mitochondrial and nuclear genomes causes a 'dilution bias' when template DNA is diluted. We describe a PCR-based method using unique regions in the human mitochondrial genome not duplicated in the nuclear genome, a unique single copy region in the nuclear genome, and template treatment to remove dilution bias, to accurately quantify MtDNA from human samples.
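A common way to express Mt/N from real-time qPCR threshold cycles is the delta-Ct formula sketched below. This is a generic illustration of the calculation, not necessarily the authors' exact implementation; it assumes 100% amplification efficiency for both amplicons and a diploid, single-copy nuclear reference locus.

```python
def mt_per_nuclear_genome(ct_mito, ct_nuclear, efficiency=2.0):
    """Mitochondrial DNA copies per diploid nuclear genome via delta-Ct.

    ct_mito:    threshold cycle of the unique mitochondrial amplicon
    ct_nuclear: threshold cycle of the single-copy nuclear amplicon
    efficiency: per-cycle amplification factor (2.0 = perfect doubling)
    """
    # each cycle of difference corresponds to one doubling of template
    delta_ct = ct_nuclear - ct_mito
    # factor 2 accounts for the two copies of the nuclear locus per diploid genome
    return 2.0 * efficiency ** delta_ct
```

The abstract's three cautions apply directly to the inputs here: pseudogene co-amplification or a repetitive nuclear target corrupts the Ct values themselves, and dilution bias makes delta-Ct depend on template concentration, which is why primer choice and template pretreatment matter more than the arithmetic.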
Zhao, Bingwei; Wang, Xin; Yang, Xiaoyi
2015-12-01
Co-pyrolysis characteristics of Isochrysis (high lipid) and Chlorella (high protein) were investigated qualitatively and quantitatively based on DTG curves, biocrude yield, and composition in individual pyrolysis and co-pyrolysis. DTG curves in co-pyrolysis were compared in detail with those in individual pyrolysis. An interaction was detected at 475-500°C in co-pyrolysis based on biocrude yields, and kinetic analysis indicates that the co-pyrolysis reaction mechanism follows three-dimensional diffusion, in contrast to the random nucleation followed by growth observed in individual pyrolysis. There is no obvious difference in the maximum biocrude yields between individual pyrolysis and co-pyrolysis, but carboxylic acids (IC21) decreased and N-heterocyclic compounds (IC12) increased in co-pyrolysis. Simulation of biocrude yield by the Components Biofuel Model and Kinetics Biofuel Model indicates that the processes of co-pyrolysis comply with those of individual pyrolysis in the solid phase by and large. Variation of percentage content in co-pyrolysis and individual pyrolysis biocrude indicated interaction in the gas phase.
Liu, Xiaojun; Ferguson, Richard B.; Zheng, Hengbiao; Cao, Qiang; Tian, Yongchao; Cao, Weixing; Zhu, Yan
2017-01-01
The successful development of an optimal canopy vegetation index dynamic model for obtaining higher yield can offer a technical approach for real-time and nondestructive diagnosis of rice (Oryza sativa L.) growth and nitrogen (N) nutrition status. In this study, multiple rice cultivars and N treatments of experimental plots were carried out to obtain: normalized difference vegetation index (NDVI), leaf area index (LAI), above-ground dry matter (DM), and grain yield (GY) data. The quantitative relationships between NDVI and these growth indices (e.g., LAI, DM and GY) were analyzed, showing positive correlations. Using the normalized modeling method, an appropriate NDVI simulation model of rice was established based on the normalized NDVI (RNDVI) and relative accumulative growing degree days (RAGDD). The NDVI dynamic model for high-yield production in rice can be expressed by a double logistic model: RNDVI = (1 + e^(-15.2829 × (RAGDD_i - 0.1944)))^(-1) - (1 + e^(-11.6517 × (RAGDD_i - 1.0267)))^(-1) (R2 = 0.8577**), which can be used to accurately predict canopy NDVI dynamic changes during the entire growth period. Considering variation among rice cultivars, we constructed two relative NDVI (RNDVI) dynamic models for Japonica and Indica rice types, with R2 reaching 0.8764** and 0.8874**, respectively. Furthermore, independent experimental data were used to validate the RNDVI dynamic models. The results showed that during the entire growth period, the accuracy (k), precision (R2), and standard deviation of RNDVI dynamic models for the Japonica and Indica cultivars were 0.9991, 1.0170; 0.9084**, 0.8030**; and 0.0232, 0.0170, respectively. These results indicated that RNDVI dynamic models could accurately reflect crop growth and predict dynamic changes in high-yield crop populations, providing a rapid approach for monitoring rice growth status. PMID:28338637
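Using the fitted coefficients quoted in the abstract, the double logistic RNDVI model can be evaluated directly: the first logistic term captures canopy green-up and the second captures senescence, so their difference rises toward 1 mid-season and falls back near maturity. A minimal sketch (the function name is hypothetical):

```python
import math

def rndvi(ragdd):
    """Relative NDVI from relative accumulative growing degree days (0..~1.2),
    using the double logistic coefficients reported in the study."""
    rise = 1.0 / (1.0 + math.exp(-15.2829 * (ragdd - 0.1944)))  # green-up term
    fall = 1.0 / (1.0 + math.exp(-11.6517 * (ragdd - 1.0267)))  # senescence term
    return rise - fall
```

Evaluating the curve near RAGDD = 0.6 gives a value close to 1 (full canopy), while values near 0 or well past 1 fall toward 0, matching the expected seasonal NDVI trajectory.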
Growing C4 perennial grass for bioenergy using a new Agro-BGC ecosystem model
NASA Astrophysics Data System (ADS)
di Vittorio, A. V.; Anderson, R. S.; Miller, N. L.; Running, S. W.
2009-12-01
Accurate, spatially gridded estimates of bioenergy crop yields require 1) biophysically accurate crop growth models and 2) careful parameterization of unavailable inputs to these models. To meet the first requirement we have added the capacity to simulate C4 perennial grass as a bioenergy crop to the Biome-BGC ecosystem model. This new model, hereafter referred to as Agro-BGC, includes enzyme driven C4 photosynthesis, individual live and dead leaf, stem, and root carbon/nitrogen pools, separate senescence and litter fall processes, fruit growth, optional annual seeding, flood irrigation, a growing degree day phenology with a killing frost option, and a disturbance handler that effectively simulates fertilization, harvest, fire, and incremental irrigation. There are four Agro-BGC vegetation parameters that are unavailable for Panicum virgatum (switchgrass), and to meet the second requirement we have optimized the model across multiple calibration sites to obtain representative values for these parameters. We have verified simulated switchgrass yields against observations at three non-calibration sites in IL. Agro-BGC simulates switchgrass growth and yield at harvest very well at a single site. Our results suggest that a multi-site optimization scheme would be adequate for producing regional-scale estimates of bioenergy crop yields on high spatial resolution grids.
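Agro-BGC's growing degree day phenology rests on the standard heat-unit accumulation: daily mean temperature above a base threshold is summed to drive phenological stages. A generic sketch of that accumulation (the base temperature here is an illustrative assumption, not the model's actual parameter, and this omits Agro-BGC's killing-frost option):

```python
def growing_degree_days(daily_mean_temps, base_temp=10.0):
    """Accumulate heat units (degree-days) above a base temperature."""
    total = 0.0
    for t in daily_mean_temps:
        # days at or below the base temperature contribute nothing
        total += max(0.0, t - base_temp)
    return total
```

In a GDD-driven phenology, thresholds on this accumulated total trigger stage transitions such as emergence, peak growth, and senescence, which is what lets the model simulate site-to-site differences in harvest timing from temperature records alone.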
Scanning electron microscope automatic defect classification of process induced defects
NASA Astrophysics Data System (ADS)
Wolfe, Scott; McGarvey, Steve
2017-03-01
With the integration of high-speed Scanning Electron Microscope (SEM) based Automated Defect Redetection (ADR) in both high-volume semiconductor manufacturing and Research and Development (R&D), the need for reliable SEM Automated Defect Classification (ADC) has grown tremendously in the past few years. In many high-volume manufacturing facilities and R&D operations, defect inspection is performed on E-Beam (EB), Bright Field (BF) or Dark Field (DF) defect inspection equipment. A comma-separated value (CSV) file is created by both the patterned and non-patterned defect inspection tools. The defect inspection result file contains a list of the anomalies detected during the inspection tool's examination of each structure, or of the entire wafer's surface for non-patterned applications. This file is imported into the Defect Review Scanning Electron Microscope (DRSEM). Following the defect inspection result file import, the DRSEM automatically moves the wafer to each defect coordinate and performs ADR. During ADR the DRSEM operates in a reference mode, capturing a SEM image at the exact position of each anomaly's coordinates and a SEM image of a reference location in the center of the wafer. A defect reference image is created by subtracting the defect image from the reference image. The exact coordinates of the defect are calculated from the computed defect position and the stage coordinates recorded when the high-magnification SEM defect image is captured. The captured SEM image is processed through DRSEM ADC binning, exported to a Yield Analysis System (YAS), or both. Process engineers, yield analysis engineers or failure analysis engineers then manually review the captured images to ensure that either the YAS defect binning or the DRSEM defect binning is accurately classifying the defects.
This paper explores the feasibility of using a Hitachi RS4000 Defect Review SEM to perform Automatic Defect Classification, with the objective that total automated classification accuracy exceed that of human defect classification binning when the defects do not require multi-process-step knowledge for accurate classification. The implementation of DRSEM ADC has the potential to shorten the response time between defect detection and defect classification. Faster defect classification will allow rapid response to yield anomalies, ultimately reducing wafer and/or die yield loss.
Automatic yield-line analysis of slabs using discontinuity layout optimization
Gilbert, Matthew; He, Linwei; Smith, Colin C.; Le, Canh V.
2014-01-01
The yield-line method of analysis is a long established and extremely effective means of estimating the maximum load sustainable by a slab or plate. However, although numerous attempts to automate the process of directly identifying the critical pattern of yield-lines have been made over the past few decades, to date none has proved capable of reliably analysing slabs of arbitrary geometry. Here, it is demonstrated that the discontinuity layout optimization (DLO) procedure can successfully be applied to such problems. The procedure involves discretization of the problem using nodes inter-connected by potential yield-line discontinuities, with the critical layout of these then identified using linear programming. The procedure is applied to various benchmark problems, demonstrating that highly accurate solutions can be obtained, and showing that DLO provides a truly systematic means of directly and reliably automatically identifying yield-line patterns. Finally, since the critical yield-line patterns for many problems are found to be quite complex in form, a means of automatically simplifying these is presented. PMID:25104905
2013-01-01
Background A major hindrance to the development of high-yielding biofuel feedstocks is the ability to rapidly assess large populations for fermentable sugar yields. Whilst recent advances have outlined methods for the rapid assessment of biomass saccharification efficiency, none take into account the total biomass, or the soluble sugar fraction of the plant. Here we present a holistic high-throughput methodology for assessing sweet Sorghum bicolor feedstocks at 10 days post-anthesis for total fermentable sugar yields including stalk biomass, soluble sugar concentrations, and cell wall saccharification efficiency. Results A mathematical method for assessing whole S. bicolor stalks using the fourth internode from the base of the plant proved to be an effective high-throughput strategy for assessing stalk biomass, soluble sugar concentrations, and cell wall composition and allowed calculation of total stalk fermentable sugars. A high-throughput method for measuring soluble sucrose, glucose, and fructose using partial least squares (PLS) modelling of juice Fourier transform infrared (FTIR) spectra was developed. The PLS prediction was shown to be highly accurate, with each sugar attaining a coefficient of determination (R2) of 0.99 with a root mean squared error of prediction (RMSEP) of 11.93, 5.52, and 3.23 mM for sucrose, glucose, and fructose, respectively, which constitutes an error of <4% in each case. The sugar PLS model correlated well with gas chromatography-mass spectrometry (GC-MS) and brix measures. Similarly, a high-throughput method for predicting enzymatic cell wall digestibility using PLS modelling of FTIR spectra obtained from S. bicolor bagasse was developed. The PLS prediction was shown to be accurate with an R2 of 0.94 and RMSEP of 0.64 μg mgDW-1 h-1.
Conclusions This methodology has been demonstrated as an efficient and effective way to screen large biofuel feedstock populations for biomass, soluble sugar concentrations, and cell wall digestibility simultaneously allowing a total fermentable yield calculation. It unifies and simplifies previous screening methodologies to produce a holistic assessment of biofuel feedstock potential. PMID:24365407
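A minimal sketch of how the three measured components (stalk biomass, soluble sugars, cell wall digestibility) combine into a total fermentable sugar figure; the partitioning and the input numbers below are illustrative, not the paper's exact method:

```python
def total_fermentable_sugar(stalk_dw_g, soluble_sugar_per_g,
                            cell_wall_fraction, saccharified_per_g_wall):
    """Illustrative total fermentable sugar per stalk (grams): juice
    sugars plus enzymatically releasable cell wall sugars. This is a
    simplification of the assay combination described above."""
    soluble = stalk_dw_g * soluble_sugar_per_g
    wall = stalk_dw_g * cell_wall_fraction * saccharified_per_g_wall
    return soluble + wall

# hypothetical stalk: 100 g dry weight, 30% soluble sugar, 40% cell wall
# of which half is released by enzymatic saccharification
total = total_fermentable_sugar(100.0, 0.30, 0.40, 0.50)  # -> 50.0 g
```

Screening then reduces to ranking genotypes by this single figure rather than by three separate assays.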
Optimising the Encapsulation of an Aqueous Bitter Melon Extract by Spray-Drying
Tan, Sing Pei; Kha, Tuyen Chan; Parks, Sophie; Stathopoulos, Costas; Roach, Paul D.
2015-01-01
Our aim was to optimise the encapsulation of an aqueous bitter melon extract by spray-drying with maltodextrin (MD) and gum Arabic (GA). The response surface methodology models accurately predicted the process yield and retentions of bioactive concentrations and activity (R2 > 0.87). The optimal formulation was predicted and validated as 35% (w/w) stock solution (MD:GA, 1:1) and a ratio of 1.5:1 g/g of the extract to the stock solution. The spray-dried powder had a high process yield (66.2% ± 9.4%) and high retention (>79.5% ± 8.4%) and the quality of the powder was high. Therefore, the bitter melon extract was well encapsulated into a powder using MD/GA and spray-drying. PMID:28231214
NASA Astrophysics Data System (ADS)
Pellereau, E.; Taïeb, J.; Chatillon, A.; Alvarez-Pol, H.; Audouin, L.; Ayyad, Y.; Bélier, G.; Benlliure, J.; Boutoux, G.; Caamaño, M.; Casarejos, E.; Cortina-Gil, D.; Ebran, A.; Farget, F.; Fernández-Domínguez, B.; Gorbinet, T.; Grente, L.; Heinz, A.; Johansson, H.; Jurado, B.; Kelić-Heil, A.; Kurz, N.; Laurent, B.; Martin, J.-F.; Nociforo, C.; Paradela, C.; Pietri, S.; Rodríguez-Sánchez, J. L.; Schmidt, K.-H.; Simon, H.; Tassan-Got, L.; Vargas, J.; Voss, B.; Weick, H.
2017-05-01
SOFIA (Studies On Fission with Aladin) is a novel experimental program, dedicated to accurate measurements of fission-fragment isotopic yields. The setup allows us to fully identify, in nuclear charge and mass, both fission fragments in coincidence for the whole fission-fragment range. It was installed at the GSI facility (Darmstadt), to benefit from the relativistic heavy-ion beams available there, and thus to use inverse kinematics. This paper reports on fission yields obtained in electromagnetically induced fission of 238U.
Sentence Recall by Children with SLI across Two Nonmainstream Dialects of English
ERIC Educational Resources Information Center
Oetting, Janna B.; McDonald, Janet L.; Seidel, Christy M.; Hegarty, Michael
2016-01-01
Purpose: The inability to accurately recall sentences has proven to be a clinical marker of specific language impairment (SLI); this task yields moderate-to-high levels of sensitivity and specificity. However, it is not yet known if these results hold for speakers of dialects whose nonmainstream grammatical productions overlap with those that are…
ERIC Educational Resources Information Center
Kubiak, Sheryl Pimlott; Nnawulezi, Nkiru; Karim, Nidal; Sullivan, Cris M.; Beeble, Marisa L.
2012-01-01
Definitions vary on what constitutes sexual and/or physical abuse, and scholars have debated on which methods might yield the most accurate response rates for capturing this sensitive information. Although some studies suggest respondents prefer methods that provide anonymity, previous studies have not utilized high-risk or stigmatized…
Juan Guerra-Hernández; Eduardo González-Ferreiro; Vicente Monleon; Sonia Faias; Margarida Tomé; Ramón Díaz-Varela
2017-01-01
High spatial resolution imagery provided by unmanned aerial vehicles (UAVs) can yield accurate and efficient estimation of tree dimensions and canopy structural variables at the local scale. We flew a low-cost, lightweight UAV over an experimental Pinus pinea L. plantation (290 trees distributed over 16 ha with different fertirrigation treatments)...
Quantum Monte Carlo for atoms and molecules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnett, R.N.
1989-11-01
The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy eigenstates for 1-4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H2, LiH, Li2, and H2O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li2, and H2O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schrödinger equation were studied extensively. These parameters include the time step, renormalization time and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90-100% of the correlation energy) have been computed with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.
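For readers unfamiliar with the method, a minimal diffusion Monte Carlo loop for a single particle in a 1D harmonic well (exact ground-state energy 0.5 in natural units) illustrates the walk/branch/population-control cycle; unlike the calculations above it uses no trial function, importance sampling, or fixed nodes:

```python
import numpy as np

rng = np.random.default_rng(0)

def potential(x):
    return 0.5 * x**2  # 1D harmonic oscillator; exact ground-state energy 0.5

def diffusion_mc(n_walkers=2000, n_steps=2000, dt=0.01):
    """Minimal diffusion Monte Carlo: diffuse walkers, branch on the
    potential, and steer the reference energy to hold the walker
    population near its target size. The time-step bias is O(dt)."""
    x = rng.normal(0.0, 1.0, n_walkers)
    e_ref = float(potential(x).mean())
    samples = []
    for step in range(n_steps):
        x = x + rng.normal(0.0, np.sqrt(dt), x.size)   # diffusion
        w = np.exp(-dt * (potential(x) - e_ref))       # branching weight
        m = (w + rng.uniform(0.0, 1.0, x.size)).astype(int)
        x = np.repeat(x, m)                            # birth/death of walkers
        e_ref += 0.1 * np.log(n_walkers / x.size)      # population control
        if step > n_steps // 2:                        # discard equilibration
            samples.append(e_ref)
    return float(np.mean(samples))

e0 = diffusion_mc()  # converges near the exact value 0.5
```

The production calculations described above additionally multiply in a trial function to reduce variance and impose fixed nodes for fermionic antisymmetry.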
Atmospheric Fluorescence Yield
NASA Technical Reports Server (NTRS)
Adams, James H., Jr.; Christl, M. J.; Fountain, W. F.; Gregory, J. C.; Martens, K.; Sokolsky, P.; Whitaker, Ann F. (Technical Monitor)
2001-01-01
Several existing and planned experiments estimate the energies of ultra-high energy cosmic rays from air showers using the atmospheric fluorescence from these showers. Accurate knowledge of the conversion from atmospheric fluorescence to energy loss by ionizing particles in the atmosphere is key to this technique. In this paper we discuss a small balloon-borne instrument to make the first in situ measurements of the atmospheric fluorescence yield as a function of altitude. The instrument can also be used in the lab to investigate the dependence of the fluorescence yield in air on temperature, pressure and the concentrations of other gases present in the atmosphere. The results can be used to explore environmental effects on, and improve the accuracy of, cosmic ray energy measurements for existing ground-based experiments and future space-based experiments.
(18)F-FDG uptake predicts diagnostic yield of transbronchial biopsy in peripheral lung cancer.
Umeda, Yukihiro; Demura, Yoshiki; Anzai, Masaki; Matsuoka, Hiroki; Araya, Tomoyuki; Nishitsuji, Masaru; Nishi, Koichi; Tsuchida, Tatsuro; Sumida, Yasuyuki; Morikawa, Miwa; Ameshima, Shingo; Ishizaki, Takeshi; Kasahara, Kazuo; Ishizuka, Tamotsu
2014-07-01
Recent advances in endobronchial ultrasonography with a guide sheath (EBUS-GS) have enabled better visualization of distal airways, while virtual bronchoscopic navigation (VBN) has been shown useful as a guide to navigate the bronchoscope. However, indications for utilizing VBN and EBUS-GS are not always clear. To clarify indications for a bronchoscopic examination using VBN and EBUS-GS, we evaluated factors that predict the diagnostic yield of a transbronchial biopsy (TBB) procedure for peripheral lung cancer (PLC) lesions. We retrospectively reviewed the charts of 194 patients with 201 PLC lesions (≤3 cm mean diameter), and analyzed the association of diagnostic yield of TBB with [(18)F]-fluoro-2-deoxy-d-glucose ((18)F-FDG) positron emission tomography and chest computed tomography (CT) findings. The diagnostic yield of TBB using VBN and EBUS-GS was 66.7%. High maximum standardized uptake value (SUVmax), positive bronchus sign, and ground-glass opacity component shown on CT were all significant predictors of diagnostic yield, while multivariate analysis showed only high (18)F-FDG uptake (SUVmax ≥2.8) and positive bronchus sign as significant predictors. Diagnostic yield was higher for PLC lesions with high (18)F-FDG uptake (SUVmax ≥2.8) and positive bronchus sign (84.6%) than for those with SUVmax <2.8 and negative bronchus sign (33.3%). High (18)F-FDG uptake was also correlated with tumor invasiveness. High (18)F-FDG uptake predicted the diagnostic yield of TBB using VBN and EBUS-GS for PLC lesions. (18)F-FDG uptake and the bronchus sign may serve as indicators for the appropriate application of bronchoscopy with these modalities for diagnosing PLC. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Henriques, Dora; Browne, Keith A; Barnett, Mark W; Parejo, Melanie; Kryger, Per; Freeman, Tom C; Muñoz, Irene; Garnery, Lionel; Highet, Fiona; Jonhston, J Spencer; McCormack, Grace P; Pinto, M Alice
2018-06-04
The natural distribution of the honeybee (Apis mellifera L.) has been changed by humans in recent decades to such an extent that the formerly widest-spread European subspecies, Apis mellifera mellifera, is threatened by extinction through introgression from highly divergent commercial strains in large tracts of its range. Conservation efforts for A. m. mellifera are underway in multiple European countries requiring reliable and cost-efficient molecular tools to identify purebred colonies. Here, we developed four ancestry-informative SNP assays for high sample throughput genotyping using the iPLEX Mass Array system. Our customized assays were tested on DNA from individual and pooled, haploid and diploid honeybee samples extracted from different tissues using a diverse range of protocols. The assays had a high genotyping success rate and yielded accurate genotypes. Performance assessed against whole-genome data showed that individual assays behaved well, although the most accurate introgression estimates were obtained for the four assays combined (117 SNPs). The best compromise between accuracy and genotyping costs was achieved when combining two assays (62 SNPs). We provide a ready-to-use cost-effective tool for accurate molecular identification and estimation of introgression levels to more effectively monitor and manage A. m. mellifera conservatories.
A fast numerical scheme for causal relativistic hydrodynamics with dissipation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takamoto, Makoto, E-mail: takamoto@tap.scphys.kyoto-u.ac.jp; Inutsuka, Shu-ichiro
2011-08-01
Highlights: (1) We have developed a new multi-dimensional numerical scheme for causal relativistic hydrodynamics with dissipation. (2) Our new scheme can calculate the evolution of dissipative relativistic hydrodynamics faster and more effectively than existing schemes. (3) Since we use a Riemann solver for the advection steps, our method can capture shocks very accurately. Abstract: In this paper, we develop a stable and fast numerical scheme for relativistic dissipative hydrodynamics based on Israel-Stewart theory. Israel-Stewart theory is a stable and causal description of dissipation in relativistic hydrodynamics, although it includes a relaxation process with the timescale for collisions of the constituent particles, which introduces stiff equations and makes practical numerical calculation difficult. In our new scheme, we use Strang's splitting method and use piecewise exact solutions for solving the extremely short timescale problem. In addition, since we split the calculations into an inviscid step and a dissipative step, a Riemann solver can be used to obtain the numerical flux for the inviscid step. The use of a Riemann solver enables us to capture shocks very accurately. Simple numerical examples are shown. The present scheme can be applied to various high-energy phenomena of astrophysics and nuclear physics.
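The operator-splitting idea can be illustrated on a toy stiff relaxation equation; this sketch uses first-order upwind advection in place of a Riemann solver and is not the paper's scheme, but it shows why the piecewise exact relaxation step removes the stiff time-step restriction:

```python
import numpy as np

def strang_step(u, u_eq, a, dx, dt, tau):
    """One Strang-split step for u_t + a*u_x = -(u - u_eq)/tau.
    The stiff relaxation source is advanced with its piecewise exact
    (exponential) solution, so dt is not limited by tau; advection
    uses first-order upwind (a > 0) with a periodic boundary."""
    decay = np.exp(-0.5 * dt / tau)
    u = u_eq + (u - u_eq) * decay                  # half-step relaxation, exact
    u = u - a * dt / dx * (u - np.roll(u, 1))      # full-step advection, upwind
    u = u_eq + (u - u_eq) * decay                  # half-step relaxation, exact
    return u

# square pulse relaxing toward u_eq on a grid where tau << dt:
nx, dx, a, tau, u_eq = 100, 0.01, 1.0, 1.0e-6, 0.5
dt = 0.4 * dx / a                                  # CFL-limited, NOT tau-limited
u = np.where(np.arange(nx) < 50, 1.0, 0.0)
for _ in range(10):
    u = strang_step(u, u_eq, a, dx, dt, tau)
```

An explicit treatment of the source would require dt on the order of tau to stay stable; the exact exponential update remains stable and accurate at the CFL-limited dt.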
Study of muon-induced neutron production using accelerator muon beam at CERN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakajima, Y.; Lin, C. J.; Ochoa-Ricoux, J. P.
2015-08-17
Cosmogenic muon-induced neutrons are one of the most problematic backgrounds for various underground experiments searching for rare events. In order to accurately understand such backgrounds, experimental data with high statistics and well-controlled systematics are essential. We performed a test experiment to measure the muon-induced neutron production yield and energy spectrum using a high-energy accelerator muon beam at CERN. We successfully observed neutrons from 160 GeV/c muon interactions on lead, and measured kinetic energy distributions for various production angles. Work towards evaluation of the absolute neutron production yield is underway. This work also demonstrates that the setup is feasible for a future large-scale experiment for a more comprehensive study of muon-induced neutron production.
NASA Technical Reports Server (NTRS)
Chambers, J. R.; Grafton, S. B.; Lutze, F. H.
1981-01-01
Dynamic stability derivatives are evaluated on the basis of rolling-flow, curved-flow and snaking tests. Attention is given to the hardware associated with curved-flow, rolling-flow and oscillatory pure-yawing wind-tunnel tests. It is found that the snaking technique, when combined with linear- and forced-oscillation methods, provides an important means of evaluating beta derivatives for current configurations at high angles of attack. Since the rolling-flow model is fixed during testing, forced oscillations may be imparted to the model, permitting the measurement of damping and cross-derivatives. These results, when coupled with basic rolling-flow or rotary-balance data, yield a highly accurate mathematical model for studies of incipient spin and spin entry.
Pholwat, Suporn; Liu, Jie; Stroup, Suzanne; Gratz, Jean; Banu, Sayera; Rahman, S M Mazidur; Ferdous, Sara Sabrina; Foongladda, Suporn; Boonlert, Duangjai; Ogarkov, Oleg; Zhdanova, Svetlana; Kibiki, Gibson; Heysell, Scott; Houpt, Eric
2015-02-24
Genotypic methods for drug susceptibility testing of Mycobacterium tuberculosis are desirable to speed the diagnosis and proper therapy of tuberculosis (TB). However, the numbers of genes and polymorphisms implicated in resistance have proliferated, challenging diagnostic design. We developed a microfluidic TaqMan array card (TAC) that utilizes both sequence-specific probes and high-resolution melt analysis (HRM), providing two layers of detection of mutations. Twenty-seven primer pairs and 40 probes were designed to interrogate 3,200 base pairs of critical regions of the inhA, katG, rpoB, embB, rpsL, rrs, eis, gyrA, gyrB, and pncA genes. The method was evaluated on 230 clinical M. tuberculosis isolates from around the world, and it yielded 96.1% accuracy (2,431/2,530) in comparison to that of Sanger sequencing and 87% accuracy in comparison to that of the slow culture-based susceptibility testing. This TAC-HRM method integrates assays for 10 genes to yield fast, comprehensive, and accurate drug susceptibility results for the 9 major antibiotics used to treat TB and could be deployed to improve treatment outcomes. Multidrug-resistant tuberculosis threatens global tuberculosis control efforts. Optimal therapy utilizes susceptibility test results to guide individualized treatment regimens; however, the susceptibility testing methods in use are technically difficult and slow. We developed an integrated TaqMan array card method with high-resolution melt analysis that interrogates 10 genes to yield a fast, comprehensive, and accurate drug susceptibility result for the 9 major antituberculosis antibiotics. Copyright © 2015 Pholwat et al.
How does spatial and temporal resolution of vegetation index impact crop yield estimation?
USDA-ARS?s Scientific Manuscript database
Timely and accurate estimation of crop yield before harvest is critical for food markets and administrative planning. Remote sensing data have been used in crop yield estimation for decades. The process-based approach uses a light use efficiency model to estimate crop yield. Vegetation index (VI) ...
A Highly Accurate Face Recognition System Using Filtering Correlation
NASA Astrophysics Data System (ADS)
Watanabe, Eriko; Ishikawa, Sayuri; Kodate, Kashiko
2007-09-01
The authors previously constructed a highly accurate fast face recognition optical correlator (FARCO) [E. Watanabe and K. Kodate: Opt. Rev. 12 (2005) 460], and subsequently developed an improved, super high-speed FARCO (S-FARCO), which is able to process several hundred thousand frames per second. The principal advantage of our new system is its wide applicability to any correlation scheme. Three different configurations were proposed, each depending on correlation speed. This paper describes and evaluates a software correlation filter. The face recognition function proved highly accurate, as a low-resolution facial image (64 × 64 pixels) was successfully implemented. An operation speed of less than 10 ms was achieved using a personal computer with a central processing unit (CPU) of 3 GHz and 2 GB memory. When we applied the software correlation filter to a high-security cellular phone face recognition system, experiments on 30 female students over a period of three months yielded low error rates: 0% false acceptance rate and 2% false rejection rate. Therefore, the filtering correlation works effectively when applied to low-resolution images such as web-based images or faces captured by a monitoring camera.
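A generic Fourier-domain correlation filter (not FARCO's proprietary filter design) can be sketched for 64 × 64 images as follows; verification then reduces to thresholding the correlation peak height:

```python
import numpy as np

def correlation_peak(reference, probe):
    """Normalized correlation peak (0..1) between two equally sized
    grayscale images, computed in the Fourier domain as in an optical
    correlator: a generic matched filter, not FARCO's filter."""
    r = reference - reference.mean()
    p = probe - probe.mean()
    corr = np.fft.ifft2(np.fft.fft2(r) * np.conj(np.fft.fft2(p))).real
    return corr.max() / np.sqrt((r**2).sum() * (p**2).sum())

# accept/reject by thresholding the peak, as in a verification system;
# random arrays stand in for 64x64 facial images
rng = np.random.default_rng(1)
enrolled = rng.random((64, 64))
same_person = enrolled + 0.05 * rng.random((64, 64))   # enrolled face + noise
impostor = rng.random((64, 64))                        # unrelated image
```

A genuine probe yields a peak near 1, an impostor a peak near 0; the acceptance threshold trades off the false acceptance and false rejection rates quoted above.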
Epipolar Rectification for CARTOSAT-1 Stereo Images Using SIFT and RANSAC
NASA Astrophysics Data System (ADS)
Akilan, A.; Sudheer Reddy, D.; Nagasubramanian, V.; Radhadevi, P. V.; Varadan, G.
2014-11-01
Cartosat-1 provides stereo images of 2.5 m spatial resolution with high geometric fidelity. The stereo camera on the spacecraft has look angles of +26 degrees and -5 degrees, respectively, which yield effective along-track stereo. Any DSM generation algorithm can use the stereo images for accurate 3D reconstruction and measurement of the ground. Dense match points and pixel-wise matching are prerequisites in DSM generation to capture discontinuities and occlusions for accurate 3D modelling applications. Epipolar image matching reduces the computational effort from two-dimensional area searches to one-dimensional searches. Thus, epipolar rectification is preferred as a pre-processing step for accurate DSM generation. In this paper we explore a method based on SIFT and RANSAC for epipolar rectification of Cartosat-1 stereo images.
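The core of a SIFT+RANSAC rectification pipeline is robustly estimating the epipolar geometry (fundamental matrix) from putative matches. A sketch using the normalized 8-point algorithm inside a RANSAC loop on synthetic correspondences (not the authors' implementation; real SIFT matches would replace the synthetic points):

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(pts):
    """Hartley normalization: zero mean, mean distance sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2.0) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0.0, -s * c[0]], [0.0, s, -s * c[1]], [0.0, 0.0, 1.0]])
    return (T @ np.column_stack([pts, np.ones(len(pts))]).T).T, T

def eight_point(x1, x2):
    """Normalized 8-point estimate of the fundamental matrix F."""
    n1, T1 = normalize(x1)
    n2, T2 = normalize(x2)
    A = np.column_stack([n2[:, [0]] * n1, n2[:, [1]] * n1, n2[:, [2]] * n1])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt        # enforce rank 2
    F = T2.T @ F @ T1                              # undo normalization
    return F / np.linalg.norm(F)

def ransac_fundamental(x1, x2, n_iter=200, tol=1e-6):
    """RANSAC around the 8-point solver; inliers satisfy the algebraic
    epipolar residual |x2^T F x1| < tol."""
    h1 = np.column_stack([x1, np.ones(len(x1))])
    h2 = np.column_stack([x2, np.ones(len(x2))])
    best = (None, np.zeros(len(x1), dtype=bool))
    for _ in range(n_iter):
        idx = rng.choice(len(x1), 8, replace=False)
        F = eight_point(x1[idx], x2[idx])
        inl = np.abs(np.sum(h2 * (h1 @ F.T), axis=1)) < tol
        if inl.sum() > best[1].sum():
            best = (F, inl)
    return best

# synthetic along-track stereo: two views of random 3D points + mismatches
theta = 0.1
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.5, 0.0, 0.0])
X = rng.uniform(-1.0, 1.0, (40, 3)) + np.array([0.0, 0.0, 5.0])
x1 = X[:, :2] / X[:, 2:]
X2 = (X - t) @ R.T
x2 = X2[:, :2] / X2[:, 2:]
x1 = np.vstack([x1, rng.uniform(-1.0, 1.0, (10, 2))])  # gross mismatches
x2 = np.vstack([x2, rng.uniform(-1.0, 1.0, (10, 2))])
F, inliers = ransac_fundamental(x1, x2)
```

Once F is known, rectifying homographies can be derived so that epipolar lines become corresponding image rows, reducing dense matching to a 1D search.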
MISSE 2 PEACE Polymers Experiment Atomic Oxygen Erosion Yield Error Analysis
NASA Technical Reports Server (NTRS)
McCarthy, Catherine E.; Banks, Bruce A.; deGroh, Kim K.
2010-01-01
Atomic oxygen erosion of polymers in low Earth orbit (LEO) poses a serious threat to spacecraft performance and durability. To address this, 40 different polymer samples and a sample of pyrolytic graphite, collectively called the PEACE (Polymer Erosion and Contamination Experiment) Polymers, were exposed to the LEO space environment on the exterior of the International Space Station (ISS) for nearly 4 years as part of the Materials International Space Station Experiment 1 & 2 (MISSE 1 & 2). The purpose of the PEACE Polymers experiment was to obtain accurate mass loss measurements in space to combine with ground measurements in order to accurately calculate the atomic oxygen erosion yields of a wide variety of polymeric materials exposed to the LEO space environment for a long period of time. Error calculations were performed in order to determine the accuracy of the mass measurements and therefore of the erosion yield values. The standard deviation, or error, of each factor was incorporated into the fractional uncertainty of the erosion yield for each of three different situations, depending on the post-flight weighing procedure. The resulting error calculations showed the erosion yield values to be very accurate, with an average error of 3.30 percent.
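The uncertainty combination described above can be sketched with the standard erosion yield definition, Ey = ΔM / (A · ρ · F) in cm³/atom, and independent fractional errors added in quadrature; the numbers below are purely illustrative, not MISSE 2 PEACE data:

```python
import math

def erosion_yield(mass_loss_g, area_cm2, density_g_cm3, fluence_atoms_cm2):
    """Atomic oxygen erosion yield Ey = dM / (A * rho * F), in cm^3/atom:
    the standard definition used for LEO-exposed polymers."""
    return mass_loss_g / (area_cm2 * density_g_cm3 * fluence_atoms_cm2)

def combined_fractional_error(*fractional_errors):
    """Independent fractional errors combined in quadrature."""
    return math.sqrt(sum(f * f for f in fractional_errors))

# illustrative sample: 10 mg lost over 5 cm^2, rho = 1.4 g/cm^3,
# fluence 8e21 atoms/cm^2; fractional errors for dM, A, rho, F
ey = erosion_yield(0.010, 5.0, 1.4, 8.0e21)
frac_err = combined_fractional_error(0.02, 0.005, 0.01, 0.025)
```

Because Ey is a pure product/quotient, its fractional uncertainty is just the quadrature sum of the fractional uncertainties of the four inputs.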
Bespamyatnov, Igor O; Rowan, William L; Granetz, Robert S
2008-10-01
Charge exchange recombination spectroscopy on Alcator C-Mod relies on the use of the diagnostic neutral beam injector as a source of neutral particles which penetrate deep into the plasma. It employs the emission resulting from the interaction of the beam atoms with fully ionized impurity ions. To interpret the emission from a given point in the plasma as the density of emitting impurity ions, the density of beam atoms must be known. Here, an analysis of beam propagation is described which yields the beam density profile throughout the beam trajectory from the neutral beam injector to the core of the plasma. The analysis includes the effects of beam formation, attenuation in the neutral gas surrounding the plasma, and attenuation in the plasma. In the course of this work, a numerical simulation and an analytical approximation for beam divergence are developed. The description is made sufficiently compact to yield accurate results in a time consistent with between-shot analysis.
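The attenuation part of such a beam propagation analysis reduces to a Beer-Lambert integral along the trajectory; a sketch with illustrative numbers (beam formation and divergence, which the analysis above also models, are omitted here):

```python
import numpy as np

def beam_survival(density_profile_m3, sigma_m2, ds_m):
    """Fraction of beam neutrals surviving to each point along the
    trajectory, N(s)/N0 = exp(-sigma * integral of n ds), with the
    column density accumulated by a simple rectangle rule."""
    column = np.cumsum(density_profile_m3) * ds_m   # running column density
    return np.exp(-sigma_m2 * column)

# one optical depth over 1 m of uniform attenuating gas (illustrative
# density and effective cross-section, not C-Mod values)
surv = beam_survival(np.full(100, 1.0e19), 1.0e-19, 0.01)
```

In practice the density profile and effective cross-sections vary along the path (neutral gas outside the plasma, then plasma), but each segment contributes the same exponential column-density factor.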
Schell, Daniel J; Dowe, Nancy; Chapeaux, Alexandre; Nelson, Robert S; Jennings, Edward W
2016-04-01
Accurate mass balance and conversion data from integrated operation are needed to fully elucidate the economics of biofuel production processes. This study explores integrated conversion of corn stover to ethanol and highlights techniques for accurate yield calculations. Acid-pretreated corn stover (PCS) produced in a pilot-scale reactor was enzymatically hydrolyzed, and the resulting sugars were fermented to ethanol by the glucose- and xylose-fermenting bacterium Zymomonas mobilis 8b. The calculations presented here account for high-solids operation and oligomeric sugars produced during pretreatment, enzymatic hydrolysis, and fermentation, which, if not accounted for, leads to overestimating ethanol yields. The calculations are illustrated for enzymatic hydrolysis and fermentation of PCS at 17.5% and 20.0% total solids, achieving 80.1% and 77.9% conversion of cellulose and xylan to ethanol and ethanol titers of 63 g/L and 69 g/L, respectively. These procedures will be employed in the future and the resulting information used for techno-economic analysis. Copyright © 2016 Elsevier Ltd. All rights reserved.
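The oligomer-aware accounting can be sketched as follows. The anhydro correction factors are the standard stoichiometric ones (glucan 162→180, ×1.111; xylan 132→150, ×1.136) and theoretical ethanol is 0.511 g per g of glucose or xylose; the input masses are hypothetical, not this study's data:

```python
def percent_theoretical_ethanol(glucose_g, xylose_g, gluco_oligomers_g,
                                xylo_oligomers_g, residual_glucan_g,
                                residual_xylan_g, ethanol_g):
    """Percent of theoretical ethanol yield, counting monomeric AND
    oligomeric sugars plus unconverted polymers in the denominator.
    Omitting oligomers/polymers shrinks the denominator and therefore
    overestimates the yield."""
    glucose_equiv = glucose_g + gluco_oligomers_g + residual_glucan_g * 1.111
    xylose_equiv = xylose_g + xylo_oligomers_g + residual_xylan_g * 1.136
    return 100.0 * ethanol_g / (0.511 * (glucose_equiv + xylose_equiv))

# hypothetical mass balance (grams per batch)
pct = percent_theoretical_ethanol(50.0, 20.0, 5.0, 5.0, 10.0, 5.0, 40.0)
```

Dropping the oligomer and residual-polymer terms from the same inputs would report a higher percentage from identical ethanol output, which is the overestimation the paper warns about.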
Melon yield prediction using small unmanned aerial vehicles
NASA Astrophysics Data System (ADS)
Zhao, Tiebiao; Wang, Zhongdao; Yang, Qi; Chen, YangQuan
2017-05-01
Thanks to the development of camera technologies and small unmanned aerial systems (sUAS), it is possible to collect aerial images of a field with more flexible revisits, higher resolution and much lower cost. Furthermore, the performance of object detection based on deep convolutional neural networks (CNNs) has improved significantly. In this study, we applied these technologies to melon production, where high-resolution aerial images were used to count melons in the field and predict the yield. The CNN-based object detection framework Faster R-CNN was applied to melon detection. Our results showed that sUAS plus CNNs were able to detect melons accurately in the late harvest season.
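Turning raw detector output into a melon count requires score thresholding plus non-maximum suppression so that overlapping boxes for the same fruit count once. A generic sketch of that counting step (not the study's Faster R-CNN pipeline itself):

```python
import numpy as np

def count_objects(boxes, scores, score_thr=0.5, iou_thr=0.5):
    """Count objects from detector output: boxes as [x1, y1, x2, y2]
    rows with confidence scores. Keep confident boxes, then greedy
    non-maximum suppression by intersection-over-union (IoU)."""
    keep = scores >= score_thr
    boxes, scores = boxes[keep], scores[keep]
    kept = []
    for i in np.argsort(-scores):           # highest confidence first
        suppressed = False
        for j in kept:
            x1 = max(boxes[i, 0], boxes[j, 0]); y1 = max(boxes[i, 1], boxes[j, 1])
            x2 = min(boxes[i, 2], boxes[j, 2]); y2 = min(boxes[i, 3], boxes[j, 3])
            inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
            a_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
            a_j = (boxes[j, 2] - boxes[j, 0]) * (boxes[j, 3] - boxes[j, 1])
            if inter / (a_i + a_j - inter) >= iou_thr:
                suppressed = True           # duplicate of an already-kept box
                break
        if not suppressed:
            kept.append(i)
    return len(kept)

boxes = np.array([[0.0, 0.0, 10.0, 10.0],
                  [1.0, 1.0, 11.0, 11.0],     # duplicate of the first melon
                  [20.0, 20.0, 30.0, 30.0],
                  [50.0, 50.0, 60.0, 60.0]])  # low-confidence detection
scores = np.array([0.90, 0.80, 0.95, 0.30])
n_melons = count_objects(boxes, scores)       # -> 2
```

Summing such per-image counts over the flight mosaic gives the field-level count used for yield prediction.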
Gillette, William K; Esposito, Dominic; Abreu Blanco, Maria; Alexander, Patrick; Bindu, Lakshman; Bittner, Cammi; Chertov, Oleg; Frank, Peter H; Grose, Carissa; Jones, Jane E; Meng, Zhaojing; Perkins, Shelley; Van, Que; Ghirlando, Rodolfo; Fivash, Matthew; Nissley, Dwight V; McCormick, Frank; Holderfield, Matthew; Stephen, Andrew G
2015-11-02
Prenylated proteins play key roles in several human diseases including cancer, atherosclerosis and Alzheimer's disease. KRAS4b, which is frequently mutated in pancreatic, colon and lung cancers, is processed by farnesylation, proteolytic cleavage and carboxymethylation at the C-terminus. Plasma membrane localization of KRAS4b requires this processing as does KRAS4b-dependent RAF kinase activation. Previous attempts to produce modified KRAS have relied on protein engineering approaches or in vitro farnesylation of bacterially expressed KRAS protein. The proteins produced by these methods do not accurately replicate the mature KRAS protein found in mammalian cells and the protein yield is typically low. We describe a protocol that yields 5-10 mg/L highly purified, farnesylated, and methylated KRAS4b from insect cells. Farnesylated and methylated KRAS4b is fully active in hydrolyzing GTP, binds RAF-RBD on lipid Nanodiscs and interacts with the known farnesyl-binding protein PDEδ.
NASA Astrophysics Data System (ADS)
Park, Dong-Kiu; Kim, Hyun-Sok; Seo, Moo-Young; Ju, Jae-Wuk; Kim, Young-Sik; Shahrjerdy, Mir; van Leest, Arno; Soco, Aileen; Miceli, Giacomo; Massier, Jennifer; McNamara, Elliott; Hinnen, Paul; Böcker, Paul; Oh, Nang-Lyeom; Jung, Sang-Hoon; Chai, Yvon; Lee, Jun-Hyung
2018-03-01
This paper demonstrates the improvement achieved using YieldStar S-1250D small-spot, high-NA, after-etch overlay in-device measurements in a DRAM HVM environment. It is demonstrated that in-device metrology (IDM) captures after-etch device fingerprints more accurately than the industry-standard CD-SEM. IDM measurements (acquiring both CD and overlay) can also be executed significantly faster, increasing the wafer sampling density that is possible within a realistic metrology budget. The improvements to both speed and accuracy open the possibility of extended modeling and correction capabilities for control. The proof-book data in this paper show a 36% improvement in device overlay after switching to control using in-device metrology in a DRAM HVM environment.
Hartveit, Espen; Veruki, Margaret Lin
2010-03-15
Accurate measurement of the junctional conductance (G(j)) between electrically coupled cells can provide important information about the functional properties of coupling. With the development of tight-seal, whole-cell recording, it became possible to use dual, single-electrode voltage-clamp recording from pairs of small cells to measure G(j). Experiments that require reduced perturbation of the intracellular environment can be performed with high-resistance pipettes or the perforated-patch technique, but an accompanying increase in series resistance (R(s)) compromises voltage-clamp control and reduces the accuracy of G(j) measurements. Here, we present a detailed analysis of methodologies available for accurate determination of steady-state G(j) and related parameters under conditions of high R(s), using continuous or discontinuous single-electrode voltage-clamp (CSEVC or DSEVC) amplifiers to quantify the parameters of different equivalent electrical circuit model cells. Both types of amplifiers can provide accurate measurements of G(j), with errors less than 5% for a wide range of R(s) and G(j) values. However, CSEVC amplifiers need to be combined with R(s)-compensation or mathematical correction for the effects of nonzero R(s) and finite membrane resistance (R(m)). R(s)-compensation is difficult for higher values of R(s) and leads to instability that can damage the recorded cells. Mathematical correction for R(s) and R(m) yields highly accurate results, but depends on accurate estimates of R(s) throughout an experiment. DSEVC amplifiers display very accurate measurements over a larger range of R(s) values than CSEVC amplifiers and have the advantage that knowledge of R(s) is unnecessary, suggesting that they are preferable for long-duration experiments and/or recordings with high R(s). Copyright (c) 2009 Elsevier B.V. All rights reserved.
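The naive estimate underlying these measurements can be stated compactly. The sketch below (a hedged illustration, not from the paper; the function name and numbers are hypothetical) computes the apparent junctional conductance from a voltage step applied to one cell of a coupled pair, ignoring exactly the series-resistance (R(s)) and membrane-resistance (R(m)) effects that the paper shows must be compensated or corrected for:

```python
# Hedged sketch: apparent junctional conductance from a dual
# single-electrode voltage-clamp step. Cell 1 is stepped by dV1 while
# cell 2 is held constant; the evoked current change in cell 2 reflects
# junctional coupling. This naive estimate ignores R(s) and R(m).

def apparent_gj(delta_v1, delta_i2):
    """Apparent junctional conductance (siemens) from a voltage step.

    delta_v1 : voltage step applied to cell 1 (V)
    delta_i2 : evoked change in current recorded in cell 2 (A)
    """
    # Junctional current into cell 2 opposes the step in cell 1,
    # hence the minus sign.
    return -delta_i2 / delta_v1

# Hypothetical example: a -10 mV step in cell 1 evokes +5 pA in cell 2
gj = apparent_gj(-10e-3, 5e-12)   # 0.5 nS
```

Under high R(s), this apparent value systematically underestimates G(j), which is why the paper's R(s)/R(m) corrections or a DSEVC amplifier are needed.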
Finding the bottom and using it
Sandoval, Ruben M.; Wang, Exing; Molitoris, Bruce A.
2014-01-01
Maximizing the 2-photon parameters used in acquiring images for quantitative intravital microscopy, especially when high sensitivity is required, remains an open area of investigation. Here we present data on correctly setting the black level of the photomultiplier-tube amplifier by adjusting the offset to allow accurate quantitation of low-intensity processes. When the black level is set too high, some low-intensity pixel values become zero and a nonlinear degradation in sensitivity occurs, rendering otherwise quantifiable low-intensity values virtually undetectable. Initial studies using a series of increasing offsets for a sequence of concentrations of fluorescent albumin in vitro revealed a loss of sensitivity at higher offsets for lower albumin concentrations. A similar decrease in sensitivity, and therefore in the ability to correctly determine the glomerular permeability coefficient of albumin, occurred in vivo at higher offsets. Finding the offset that yields accurate and linear data is essential for quantitative analysis when high sensitivity is required. PMID:25313346
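The clipping effect described above can be illustrated numerically. In this hedged sketch (hypothetical photon counts, not the paper's data), pixel values are modeled as Poisson counts proportional to fluorophore concentration; subtracting too large an offset clips low values at zero, so the measured mean intensity is no longer linear in concentration:

```python
# Hedged sketch: why an offset set too high destroys linearity at low
# intensities. True pixel values scale linearly with concentration;
# clipping at zero after over-subtraction breaks that proportionality.
import numpy as np

rng = np.random.default_rng(0)
concentrations = np.array([1.0, 2.0, 4.0, 8.0])  # arbitrary units

def measured_mean(conc, offset, n=100_000):
    # Hypothetical detector: Poisson counts with mean 10 per unit conc.
    signal = rng.poisson(conc * 10, size=n).astype(float)
    return np.clip(signal - offset, 0, None).mean()

good = [measured_mean(c, offset=0) for c in concentrations]
bad = [measured_mean(c, offset=15) for c in concentrations]
# 'good' doubles when concentration doubles; 'bad' does not at the low end.
```

With offset 0 the mean tracks concentration linearly; with offset 15 the lowest concentration is almost entirely clipped, reproducing the nonlinear loss of sensitivity the abstract describes.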
Ross, Charles W; Simonsick, William J; Bogusky, Michael J; Celikay, Recep W; Guare, James P; Newton, Randall C
2016-06-28
Ceramides are a central unit of all sphingolipids, which have been identified as sites of biological recognition on cellular membranes mediating cell growth and differentiation. Several glycosphingolipids have been isolated that display immunomodulatory and anti-tumor activities, and these molecules have generated considerable interest as potential vaccine adjuvants in humans. Accurate analyses of these and related sphingosine analogues are important for the characterization of structure, biological function, and metabolism. We report the complementary use of direct laser desorption ionization (DLDI), sheath-flow electrospray ionization (ESI) Fourier transform ion cyclotron resonance mass spectrometry (FTICR MS) and high-field nuclear magnetic resonance (NMR) analysis for the rapid, accurate identification of hexacosanoylceramide and its starting materials. DLDI does not require stringent sample preparation and yields representative ions. Sheath-flow ESI yields ions of the product and byproducts and was significantly better than monospray ESI due to improved compound solubility. Negative-ion sheath-flow ESI provided data on starting materials and products in a single acquisition, as hexacosanoic acid does not ionize efficiently when ceramides are present. NMR provided characterization of these lipid molecules, complementing the results obtained from the MS analyses, and the NMR data were able to differentiate straight-chain versus branched-chain alkyl groups, a distinction not easily obtained from mass spectrometry.
Estimating rice yield from MODIS-Landsat fusion data in Taiwan
NASA Astrophysics Data System (ADS)
Chen, C. R.; Chen, C. F.; Nguyen, S. T.
2017-12-01
Rice production monitoring with remote sensing is an important activity in Taiwan due to official initiatives. Yield estimation is challenging in Taiwan because rice fields are small and fragmented, so high-spatiotemporal-resolution satellite data providing phenological information on rice crops are required for this monitoring purpose. This research aims to develop data fusion approaches that integrate daily Moderate Resolution Imaging Spectroradiometer (MODIS) and Landsat data for rice yield estimation in Taiwan. In this study, the low-resolution MODIS LST and emissivity data are used as reference data sources to obtain high-resolution LST from Landsat data using a mixed-pixel analysis technique, and the time-series EVI data were derived from the fusion of MODIS and Landsat spectral band data using the STARFM method. The simulated LST and EVI showed close agreement with the reference data. The rice-yield model was established using EVI and LST data, based on rice crop phenology information collected from 371 ground survey sites across the country in 2014. Compared with the reference data, the results from the fusion datasets showed a close relationship between the two datasets, with a coefficient of determination (R²) of 0.75 and a root mean square error (RMSE) of 338.7 kg, more accurate than those using the coarse-resolution MODIS LST data (R² = 0.71 and RMSE = 623.82 kg). For the comparison of total production, 64 towns located in the western part of Taiwan were used. The results also confirmed that the model using the fusion datasets produced more accurate results (R² = 0.95 and RMSE = 1,243 tons) than the one using the coarse-resolution MODIS data (R² = 0.91 and RMSE = 1,749 tons). This study demonstrates the application of MODIS-Landsat fusion data for rice yield estimation at the township level in Taiwan.
The results obtained from the methods used in this study could be useful to policymakers, and the methods should be transferable to other regions of the world for rice yield estimation.
Benchmarks and Reliable DFT Results for Spin Gaps of Small Ligand Fe(II) Complexes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Suhwan; Kim, Min-Cheol; Sim, Eunji
2017-05-01
All-electron fixed-node diffusion Monte Carlo provides benchmark spin gaps for four Fe(II) octahedral complexes. Standard quantum chemical methods (semilocal DFT and CCSD(T)) fail badly for the energy difference between their high- and low-spin states. Density-corrected DFT is both significantly more accurate and reliable, and yields a consistent prediction for the Fe-Porphyrin complex.
Efficient SRAM yield optimization with mixture surrogate modeling
NASA Astrophysics Data System (ADS)
Zhongjian, Jiang; Zuochang, Ye; Yan, Wang
2016-12-01
Largely repeated cells such as SRAM cells usually require an extremely low failure rate to ensure a moderate chip yield. Although fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimations, because yield calculation typically requires a large number of SPICE simulations, and circuit SPICE simulation accounts for the largest share of the yield-calculation time. In this paper, a new method is proposed to address this issue. The key idea is to build an efficient mixture surrogate model over the design variables and process variables: SPICE simulation provides a set of sample points, and the mixture surrogate model is trained on these points with the lasso algorithm. Experimental results show that the proposed model calculates yield accurately and brings significant speedups to the calculation of the failure rate. Based on the model, we also developed an accelerated algorithm to further speed up the yield calculation. The approach is suitable for high-dimensional process variables and multi-performance applications.
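The surrogate idea can be shown in miniature. In the hedged sketch below, a hypothetical one-variable "read margin" function stands in for a SPICE simulation; a polynomial surrogate is fit to a few hundred "simulations" and then evaluated in a large Monte Carlo run to estimate the failure rate cheaply. The paper's actual model is a lasso-trained mixture of surrogates over both design and process variables, which this single-model analogue does not reproduce:

```python
# Hedged sketch of surrogate-accelerated yield estimation.
import numpy as np

rng = np.random.default_rng(1)

def spice_margin(x):
    # Hypothetical read-margin response; stands in for a SPICE run.
    return 3.0 - x**2 + 0.1 * x

# "Expensive" training samples (a few hundred simulated runs)
x_train = rng.normal(0.0, 1.0, 300)
y_train = spice_margin(x_train)
coeffs = np.polyfit(x_train, y_train, deg=2)   # cheap polynomial surrogate

# "Cheap" Monte Carlo on the surrogate: failure when margin < 0
x_mc = rng.normal(0.0, 1.0, 1_000_000)
fail_rate = np.mean(np.polyval(coeffs, x_mc) < 0.0)
```

The million surrogate evaluations replace a million SPICE runs; this is the speedup mechanism, while accuracy hinges on how well the surrogate tracks the true circuit response in the failure region.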
High-wafer-yield, high-performance vertical cavity surface-emitting lasers
NASA Astrophysics Data System (ADS)
Li, Gabriel S.; Yuen, Wupen; Lim, Sui F.; Chang-Hasnain, Constance J.
1996-04-01
Vertical cavity surface-emitting lasers (VCSELs) with a very low threshold current and voltage of 340 μA and 1.5 V are achieved. The molecular-beam-epitaxy wafers are grown with a highly accurate, low-cost, and versatile pre-growth calibration technique, and one-hundred-percent VCSEL wafer yield is obtained. The low threshold current is achieved with a native-oxide-confined structure with excellent current confinement. Single-transverse-mode operation with a stable, predetermined polarization direction up to 18 times threshold is also achieved, due to the stable index guiding provided by the structure; this is the highest value reported to date for VCSELs. We have established that p-contact annealing in these devices is crucial for low-voltage operation, contrary to the general belief, and that uniform doping in the mirrors appears not to be inferior to complicated doping engineering. With these design rules, very-low-threshold-voltage VCSELs are achieved with very simple growth and fabrication steps.
Tang, Yuting; Zhang, Yue; Rosenberg, Julian N.; ...
2016-11-08
Microalgae are a valuable source of lipid feedstocks for biodiesel and valuable omega-3 fatty acids. Nannochloropsis gaditana has emerged as a promising producer of eicosapentaenoic acid (EPA) due to its fast growth rate and high EPA content. In the present study, the fatty acid profile of Nannochloropsis gaditana was found to be naturally high in EPA and devoid of docosahexaenoic acid (DHA), thereby providing an opportunity to maximize the efficacy of EPA production. Using an optimized one-step in situ transesterification method (methanol:biomass = 90 mL/g; HCl 5% by vol.; 70 °C; 1.5 h), the maximum fatty acid methyl ester (FAME) yield of Nannochloropsis gaditana cultivated under rich conditions was quantified as 10.04% ± 0.08% by weight, with EPA yields as high as 4.02% ± 0.17% based on dry biomass. The total FAME and EPA yields were 1.58- and 1.23-fold higher, respectively, than those obtained using the conventional two-step method (solvent system: methanol and chloroform). Furthermore, this one-step in situ method provides a fast and simple way to measure FAME yields and could serve as a promising method to generate eicosapentaenoic acid methyl ester from microalgae.
Improved modified energy ratio method using a multi-window approach for accurate arrival picking
NASA Astrophysics Data System (ADS)
Lee, Minho; Byun, Joongmoo; Kim, Dowan; Choi, Jihun; Kim, Myungsun
2017-04-01
To identify accurately the location of microseismic events generated during hydraulic fracture stimulation, it is necessary to detect the first break of the P- and S-wave arrival times recorded at multiple receivers. These microseismic data often contain high-amplitude noise, which makes it difficult to identify the P- and S-wave arrival times. The short-term-average to long-term-average (STA/LTA) and modified energy ratio (MER) methods are based on the differences in the energy densities of the noise and signal, and are widely used to identify the P-wave arrival times. The MER method yields more consistent results than the STA/LTA method for data with a low signal-to-noise (S/N) ratio. However, although the MER method shows good results regardless of the delay of the signal wavelet for signals with a high S/N ratio, it may yield poor results if the signal is contaminated by high-amplitude noise and does not have the minimum delay. Here we describe an improved MER (IMER) method, whereby we apply a multiple-windowing approach to overcome the limitations of the MER method. The IMER method contains calculations of an additional MER value using a third window (in addition to the original MER window), as well as the application of a moving average filter to each MER data point to eliminate high-frequency fluctuations in the original MER distributions. The resulting distribution makes it easier to apply thresholding. The proposed IMER method was applied to synthetic and real datasets with various S/N ratios and mixed-delay wavelets. The results show that the IMER method yields a high accuracy rate of around 80% within five sample errors for the synthetic datasets. Likewise, in the case of real datasets, 94.56% of the P-wave picking results obtained by the IMER method had a deviation of less than 0.5 ms (corresponding to 2 samples) from the manual picks.
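For orientation, the standard MER picker that IMER extends can be sketched as follows (a hedged illustration on a synthetic trace; the window length is arbitrary, and the paper's third-window and moving-average refinements are not reproduced):

```python
# Hedged sketch of the modified energy ratio (MER) first-break picker:
# trailing-window energy over leading-window energy, scaled by the
# instantaneous amplitude and cubed; the pick is at the maximum.
import numpy as np

def mer_pick(trace, window):
    x2 = trace.astype(float) ** 2
    n = len(trace)
    mer = np.zeros(n)
    for i in range(window, n - window):
        pre = x2[i - window:i].sum()          # energy before sample i
        post = x2[i:i + window].sum()         # energy after sample i
        er = post / pre if pre > 0 else 0.0   # energy ratio
        mer[i] = (er * abs(trace[i])) ** 3    # modified energy ratio
    return int(np.argmax(mer))

# Synthetic trace: low-amplitude noise, signal arriving at sample 500
rng = np.random.default_rng(2)
trace = rng.normal(0.0, 0.05, 1000)
trace[500:] += np.sin(0.3 * np.arange(500))
pick = mer_pick(trace, window=50)             # lands close to sample 500
```

On high-S/N data like this the plain MER pick is sharp; the IMER refinements matter precisely when high-amplitude noise and non-minimum-delay wavelets blur this maximum.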
Shackelford, S D; Wheeler, T L; Koohmaraie, M
2003-01-01
The present experiment was conducted to evaluate the ability of the U.S. Meat Animal Research Center's beef carcass image analysis system to predict calculated yield grade, longissimus muscle area, preliminary yield grade, adjusted preliminary yield grade, and marbling score under commercial beef processing conditions. In two commercial beef-processing facilities, image analysis was conducted on 800 carcasses on the beef-grading chain immediately after the conventional USDA beef quality and yield grades were applied. Carcasses were blocked by plant and observed calculated yield grade. The carcasses were then separated, with 400 carcasses assigned to a calibration data set that was used to develop regression equations, and the remaining 400 carcasses assigned to a prediction data set used to validate the regression equations. Prediction equations, which included image analysis variables and hot carcass weight, accounted for 90, 88, 90, 88, and 76% of the variation in calculated yield grade, longissimus muscle area, preliminary yield grade, adjusted preliminary yield grade, and marbling score, respectively, in the prediction data set. In comparison, the official USDA yield grade as applied by online graders accounted for 73% of the variation in calculated yield grade. The technology described herein could be used by the beef industry to more accurately determine beef yield grades; however, this system does not provide an accurate enough prediction of marbling score to be used without USDA grader interaction for USDA quality grading.
A High-Order Direct Solver for Helmholtz Equations with Neumann Boundary Conditions
NASA Technical Reports Server (NTRS)
Sun, Xian-He; Zhuang, Yu
1997-01-01
In this study, a compact finite-difference discretization is first developed for Helmholtz equations on rectangular domains. Special treatments are then introduced for Neumann and Neumann-Dirichlet boundary conditions to achieve accuracy and separability. Finally, a Fast Fourier Transform (FFT) based technique is used to yield a fast direct solver. Analytical and experimental results show that this newly proposed solver is comparable to the conventional second-order elliptic solver when accuracy is not a primary concern, and significantly faster when a highly accurate solution is required. In addition, this fourth-order Helmholtz solver is parallel in nature and readily usable on parallel and distributed computers. The compact scheme introduced in this study is likely extensible to sixth-order accurate algorithms and to more general elliptic equations.
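The core of an FFT-based direct solve is easy to see in one dimension with periodic boundaries (a hedged sketch; the paper's compact fourth-order scheme and its Neumann/Neumann-Dirichlet treatments on rectangles are not reproduced here). In Fourier space the operator u'' + k²u becomes multiplication by (k² - w²), so the solve is one forward FFT, a pointwise division, and one inverse FFT:

```python
# Hedged sketch: spectral direct solve of u'' + k^2 u = f on [0, 2*pi)
# with periodic boundaries, using a manufactured solution.
import numpy as np

n, k = 256, 2.5                        # grid size; k non-integer, so no resonance
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u_exact = np.sin(5 * x)                # manufactured solution
f = (k**2 - 25.0) * u_exact            # f = u'' + k^2 u for u = sin(5x)

w = np.fft.fftfreq(n, d=1.0 / n)       # integer Fourier wavenumbers
# Diagonal solve in Fourier space: (k^2 - w^2) * u_hat = f_hat
u = np.fft.ifft(np.fft.fft(f) / (k**2 - w**2)).real
# u matches u_exact to machine precision
```

The direct solver's speed comes from this structure: an O(n log n) transform pair plus an O(n) diagonal division, with no iteration.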
Simple and accurate sum rules for highly relativistic systems
NASA Astrophysics Data System (ADS)
Cohen, Scott M.
2005-03-01
In this paper, I consider the Bethe and Thomas-Reiche-Kuhn sum rules, which together form the foundation of Bethe's theory of energy loss from fast charged particles to matter. For nonrelativistic target systems, the use of closure leads directly to simple expressions for these quantities. In the case of relativistic systems, on the other hand, the calculation of sum rules is fraught with difficulties. Various perturbative approaches have been used over the years to obtain relativistic corrections, but these methods fail badly when the system in question is very strongly bound. Here, I present an approach that leads to relatively simple expressions yielding accurate sums, even for highly relativistic many-electron systems. I also offer an explanation for the difference between relativistic and nonrelativistic sum rules in terms of the Zitterbewegung of the electrons.
Evaluation of Rgb-Based Vegetation Indices from Uav Imagery to Estimate Forage Yield in Grassland
NASA Astrophysics Data System (ADS)
Lussem, U.; Bolten, A.; Gnyp, M. L.; Jasper, J.; Bareth, G.
2018-04-01
Monitoring forage yield throughout the growing season is of key importance to support management decisions on grasslands/pastures. Especially on intensely managed grasslands, where nitrogen fertilizer and/or manure are applied regularly, precision agriculture applications are beneficial to support sustainable, site-specific management decisions on fertilizer treatment, grazing management and yield forecasting to mitigate potential negative impacts. To support these management decisions, timely and accurate information on plant parameters (e.g. forage yield) is needed with high spatial and temporal resolution. However, in highly heterogeneous plant communities such as grasslands, assessing in-field variability non-destructively, e.g. to determine adequate fertilizer application, remains challenging. Biomass/yield estimation in particular, as an important parameter in assessing grassland quality and quantity, is rather laborious. Forage yield (dry or fresh matter) is mostly measured manually with rising plate meters (RPM) or ultrasonic sensors (handheld or mounted on vehicles); thus, the in-field variability cannot be assessed for the entire field, or only with potential disturbances. Using unmanned aerial vehicles (UAVs) equipped with consumer-grade RGB cameras, in-field variability can be assessed by computing RGB-based vegetation indices. In this contribution we test and evaluate the robustness of RGB-based vegetation indices for estimating dry-matter forage yield on a recently established experimental grassland site in Germany. Furthermore, the RGB-based VIs are compared to indices computed from the Yara N-Sensor. The results show a good correlation of forage yield with RGB-based VIs such as the NGRDI, with an R² value of 0.62.
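The NGRDI itself is a simple per-pixel band ratio, (G - R)/(G + R). A minimal sketch (array values are hypothetical, not the study's data):

```python
# Hedged sketch: Normalized Green-Red Difference Index (NGRDI) from
# consumer-grade RGB imagery, computed per pixel.
import numpy as np

def ngrdi(red, green):
    red = red.astype(float)
    green = green.astype(float)
    denom = green + red
    out = np.zeros_like(denom)
    # Guard against division by zero on fully dark pixels
    np.divide(green - red, denom, out=out, where=denom > 0)
    return out

# Hypothetical pixels: vigorous (green-dominant) vs. senescent (red-dominant)
green = np.array([[180.0, 90.0]])
red = np.array([[90.0, 120.0]])
vi = ngrdi(red, green)  # approx. [[0.333, -0.143]]
```

Positive values indicate green-dominant (vegetated) pixels; in the study such per-pixel indices are aggregated per plot and regressed against measured dry-matter yield.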
NASA Technical Reports Server (NTRS)
Glotter, Michael J.; Ruane, Alex C.; Moyer, Elisabeth J.; Elliott, Joshua W.
2015-01-01
Projections of future food production necessarily rely on models, which must themselves be validated through historical assessments comparing modeled and observed yields. Reliable historical validation requires both accurate agricultural models and accurate climate inputs; problems with either may compromise the validation exercise. Previous studies have compared the effects of different climate inputs on agricultural projections, but either incompletely or without a ground truth of observed yields that would allow distinguishing errors due to climate inputs from those intrinsic to the crop model. This study is a systematic evaluation of the reliability of a widely used crop model for simulating U.S. maize yields when driven by multiple observational data products. The parallelized Decision Support System for Agrotechnology Transfer (pDSSAT) is driven with climate inputs from multiple sources (reanalysis, reanalysis bias-corrected with observed climate, and a control dataset) and compared with observed historical yields. The simulations show that model output is more accurate when driven by any observation-based precipitation product than when driven by non-bias-corrected reanalysis. The simulations also suggest, in contrast to previous studies, that biased precipitation distribution is significant for yields only in arid regions. Some issues persist for all choices of climate inputs: crop yields appear to be oversensitive to precipitation fluctuations but undersensitive to floods and heat waves. These results suggest that the most important issue for agricultural projections may be not climate inputs but structural limitations in the crop models themselves.
Evaluating the sensitivity of agricultural model performance to different climate inputs
Glotter, Michael J.; Moyer, Elisabeth J.; Ruane, Alex C.; Elliott, Joshua W.
2017-01-01
Projections of future food production necessarily rely on models, which must themselves be validated through historical assessments comparing modeled to observed yields. Reliable historical validation requires both accurate agricultural models and accurate climate inputs. Problems with either may compromise the validation exercise. Previous studies have compared the effects of different climate inputs on agricultural projections, but either incompletely or without a ground truth of observed yields that would allow distinguishing errors due to climate inputs from those intrinsic to the crop model. This study is a systematic evaluation of the reliability of a widely-used crop model for simulating U.S. maize yields when driven by multiple observational data products. The parallelized Decision Support System for Agrotechnology Transfer (pDSSAT) is driven with climate inputs from multiple sources – reanalysis, reanalysis bias-corrected with observed climate, and a control dataset – and compared to observed historical yields. The simulations show that model output is more accurate when driven by any observation-based precipitation product than when driven by un-bias-corrected reanalysis. The simulations also suggest, in contrast to previous studies, that biased precipitation distribution is significant for yields only in arid regions. However, some issues persist for all choices of climate inputs: crop yields appear oversensitive to precipitation fluctuations but undersensitive to floods and heat waves. These results suggest that the most important issue for agricultural projections may be not climate inputs but structural limitations in the crop models themselves. PMID:29097985
Lunar Laser Ranging Science: Gravitational Physics and Lunar Interior and Geodesy
NASA Technical Reports Server (NTRS)
Williams, James G.; Turyshev, Slava G.; Boggs, Dale H.; Ratcliff, J. Todd
2004-01-01
Laser pulses fired at retroreflectors on the Moon provide very accurate ranges. Analysis yields information on the Earth, the Moon, and the lunar orbit. The highly accurate retroreflector positions have uncertainties of less than a meter. Tides on the Moon show strong dissipation, with Q = 33 +/- 4 at one month and a weak dependence on period. Lunar rotation depends on interior properties; a fluid core is indicated, with a radius approximately 20% that of the Moon. Tests of relativistic gravity verify the equivalence principle to +/-1.4x10(exp -13), limit deviations from Einstein's general relativity, and show no rate of change of the gravitational constant, Ġ/G, with uncertainty 9x10(exp -13)/yr.
Estimation of dew yield from radiative condensers by means of an energy balance model
NASA Astrophysics Data System (ADS)
Maestre-Valero, J. F.; Ragab, R.; Martínez-Alvarez, V.; Baille, A.
2012-08-01
This paper presents an energy balance modelling approach to predict the nightly water yield and the surface temperature (Tf) of two passive radiative dew condensers (RDCs) tilted 30° from horizontal. One was fitted with a white hydrophilic polyethylene foil recommended for dew harvesting and the other with a black polyethylene foil widely used in horticulture. The model was validated in south-eastern Spain by comparing the simulation outputs with field measurements of Tf and dew yield. The results indicate that the model is robust and accurate in reproducing the behaviour of the two RDCs, especially with regard to Tf, whose estimates were very close to the observations. The results were somewhat less precise for dew yield, with a larger scatter around the 1:1 relationship. A sensitivity analysis showed that the simulated dew yield was highly sensitive to changes in relative humidity and downward longwave radiation. The proposed approach provides water managers with a useful tool for quantifying the amount of dew that could be harvested as a valuable water resource in arid, semi-arid and water-stressed regions.
Effects of low harmonics on tone identification in natural and vocoded speech.
Liu, Chang; Azimi, Behnam; Tahmina, Qudsia; Hu, Yi
2012-11-01
This study investigated the contribution of low-frequency harmonics to identifying Mandarin tones in natural and vocoded speech in quiet and noisy conditions. Results showed that low-frequency harmonics of natural speech led to highly accurate tone identification; however, for vocoded speech, low-frequency harmonics yielded lower tone identification than stimuli with full harmonics, except for tone 4. Analysis of the correlation between tone accuracy and the amplitude-F0 correlation index suggested that "more" speech contents (i.e., more harmonics) did not necessarily yield better tone recognition for vocoded speech, especially when the amplitude contour of the signals did not co-vary with the F0 contour.
Comparison of statistical models for analyzing wheat yield time series.
Michel, Lucie; Makowski, David
2013-01-01
The world's population is predicted to exceed nine billion by 2050 and there is increasing concern about the capability of agriculture to feed such a large population. Foresight studies on food security are frequently based on crop yield trends estimated from yield time series provided by national and regional statistical agencies. Various types of statistical models have been proposed for the analysis of yield time series, but the predictive performances of these models have not yet been evaluated in detail. In this study, we present eight statistical models for analyzing yield time series and compare their ability to predict wheat yield at the national and regional scales, using data provided by the Food and Agriculture Organization of the United Nations and by the French Ministry of Agriculture. The Holt-Winters and dynamic linear models performed equally well, giving the most accurate predictions of wheat yield. However, dynamic linear models have two advantages over Holt-Winters models: they can be used to reconstruct past yield trends retrospectively and to analyze uncertainty. The results obtained with dynamic linear models indicated a stagnation of wheat yields in many countries, but the estimated rate of increase of wheat yield remained above 0.06 t ha⁻¹ year⁻¹ in several countries in Europe, Asia, Africa and America, and the estimated values were highly uncertain for several major wheat producing countries. The rate of yield increase differed considerably between French regions, suggesting that efforts to identify the main causes of yield stagnation should focus on a subnational scale.
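Holt's linear-trend exponential smoothing, the non-seasonal core of the Holt-Winters family compared in the study, is short enough to sketch directly (the smoothing parameters and the toy yield series below are illustrative, not fitted to the paper's data):

```python
# Hedged sketch: Holt's linear-trend exponential smoothing for a yield
# time series. Level and trend are updated recursively; the forecast
# extrapolates the last level along the last trend.
def holt_forecast(y, alpha=0.5, beta=0.3, horizon=1):
    level, trend = y[0], y[1] - y[0]      # simple initialization
    for obs in y[1:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend

# Hypothetical wheat-yield series (t/ha), steadily rising
yields = [5.0, 5.1, 5.3, 5.4, 5.6]
next_year = holt_forecast(yields)          # a bit above 5.6
```

A dynamic linear model expresses the same level-plus-trend structure probabilistically, which is what lets it reconstruct past trends and quantify uncertainty, the two advantages the study highlights.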
Kroonblawd, Matthew P; Pietrucci, Fabio; Saitta, Antonino Marco; Goldman, Nir
2018-04-10
We demonstrate the capability of creating robust density functional tight binding (DFTB) models for chemical reactivity in prebiotic mixtures through force matching to short time scale quantum free energy estimates. Molecular dynamics using density functional theory (DFT) is a highly accurate approach to generate free energy surfaces for chemical reactions, but the extreme computational cost often limits the time scales and range of thermodynamic states that can feasibly be studied. In contrast, DFTB is a semiempirical quantum method that affords up to a thousandfold reduction in cost and can recover DFT-level accuracy. Here, we show that a force-matched DFTB model for aqueous glycine condensation reactions yields free energy surfaces that are consistent with experimental observations of reaction energetics. Convergence analysis reveals that multiple nanoseconds of combined trajectory are needed to reach a steady-fluctuating free energy estimate for glycine condensation. Predictive accuracy of force-matched DFTB is demonstrated by direct comparison to DFT, with the two approaches yielding surfaces with large regions that differ by only a few kcal mol⁻¹.
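Force matching itself reduces to a least-squares fit of model forces to reference forces. The hedged sketch below shows only the shape of that objective, with a single linear parameter and hypothetical "DFT" forces; real DFTB parameterization optimizes many parameters against thousands of trajectory frames:

```python
# Hedged sketch: force matching in miniature. Reference forces on a set
# of configurations are fit by adjusting one model parameter so that
# model forces match in the least-squares sense.
import numpy as np

rng = np.random.default_rng(3)
r = rng.uniform(0.9, 1.5, 200)            # hypothetical pair distances
f_ref = 24.0 * np.exp(-4.0 * r)           # hypothetical reference ("DFT") forces

def model_force(r, a):
    return a * np.exp(-4.0 * r)           # model with one free parameter

# Closed-form least squares for the linear parameter a
basis = np.exp(-4.0 * r)
a_fit = np.dot(basis, f_ref) / np.dot(basis, basis)   # recovers 24.0
```

In practice the objective sums squared force residuals over all atoms and frames, and nonlinear parameters require iterative minimization rather than this one-line projection.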
Trujillo-Esquivel, Elías; Franco, Bernardo; Flores-Martínez, Alberto; Ponce-Noyola, Patricia; Mora-Montes, Héctor M
2016-08-02
Analysis of gene expression is a common research tool to study networks controlling gene expression, the role of genes with unknown function, and environmentally induced responses of organisms. Most of the analytical tools used to analyze gene expression rely on accurate cDNA synthesis and quantification to obtain reproducible and quantifiable results. Thus far, most commercial kits for isolation and purification of cDNA target double-stranded molecules, which do not accurately represent the abundance of transcripts. In the present report, we provide a simple and fast method to purify single-stranded cDNA, exhibiting high purity and yield. This method is based on the treatment with RNase H and RNase A after cDNA synthesis, followed by separation in silica spin-columns and ethanol precipitation. In addition, our method avoids the use of DNase I to eliminate genomic DNA from RNA preparations, which improves cDNA yield. As a case report, our method proved to be useful in the purification of single-stranded cDNA from the pathogenic fungus Sporothrix schenckii.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kroonblawd, Matthew P.; Pietrucci, Fabio; Saitta, Antonino Marco
Here, we demonstrate the capability of creating robust density functional tight binding (DFTB) models for chemical reactivity in prebiotic mixtures through force matching to short time scale quantum free energy estimates. Molecular dynamics using density functional theory (DFT) is a highly accurate approach to generate free energy surfaces for chemical reactions, but the extreme computational cost often limits the time scales and range of thermodynamic states that can feasibly be studied. In contrast, DFTB is a semiempirical quantum method that affords up to a thousandfold reduction in cost and can recover DFT-level accuracy. Here, we show that a force-matched DFTB model for aqueous glycine condensation reactions yields free energy surfaces that are consistent with experimental observations of reaction energetics. Convergence analysis reveals that multiple nanoseconds of combined trajectory are needed to reach a steady-fluctuating free energy estimate for glycine condensation. Predictive accuracy of force-matched DFTB is demonstrated by direct comparison to DFT, with the two approaches yielding surfaces with large regions that differ by only a few kcal mol⁻¹.
Growth and yield in Eucalyptus globulus
James A. Rinehart; Richard B. Standiford
1983-01-01
A study of the major Eucalyptus globulus stands throughout California conducted by Woodbridge Metcalf in 1924 provides a complete and accurate data set for generating variable site-density yield models. Two models were developed using linear regression techniques. Model I depicts a linear relationship between age and yield best used for stands between five and fifteen...
Application of a rising plate meter to estimate forage yield on dairy farms in Pennsylvania
USDA-ARS?s Scientific Manuscript database
Accurately assessing pasture forage yield is necessary for producers who want to budget feed expenses and make informed pasture management decisions. Clipping and weighing forage from a known area is a direct method to measure pasture forage yield, however it is time consuming. The rising plate mete...
Adjusting slash pine growth and yield for silvicultural treatments
Stephen R. Logan; Barry D. Shiver
2006-01-01
With intensive silvicultural treatments such as fertilization and competition control now commonplace in today's slash pine (Pinus elliottii Engelm.) plantations, a method to adjust current growth and yield models is required to accurately account for yield increases due to these practices. Some commonly used ad-hoc methods, such as raising site...
NASA Astrophysics Data System (ADS)
Franch, B.; Vermote, E.; Roger, J. C.; Skakun, S.; Becker-Reshef, I.; Justice, C. O.
2017-12-01
Accurate and timely crop yield forecasts are critical for making informed agricultural policies and investments, as well as increasing market efficiency and stability. In Becker-Reshef et al. (2010) and Franch et al. (2015) we developed an empirical generalized model for forecasting winter wheat yield. It is based on the relationship between the Normalized Difference Vegetation Index (NDVI) at the peak of the growing season and the Growing Degree Day (GDD) information extracted from NCEP/NCAR reanalysis data. These methods were applied to MODIS CMG data in Ukraine, the US and China with errors around 10%. However, the NDVI is saturated for yield values higher than 4 MT/ha. As a consequence, the model had to be re-calibrated in each country and the validation of the national yields showed low correlation coefficients. In this study we present a new model based on the extrapolation of the pure wheat signal (100% of wheat within the pixel) from MODIS data at 1km resolution and using the Difference Vegetation Index (DVI). The model has been applied to monitor the national yield of winter wheat in the United States and Ukraine from 2001 to 2016.
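The core of such a generalized model is a regression of national yield on a peak-season vegetation index. The sketch below is a minimal illustration of that step only; the DVI values, yields, and the purely linear form are invented for illustration and are not the calibrated model of Becker-Reshef et al. or Franch et al.:

```python
import numpy as np

# Hypothetical training data: peak-season DVI and winter wheat yield (MT/ha)
# for several past seasons. All values are illustrative only.
dvi = np.array([0.18, 0.22, 0.25, 0.30, 0.34, 0.38])
yield_mt_ha = np.array([2.1, 2.9, 3.4, 4.2, 5.0, 5.6])

# Fit a linear forecast model yield = a * DVI + b by ordinary least squares.
A = np.vstack([dvi, np.ones_like(dvi)]).T
a, b = np.linalg.lstsq(A, yield_mt_ha, rcond=None)[0]

# Forecast the yield for a new season from its peak DVI.
dvi_new = 0.28
forecast = a * dvi_new + b
print(f"forecast: {forecast:.2f} MT/ha")
```

In the actual method the index is extrapolated to a pure-wheat signal before regression; the sketch skips that step and fits the index directly.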
Accurate van der Waals coefficients from density functional theory
Tao, Jianmin; Perdew, John P.; Ruzsinszky, Adrienn
2012-01-01
The van der Waals interaction is a weak, long-range correlation, arising from quantum electronic charge fluctuations. This interaction affects many properties of materials. A simple and yet accurate estimate of this effect will facilitate computer simulation of complex molecular materials and drug design. Here we develop a fast approach for accurate evaluation of dynamic multipole polarizabilities and van der Waals (vdW) coefficients of all orders from the electron density and static multipole polarizabilities of each atom or other spherical object, without empirical fitting. Our dynamic polarizabilities (dipole, quadrupole, octupole, etc.) are exact in the zero- and high-frequency limits, and exact at all frequencies for a metallic sphere of uniform density. Our theory predicts dynamic multipole polarizabilities in excellent agreement with more expensive many-body methods, and yields therefrom vdW coefficients C6, C8, C10 for atom pairs with a mean absolute relative error of only 3%. PMID:22205765
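The C6 coefficient follows from the Casimir-Polder integral over imaginary-frequency dipole polarizabilities. With a one-pole toy model the integral reduces to the London formula, which makes a quick numerical check possible; the polarizability parameters below are illustrative stand-ins (atomic units), not values from the paper:

```python
import numpy as np

# Casimir-Polder relation: C6 = (3/pi) * ∫ a1(iw) a2(iw) dw.
# With a one-pole model a(iw) = a0 / (1 + (w/w0)^2), the integral has the
# closed-form London result C6 = (3/2) a01 a02 w1 w2 / (w1 + w2).
a01, w1 = 1.38, 1.0   # assumed static polarizability and pole frequency
a02, w2 = 1.38, 1.0

def alpha(a0, w0, w):
    return a0 / (1.0 + (w / w0) ** 2)

# Trapezoid-rule quadrature on a dense imaginary-frequency grid.
w = np.linspace(0.0, 100.0, 400_001)
f = alpha(a01, w1, w) * alpha(a02, w2, w)
h = w[1] - w[0]
c6_numeric = (3.0 / np.pi) * h * (f.sum() - 0.5 * (f[0] + f[-1]))

c6_london = 1.5 * a01 * a02 * w1 * w2 / (w1 + w2)
print(c6_numeric, c6_london)
```

The two values agree closely, since for this model the integral evaluates analytically to the London expression.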
NASA Technical Reports Server (NTRS)
Loh, Ching Y.; Jorgenson, Philip C. E.
2007-01-01
A time-accurate, upwind, finite volume method for computing compressible flows on unstructured grids is presented. The method is second order accurate in space and time and yields high resolution in the presence of discontinuities. For efficiency, the Roe approximate Riemann solver with an entropy correction is employed. In the basic Euler/Navier-Stokes scheme, many concepts of high order upwind schemes are adopted: the surface flux integrals are carefully treated, a Cauchy-Kowalewski time-stepping scheme is used in the time-marching stage, and a multidimensional limiter is applied in the reconstruction stage. However even with these up-to-date improvements, the basic upwind scheme is still plagued by the so-called "pathological behaviors," e.g., the carbuncle phenomenon, the expansion shock, etc. A solution to these limitations is presented which uses a very simple dissipation model while still preserving second order accuracy. This scheme is referred to as the enhanced time-accurate upwind (ETAU) scheme in this paper. The unstructured grid capability renders flexibility for use in complex geometry; and the present ETAU Euler/Navier-Stokes scheme is capable of handling a broad spectrum of flow regimes from high supersonic to subsonic at very low Mach number, appropriate for both CFD (computational fluid dynamics) and CAA (computational aeroacoustics). Numerous examples are included to demonstrate the robustness of the methods.
Dynamic non-equilibrium wall-modeling for large eddy simulation at high Reynolds numbers
NASA Astrophysics Data System (ADS)
Kawai, Soshi; Larsson, Johan
2013-01-01
A dynamic non-equilibrium wall-model for large-eddy simulation at arbitrarily high Reynolds numbers is proposed and validated on equilibrium boundary layers and a non-equilibrium shock/boundary-layer interaction problem. The proposed method builds on the prior non-equilibrium wall-models of Balaras et al. [AIAA J. 34, 1111-1119 (1996)], 10.2514/3.13200 and Wang and Moin [Phys. Fluids 14, 2043-2051 (2002)], 10.1063/1.1476668: the failure of these wall-models to accurately predict the skin friction in equilibrium boundary layers is shown and analyzed, and an improved wall-model that solves this issue is proposed. The improvement stems directly from reasoning about how the turbulence length scale changes with wall distance in the inertial sublayer, the grid resolution, and the resolution-characteristics of numerical methods. The proposed model yields accurate resolved turbulence, both in terms of structure and statistics for both the equilibrium and non-equilibrium flows without the use of ad hoc corrections. Crucially, the model accurately predicts the skin friction, something that existing non-equilibrium wall-models fail to do robustly.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dyar, M. Darby; McCanta, Molly; Breves, Elly
2016-03-01
Pre-edge features in the K absorption edge of X-ray absorption spectra are commonly used to predict Fe3+ valence state in silicate glasses. However, this study shows that using the entire spectral region from the pre-edge into the extended X-ray absorption fine-structure region provides more accurate results when combined with multivariate analysis techniques. The least absolute shrinkage and selection operator (lasso) regression technique yields %Fe3+ values that are accurate to ±3.6% absolute when the full spectral region is employed. This method can be used across a broad range of glass compositions, is easily automated, and is demonstrated to yield accurate results from different synchrotrons. It will enable future studies involving X-ray mapping of redox gradients on standard thin sections at 1 × 1 μm pixel sizes.
Pathways between primary production and fisheries yields of large marine ecosystems.
Friedland, Kevin D; Stock, Charles; Drinkwater, Kenneth F; Link, Jason S; Leaf, Robert T; Shank, Burton V; Rose, Julie M; Pilskaln, Cynthia H; Fogarty, Michael J
2012-01-01
The shift in marine resource management from a compartmentalized approach of dealing with resources on a species basis to an approach based on management of spatially defined ecosystems requires an accurate accounting of energy flow. The flow of energy from primary production through the food web will ultimately limit upper trophic-level fishery yields. In this work, we examine the relationship between yield and several metrics including net primary production, chlorophyll concentration, particle-export ratio, and the ratio of secondary to primary production. We also evaluate the relationship between yield and two additional rate measures that describe the export of energy from the pelagic food web, particle export flux and mesozooplankton productivity. We found primary production is a poor predictor of global fishery yields for a sample of 52 large marine ecosystems. However, chlorophyll concentration, particle-export ratio, and the ratio of secondary to primary production were positively associated with yields. The latter two measures provide greater mechanistic insight into factors controlling fishery production than chlorophyll concentration alone. Particle export flux and mesozooplankton productivity were also significantly related to yield on a global basis. Collectively, our analyses suggest that factors related to the export of energy from pelagic food webs are critical to defining patterns of fishery yields. Such trophic patterns are associated with temperature and latitude and hence greater yields are associated with colder, high latitude ecosystems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, Corey; Holmes, Joshua; Nibler, Joseph W.
2013-05-16
Combined high-resolution spectroscopic, electron-diffraction, and quantum theoretical methods are particularly advantageous for small molecules of high symmetry and can yield accurate structures that reveal subtle effects of electron delocalization on molecular bonds. The smallest of the radialene compounds, trimethylenecyclopropane, [3]-radialene, has been synthesized and examined in the gas phase by these methods. The first high-resolution infrared spectra have been obtained for this molecule of D3h symmetry, leading to an accurate B0 rotational constant value of 0.1378629(8) cm⁻¹, within 0.5% of the value obtained from electronic structure calculations (density functional theory (DFT) B3LYP/cc-pVTZ). This result is employed in an analysis of electron-diffraction data to obtain the rz bond lengths (in Å): C-H = 1.072 (17), C-C = 1.437 (4), and C=C = 1.330 (4). The analysis does not lead to an accurate value of the HCH angle; however, from comparisons of theoretical and experimental angles for similar compounds, the theoretical prediction of 117.5° is believed to be reliable to within 2°. The effect of electron delocalization in [3]-radialene is to reduce the single C-C bond length by 0.07 Å compared to that in cyclopropane.
From hemodynamic towards cardiomechanic sensors in implantable devices
NASA Astrophysics Data System (ADS)
Ferek-Petric, Bozidar
2013-06-01
A sensor could significantly improve cardiac electrotherapy. It has to provide a long-term stable signal without impeding device longevity or lead reliability, and it must not require special implantation or adjustment procedures. Hemodynamic sensors based on blood flow velocity and cardiomechanic sensors based on lead-bending measurement are disclosed. These sensors have broad clinical utility. Triboelectric and high-frequency lead-bending sensors yield accurate and stable signals while functioning with every cardiac lead. Moreover, high-frequency measurement avoids the use of any special hardware mounted on the cardiac lead.
High Resolution UV Emission Spectroscopy of Molecules Excited by Electron Impact
NASA Technical Reports Server (NTRS)
James, G. K.; Ajello, J. M.; Beegle, L.; Ciocca, M.; Dziczek, D.; Kanik, I.; Noren, C.; Jonin, C.; Hansen, D.
1999-01-01
Photodissociation via discrete line absorption into predissociating Rydberg and valence states is the dominant destruction mechanism of CO and other molecules in the interstellar medium and molecular clouds. Accurate values for the rovibronic oscillator strengths of these transitions and predissociation yields of the excited states are required for input into the photochemical models that attempt to reproduce observed abundances. We report here on our latest experimental results of the electron collisional properties of CO and N2 obtained using the 3-meter high resolution single-scattering spectroscopic facility at JPL.
Toward Robust and Efficient Climate Downscaling for Wind Energy
NASA Astrophysics Data System (ADS)
Vanvyve, E.; Rife, D.; Pinto, J. O.; Monaghan, A. J.; Davis, C. A.
2011-12-01
This presentation describes a more accurate and economical (less time, money, and effort) wind resource assessment technique for the renewable energy industry that incorporates innovative statistical techniques and new global mesoscale reanalyses. The technique judiciously selects a collection of "case days" that accurately represent the full range of wind conditions observed at a given site over a 10-year period, in order to estimate the long-term energy yield. We will demonstrate that this new technique provides a very accurate and statistically reliable estimate of the 10-year record of the wind resource by intelligently choosing a sample of roughly 120 case days. This means that the expense of downscaling to quantify the wind resource at a prospective wind farm can be cut by two thirds from the current industry practice of downscaling a randomly chosen 365-day sample to represent winds over a "typical" year. This new estimate of the long-term energy yield at a prospective wind farm also has far less statistical uncertainty than the current industry standard approach. This key finding has the potential to significantly reduce market barriers to both onshore and offshore wind farm development, since insurers and financiers charge prohibitive premiums on investments that are deemed to be high risk. Lower uncertainty directly translates to lower perceived risk, and therefore far more attractive financing terms could be offered to wind farm developers who employ this new technique.
Sankey, Joel B.; McVay, Jason C.; Kreitler, Jason R.; Hawbaker, Todd J.; Vaillant, Nicole; Lowe, Scott
2015-01-01
Increased sedimentation following wildland fire can negatively impact water supply and water quality. Understanding how changing fire frequency, extent, and location will affect watersheds and the ecosystem services they supply to communities is of great societal importance in the western USA and throughout the world. In this work we assess the utility of the InVEST (Integrated Valuation of Ecosystem Services and Tradeoffs) Sediment Retention Model to accurately characterize erosion and sedimentation of burned watersheds. InVEST was developed by the Natural Capital Project at Stanford University (Tallis et al., 2014) and is a suite of GIS-based implementations of common process models, engineered for high-end computing to allow the faster simulation of larger landscapes and incorporation into decision-making. The InVEST Sediment Retention Model is based on common soil erosion models (e.g., USLE – Universal Soil Loss Equation) and determines which areas of the landscape contribute the greatest sediment loads to a hydrological network and conversely evaluate the ecosystem service of sediment retention on a watershed basis. In this study, we evaluate the accuracy and uncertainties for InVEST predictions of increased sedimentation after fire, using measured postfire sediment yields available for many watersheds throughout the western USA from an existing, published large database. We show that the model can be parameterized in a relatively simple fashion to predict post-fire sediment yield with accuracy. Our ultimate goal is to use the model to accurately predict variability in post-fire sediment yield at a watershed scale as a function of future wildfire conditions.
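The USLE-type core of such a sediment model can be illustrated with a toy calculation: per-pixel soil loss A = R·K·LS·C·P, summed over a watershed with a sediment delivery ratio, where a post-fire run simply raises the cover-management factor C. All numbers below are assumptions for illustration, not parameters from the study:

```python
import numpy as np

# USLE per-pixel annual soil loss: A = R * K * LS * C * P
# (R: rainfall erosivity, K: soil erodibility, LS: slope length/steepness,
#  C: cover management, P: support practice). Values are illustrative.
R, K, P = 3000.0, 0.03, 1.0          # assumed units consistent with A in t/ha/yr
LS = np.array([0.5, 1.2, 2.0, 3.5])  # four hypothetical pixels
C_prefire  = np.array([0.003, 0.003, 0.01, 0.01])   # vegetated cover
C_postfire = np.array([0.10, 0.10, 0.20, 0.20])     # burned cover (assumed)

A_pre  = R * K * LS * C_prefire
A_post = R * K * LS * C_postfire

# Watershed sediment yield: pixel losses scaled by an assumed sediment
# delivery ratio (SDR), in the spirit of retention-style models.
sdr = 0.3
yield_pre  = sdr * A_pre.sum()
yield_post = sdr * A_post.sum()
print(yield_pre, yield_post, yield_post / yield_pre)
```

Even this crude sketch reproduces the qualitative behavior the study tests against measured data: burning sparsely vegetated slopes multiplies the predicted watershed yield by an order of magnitude or more.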
Stability of compressible Taylor-Couette flow
NASA Technical Reports Server (NTRS)
Kao, Kai-Hsiung; Chow, Chuen-Yen
1991-01-01
Compressible stability equations are solved using the spectral collocation method in an attempt to study the effects of temperature difference and compressibility on the stability of Taylor-Couette flow. It is found that the Chebyshev collocation spectral method yields highly accurate results using fewer grid points for solving stability problems. Comparisons are made between the result obtained by assuming small Mach number with a uniform temperature distribution and that based on fully incompressible analysis.
Systematic characterization of maturation time of fluorescent proteins in living cells
Balleza, Enrique; Kim, J. Mark; Cluzel, Philippe
2017-01-01
Slow maturation time of fluorescent proteins limits accurate measurement of rapid gene expression dynamics and effectively reduces fluorescence signal in growing cells. We used high-precision time-lapse microscopy to characterize, at two different temperatures in E. coli, the maturation kinetics of 50 FPs that span the visible spectrum. We identified fast-maturing FPs that yield the highest signal-to-noise ratio and temporal resolution in individual growing cells. PMID:29320486
Simulated yields for managed northern hardwood stands
Dale S. Solomon; William B. Leak
1986-01-01
Board-foot and cubic-foot yields developed with the forest growth model SIMTIM are presented for northern hardwood stands grown with and without management. SIMTIM has been modified to include more accurate growth rates by species, a new stocking chart, and yields that reflect species values and quality classes. Treatments range from no thinning to intensive quality...
An evaluation of the accuracy and speed of metagenome analysis tools
Lindgreen, Stinus; Adair, Karen L.; Gardner, Paul P.
2016-01-01
Metagenome studies are becoming increasingly widespread, yielding important insights into microbial communities covering diverse environments from terrestrial and aquatic ecosystems to human skin and gut. With the advent of high-throughput sequencing platforms, the use of large scale shotgun sequencing approaches is now commonplace. However, a thorough independent benchmark comparing state-of-the-art metagenome analysis tools is lacking. Here, we present a benchmark where the most widely used tools are tested on complex, realistic data sets. Our results clearly show that the most widely used tools are not necessarily the most accurate, that the most accurate tool is not necessarily the most time consuming, and that there is a high degree of variability between available tools. These findings are important as the conclusions of any metagenomics study are affected by errors in the predicted community composition and functional capacity. Data sets and results are freely available from http://www.ucbioinformatics.org/metabenchmark.html PMID:26778510
Optimal Design of Experiments by Combining Coarse and Fine Measurements
NASA Astrophysics Data System (ADS)
Lee, Alpha A.; Brenner, Michael P.; Colwell, Lucy J.
2017-11-01
In many contexts, it is extremely costly to perform enough high-quality experimental measurements to accurately parametrize a predictive quantitative model. However, it is often much easier to carry out large numbers of experiments that indicate whether each sample is above or below a given threshold. Can many such categorical or "coarse" measurements be combined with a much smaller number of high-resolution or "fine" measurements to yield accurate models? Here, we demonstrate an intuitive strategy, inspired by statistical physics, wherein the coarse measurements are used to identify the salient features of the data, while the fine measurements determine the relative importance of these features. A linear model is inferred from the fine measurements, augmented by a quadratic term that captures the correlation structure of the coarse data. We illustrate our strategy by considering the problems of predicting the antimalarial potency and aqueous solubility of small organic molecules from their 2D molecular structure.
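A minimal sketch of the coarse-plus-fine idea follows, under the simplifying assumption of a linear ground truth, and using plain correlation ranking of features against the binary labels in place of the paper's quadratic correlation-structure term:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ground truth: only 2 of 5 features matter (weights assumed).
w_true = np.array([2.0, -1.0, 0.0, 0.0, 0.0])

# Many cheap "coarse" samples, kept only as above/below-threshold labels.
X_coarse = rng.normal(size=(2000, 5))
y_coarse = X_coarse @ w_true + rng.normal(scale=0.5, size=2000)
labels = (y_coarse > 0).astype(float)

# Coarse step: rank features by |correlation| with the binary labels.
salience = np.abs(np.corrcoef(X_coarse.T, labels)[-1, :-1])
keep = np.argsort(salience)[-2:]          # indices of the 2 most salient features

# Fine step: a handful of high-resolution measurements pin down the weights.
X_fine = rng.normal(size=(20, 5))
y_fine = X_fine @ w_true + rng.normal(scale=0.1, size=20)
w_fit, *_ = np.linalg.lstsq(X_fine[:, keep], y_fine, rcond=None)
w_hat = dict(zip(keep.tolist(), w_fit))
print(w_hat)
```

With 2000 coarse labels the two informative features are reliably identified, and only 20 fine measurements are then needed to recover their weights accurately.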
Lunar mineral feedstocks from rocks and soils: X-ray digital imaging in resource evaluation
NASA Technical Reports Server (NTRS)
Chambers, John G.; Patchen, Allan; Taylor, Lawrence A.; Higgins, Stefan J.; Mckay, David S.
1994-01-01
The rocks and soils of the Moon provide raw materials essential to the successful establishment of a lunar base. Efficient exploitation of these resources requires accurate characterization of mineral abundances, sizes/shapes, and association of 'ore' and 'gangue' phases, as well as the technology to generate high-yield/high-grade feedstocks. Only recently have x-ray mapping and digital imaging techniques been applied to lunar resource evaluation. The topics covered include inherent differences between lunar basalts and soils and quantitative comparison of rock-derived and soil-derived ilmenite concentrates. It is concluded that x-ray digital-imaging characterization of lunar raw materials provides a quantitative comparison that is unattainable by traditional petrographic techniques. These data are necessary for accurately determining mineral distributions of soil and crushed rock material. Application of these techniques will provide an important link to choosing the best raw material for mineral beneficiation.
An Overview on Measurement-While-Drilling Technique and its Scope in Excavation Industry
NASA Astrophysics Data System (ADS)
Rai, P.; Schunesson, H.; Lindqvist, P.-A.; Kumar, U.
2015-04-01
Measurement-while-drilling (MWD) aims at collecting accurate, rapid, and high-resolution information from production blast hole drills, with the goal of characterizing the highly variable rock masses encountered in sub-surface excavations. The essence of the technique rests on combining the physical drill variables in a manner that yields a fairly accurate description of the sub-surface rock mass well ahead of the downstream operations that follow. In this light, the current paper presents an overview of MWD by explaining the technique and its set-up, the existing drill-rock mass relationships, and numerous ongoing research efforts highlighting real-time applications. Although the paper acknowledges the importance of the concepts of specific energy, rock quality index, and a couple of other indices and techniques for rock mass characterization, it must be distinctly borne in mind that MWD is highly site-specific, which entails deriving site-specific calibrations with utmost care.
[Effects of Chemical Fertilizers and Organic Fertilizer on Yield of Ligusticum chuanxiong Rhizome].
Liang, Qin; Chen, Xing-fu; Li, Yan; Zhang, Jun; Meng, Jie; Peng, Shi-ming
2015-10-01
To study the effects of different N, P, K, and organic fertilizer (OF) applications on the yield of Ligusticum chuanxiong rhizome, in order to provide a theoretical foundation for establishing standardized cultivation techniques. The field plot experiments used Ligusticum chuanxiong planted in Pengshan as material and followed a four-factor, five-level quadratic regression rotation-orthogonal combination design. From the data obtained, a function model that accurately predicts the yield of Ligusticum chuanxiong rhizome from fertilization was established. The model analysis showed that the yields were significantly influenced by the N, P, K, and OF applications. Among these factors, the order of yield-increase rates was K > OF > N > P. The interaction effects of N and K, N and OF, and K and OF on the yield were significantly different, and high levels of N and P, N and OF, and K and OF were conducive to improving the yield. The results showed that the optimal application rates were 148.20-172.28 kg/hm2 for N, 511.92-599.40 kg/hm2 for P, 249.70-282.37 kg/hm2 for K, and 940.00-1104.00 kg/hm2 for OF. N, P, K, and OF clearly affect the yield of Ligusticum chuanxiong rhizome, and K and OF can significantly increase it. It is therefore suggested that properly high amounts of K and OF, together with appropriately increased N, are favorable for cultivating Ligusticum chuanxiong.
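The quadratic response surface at the heart of such a rotation-orthogonal design can be illustrated with a one-factor slice: fit yield = b0 + b1·x + b2·x², then take the stationary point of the concave parabola as the optimal rate. The trial data below are invented for illustration, not taken from the study:

```python
import numpy as np

# Illustrative single-factor slice of the quadratic response model,
# fitted to hypothetical K-rate trial data (all numbers assumed).
k_rate = np.array([0.0, 100.0, 200.0, 300.0, 400.0])   # kg/hm^2
yield_obs = np.array([4.1, 5.6, 6.4, 6.3, 5.5])        # t/hm^2

# np.polyfit returns coefficients highest degree first.
b2, b1, b0 = np.polyfit(k_rate, yield_obs, 2)

# The agronomic optimum of a concave quadratic is where dy/dx = 0.
k_opt = -b1 / (2.0 * b2)
print(f"optimal K rate ≈ {k_opt:.0f} kg/hm^2")
```

The full four-factor model adds interaction terms (xi·xj), which is what lets the study rank pairwise effects such as N×K and K×OF.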
Predicting cotton yield of small field plots in a cotton breeding program using UAV imagery data
NASA Astrophysics Data System (ADS)
Maja, Joe Mari J.; Campbell, Todd; Camargo Neto, Joao; Astillo, Philip
2016-05-01
One of the major criteria used for advancing experimental lines in a breeding program is yield performance. Obtaining yield performance data requires machine-picking each plot with a cotton picker modified to weigh individual plots. Harvesting thousands of small field plots requires a great deal of time and resources. The efficiency of cotton breeding could be increased significantly, and its cost decreased, with the availability of accurate methods to predict yield performance. This work investigates the feasibility of using an image processing technique with a commercial off-the-shelf (COTS) camera mounted on a small Unmanned Aerial Vehicle (sUAV) to collect normal RGB images for predicting cotton yield on small plots. An orthonormal image was generated from multiple images and used to process multiple segmented plots. A Gaussian blur was used to eliminate the high-frequency component of the images, which corresponds to the cotton pixels, and an image subtraction technique was used to generate high-frequency pixel images. The cotton pixels were then separated using k-means clustering with 5 classes. Based on the current work, the percentage cotton area was computed as the generated high-frequency image (cotton pixels) divided by the total area of the plot. Preliminary results (five flights, 3 altitudes) showed that cotton cover on multiple pre-selected 227 sq. m plots averaged 8%, which translates to approximately 22.3 kg of cotton. The yield prediction equation generated from the test site was then used on a separate validation site and produced a prediction error of less than 10%. In summary, the results indicate that a COTS camera with an appropriate image processing technique can produce results that are comparable to expensive sensors.
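The blur-and-subtract step can be sketched on a synthetic image as follows; the toy scene, the box blur (standing in for the Gaussian blur), and the fixed threshold (standing in for the k-means step) are all simplifying assumptions:

```python
import numpy as np

def box_blur(img, k):
    """Separable box blur standing in for the Gaussian blur in the paper."""
    kernel = np.ones(k) / k
    blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, "same"), 0, blurred)

rng = np.random.default_rng(1)

# Synthetic plot image: dark canopy background with small bright cotton bolls.
img = np.full((100, 100), 0.2) + rng.normal(scale=0.02, size=(100, 100))
cotton_mask = np.zeros((100, 100), bool)
for r, c in [(20, 30), (50, 70), (80, 15), (65, 40)]:
    cotton_mask[r:r+4, c:c+4] = True      # four 4x4-pixel bolls (64 px total)
img[cotton_mask] = 0.9

# High-frequency image = original minus blur; bright residues are cotton.
highfreq = img - box_blur(img, 9)
detected = highfreq > 0.2                 # threshold replaces the k-means step

pct_cotton = detected.mean() * 100.0
print(f"cotton cover: {pct_cotton:.2f}%")
```

The fractional cover recovered this way is what the study regresses against weighed plot yields to build its prediction equation.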
Testing and Analysis of NEXT Ion Engine Discharge Cathode Assembly Wear
NASA Technical Reports Server (NTRS)
Domonkos, Matthew T.; Foster, John E.; Soulas, George C.; Nakles, Michael
2003-01-01
Experimental and analytical investigations were conducted to predict the wear of the discharge cathode keeper in the NASA Evolutionary Xenon Thruster. The ion current to the keeper was found to be highly dependent upon the beam current, and the average beam current density was nearly identical to that of the NSTAR thruster at comparable beam currents. The ion current distribution was highly peaked toward the keeper orifice. A deterministic wear assessment predicted keeper orifice erosion to the same diameter as the cathode tube after processing 375 kg of xenon. A rough estimate of discharge cathode assembly life limit due to sputtering indicated that the current design exceeds the qualification goal of 405 kg. Probabilistic wear analysis showed that the plasma potential and the sputter yield contributed most to the uncertainty in the wear assessment. It was recommended that fundamental experimental and modeling efforts focus on accurately describing the plasma potential and the sputtering yield.
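The flux-times-yield relation underlying such a wear assessment is simple to state: erosion rate = (ion flux) × (sputter yield) × (volume per atom). The sketch below uses purely illustrative numbers (current density, yield, and a molybdenum keeper are assumptions, not NEXT test data), which is also why the uncertainty is dominated by the plasma potential and yield inputs:

```python
# Order-of-magnitude keeper-orifice erosion estimate from a
# flux-times-yield relation. All inputs are illustrative assumptions.
E_CHARGE = 1.602e-19        # elementary charge, C
N_A = 6.022e23              # Avogadro's number, 1/mol

j_ion = 10.0                # assumed ion current density to keeper, A/m^2
sputter_yield = 0.1         # assumed atoms removed per incident ion
molar_mass = 95.95e-3       # kg/mol (molybdenum keeper assumed)
density = 10.2e3            # kg/m^3

atom_volume = molar_mass / (density * N_A)                       # m^3 per atom
erosion_rate = (j_ion / E_CHARGE) * sputter_yield * atom_volume  # m/s

hours = 1.0e4
depth_mm = erosion_rate * hours * 3600.0 * 1e3
print(f"eroded depth over {hours:.0f} h: {depth_mm:.2f} mm")
```

Because the rate is linear in both the sputter yield and the ion flux (which depends on the plasma potential through ion energy), errors in those two inputs pass straight through to the predicted life, consistent with the sensitivity finding above.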
Simulation of FIB-SEM images for analysis of porous microstructures.
Prill, Torben; Schladitz, Katja
2013-01-01
Focused ion beam-scanning electron microscopy (FIB-SEM) nanotomography yields high-quality three-dimensional images of material microstructures at the nanometer scale by combining serial sectioning with a focused ion beam and SEM imaging. However, FIB-SEM tomography of highly porous media leads to shine-through artifacts that prevent automatic segmentation of the solid component. We simulate the SEM process in order to generate synthetic FIB-SEM image data for developing and validating segmentation methods. Monte-Carlo techniques yield accurate results but are too slow for the simulation of FIB-SEM tomography, which requires hundreds of SEM images for one dataset alone. Nevertheless, a quasi-analytic description of the specimen and various acceleration techniques, including a track compression algorithm and an acceleration for the simulation of secondary electrons, cut the computing time by orders of magnitude, allowing FIB-SEM tomography to be simulated for the first time.
An Anisotropic Hardening Model for Springback Prediction
NASA Astrophysics Data System (ADS)
Zeng, Danielle; Xia, Z. Cedric
2005-08-01
As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture the Bauschinger effect realistically under reverse loading, such as when the material passes through die radii or drawbeads during the sheet metal forming process. The model accounts for an anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to represent the Bauschinger effect accurately. The effectiveness of the model is demonstrated by comparing numerical and experimental springback results for a DP600 straight U-channel test.
Responsive behavior of regenerated cellulose in hydrolysis under microwave radiation.
Ni, Jinping; Na, Haining; She, Zhen; Wang, Jinggang; Xue, Wenwen; Zhu, Jin
2014-09-01
This work studied the responsive behavior of regenerated cellulose (RC) in hydrolysis under microwave radiation. Four types of RC with different crystallinity (Cr) and degrees of polymerization (DP) were produced to evaluate the reactivity of RC by step-by-step hydrolysis. Results show that Cr is the key factor affecting the reactivity of the RCs. With hydrolysis of the amorphous region and the formation of recrystallized material, the Cr of RC reaches a high value and thus weakens the reactivity. As a result, the increment of cellulose conversion and sugar yield gradually decreases. Decreasing the DP of RC helps to increase the speed at the onset of hydrolysis and to produce a high sugar yield, but prolonging the pretreatment time has no direct influence on the reactivity of RC. This research provides an accurate understanding to guide RC preparation for sugar formation with relatively high efficiency under mild reaction conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.
Comparison of Statistical Models for Analyzing Wheat Yield Time Series
Michel, Lucie; Makowski, David
2013-01-01
The world's population is predicted to exceed nine billion by 2050 and there is increasing concern about the capability of agriculture to feed such a large population. Foresight studies on food security are frequently based on crop yield trends estimated from yield time series provided by national and regional statistical agencies. Various types of statistical models have been proposed for the analysis of yield time series, but the predictive performances of these models have not yet been evaluated in detail. In this study, we present eight statistical models for analyzing yield time series and compare their ability to predict wheat yield at the national and regional scales, using data provided by the Food and Agriculture Organization of the United Nations and by the French Ministry of Agriculture. The Holt-Winters and dynamic linear models performed equally well, giving the most accurate predictions of wheat yield. However, dynamic linear models have two advantages over Holt-Winters models: they can be used to reconstruct past yield trends retrospectively and to analyze uncertainty. The results obtained with dynamic linear models indicated a stagnation of wheat yields in many countries, but the estimated rate of increase of wheat yield remained above 0.06 t ha−1 year−1 in several countries in Europe, Asia, Africa and America, and the estimated values were highly uncertain for several major wheat producing countries. The rate of yield increase differed considerably between French regions, suggesting that efforts to identify the main causes of yield stagnation should focus on a subnational scale. PMID:24205280
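The best-performing methods in the comparison above belong to the Holt-Winters family. As a sketch, the non-seasonal (Holt linear-trend) member, which suits annual yield series, can be written in a few lines; the smoothing constants and the yield series below are hypothetical:

```python
def holt_linear(series, alpha=0.4, beta=0.1):
    """Holt's linear-trend exponential smoothing (the non-seasonal member of
    the Holt-Winters family). Returns the one-step-ahead forecast."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        # Update the level toward the new observation, then the trend toward
        # the observed change in level.
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + trend

# Hypothetical national wheat yields (t/ha) with a rising trend:
yields = [5.2, 5.4, 5.5, 5.9, 6.0, 6.3]
forecast = holt_linear(yields)  # extrapolates the smoothed level plus trend
```

Because the series trends upward, the forecast lands above the last observation, which is the behavior that makes this family useful for detecting yield stagnation (a trend component near zero).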
Baskar, Gurunathan; Sathya, Shree Rajesh K
2011-01-01
Statistical and evolutionary optimization of the media composition was employed for the production of medicinal exopolysaccharide (EPS) by the Lingzhi or Reishi medicinal mushroom Ganoderma lucidum MTCC 1039 using soya bean meal flour as a low-cost substrate. Soya bean meal flour, ammonium chloride, glucose, and pH were identified as the most important variables for EPS yield using the two-level Plackett-Burman design and were further optimized using the central composite design (CCD) and an artificial neural network (ANN)-linked genetic algorithm (GA). The higher coefficient of determination of the ANN (R² = 0.982) indicates that the ANN model was more accurate than the second-order polynomial model of the CCD (R² = 0.91) for representing the effect of media composition on EPS yield. The optimum media composition predicted by the ANN-linked GA was soybean meal flour 2.98%, glucose 3.26%, ammonium chloride 0.25%, and initial pH 7.5, for a maximum predicted EPS yield of 1005.55 mg/L. The experimental EPS yield obtained with this composition was 1012.36 mg/L, which validates the high accuracy of evolutionary optimization for enhanced production of EPS by submerged fermentation of G. lucidum.
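The coefficient of determination used above to compare the ANN and CCD models is straightforward to compute. A minimal sketch follows; the observed and fitted EPS values are hypothetical stand-ins, not the study's data:

```python
def r_squared(observed, predicted):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    mean_obs = sum(observed) / len(observed)
    ss_tot = sum((y - mean_obs) ** 2 for y in observed)   # total variance
    ss_res = sum((y - p) ** 2 for y, p in zip(observed, predicted))
    return 1.0 - ss_res / ss_tot

# Hypothetical EPS yields (mg/L) and fitted values from two models:
obs   = [800, 850, 920, 1000, 980]
fit_a = [810, 845, 915, 990, 985]    # tighter fit (ANN-like)
fit_b = [780, 880, 900, 950, 1010]   # looser fit (polynomial-like)

r2_a = r_squared(obs, fit_a)
r2_b = r_squared(obs, fit_b)
```

A model whose residuals are smaller relative to the spread of the observations scores closer to 1, which is how the ANN (R² = 0.982) outperformed the CCD polynomial (R² = 0.91) in the study.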
Representing winter wheat in the Community Land Model (version 4.5)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Yaqiong; Williams, Ian N.; Bagley, Justin E.; Torn, Margaret S.; Kueppers, Lara M.
Winter wheat is a staple crop for global food security, and is the dominant vegetation cover for a significant fraction of Earth's croplands. As such, it plays an important role in carbon cycling and land-atmosphere interactions in these key regions. Accurate simulation of winter wheat growth is not only crucial for future yield prediction under a changing climate, but also for accurately predicting the energy and water cycles for winter wheat dominated regions. We modified the winter wheat model in the Community Land Model (CLM) to better simulate winter wheat leaf area index, latent heat flux, net ecosystem exchange of CO2, and grain yield. These modifications included schemes to represent vernalization as well as frost tolerance and damage. We calibrated three key parameters (minimum planting temperature, maximum crop growth days, and initial value of the leaf carbon allocation coefficient) and modified the grain carbon allocation algorithm for simulations at the US Southern Great Plains ARM site (US-ARM), and validated the model performance at eight additional sites across North America. We found that the new winter wheat model improved the prediction of monthly variation in leaf area index and reduced the latent heat flux and net ecosystem exchange root-mean-square errors (RMSE) by 41 % and 35 %, respectively, during the spring growing season. The model accurately simulated the interannual variation in yield at the US-ARM site, but underestimated yield by 35 % at sites and in regions (northwestern and southeastern US) with historically greater yields.
Real-time yield estimation based on deep learning
NASA Astrophysics Data System (ADS)
Rahnemoonfar, Maryam; Sheppard, Clay
2017-05-01
Crop yield estimation is an important task in product management and marketing. Accurate yield prediction helps farmers make better decisions on cultivation practices, plant disease prevention, and the size of the harvest labor force. The current practice of yield estimation based on manual counting of fruits is a very time-consuming and expensive process, and it is not practical for large fields. Robotic systems, including Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs), provide an efficient, cost-effective, flexible, and scalable solution for product management and yield prediction. Recently, huge amounts of data have been gathered from agricultural fields; however, efficient analysis of those data is still a challenging task. Computer vision approaches currently face several challenges in the automatic counting of fruits or flowers, including occlusion caused by leaves, branches, or other fruits; variation in natural illumination; and scale. In this paper, a novel deep convolutional network algorithm was developed to facilitate accurate yield prediction and automatic counting of fruits and vegetables in images. Our method is robust to occlusion, shadow, uneven illumination, and scale. Experimental results in comparison to the state of the art show the effectiveness of our algorithm.
NASA Astrophysics Data System (ADS)
Luo, Ning; Zhao, Zhanfeng; Illman, Walter A.; Berg, Steven J.
2017-11-01
Transient hydraulic tomography (THT) is a robust aquifer characterization method for estimating the spatial distributions (or tomograms) of both hydraulic conductivity (K) and specific storage (Ss). However, the highly parameterized nature of the geostatistical inversion approach renders it computationally intensive for large-scale investigations. In addition, geostatistics-based THT may produce overly smooth tomograms when the head data used to constrain the inversion are limited. Therefore, alternative model conceptualizations for THT need to be examined. To investigate this, we simultaneously calibrated different groundwater models with varying parameterizations and zonations using two cases of different pumping and monitoring data densities from a laboratory sandbox. Specifically, one effective-parameter model, four geology-based zonation models with varying accuracy and resolution, and five geostatistical models with different prior information were calibrated. Model performance is quantitatively assessed by examining the calibration and validation results. Our study reveals that highly parameterized geostatistical models perform the best among the models compared, while the zonation model with excellent knowledge of stratigraphy also yields comparable results. When few pumping tests with sparse monitoring intervals are available, the incorporation of accurate or simplified geological information into geostatistical models reveals more details in heterogeneity and yields more robust validation results. However, results deteriorate when inaccurate geological information is incorporated. Finally, our study reveals that transient inversions are necessary to obtain reliable K and Ss estimates for making accurate predictions of transient drawdown events.
SU-C-BRA-06: Automatic Brain Tumor Segmentation for Stereotactic Radiosurgery Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y; Stojadinovic, S; Jiang, S
Purpose: Stereotactic radiosurgery (SRS), which delivers a potent dose of highly conformal radiation to the target in a single fraction, requires accurate tumor delineation for treatment planning. We present an automatic segmentation strategy that synergizes intensity histogram thresholding, super-voxel clustering, and level-set-based contour evolution to efficiently and accurately delineate SRS brain tumors on contrast-enhanced T1-weighted (T1c) Magnetic Resonance Images (MRI). Methods: The developed auto-segmentation strategy consists of three major steps. First, tumor sites are localized through 2D slice intensity histogram scanning. Then, super-voxels are obtained by clustering the corresponding voxels in 3D with reference to similarity metrics composited from spatial distance and intensity difference. The combination of these two steps generates the initial contour surface. Finally, a localized-region active contour model is used to evolve the surface to achieve accurate delineation of the tumors. The developed method was evaluated on numerical phantom data, synthetic BRATS (Multimodal Brain Tumor Image Segmentation challenge) data, and clinical patients' data. The auto-segmentation results were quantitatively evaluated by comparing them to ground truths with both volume and surface similarity metrics. Results: The Dice coefficient (DC) was used as a quantitative metric to evaluate the auto-segmentation in the numerical phantom with 8 tumors. DCs are 0.999±0.001 without noise, 0.969±0.065 with Rician noise, and 0.976±0.038 with Gaussian noise. DC, NMI (Normalized Mutual Information), SSIM (Structural Similarity), and Hausdorff distance (HD) were calculated as the metrics for the BRATS and patients' data. Assessment of BRATS data across 25 tumor segmentations yields DC 0.886±0.078, NMI 0.817±0.108, SSIM 0.997±0.002, and HD 6.483±4.079 mm. Evaluation on 8 patients with a total of 14 tumor sites yields DC 0.872±0.070, NMI 0.824±0.078, SSIM 0.999±0.001, and HD 5.926±6.141 mm. Conclusion: The developed automatic segmentation strategy, which yields accurate brain tumor delineation in the evaluation cases, is promising for application in SRS treatment planning.
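The Dice coefficient reported in the results above measures volume overlap between an automatic segmentation and the ground truth. A minimal sketch on toy 2D masks (the coordinates are illustrative, not clinical data):

```python
def dice_coefficient(mask_a, mask_b):
    """DC = 2|A ∩ B| / (|A| + |B|) for two sets of segmented voxels."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2.0 * len(a & b) / (len(a) + len(b))

# Toy "tumor" masks given as (row, col) voxel coordinates:
ground_truth = {(r, c) for r in range(2, 6) for c in range(2, 6)}  # 4x4 block
auto_seg     = {(r, c) for r in range(3, 7) for c in range(2, 6)}  # shifted 1 row

dc = dice_coefficient(ground_truth, auto_seg)  # 0.75 for this overlap
```

A one-voxel shift of a 4x4 mask leaves 12 of 16 voxels overlapping, giving DC = 2·12/(16+16) = 0.75; values near 1 like the 0.886 reported for BRATS indicate much tighter agreement.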
Second Preliminary Report on X-ray Yields from OMEGA II Targets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fournier, K B; May, M J; MacLaren, S A
2006-08-28
We present details about X-ray yields measured with LLNL and SNL diagnostics in soft and moderately hard X-ray bands from laser-driven, doped-aerogel targets shot on 07/14/06 during the OMEGA II test series. Yields accurate to ±25% in the 5-15 keV band are measured with Livermore's HENWAY spectrometer. Yields in the sub-keV to 3.2 keV band are measured with LLNL's DANTE diagnostic; the DANTE yields may be 35-40% too large. SNL ran a PCD-based diagnostic that also measured X-ray yields in the spectral region above 4 keV, and also down to the nearly sub-keV range. The PCD, HENWAY, and DANTE numbers are compared. The time histories of the X-ray signals are measured with LLNL's H11 PCD and with two SNL PCDs with comparable filtering. There is a persistent disagreement between the H11 PCD and SNL PCD measured FWHM, which is shown not to be due to analysis techniques. The recommended X-ray waveform is that from the SNL PCD p66k10, which was recorded on a fast, high-bandwidth TDS 6804 oscilloscope and which is not plotted here.
Modeling central metabolism and energy biosynthesis across microbial life
Edirisinghe, Janaka N.; Weisenhorn, Pamela; Conrad, Neal; Xia, Fangfang; Overbeek, Ross; Stevens, Rick L.; Henry, Christopher S.
2016-08-08
Automatically generated bacterial metabolic models, and even some curated models, lack accuracy in predicting energy yields due to poor representation of key pathways in energy biosynthesis and the electron transport chain (ETC). Further compounding the problem, the complex interlinking pathways in genome-scale metabolic models, and the need for extensive gapfilling to support complex biomass reactions, often result in predicting unrealistic yields or unrealistic physiological flux profiles. To overcome this challenge, we developed methods and tools (http://coremodels.mcs.anl.gov) to build high-quality core metabolic models (CMMs) representing accurate energy biosynthesis based on a well-studied, phylogenetically diverse set of model organisms. We compare these models to explore the variability of core pathways across all microbial life, and by analyzing the ability of our core models to synthesize ATP and essential biomass precursors, we evaluate the extent to which the core metabolic pathways and functional ETCs are known for all microbes. 6,600 (80 %) of our models were found to have some type of aerobic ETC, whereas 5,100 (62 %) have an anaerobic ETC, and 1,279 (15 %) do not have any ETC. Using our manually curated ETC and energy biosynthesis pathways with no gapfilling at all, we predict accurate ATP yields for nearly 5,586 (70 %) of the models under aerobic and anaerobic growth conditions. This study revealed gaps in our knowledge of the central pathways that leave 2,495 (30 %) of the CMMs unable to produce ATP under any of the tested conditions. We then established a methodology for the systematic identification and correction of inconsistent annotations using core metabolic models coupled with phylogenetic analysis. In conclusion, we predict accurate energy yields based on our improved annotations in energy biosynthesis pathways and the implementation of diverse ETC reactions across the microbial tree of life. We highlight missing annotations that were essential to energy biosynthesis in our models, examine the diversity of these pathways across all microbial life, and enable the scientific community to explore the analyses generated from this large-scale analysis of over 8,000 microbial genomes.
Jeong, Seok Hoo; Yoon, Hyun Hwa; Kim, Eui Joo; Kim, Yoon Jae; Kim, Yeon Suk; Cho, Jae Hee
2017-01-01
Abstract Endoscopic ultrasound-guided fine needle aspiration (EUS-FNA) is an accurate diagnostic method for pancreatic masses, and its accuracy is affected by the FNA method and the EUS equipment used. We therefore aimed to elucidate the instrumental and methodologic factors that determine the diagnostic yield of EUS-FNA for pancreatic solid masses without an on-site cytopathology evaluation. We retrospectively reviewed the medical records of 260 patients (265 pancreatic solid masses) who underwent EUS-FNA. We compared a historical conventional-EUS group with a group examined using high-resolution imaging devices, and then analyzed the various factors affecting EUS-FNA accuracy. In total, 265 pancreatic solid masses from 260 patients were included in this study. The accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of EUS-FNA for pancreatic solid masses without on-site cytopathology evaluation were 83.4%, 81.8%, 100.0%, 100.0%, and 34.3%, respectively. In comparison with the conventional-image group, the high-resolution-image group showed increased accuracy, sensitivity, and specificity of EUS-FNA (71.3% vs 92.7%, 68.9% vs 91.9%, and 100% vs 100%, respectively). On multivariate analysis of the instrumental and methodologic factors, high-resolution imaging (P = 0.040, odds ratio = 3.28) and 3 or more needle passes (P = 0.039, odds ratio = 2.41) were significant factors affecting the diagnostic yield for pancreatic solid masses. High-resolution imaging and 3 or more passes were the most significant factors influencing the diagnostic yield of EUS-FNA in patients with pancreatic solid masses without an on-site cytopathologist. PMID:28079803
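The accuracy, sensitivity, specificity, PPV, and NPV quoted above all derive from a 2x2 confusion table. A minimal sketch with hypothetical counts, chosen only to mirror the reported pattern (perfect specificity and PPV, low NPV), not the study's raw data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard test-performance metrics from a 2x2 confusion table."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv":         tp / (tp + fp),  # positive predictive value
        "npv":         tn / (tn + fn),  # negative predictive value
    }

# Hypothetical counts: no false positives (so specificity = PPV = 100%),
# but many false negatives (so NPV is low), as in the pattern reported above.
m = diagnostic_metrics(tp=180, fp=0, tn=40, fn=40)
```

With no false positives, specificity and PPV are pinned at 100% while the false negatives drag the NPV down, which is exactly the asymmetry a cytology-based test without on-site evaluation tends to show.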
Chen, Ya-Meng; Zhou, Yang; Zhao, Qing; Zhang, Jun-Ying; Ma, Ju-Ping; Xuan, Tong-Tong; Guo, Shao-Qiang; Yong, Zi-Jun; Wang, Jing; Kuroiwa, Yoshihiro; Moriyoshi, Chikako; Sun, Hong-Tao
2018-05-09
All-inorganic perovskites have emerged as a new class of phosphor materials owing to their outstanding optical properties. Zero-dimensional inorganic perovskites, in particular the Cs4PbBr6-related systems, are inspiring intensive research owing to their high photoluminescence quantum yield (PLQY) and good stability. However, synthesizing such perovskites with high PLQYs through an environment-friendly, cost-effective, scalable, and high-yield approach remains challenging, and their luminescence mechanisms have remained elusive. Here, we report a simple, scalable, room-temperature self-assembly strategy for the synthesis of Cs4PbBr6/CsPbBr3 perovskite composites with near-unity PLQY (95%), high product yield (71%), and good stability, using low-cost, low-toxicity chemicals as precursors. A broad range of experimental and theoretical characterizations suggests that the high-efficiency PL originates from CsPbBr3 nanocrystals well passivated by the zero-dimensional Cs4PbBr6 matrix, which forms through a dissolution-crystallization process. These findings underscore the importance of accurately identifying the phase purity of zero-dimensional perovskites by synchrotron X-ray techniques to gain deep insight into the structure-property relationship. Additionally, we demonstrate that green-emitting Cs4PbBr6/CsPbBr3, combined with red-emitting K2SiF6:Mn4+, can be used for the construction of WLEDs. Our work may pave the way for the use of such composite perovskites as highly luminescent emitters in various applications such as lighting, displays, and other optoelectronic and photonic devices.
Improving Seasonal Crop Monitoring and Forecasting for Soybean and Corn in Iowa
NASA Astrophysics Data System (ADS)
Togliatti, K.; Archontoulis, S.; Dietzel, R.; VanLoocke, A.
2016-12-01
Accurately forecasting crop yield in advance of harvest could greatly benefit farmers; however, few evaluations have been conducted to determine the effectiveness of forecasting methods. We tested one such method, which used short-term weather forecasts from the Weather Research and Forecasting (WRF) model to predict in-season weather variables, such as maximum and minimum temperature, precipitation, and radiation, at four forecast lengths (2 weeks, 1 week, 3 days, and 0 days). This forecasted weather data, along with current and historic (previous 35 years) data from the Iowa Environmental Mesonet, was used to drive Agricultural Production Systems sIMulator (APSIM) simulations to forecast soybean and corn yields in 2015 and 2016. The goal of this study is to find the forecast length that reduces the variability of the simulated yield predictions while also increasing their accuracy. APSIM simulations of crop variables were evaluated against bi-weekly field measurements of phenology, biomass, and leaf area index from early- and late-planted soybean plots located at the Agricultural Engineering and Agronomy Research Farm in central Iowa as well as the Northwest Research Farm in northwestern Iowa. WRF model predictions were evaluated against observed weather data collected at the experimental fields. Maximum temperature was the most accurately predicted variable, followed by minimum temperature and radiation; precipitation was the least accurate, according to RMSE values and the number of days forecasted within 20% error of the observed weather. Our analysis indicated that for the majority of months in the growing season the 3-day forecast performed best, the 1-week forecast came second, and the 2-week forecast was the least accurate. Preliminary results for yield indicate that the 2-week forecast is the least variable of the forecast lengths; however, it is also the least accurate.
The 3-day and 1-week forecasts are more accurate, at the cost of increased variability.
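The forecast evaluation above ranks weather variables by RMSE and by the number of days forecasted within 20% of the observed value. Both skill measures are simple to compute; the daily temperatures below are hypothetical, not WRF or Mesonet data:

```python
import math

def rmse(observed, forecast):
    """Root-mean-square error between paired observed and forecast values."""
    return math.sqrt(sum((o - f) ** 2 for o, f in zip(observed, forecast))
                     / len(observed))

def fraction_within(observed, forecast, tol=0.20):
    """Share of days whose forecast falls within ±tol of the observed value."""
    hits = sum(1 for o, f in zip(observed, forecast)
               if o != 0 and abs(f - o) / abs(o) <= tol)
    return hits / len(observed)

# Hypothetical daily maximum temperatures (°C), observed vs. forecast:
obs = [28.0, 30.5, 27.2, 31.0, 29.4]
fct = [27.0, 31.5, 26.0, 33.0, 30.0]

skill_rmse = rmse(obs, fct)
skill_frac = fraction_within(obs, fct)
```

Used together, the two measures separate average error magnitude (RMSE) from how often a forecast is "good enough" for driving a crop model.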
Properties of Galvanized and Galvannealed Advanced High Strength Hot Rolled Steels
DOE Office of Scientific and Technical Information (OSTI.GOV)
V.Y. Guertsman; E. Essadiqi; S. Dionne
2008-04-01
The objectives of the project were (i) to develop the coating process information needed to achieve good-quality coatings on 3 advanced high-strength hot rolled steels while retaining target mechanical properties, (ii) to obtain precise knowledge of the behavior of these steels in the various forming operations, and (iii) to establish accurate user property data in the coated conditions. Three steel substrates (HSLA, DP, TRIP) with compositions providing yield strengths in the range of 400-620 MPa were selected. Only the HSLA steel was found to be suitable for galvanizing and galvannealing in the hot rolled condition.
NASA Astrophysics Data System (ADS)
Smith, D. P.; Kvitek, R.; Quan, S.; Iampietro, P.; Paddock, E.; Richmond, S. F.; Gomez, K.; Aiello, I. W.; Consulo, P.
2009-12-01
Models of watershed sediment yield are complicated by spatial and temporal variability of geologic substrate, land cover, and precipitation parameters. Episodic events such as ENSO cycles and severe wildfire are frequent enough to matter in the long-term average yield, and they can produce short-lived, extreme geomorphic responses. The sediment yield from extreme events is difficult to capture accurately because of the obvious dangers associated with field measurements during flood conditions, but it is critical to include extreme values for developing realistic models of rainfall-sediment yield relations and for calculating long-term average denudation rates. Dammed rivers provide a time-honored natural laboratory for quantifying average annual sediment yield and extreme-event sediment yield. While lead-line surveys of the past provided crude estimates of reservoir sediment trapping, recent advances in geospatial technology now provide unprecedented opportunities to improve volume-change measurements. High-precision digital elevation models surveyed on an annual basis, or before and after specific rainfall-runoff events, can be used to quantify relations between rainfall and sediment yield as a function of landscape parameters, including spatially explicit fire intensity. The Basin Complex Fire of June and July 2008 resulted in moderate to severe burns in the 114 km^2 portion of the Carmel River watershed above Los Padres Dam. The US Geological Survey produced a debris flow probability/volume model for the region indicating that the reservoir could lose considerable capacity if intense enough precipitation occurred in the 2009-10 winter. Loss of Los Padres reservoir capacity has implications for endangered steelhead and red-legged frogs, and for the groundwater and municipal water supply.
In anticipation of potentially catastrophic erosion, we produced an accurate volume calculation of the Los Padres reservoir in fall 2009, and locally monitored hillslope and fluvial processes during winter months. The pre-runoff reservoir volume was developed by collecting and merging sonar and LiDAR data from a small research skiff equipped with a high-precision positioning and attitude-correcting system. The terrestrial LiDAR data were augmented with shore-based total station positioning. Watershed monitoring included benchmarked serial stream surveys and semi-quantitative assessment of a variety of near-channel colluvial processes. Rainfall in the 2009-10 water year was not intense enough to trigger widespread debris flows or slope failures in the burned watershed, but dry ravel was apparently accelerated. The geomorphic analysis showed that sediment yield was not significantly higher during this low-rainfall year, despite the widespread presence of very steep, fire-impacted slopes. Because there was little to no increase in sediment yield this year, we have postponed our second reservoir survey. An ENSO event that might bring very intense rains to the watershed is currently predicted for winter 2009-10.
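The before-and-after volume change described above is, at its core, a DEM-differencing computation; a minimal numpy sketch, in which grid size, elevations, and cell area are all hypothetical:

```python
import numpy as np

# Toy before/after reservoir surfaces (elevations in m) on a uniform grid;
# grid size, values, and cell area are hypothetical.
cell_area = 1.0  # m^2 per grid cell
dem_pre = np.array([[5.0, 5.2], [5.1, 5.3]])   # pre-runoff survey
dem_post = np.array([[5.4, 5.5], [5.2, 5.6]])  # post-event survey

# Deposited volume = sum of positive elevation change times cell area.
dz = dem_post - dem_pre
deposited = float(np.sum(np.clip(dz, 0.0, None)) * cell_area)
```

In practice the two surfaces would come from the merged sonar/LiDAR surveys, and uncertainty in each DEM would have to be propagated into the volume estimate.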
34 CFR 300.304 - Evaluation procedures.
Code of Federal Regulations, 2014 CFR
2014-07-01
... of communication and in the form most likely to yield accurate information on what the child knows..., manual, or speaking skills, the assessment results accurately reflect the child's aptitude or achievement... impaired sensory, manual, or speaking skills (unless those skills are the factors that the test purports to...
34 CFR 300.304 - Evaluation procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... or other mode of communication and in the form most likely to yield accurate information on what the... with impaired sensory, manual, or speaking skills, the assessment results accurately reflect the child... reflecting the child's impaired sensory, manual, or speaking skills (unless those skills are the factors that...
34 CFR 300.304 - Evaluation procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... or other mode of communication and in the form most likely to yield accurate information on what the... with impaired sensory, manual, or speaking skills, the assessment results accurately reflect the child... reflecting the child's impaired sensory, manual, or speaking skills (unless those skills are the factors that...
Monitoring stream sediment loads in response to agriculture in Prince Edward Island, Canada.
Alberto, Ashley; St-Hilaire, Andre; Courtenay, Simon C; van den Heuvel, Michael R
2016-07-01
Increased agricultural land use leads to accelerated erosion and deposition of fine sediment in surface water. Monitoring of suspended sediment yields has proven challenging due to the spatial and temporal variability of sediment loading. Reliable sediment yield calculations depend on accurate monitoring of these highly episodic sediment loading events. This study aims to quantify precipitation-induced loading of suspended sediments on Prince Edward Island, Canada. Turbidity is considered to be a reasonably accurate proxy for suspended sediment data. In this study, turbidity was used to monitor suspended sediment concentration (SSC) and was measured for 2 years (December 2012-2014) in three subwatersheds with varying degrees of agricultural land use ranging from 10 to 69 %. Comparison of three turbidity meter calibration methods, two using suspended streambed sediment and one using automated sampling during rainfall events, revealed that the use of SSC samples constructed from streambed sediment was not an accurate replacement for water column sampling during rainfall events for calibration. Different particle size distributions in the three rivers produced significant impacts on the calibration methods demonstrating the need for river-specific calibration. Rainfall-induced sediment loading was significantly greater in the most agriculturally impacted site only when the load per rainfall event was corrected for runoff volume (total flow minus baseflow), flow increase intensity (the slope between the start of a runoff event and the peak of the hydrograph), and season. Monitoring turbidity, in combination with sediment modeling, may offer the best option for management purposes.
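The river-specific rating-curve calibration described above can be sketched as a least-squares fit from turbidity to SSC, then applied across an event hydrograph to integrate a load; all numbers below are invented, and the linear form is an assumption (field ratings are often nonlinear):

```python
import numpy as np

# Hypothetical paired calibration samples for one river: turbidity (NTU)
# vs. laboratory SSC (mg/L) from water-column sampling during rainfall events.
turbidity = np.array([5.0, 12.0, 30.0, 55.0, 80.0])
ssc = np.array([8.0, 20.0, 52.0, 95.0, 138.0])

# River-specific least-squares rating curve: SSC = a * turbidity + b
a, b = np.polyfit(turbidity, ssc, 1)

def ssc_from_turbidity(ntu):
    return a * ntu + b

# Event load from made-up turbidity and discharge time series;
# note mg/L == g/m^3, so load in kg = sum(SSC * Q * dt) / 1000.
ntu_series = np.array([10.0, 40.0, 70.0, 35.0, 12.0])
discharge = np.array([1.0, 2.5, 4.0, 2.0, 1.2])  # m^3/s
dt = 900.0  # s between readings
load_kg = float(np.sum(ssc_from_turbidity(ntu_series) * discharge * dt)) / 1000.0
```

The study's central point maps directly onto this sketch: if the calibration pairs come from resuspended streambed sediment instead of event water-column samples, the fitted coefficients (and hence every load estimate) shift.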
Geijsen, Debby E.; Zum Vörde Sive Vörding, Paul J.; Schooneveldt, Gerben; Sijbrands, Jan; Hulshof, Maarten C.; de la Rosette, Jean; de Reijke, Theo M.; Crezee, Hans
2013-01-01
Abstract Background and Purpose: The effectiveness of locoregional hyperthermia combined with intravesical instillation of mitomycin C to reduce the risk of recurrence and progression of intermediate- and high-risk nonmuscle-invasive bladder cancer is currently being investigated in clinical trials. Clinically effective locoregional hyperthermia delivery necessitates adequate thermal dosimetry; thus, optimal thermometry methods are needed to accurately monitor the temperature distribution throughout the bladder wall. The aim of the study was to evaluate the technical feasibility of a novel intravesical device (multisensor probe) developed to monitor local bladder wall temperatures during locoregional chemohyperthermia (C-HT). Materials and Methods: A multisensor thermocouple probe was designed for deployment in the human bladder, using special sensors to cover the bladder wall in different directions. The deployment of the thermocouples against the bladder wall was evaluated with visual, endoscopic, and CT imaging in bladder phantoms, porcine models, and human bladders obtained at autopsy, for different bladder volumes and probe deployment sizes. Finally, porcine bladders were embedded in a phantom and subjected to locoregional heating to compare probe temperatures with additional thermometry inside and outside the bladder wall. Results: The 7.5 cm thermocouple probe yielded optimal bladder wall contact, adapting to different bladder volumes. Temperature monitoring was shown to be accurate and representative of the actual bladder wall temperature. Conclusions: Use of this novel multisensor probe could yield more accurate monitoring of the bladder wall temperature during locoregional chemohyperthermia. PMID:24112045
Biaxial Testing of 2219-T87 Aluminum Alloy Using Cruciform Specimens
NASA Technical Reports Server (NTRS)
Dawicke, D. S.; Pollock, W. D.
1997-01-01
A cruciform biaxial test specimen was designed and seven biaxial tensile tests were conducted on 2219-T87 aluminum alloy. An elastic-plastic finite element analysis was used to simulate each test and predict the yield stresses. The elastic-plastic finite element analysis accurately simulated the measured load-strain behavior for each test. The yield stresses predicted by the finite element analyses indicated that the yield behavior of the 2219-T87 aluminum alloy agrees with the von Mises yield criterion.
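The von Mises criterion referenced above has a simple closed form for a shear-free plane-stress state, σ_vm = sqrt(σ1² − σ1σ2 + σ2²); a small sketch (the yield value used is an illustrative assumption, not a quoted 2219-T87 property):

```python
import math

def von_mises_biaxial(s1, s2):
    """von Mises equivalent stress for a shear-free plane-stress state (MPa)."""
    return math.sqrt(s1**2 - s1 * s2 + s2**2)

# Illustrative uniaxial yield stress; an assumption for this sketch only.
yield_stress = 390.0

# Equibiaxial loading (s1 == s2) gives an equivalent stress equal to s1, so
# the criterion predicts equibiaxial yield at the uniaxial yield stress.
equibiaxial = von_mises_biaxial(yield_stress, yield_stress)
```

Conversely, for s2 = −s1 (a shear-like state) the equivalent stress is sqrt(3)·s1, so yielding is predicted well below the uniaxial value, which is the kind of path dependence the cruciform tests probe.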
NASA Astrophysics Data System (ADS)
Hibino, Daisuke; Hsu, Mingyi; Shindo, Hiroyuki; Izawa, Masayuki; Enomoto, Yuji; Lin, J. F.; Hu, J. R.
2013-04-01
The impact of yield loss due to systematic defects that remain after Optical Proximity Correction (OPC) modeling has increased, and achieving an acceptable yield has become more difficult in leading-edge production beyond the 20 nm node. Furthermore, the process window has narrowed because of the complexity of IC design and shrinking process margins. In the past, systematic defects were inspected by human eyes. However, judgment by eye is sometimes unstable and inaccurate, and an enormous amount of time and labor must be expended on one-by-one judgment of several thousand hot-spot defects. To overcome these difficulties and improve yield and manufacturability, an automated system that can quantify shape differences with high accuracy and speed is needed; with such a system, the number of inspection points can be increased to achieve higher yield. The Defect Window Analysis (DWA) system developed by Hitachi High-Technologies, which uses high-precision contour extraction from SEM images of real silicon and a quantifying method that automatically calculates the difference between defect and non-defect patterns, has been applied to defect judgment in place of judgment by eye. The DWA result, which describes process behavior, can be fed back to design, OPC, or mask. This new methodology and evaluation results are presented in detail in this paper.
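The quantification step of an automated shape-difference system like DWA can be illustrated crudely as a symmetric-difference score between a printed pattern and its reference; the masks below are toys, and the real system works on contours extracted from SEM images rather than pixel masks:

```python
import numpy as np

# Toy reference and printed patterns as boolean masks; the real system works
# on extracted SEM contours, not pixel masks.
ref = np.zeros((10, 10), dtype=bool)
ref[2:8, 2:8] = True          # intended pattern
printed = np.zeros((10, 10), dtype=bool)
printed[2:8, 2:6] = True      # printed pattern with a missing edge

# Normalized symmetric difference as a simple shape-deviation score.
deviation = float(np.logical_xor(ref, printed).sum()) / float(ref.sum())
```

A score of zero means a perfect print; thresholding such a score is one way to replace unstable by-eye judgment with a repeatable pass/fail decision.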
Frameless robotically targeted stereotactic brain biopsy: feasibility, diagnostic yield, and safety.
Bekelis, Kimon; Radwan, Tarek A; Desai, Atman; Roberts, David W
2012-05-01
Frameless stereotactic brain biopsy has become an established procedure in many neurosurgical centers worldwide. Robotic modifications of image-guided frameless stereotaxy hold promise for making these procedures safer, more effective, and more efficient. The authors hypothesized that robotic brain biopsy is a safe, accurate procedure, with a high diagnostic yield and a safety profile comparable to other stereotactic biopsy methods. This retrospective study included 41 patients undergoing frameless stereotactic brain biopsy of lesions (mean size 2.9 cm) for diagnostic purposes. All patients underwent image-guided, robotic biopsy in which the SurgiScope system was used in conjunction with scalp fiducial markers and a preoperatively selected target and trajectory. Forty-five procedures, with 50 supratentorial targets selected, were performed. The mean operative time was 44.6 minutes for the robotic biopsy procedures. This decreased over the second half of the study by 37%, from 54.7 to 34.5 minutes (p < 0.025). The diagnostic yield was 97.8% per procedure, with a second procedure being diagnostic in the single nondiagnostic case. Complications included one transient worsening of a preexisting deficit (2%) and another deficit that was permanent (2%). There were no infections. Robotic biopsy involving a preselected target and trajectory is safe, accurate, efficient, and comparable to other procedures employing either frame-based stereotaxy or frameless, nonrobotic stereotaxy. It permits biopsy in all patients, including those with small target lesions. Robotic biopsy planning facilitates careful preoperative study and optimization of needle trajectory to avoid sulcal vessels, bridging veins, and ventricular penetration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buda, I. G.; Lane, C.; Barbiellini, B.
2017-03-23
We discuss self-consistently obtained ground-state electronic properties of monolayers of graphene and a number of 'beyond graphene' compounds, including films of transition-metal dichalcogenides (TMDs), using the recently proposed strongly constrained and appropriately normed (SCAN) meta-generalized gradient approximation (meta-GGA) to the density functional theory. The SCAN meta-GGA results are compared with those based on the local density approximation (LDA) as well as the generalized gradient approximation (GGA). As expected, the GGA yields expanded lattices and softened bonds in relation to the LDA, but the SCAN meta-GGA systematically improves the agreement with experiment. Our study suggests the efficacy of the SCAN functional for accurate modeling of electronic structures of layered materials in high-throughput calculations more generally.
High-Reproducibility and High-Accuracy Method for Automated Topic Classification
NASA Astrophysics Data System (ADS)
Lancichinetti, Andrea; Sirer, M. Irmak; Wang, Jane X.; Acuna, Daniel; Körding, Konrad; Amaral, Luís A. Nunes
2015-01-01
Much of human knowledge sits in large databases of unstructured text. Leveraging this knowledge requires algorithms that extract and record metadata on unstructured text documents. Assigning topics to documents will enable intelligent searching, statistical characterization, and meaningful classification. Latent Dirichlet allocation (LDA) is the state of the art in topic modeling. Here, we perform a systematic theoretical and numerical analysis that demonstrates that current optimization techniques for LDA often yield results that are not accurate in inferring the most suitable model parameters. Adapting approaches from community detection in networks, we propose a new algorithm that displays high reproducibility and high accuracy and also has high computational efficiency. We apply it to a large set of documents in the English Wikipedia and reveal its hierarchical structure.
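One way to score the run-to-run reproducibility the authors emphasize is normalized mutual information between the topic assignments produced by two independent runs; a self-contained numpy sketch (the label sequences are arbitrary toy data, not LDA output):

```python
import numpy as np

def nmi(labels_a, labels_b):
    """Normalized mutual information between two topic assignments."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    eps = 1e-12
    cats_a, cats_b = np.unique(a), np.unique(b)
    p_ab = np.array([[np.mean((a == i) & (b == j)) for j in cats_b] for i in cats_a])
    p_a, p_b = p_ab.sum(axis=1), p_ab.sum(axis=0)
    mi = np.sum(p_ab * np.log((p_ab + eps) / (np.outer(p_a, p_b) + eps)))
    h_a = -np.sum(p_a * np.log(p_a + eps))
    h_b = -np.sum(p_b * np.log(p_b + eps))
    return float(mi / max(np.sqrt(h_a * h_b), eps))

# Two runs that found the same partition under different topic labels score 1;
# unrelated partitions score near 0.
run1 = [0, 0, 1, 1, 2, 2]
run2 = [2, 2, 0, 0, 1, 1]
```

Because NMI is invariant to relabeling of topics, it isolates exactly the instability the paper measures: whether repeated optimizations recover the same document partition, not the same topic indices.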
Chevron facility focused on commercial orifice-meter research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, E.H.; Ferguson, K.R.
1987-07-27
Research to determine the accuracy of commercial orifice meters for custody-transfer measurement has indicated that high-volume gas meters can be flow-proven while in such service. The research further yielded more accurate orifice-meter discharge coefficient equations (at Reynolds numbers greater than 4,000,000) than current equations of the International Standards Organization (ISO) and the American Petroleum Institute (API). These are partial findings of a major study conducted by Chevron Oil Field Research Co. at its Venice, La., calibration facility.
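For context, the discharge coefficient these correlations supply enters the standard orifice flow equation; a sketch in the ISO 5167-style form, with the expansibility factor omitted for brevity and all operating numbers invented:

```python
import math

def orifice_mass_flow(cd, d_orifice, d_pipe, dp, rho):
    """Mass flow (kg/s) through an orifice plate, ISO 5167-style form.

    cd: discharge coefficient, dp: differential pressure (Pa),
    rho: upstream density (kg/m^3); expansibility factor omitted here.
    """
    beta = d_orifice / d_pipe
    area = math.pi * d_orifice**2 / 4.0
    return cd / math.sqrt(1.0 - beta**4) * area * math.sqrt(2.0 * dp * rho)

# Hypothetical gas-line numbers: a 50 mm orifice in a 100 mm pipe.
m_dot = orifice_mass_flow(cd=0.61, d_orifice=0.05, d_pipe=0.10, dp=2.5e4, rho=40.0)
```

Because mass flow scales linearly with cd, a refined discharge-coefficient equation at high Reynolds number translates directly into smaller custody-transfer measurement error.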
A Comparison of Classical Force-Fields for Molecular Dynamics Simulations of Lubricants
Ewen, James P.; Gattinoni, Chiara; Thakkar, Foram M.; Morgan, Neal; Spikes, Hugh A.; Dini, Daniele
2016-01-01
For the successful development and application of lubricants, a full understanding of their complex nanoscale behavior under a wide range of external conditions is required, but this is difficult to obtain experimentally. Nonequilibrium molecular dynamics (NEMD) simulations can be used to yield unique insights into the atomic-scale structure and friction of lubricants and additives; however, the accuracy of the results depends on the chosen force-field. In this study, we demonstrate that the use of an accurate, all-atom force-field is critical in order to (i) accurately predict important properties of long-chain, linear molecules and (ii) reproduce experimental friction behavior of multi-component tribological systems. In particular, we focus on n-hexadecane, an important model lubricant with a wide range of industrial applications. Moreover, simulating conditions common in tribological systems, i.e., high temperatures and pressures (HTHP), allows the limits of the selected force-fields to be tested. In the first section, a large number of united-atom and all-atom force-fields are benchmarked in terms of their density and viscosity prediction accuracy for n-hexadecane using equilibrium molecular dynamics (EMD) simulations at ambient and HTHP conditions. Whilst united-atom force-fields accurately reproduce experimental density, the viscosity is significantly under-predicted compared to all-atom force-fields and experiments. Moreover, some all-atom force-fields yield elevated melting points, leading to significant overestimation of both the density and viscosity. In the second section, the most accurate united-atom and all-atom force-fields are compared in confined NEMD simulations which probe the structure and friction of stearic acid adsorbed on iron oxide and separated by a thin layer of n-hexadecane. 
The united-atom force-field provides an accurate representation of the structure of the confined stearic acid film; however, friction coefficients are consistently under-predicted and the friction-coverage and friction-velocity behavior deviates from that observed using all-atom force-fields and experimentally. This has important implications regarding force-field selection for NEMD simulations of systems containing long-chain, linear molecules; specifically, it is recommended that accurate all-atom potentials, such as L-OPLS-AA, are employed. PMID:28773773
NASA Astrophysics Data System (ADS)
Bhatia, C.; Fallin, B.; Gooden, M. E.; Howell, C. R.; Kelley, J. H.; Tornow, W.; Arnold, C. W.; Bond, E. M.; Bredeweg, T. A.; Fowler, M. M.; Moody, W. A.; Rundberg, R. S.; Rusev, G.; Vieira, D. J.; Wilhelmy, J. B.; Becker, J. A.; Macri, R.; Ryan, C.; Sheets, S. A.; Stoyer, M. A.; Tonchev, A. P.
2014-09-01
A program has been initiated to measure the energy dependence of selected high-yield fission products used in the analysis of nuclear test data. We present our initial work on neutron activation using a dual-fission chamber with quasi-monoenergetic neutrons and a gamma-counting method. Quasi-monoenergetic neutrons with energies from 0.5 to 15 MeV are produced using the TUNL 10 MV FM tandem to provide high-precision and self-consistent measurements of fission product yields (FPY). The final FPY results will be coupled with theoretical analysis to provide a more fundamental understanding of the fission process. To accomplish this goal, we have developed and tested a set of dual-fission ionization chambers to provide an accurate determination of the number of fissions occurring in a thick target located in the middle plane of the chamber assembly. Details of the fission chamber and its performance are presented along with neutron beam production and characterization. Also presented are studies on the background issues associated with room-return and off-energy neutron production. We show that the off-energy neutron contribution can be significant, but correctable, while room-return neutron background levels contribute less than 1% to the fission signal.
High-order computer-assisted estimates of topological entropy
NASA Astrophysics Data System (ADS)
Grote, Johannes
The concept of Taylor Models is introduced, which offers highly accurate C0-estimates for the enclosures of functional dependencies, combining high-order Taylor polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified interval arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly nonlinear dynamical systems. A method to obtain sharp rigorous enclosures of Poincare maps for certain types of flows and surfaces is developed and numerical examples are presented. Differential algebraic techniques allow the efficient and accurate computation of polynomial approximations for invariant curves of certain planar maps around hyperbolic fixed points. Subsequently we introduce a procedure to extend these polynomial curves to verified Taylor Model enclosures of local invariant manifolds with C0-errors of size 10^-10 to 10^-14, and proceed to generate the global invariant manifold tangle up to comparable accuracy through iteration in Taylor Model arithmetic. Knowledge of the global manifold structure up to finite iterations of the local manifold pieces enables us to find all homoclinic and heteroclinic intersections in the generated manifold tangle. Combined with the mapping properties of the homoclinic points and their ordering we are able to construct a subshift of finite type as a topological factor of the original planar system to obtain rigorous lower bounds for its topological entropy. This construction is fully automatic and yields homoclinic tangles with several hundred homoclinic points. As an example, rigorous lower bounds for the topological entropy of the Henon map are computed, which to the best knowledge of the authors yield the largest such estimates published so far.
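The rigorous-enclosure idea underlying Taylor Models can be illustrated with plain interval arithmetic, the mechanism used there to bound truncation errors: every operation returns an interval guaranteed to contain the true result. A toy class supporting only addition and multiplication:

```python
# Minimal interval-arithmetic sketch: operations return intervals that are
# guaranteed to contain the true value (enclosures may be wider than the
# exact range; Taylor Models tighten this with polynomial parts).
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

x = Interval(-0.1, 0.1)
# Enclose x*x + x over [-0.1, 0.1]: the result [-0.11, 0.11] contains the
# true range [-0.09, 0.11] but overestimates it (the "dependency problem").
y = x * x + x
```

The overestimation shown in the comment is exactly why Taylor Models carry a high-order polynomial part and reserve the interval only for the truncation remainder.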
NASA Astrophysics Data System (ADS)
Arshad, Muhammad; Ullah, Saleem; Khurshid, Khurram; Ali, Asad
2017-10-01
Leaf Water Content (LWC) is an essential constituent of plant leaves that determines vegetation health and its productivity. An accurate and on-time measurement of water content is crucial for planning irrigation, forecasting drought and predicting woodland fire. The retrieval of LWC from the Visible to Shortwave Infrared (VSWIR: 0.4-2.5 μm) has been extensively investigated, but little has been done in the Mid and Thermal Infrared (MIR and TIR: 2.5-14.0 μm) windows of the electromagnetic spectrum. This study is mainly focused on retrieval of LWC from the Mid and Thermal Infrared, using a Genetic Algorithm integrated with Partial Least Squares Regression (PLSR). The Genetic Algorithm fused with PLSR selects spectral wavebands with high predictive performance, i.e., those yielding high adjusted-R2 and low RMSE. In our case, GA-PLSR selected eight variables (bands) and yielded highly accurate models with adjusted-R2 of 0.93 and RMSEcv equal to 7.1%. The study also demonstrated that MIR is more sensitive to variation in LWC than TIR. However, the combined use of MIR and TIR spectra enhances the predictive performance in retrieval of LWC. The integration of the Genetic Algorithm and PLSR not only increases the estimation precision by selecting the most sensitive spectral bands but also helps in identifying the important spectral regions for quantifying water stress in vegetation. The findings of this study will allow future space missions (like HyspIRI) to position wavebands at sensitive regions for characterizing vegetation stresses.
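The band-selection idea can be sketched without a full GA-PLSR implementation: below, an exhaustive search over two-band subsets scored by adjusted R2 stands in for the genetic search, and an ordinary least-squares fit stands in for PLSR; the synthetic spectra and the informative bands are entirely made up:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Toy spectra: 20 samples x 10 bands; "LWC" depends on bands 2 and 7 plus noise.
X = rng.normal(size=(20, 10))
y = 3.0 * X[:, 2] - 2.0 * X[:, 7] + 0.1 * rng.normal(size=20)

def adjusted_r2(bands):
    """Score a band subset by adjusted R2 of a least-squares fit
    (a simplified stand-in for the GA-PLSR fitness function)."""
    A = np.column_stack([X[:, list(bands)], np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    r2 = 1.0 - resid.var() / y.var()
    n, k = len(y), len(bands)
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

# Exhaustively score all 2-band subsets and keep the best; the truly
# informative pair (2, 7) should win.
best = max(combinations(range(10), 2), key=adjusted_r2)
```

A genetic algorithm replaces this exhaustive loop when the subset space is too large to enumerate, which is the situation with hundreds of contiguous spectral bands.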
NASA Astrophysics Data System (ADS)
Dube, Timothy; Mutanga, Onisimo
2016-09-01
Reliable and accurate mapping and extraction of key forest indicators of ecosystem development and health, such as aboveground biomass (AGB) and aboveground carbon stocks (AGCS), is critical in understanding forests' contribution to the local, regional and global carbon cycle. This information is critical in assessing forest contribution towards ecosystem functioning and services, as well as their conservation status. This work aimed at assessing the applicability of the high resolution 8-band WorldView-2 multispectral dataset together with environmental variables in quantifying AGB and aboveground carbon stocks for three forest plantation species, i.e. Eucalyptus dunii (ED), Eucalyptus grandis (EG) and Pinus taeda (PT), in uMgeni Catchment, South Africa. Specifically, the strength of the WorldView-2 sensor in terms of its improved imaging capabilities is examined as an independent dataset and in conjunction with selected environmental variables. The results demonstrated that the integration of high resolution 8-band WorldView-2 multispectral data with environmental variables provides improved AGB and AGCS estimates when compared to the use of spectral data as an independent dataset. The use of integrated datasets yielded a high R2 value of 0.88 and RMSEs of 10.05 t ha-1 and 5.03 t C ha-1 for E. dunii AGB and carbon stocks, whereas the use of spectral data as an independent dataset yielded slightly weaker results, producing an R2 value of 0.73 and RMSEs of 18.57 t ha-1 and 9.29 t C ha-1. Similarly, accurate results (an R2 value of 0.73 and RMSE values of 27.30 t ha-1 and 13.65 t C ha-1) were observed for the estimation of inter-species AGB and carbon stocks. Overall, the findings of this work show that the integration of new generation multispectral datasets with environmental variables provides a robust toolset for the accurate and reliable retrieval of forest aboveground biomass and carbon stocks in densely forested terrestrial ecosystems.
Measurement and prediction of model-rotor flow fields
NASA Technical Reports Server (NTRS)
Owen, F. K.; Tauber, M. E.
1985-01-01
This paper shows that a laser velocimeter can be used to measure accurately the three-component velocities induced by a model rotor at transonic tip speeds. The measurements, which were made at Mach numbers from 0.85 to 0.95 and at zero advance ratio, yielded high-resolution, orthogonal velocity values. The measured velocities were used to check the ability of the ROT22 full-potential rotor code to predict accurately the transonic flow field in the crucial region around and beyond the tip of a high-speed rotor blade. The good agreement between the calculated and measured velocities established the code's ability to predict the off-blade flow field at transonic tip speeds. This supplements previous comparisons in which surface pressures were shown to be well predicted on two different tips at advance ratios to 0.45, especially at the critical 90 deg azimuthal blade position. These results demonstrate that the ROT22 code can be used with confidence to predict the important tip-region flow field, including the occurrence, strength, and location of shock waves causing high drag and noise.
Determination of the QCD Λ Parameter and the Accuracy of Perturbation Theory at High Energies.
Dalla Brida, Mattia; Fritzsch, Patrick; Korzec, Tomasz; Ramos, Alberto; Sint, Stefan; Sommer, Rainer
2016-10-28
We discuss the determination of the strong coupling α_{MS̄}(m_{Z}) or, equivalently, the QCD Λ parameter. Its determination requires the use of perturbation theory in α_{s}(μ) in some scheme s and at some energy scale μ. The higher the scale μ, the more accurate perturbation theory becomes, owing to asymptotic freedom. As one step in our computation of the Λ parameter in three-flavor QCD, we perform lattice computations in a scheme that allows us to nonperturbatively reach very high energies, corresponding to α_{s}=0.1 and below. We find that (continuum) perturbation theory is very accurate there, yielding a 3% error in the Λ parameter, while data around α_{s}≈0.2 are clearly insufficient to quote such a precision. It is important to realize that these findings are expected to be generic, as our scheme has advantageous properties regarding the applicability of perturbation theory.
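The statement that perturbation theory becomes more accurate at higher scales is asymptotic freedom; at one loop the relation between the coupling and Λ is explicit, α_s(μ) = 1 / (b0 ln(μ²/Λ²)) with b0 = (33 − 2nf)/(12π). A sketch (the Λ value is assumed for illustration and is not the paper's result):

```python
import math

def alpha_s_one_loop(mu, lam, nf=3):
    """One-loop running coupling: alpha_s(mu) = 1 / (b0 * ln(mu^2 / Lambda^2)),
    with b0 = (33 - 2*nf) / (12*pi). Scales in GeV."""
    b0 = (33.0 - 2.0 * nf) / (12.0 * math.pi)
    return 1.0 / (b0 * math.log(mu**2 / lam**2))

lam = 0.34  # GeV; an assumed three-flavor Lambda, for illustration only
a_high = alpha_s_one_loop(10.0, lam)  # high scale: weaker coupling
a_low = alpha_s_one_loop(1.0, lam)    # low scale: stronger coupling
```

The paper's strategy is visible here: extracting Λ from data at small α_s (large μ) keeps the perturbative truncation error small, whereas matching at α_s ≈ 0.2 leaves it sizable.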
A high order accurate finite element algorithm for high Reynolds number flow prediction
NASA Technical Reports Server (NTRS)
Baker, A. J.
1978-01-01
A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy, and convergence rate with discretization refinement, are quantified in several error norms, by a systematic study of numerical solutions to several nonlinear parabolic and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using selective linear, quadratic and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at nonmodest Reynolds number. The nondiagonal initial-value matrix structure introduced by the finite element theory is determined intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated to yield a consequential reduction in both computer storage and execution CPU requirements while retaining solution accuracy.
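The Richardson extrapolation mentioned above combines results at two step sizes of a p-th order method to cancel the leading truncation-error term, yielding a higher-order reference solution against which to isolate errors; a minimal numerical-differentiation sketch:

```python
import math

def central_diff(f, x, h):
    """Second-order central difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def richardson(f, x, h, p=2):
    """Richardson extrapolation of a p-th order approximation:
    combine D(h) and D(h/2) as (2^p * D(h/2) - D(h)) / (2^p - 1)."""
    return (2**p * central_diff(f, x, h / 2.0) - central_diff(f, x, h)) / (2**p - 1)

h = 0.1
crude = central_diff(math.sin, 1.0, h)   # O(h^2) error
better = richardson(math.sin, 1.0, h)    # leading error term cancelled
exact = math.cos(1.0)
```

The extrapolated value plays the role of the "higher-order accurate solution" in the abstract: its error is so much smaller that the difference from any single-grid solution is effectively that solution's truncation error.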
High Accuracy Ground-based near-Earth-asteroid Astrometry using Synthetic Tracking
NASA Astrophysics Data System (ADS)
Zhai, Chengxing; Shao, Michael; Saini, Navtej; Sandhu, Jagmit; Werne, Thomas; Choi, Philip; Ely, Todd A.; Jacobs, Christopher S.; Lazio, Joseph; Martin-Mur, Tomas J.; Owen, William M.; Preston, Robert; Turyshev, Slava; Michell, Adam; Nazli, Kutay; Cui, Isaac; Monchama, Rachel
2018-01-01
Accurate astrometry is crucial for determining the orbits of near-Earth-asteroids (NEAs). Further, the future of deep space high data rate communications is likely to be optical communications, such as the Deep Space Optical Communications package that is part of the baseline payload for the planned Psyche Discovery mission to the Psyche asteroid. We have recently upgraded our instrument on the Pomona College 1 m telescope, at JPL's Table Mountain Facility, for conducting synthetic tracking by taking many short exposure images. These images can then be combined in post-processing to track both asteroid and reference stars to yield accurate astrometry. Utilizing the precision of the current and future Gaia data releases, the JPL-Pomona College effort is now demonstrating precision astrometry on NEAs, which is likely to be of considerable value for cataloging NEAs. Further, treating NEAs as proxies of future spacecraft that carry optical communication lasers, our results serve as a measure of the astrometric accuracy that could be achieved for future plane-of-sky optical navigation.
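The shift-and-add at the heart of synthetic tracking can be sketched in a few lines: short exposures are shifted along the target's assumed sky motion before co-adding, so the moving source stays point-like while the noise averages down; the frames, motion rate, and source brightness below are all synthetic:

```python
import numpy as np

# Synthetic-tracking sketch: a faint source moves 1 px/frame along x through
# noisy short exposures; shifting frames back along the (assumed known)
# motion before averaging recovers it at a fixed position.
rng = np.random.default_rng(1)
n_frames, size = 8, 32
rate = 1  # pixels of motion per frame along x

frames = []
for k in range(n_frames):
    img = rng.normal(0.0, 1.0, (size, size))  # background noise
    img[16, 5 + rate * k] += 4.0              # faint moving point source
    frames.append(img)

# Shift each frame back along the motion vector, then average.
stack = np.mean([np.roll(f, -rate * k, axis=1) for k, f in enumerate(frames)],
                axis=0)
peak = np.unravel_index(np.argmax(stack), stack.shape)  # source location
```

In practice the motion rate is not known in advance, so the stacking is repeated over a grid of trial rates and the one maximizing the recovered signal is kept.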
Estimation of Wheat Plant Density at Early Stages Using High Resolution Imagery
Liu, Shouyang; Baret, Fred; Andrieu, Bruno; Burger, Philippe; Hemmerlé, Matthieu
2017-01-01
Crop density is a key agronomical trait used to manage wheat crops and estimate yield. Visual counting of plants in the field is currently the most common method used; however, it is tedious and time consuming. The main objective of this work is to develop a machine-vision-based method to automate the density survey of wheat at early stages. RGB images taken with a high-resolution RGB camera are classified to identify the green pixels corresponding to the plants. Crop rows are extracted and the connected components (objects) are identified. A neural network is then trained to estimate the number of plants in the objects using the object features. The method was evaluated over three experiments showing contrasting conditions, with sowing densities ranging from 100 to 600 seeds⋅m⁻². Results demonstrate that the density is accurately estimated, with an average relative error of 12%. The pipeline developed here provides an efficient and accurate estimate of wheat plant density at early stages. PMID:28559901
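The first, pixel-classification stage of such a pipeline is often implemented with a greenness index. The sketch below uses the excess-green index ExG = 2g − r − b on a tiny synthetic image; the index choice and threshold are assumptions for illustration, not necessarily the paper's actual classifier.

```python
import numpy as np

# Tiny synthetic RGB image: a green "plant" patch over soil and black background
rgb = np.zeros((4, 4, 3), dtype=float)
rgb[1:3, 1:3] = (0.2, 0.6, 0.2)       # green plant pixels
rgb[0, :] = (0.5, 0.4, 0.3)           # soil-like background row

s = rgb.sum(axis=2, keepdims=True)
s[s == 0] = 1.0                        # avoid division by zero on black pixels
r, g, b = np.moveaxis(rgb / s, 2, 0)   # chromatic (sum-normalized) coordinates
exg = 2 * g - r - b                    # excess-green index per pixel
green_mask = exg > 0.1                 # threshold separates plants from soil
```

In the full pipeline, connected components of `green_mask` restricted to extracted crop rows would become the "objects" whose features feed the plant-count neural network.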
Desktop aligner for fabrication of multilayer microfluidic devices.
Li, Xiang; Yu, Zeta Tak For; Geraldo, Dalton; Weng, Shinuo; Alve, Nitesh; Dun, Wu; Kini, Akshay; Patel, Karan; Shu, Roberto; Zhang, Feng; Li, Gang; Jin, Qinghui; Fu, Jianping
2015-07-01
Multilayer assembly is a commonly used technique to construct multilayer polydimethylsiloxane (PDMS)-based microfluidic devices with complex 3D architecture and connectivity for large-scale microfluidic integration. Accurate alignment of structure features on different PDMS layers before their permanent bonding is critical in determining the yield and quality of assembled multilayer microfluidic devices. Herein, we report a custom-built desktop aligner capable of both local and global alignments of PDMS layers covering a broad size range. Two digital microscopes were incorporated into the aligner design to allow accurate global alignment of PDMS structures up to 4 in. in diameter. Both local and global alignment accuracies of the desktop aligner were determined to be about 20 μm cm⁻¹. To demonstrate its utility for fabrication of integrated multilayer PDMS microfluidic devices, we applied the desktop aligner to achieve accurate alignment of different functional PDMS layers in multilayer microfluidics including an organs-on-chips device as well as a microfluidic device integrated with vertical passages connecting channels located in different PDMS layers. Owing to its convenient operation, high accuracy, low cost, light weight, and portability, the desktop aligner is useful for microfluidic researchers to achieve rapid and accurate alignment for generating multilayer PDMS microfluidic devices.
Gradient Augmented Level Set Method for Two Phase Flow Simulations with Phase Change
NASA Astrophysics Data System (ADS)
Anumolu, C. R. Lakshman; Trujillo, Mario F.
2016-11-01
A sharp interface capturing approach is presented for two-phase flow simulations with phase change. The Gradient Augmented Level Set method is coupled with the two-phase momentum and energy equations to advect the liquid-gas interface and predict heat transfer with phase change. The Ghost Fluid Method (GFM) is adopted for the velocity field to discretize the advection and diffusion terms in the interfacial region. Furthermore, the GFM is employed to treat the discontinuities in the stress tensor, velocity, and temperature gradient, yielding an accurate treatment of the jump conditions. Thermal convection and diffusion terms are approximated by explicitly identifying the interface location, resulting in a sharp treatment for the energy solution. This sharp treatment is extended to estimate the interfacial mass transfer rate. Within each computational cell, a d-cubic Hermite interpolating polynomial is employed to describe the interface location, which is locally fourth-order accurate. This degree of subgrid-level description provides an accurate methodology for treating various interfacial processes with a high degree of sharpness. The ability to predict the interface and temperature evolutions accurately is illustrated by comparing numerical results with existing 1D to 3D analytical solutions.
Application of Eyring's thermal activation theory to constitutive equations for polymers
NASA Astrophysics Data System (ADS)
Zerilli, Frank J.; Armstrong, Ronald W.
2000-04-01
The application of a constitutive model based on the thermal activation theory of Eyring to the yield stress of polymethylmethacrylate at various temperatures and strain rates, as measured by Bauwens-Crowet, shows that the yield stress may be reasonably well described by a thermal activation equation in which the volume of activation is inversely proportional to the yield stress. It is found that, to obtain an accurate model, the dependence of the cold (T = 0 K) yield stress on the shear modulus must be taken into account.
Toward more accurate loss tangent measurements in reentrant cavities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moyer, R. D.
1980-05-01
Karpova has described an absolute method for measurement of the dielectric properties of a solid in a coaxial reentrant cavity. The cavity resonance equation yields very accurate results for dielectric constants; however, only approximate expressions were presented for the loss tangent. This report presents more exact expressions for that quantity and summarizes some experimental results.
High resolution land surface response of inland moving Indian monsoon depressions over Bay of Bengal
NASA Astrophysics Data System (ADS)
Rajesh, P. V.; Pattnaik, S.
2016-05-01
During the Indian summer monsoon (ISM) season, nearly half of the monsoonal rainfall is brought inland by low pressure systems called monsoon depressions (MDs). These systems carry large amounts of moisture and frequently produce copious rainfall over land regions; accurate short-range forecasts of these synoptic-scale systems can therefore aid disaster management, flood relief, and food security. The goal of this study is to investigate whether an accurate moisture-rainfall feedback from the land surface can improve the prediction of inland-moving MDs. The High Resolution Land Data Assimilation System (HRLDAS) is used to generate an improved land state, i.e., soil moisture and soil temperature profiles, by means of the NOAH-MP land-surface model. Validation of the model-simulated basic atmospheric parameters at the surface layer and in the troposphere reveals that inclusion of the high resolution land state yields the lowest root mean squared error (RMSE) with a higher correlation coefficient and facilitates accurate depiction of MDs. Rainfall verification shows that the HRLDAS simulations agree more closely, both spatially and quantitatively, with observations, and that the improved surface characteristics result in a realistic reproduction of the storm's spatial structure, movement, and intensity. These results signify the need to investigate land surface-rainfall feedbacks further through modifications in moisture flux convergence within the storm.
Evaluation of seeding depth and gauge-wheel load effects on maize emergence and yield
USDA-ARS?s Scientific Manuscript database
Planting represents perhaps the most important field operation, with errors likely to negatively affect crop yield and thereby farm profitability. Performance of row-crop planters is evaluated by their ability to accurately place seeds into the soil at an adequate and pre-determined depth, the goal ...
Specific energy yield comparison between crystalline silicon and amorphous silicon based PV modules
NASA Astrophysics Data System (ADS)
Ferenczi, Toby; Stern, Omar; Hartung, Marianne; Mueggenburg, Eike; Lynass, Mark; Bernal, Eva; Mayer, Oliver; Zettl, Marcus
2009-08-01
As emerging thin-film PV technologies continue to penetrate the market and the number of utility-scale installations substantially increases, a detailed understanding of the performance of the various PV technologies becomes more important. An accurate database for each technology is essential for precise project planning, energy yield prediction, and project financing. However, recent publications have shown that it is very difficult to get accurate and reliable performance data for these technologies. This paper evaluates previously reported claims that amorphous silicon based PV modules have a higher annual energy yield, relative to their rated performance, than crystalline silicon modules. In order to acquire a detailed understanding of this effect, outdoor module tests were performed at the GE Global Research Center in Munich. In this study we examine closely two of the five reported factors that contribute to the enhanced energy yield of amorphous silicon modules. We find evidence to support each of these factors and evaluate their relative significance. We discuss aspects for improvement in how PV modules are sold and identify areas for further study.
NASA Technical Reports Server (NTRS)
Kranz, David William
2010-01-01
The goal of this research project was to compare and contrast the selected materials used in step measurements during pre-fits of thermal protection system tiles, and to compare and contrast the accuracy of measurements made using these materials. The reasoning for conducting this test was to obtain a clearer understanding of which of these materials may yield the highest rate of measurement accuracy in comparison to the completed tile bond. These results in turn will be presented to United Space Alliance and Boeing North America for their own analysis and determination. Aerospace structures operate under extreme thermal environments. Hot external aerothermal environments in high Mach number flights lead to high structural temperatures. The differences between the heights of adjacent tiles are very critical during these high Mach reentries. The Space Shuttle Thermal Protection System is a very delicate and highly calculated system. The thermal tiles on the ship are measured to within an accuracy of 0.001 inch. The accuracy of these tile measurements is critical to a successful reentry of an orbiter. This is why it is necessary to find the most accurate method for measuring the height of each tile relative to each of the other tiles. The test results indicated that there were indeed differences among the selected materials used in step measurements during pre-fits of Thermal Protection System tiles, and that Bees' Wax yielded a higher rate of accuracy when compared to the baseline test. In addition, testing for the effect of experience level on accuracy yielded no evidence of a difference. Lastly, the use of the Trammel tool rather than the shim pack yielded variable differences across those tests.
A vector scanning processing technique for pulsed laser velocimetry
NASA Technical Reports Server (NTRS)
Wernet, Mark P.; Edwards, Robert V.
1989-01-01
Pulsed-laser-sheet velocimetry yields two-dimensional velocity vectors across an extended planar region of a flow. Current processing techniques offer high-precision (1-percent) velocity estimates, but can require hours of processing time on specialized array processors. Sometimes, however, a less accurate (about 5 percent) data-reduction technique which also gives unambiguous velocity vector information is acceptable. Here, a direct space-domain processing technique is described and shown to be far superior to previous methods in achieving these objectives. It uses a novel data coding and reduction technique and has no 180-deg directional ambiguity. A complex convection vortex flow was recorded and completely processed in under 2 min on an 80386-based PC, producing a two-dimensional velocity-vector map of the flowfield. Pulsed-laser velocimetry data can thus be reduced quickly and reasonably accurately, without specialized array processing hardware.
Haines, Brian M.; Aldrich, C. H.; Campbell, J. M.; ...
2017-04-24
In this study, we present the results of high-resolution simulations of the implosion of high-convergence layered indirect-drive inertial confinement fusion capsules of the type fielded on the National Ignition Facility using the xRAGE radiation-hydrodynamics code. In order to evaluate the suitability of xRAGE to model such experiments, we benchmark simulation results against available experimental data, including shock-timing, shock-velocity, and shell trajectory data, as well as hydrodynamic instability growth rates. We discuss the code improvements that were necessary in order to achieve favorable comparisons with these data. Due to its use of adaptive mesh refinement and Eulerian hydrodynamics, xRAGE is particularly well suited for high-resolution study of multi-scale engineering features such as the capsule support tent and fill tube, which are known to impact the performance of high-convergence capsule implosions. High-resolution two-dimensional (2D) simulations including accurate and well-resolved models for the capsule fill tube, support tent, drive asymmetry, and capsule surface roughness are presented. These asymmetry seeds are isolated in order to study their relative importance, and the resolution of the simulations enables the observation of details that have not been previously reported. We analyze simulation results to determine how the different asymmetries affect hotspot reactivity, confinement, and confinement time, and how these combine to degrade yield. Yield degradation associated with the tent occurs largely through decreased reactivity due to the escape of hot fuel mass from the hotspot. Drive asymmetries and the fill tube, however, degrade yield primarily via burn truncation, as associated instability growth accelerates the disassembly of the hotspot.
Finally, modeling all of these asymmetries together in 2D leads to improved agreement with experiment but falls short of explaining the experimentally observed yield degradation, consistent with previous 2D simulations of such capsules.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andersen, David R.; Bershady, Matthew A., E-mail: david.andersen@nrc-cnrc.gc.ca, E-mail: mab@astro.wisc.edu
2013-05-01
Using the integral field unit DensePak on the WIYN 3.5 m telescope we have obtained Hα velocity fields of 39 nearly face-on disks at echelle resolutions. High-quality, uniform kinematic data and a new modeling technique enabled us to derive accurate and precise kinematic inclinations with mean i_kin = 23° for 90% of these galaxies. Modeling the kinematic data as single, inclined disks in circular rotation improves upon the traditional tilted-ring method. We measure kinematic inclinations with a precision in sin i of 25% at 20° and 6% at 30°. Kinematic inclinations are consistent with photometric and inverse Tully-Fisher inclinations when the sample is culled of galaxies with kinematic asymmetries, for which we give two specific prescriptions. Kinematic inclinations can therefore be used in statistical "face-on" Tully-Fisher studies. A weighted combination of multiple, independent inclination measurements yields the most precise and accurate inclination. Combining inverse Tully-Fisher inclinations with kinematic inclinations yields joint probability inclinations with a precision in sin i of 10% at 15° and 5% at 30°. This level of precision makes accurate mass decompositions of galaxies possible even at low inclination. We find scaling relations between rotation speed and disk scale length identical to results from more inclined samples. We also observe the trend of more steeply rising rotation curves with increased rotation speed and light concentration. This trend appears to be uncorrelated with disk surface brightness.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duru, Kenneth, E-mail: kduru@stanford.edu; Dunham, Eric M.; Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA
Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge–Kutta scheme, thus yielding a globally fourth order accurate method in both space and time.
We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture dynamics on rough faults.
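The summation-by-parts structure invoked above can be illustrated with the classical second-order SBP first-derivative operator (the paper's own operators are sixth order in the interior; this sketch only demonstrates the SBP property itself, D = H⁻¹Q with Q + Qᵀ = diag(−1, 0, …, 0, 1), which mimics integration by parts and underlies the energy estimates).

```python
import numpy as np

n, h = 11, 0.1
# Diagonal SBP norm (quadrature) matrix: trapezoidal-rule weights
H = h * np.eye(n)
H[0, 0] = H[-1, -1] = h / 2

# Skew-symmetric-plus-boundary part Q: central differencing inside,
# one-sided closures at the two boundary rows
Q = np.zeros((n, n))
for i in range(n - 1):
    Q[i, i + 1], Q[i + 1, i] = 0.5, -0.5
Q[0, 0], Q[-1, -1] = -0.5, 0.5

D = np.linalg.inv(H) @ Q        # the SBP first-derivative operator

# SBP property: Q + Q^T = diag(-1, 0, ..., 0, 1), i.e. only boundary terms
# survive, exactly as in continuous integration by parts
B = Q + Q.T
```

Differentiating a linear function with `D` is exact at every point, including the boundary rows, and checking `B` confirms the discrete integration-by-parts identity used to prove stability.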
NASA Astrophysics Data System (ADS)
Seraphin, Pierre; Gonçalvès, Julio; Vallet-Coulomb, Christine; Champollion, Cédric
2018-06-01
Spatially distributed values of the specific yield, a fundamental parameter for transient groundwater mass balance calculations, were obtained by means of three independent methods for the Crau plain, France. In contrast to its traditional use to assess recharge based on a given specific yield, the water-table fluctuation (WTF) method, applied using major recharging events, gave a first set of reference values. Then, large infiltration processes recorded by monitored boreholes and caused by major precipitation events were interpreted in terms of specific yield by means of a one-dimensional vertical numerical model solving Richards' equations within the unsaturated zone. Finally, two gravity field campaigns, at low and high piezometric levels, were carried out to assess the groundwater mass variation and thus alternative specific yield values. The range obtained by the WTF method for this aquifer made of alluvial detrital material was 2.9-26%, in line with the scarce data available so far. The average spatial value of specific yield by the WTF method (9.1%) is consistent with the aquifer scale value from the hydro-gravimetric approach. In this investigation, an estimate of the hitherto unknown spatial distribution of the specific yield over the Crau plain was obtained using the most reliable method (the WTF method). A groundwater mass balance calculation over the domain using this distribution yielded similar results to an independent quantification based on a stable isotope-mixing model. This agreement reinforces the relevance of such estimates, which can be used to build a more accurate transient hydrogeological model.
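Inverting the water-table fluctuation relation R = Sy·Δh for the specific yield, as done above for major recharging events, can be sketched in a few lines; the recharge and rise values below are purely illustrative, not data from the Crau plain.

```python
# WTF method inverted for specific yield: R = Sy * dh  =>  Sy = R / dh,
# with the event recharge R taken from an independent estimate
recharge_mm = 45.0        # assumed recharge reaching the water table (mm)
rise_mm = 500.0           # observed water-table rise for the same event (mm)

specific_yield = recharge_mm / rise_mm   # dimensionless
print(f"Sy = {specific_yield:.1%}")      # -> Sy = 9.0%
```

Repeating this at each monitored borehole is what yields a spatially distributed Sy field rather than a single aquifer-wide value.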
NASA Technical Reports Server (NTRS)
Guruswamy, G. P.; Goorjian, P. M.
1984-01-01
An efficient coordinate transformation technique is presented for constructing grids for unsteady, transonic aerodynamic computations for delta-type wings. The original shearing transformation yielded computations that were numerically unstable and this paper discusses the sources of those instabilities. The new shearing transformation yields computations that are stable, fast, and accurate. Comparisons of those two methods are shown for the flow over the F5 wing that demonstrate the new stability. Also, comparisons are made with experimental data that demonstrate the accuracy of the new method. The computations were made by using a time-accurate, finite-difference, alternating-direction-implicit (ADI) algorithm for the transonic small-disturbance potential equation.
Nonempirical range-separated hybrid functionals for solids and molecules
Skone, Jonathan H.; Govoni, Marco; Galli, Giulia
2016-06-03
Dielectric-dependent hybrid (DDH) functionals were recently shown to yield accurate energy gaps and dielectric constants for a wide variety of solids, at a computational cost considerably less than that of GW calculations. The fraction of exact exchange included in the definition of DDH functionals depends (self-consistently) on the dielectric constant of the material. Here we introduce a range-separated (RS) version of DDH functionals where short- and long-range components are matched using system-dependent, non-empirical parameters. We show that RS DDHs yield accurate electronic properties of inorganic and organic solids, including energy gaps and absolute ionization potentials. Moreover, we show that these functionals may be generalized to finite systems.
NASA Astrophysics Data System (ADS)
Wu, T.; Li, T.; Li, J.; Wang, G.
2017-12-01
Improved drainage network extraction can be achieved by flow enforcement, whereby information from known river maps is imposed on the flow-path modeling process. However, the common elevation-based stream burning method can sometimes cause unintended topological errors and misinterpret the overall drainage pattern. We present an enhanced flow enforcement method to facilitate an accurate and efficient drainage network extraction process. Both the topology of the mapped hydrography and the initial landscape of the DEM are preserved and fully utilized in the proposed method. An improved stream rasterization is achieved here, yielding a continuous, unambiguous, and stream-collision-free raster equivalent of the stream vectors for flow enforcement. By imposing priority-based enforcement with a complementary flow direction enhancement procedure, the drainage patterns of the mapped hydrography are fully represented in the derived results. The proposed method was tested over the Rogue River Basin, using DEMs of various resolutions. As indicated by visual and statistical analyses, the proposed method has three major advantages: (1) it significantly reduces the occurrence of topological errors, yielding very accurate watershed partition and channel delineation; (2) it ensures scale-consistent performance for DEMs of various resolutions; and (3) the entire extraction process is well designed to achieve great computational efficiency.
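For context, the common elevation-based stream burning baseline that the enhanced method improves upon can be sketched as follows; the grid values and burn depth are invented. DEM cells under the rasterized stream network are simply lowered so that derived flow paths follow the mapped hydrography, which is exactly the step that can introduce the topological errors discussed above.

```python
import numpy as np

# Toy DEM (elevations) and a rasterized mapped stream running down the middle
dem = np.array([[5.0, 5.0, 5.0],
                [4.0, 4.5, 4.0],
                [3.0, 3.5, 3.0]])
stream_mask = np.array([[0, 1, 0],
                        [0, 1, 0],
                        [0, 1, 0]], dtype=bool)

# Elevation-based stream burning: carve the DEM along the mapped stream
burn_depth = 2.0
burned = dem - burn_depth * stream_mask
```

After burning, every stream cell sits below its off-stream neighbors, so a flow-direction algorithm run on `burned` is forced to route drainage along the mapped channel.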
The Occurrence Rate of Hot Jupiters
NASA Astrophysics Data System (ADS)
Rampalli, Rayna; Catanzarite, Joseph; Batalha, Natalie M.
2017-01-01
As the first kind of exoplanet to be discovered, hot Jupiters have always been objects of interest. Despite being prevalent in radial velocity and ground-based surveys, they were found to be much rarer based on Kepler observations. These data show a pile-up at radii of 9-22 R_Earth and orbital periods of 1-10 days. Computing accurate occurrence rates can lend insight into planet formation and migration theories. To get a more accurate look, the idea of reliability was introduced. Each hot Jupiter candidate was assigned a reliability based on its location in the galactic plane and its likelihood of being a false positive. Numbers were updated if ground-based follow-up indicated that a candidate was indeed a false positive. Folding these reliabilities into the occurrence rate calculation yielded about a 12% decrease in occurrence rate for each period bin examined and a 25% decrease across all the bins. To get a better idea of the cause behind the pile-up, occurrence rates were calculated as a function of host-star metallicity. As expected from previous work, higher metallicity stars yield higher occurrence rates. Future work includes examining period distributions in both the high-metallicity and low-metallicity samples for a better understanding and confirmation of the pile-up effect.
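Reliability weighting in an inverse-detection-efficiency occurrence calculation can be sketched as below. The star count, completeness values, and reliabilities are invented for illustration and do not reproduce the study's 12%/25% figures; the point is only that down-weighting likely false positives lowers the rate.

```python
# Inverse-detection-efficiency occurrence rate with per-candidate reliability
n_stars = 100_000                     # assumed number of searched stars
candidates = [
    # (detection completeness for this candidate,
    #  reliability = probability the candidate is not a false positive)
    (0.9, 0.95),
    (0.8, 0.60),
    (0.7, 0.85),
]

# Unweighted rate treats every candidate as a real planet
rate_raw = sum(1.0 / c for c, _ in candidates) / n_stars
# Reliability weighting down-weights likely false positives
rate_rel = sum(r / c for c, r in candidates) / n_stars
decrease = 1.0 - rate_rel / rate_raw  # fractional drop from reliability weighting
```

With these invented numbers the weighted rate comes out roughly 20% below the unweighted one, mirroring the direction (though not the magnitude) of the effect reported above.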
NASA Astrophysics Data System (ADS)
Chakraborty, Souvik; Chowdhury, Rajib
2017-12-01
Hybrid polynomial correlated function expansion (H-PCFE) is a novel metamodel formulated by coupling polynomial correlated function expansion (PCFE) and Kriging. Unlike commonly available metamodels, H-PCFE performs a bi-level approximation and hence yields more accurate results. However, to date it has only been applicable to medium-scale problems. In order to address this apparent void, this paper presents an improved H-PCFE, referred to as locally refined hp-adaptive H-PCFE. The proposed framework computes the optimal polynomial order and the important component functions of PCFE, which is an integral part of H-PCFE, by using global variance-based sensitivity analysis. The optimal number of training points is selected by using distribution-adaptive sequential experimental design. Additionally, the formulated model is locally refined by utilizing the prediction error, which is inherently obtained in H-PCFE. The applicability of the proposed approach is illustrated with two academic and two industrial problems. To demonstrate its superior performance, the results obtained are compared with those of hp-adaptive PCFE. It is observed that the proposed approach yields highly accurate results; furthermore, compared to hp-adaptive PCFE, significantly fewer actual function evaluations are required to obtain results of similar accuracy.
MRI volumetry of prefrontal cortex
NASA Astrophysics Data System (ADS)
Sheline, Yvette I.; Black, Kevin J.; Lin, Daniel Y.; Pimmel, Joseph; Wang, Po; Haller, John W.; Csernansky, John G.; Gado, Mokhtar; Walkup, Ronald K.; Brunsden, Barry S.; Vannier, Michael W.
1995-05-01
Prefrontal cortex volumetry by brain magnetic resonance (MR) is required to estimate changes postulated to occur in certain psychiatric and neurologic disorders. A semiautomated method with quantitative characterization of its performance is sought to reliably distinguish small prefrontal cortex volume changes within individuals and between groups. Stereological methods were tested by a blinded comparison of measurements applied to 3D MR scans obtained using an MPRAGE protocol. Fixed-grid stereologic methods were used to estimate prefrontal cortex volumes on a graphic workstation, after the images were scaled from 16 to 8 bits using a histogram method. In addition, images were resliced into coronal sections perpendicular to the bicommissural plane. Prefrontal cortex volumes were defined as all sections of the frontal lobe anterior to the anterior commissure. Ventricular volumes were excluded. Stereological measurement yielded high repeatability and precision, and was time efficient for the raters. The coefficient of error was
1980-09-01
group. Perhaps people in a more fully closed group would be more accurate. 7. The data we collected were essentially precognitive. Perhaps postcognitive... sets yield the following results: 1. Postcognitive data are (mainly) more accurate than precognitive, but not significantly so. 2. With the
Spectrally based mapping of riverbed composition
Legleiter, Carl; Stegman, Tobin K.; Overstreet, Brandon T.
2016-01-01
Remote sensing methods provide an efficient means of characterizing fluvial systems. This study evaluated the potential to map riverbed composition based on in situ and/or remote measurements of reflectance. Field spectra and substrate photos from the Snake River, Wyoming, USA, were used to identify different sediment facies and degrees of algal development and to quantify their optical characteristics. We hypothesized that accounting for the effects of depth and water column attenuation to isolate the reflectance of the streambed would enhance distinctions among bottom types and facilitate substrate classification. A bottom reflectance retrieval algorithm adapted from coastal research yielded realistic spectra for the 450-700 nm range, but bottom reflectance-based substrate classifications, generated using a random forest technique, were no more accurate than classifications derived from above-water field spectra. Additional hypothesis testing indicated that a combination of reflectance magnitude (brightness) and indices of spectral shape provided the most accurate riverbed classifications. Convolving field spectra to the response functions of a multispectral satellite and a hyperspectral imaging system did not reduce classification accuracies, implying that high spectral resolution was not essential. Supervised classifications of algal density produced from hyperspectral data and an inferred bottom reflectance image were not highly accurate, but unsupervised classification of the bottom reflectance image revealed distinct spectrally based clusters, suggesting that such an image could provide additional river information. We attribute the failure of bottom reflectance retrieval to yield more reliable substrate maps to a latent correlation between depth and bottom type. Accounting for the effects of depth might have eliminated a key distinction among substrates and thus reduced discriminatory power.
Although further, more systematic study across a broader range of fluvial environments is needed to substantiate our initial results, this case study suggests that bed composition in shallow, clear-flowing rivers potentially could be mapped remotely.
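As a concrete illustration of the depth-correction step described in this record, the sketch below inverts a simplified two-flow shallow-water reflectance model to isolate bottom reflectance. The model form and all parameter values (attenuation coefficient, depth, deep-water reflectance) are hypothetical stand-ins, not the study's calibrated algorithm.

```python
import numpy as np

def retrieve_bottom_reflectance(r_obs, depth, k_d, r_deep):
    """Invert a simplified two-flow radiative transfer model,
        R_obs = R_deep + (R_b - R_deep) * exp(-2 * K_d * d),
    to isolate the bottom reflectance R_b (illustrative model only)."""
    atten = np.exp(-2.0 * k_d * depth)
    return r_deep + (r_obs - r_deep) / atten

# Synthetic check: forward-model an observation, then invert it.
r_b_true = 0.30                      # true bottom reflectance
k_d, depth, r_deep = 0.4, 1.5, 0.02  # hypothetical optical parameters
r_obs = r_deep + (r_b_true - r_deep) * np.exp(-2 * k_d * depth)
r_b = retrieve_bottom_reflectance(r_obs, depth, k_d, r_deep)
```

With noise-free synthetic data the inversion recovers the true bottom reflectance exactly; the study's difficulty arises because real depth and bottom type are correlated.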
A Remote Sensing-Derived Corn Yield Assessment Model
NASA Astrophysics Data System (ADS)
Shrestha, Ranjay Man
Agricultural studies and food security have become critical research topics due to continuous growth in the human population and simultaneous shrinkage of agricultural land. In spite of modern technological advancements to improve agricultural productivity, more studies on crop yield assessment and food productivity are still necessary to fulfill constantly increasing food demands. Besides human activities, natural disasters such as flood and drought, along with rapid climate change, also inflict an adverse effect on food productivity. Understanding the impact of these disasters on crop yield and making early impact estimations could help planning for any national or international food crisis. Similarly, the United States Department of Agriculture (USDA) Risk Management Agency (RMA) uses appropriately estimated crop yield and damage assessment information to sustain farmers' practice through timely and proper compensation. Through the County Agricultural Production Survey (CAPS), the USDA National Agricultural Statistical Service (NASS) uses traditional methods of field interviews and farmer-reported survey data to perform annual crop condition monitoring and production estimation at the regional and state levels. As these manual approaches to yield estimation are highly inefficient and produce very limited samples to represent the entire area, NASS requires supplemental spatial data that provide continuous and timely information on crop production and annual yield. Compared to traditional methods, remote sensing data and products offer wider spatial extent, more accurate location information, higher temporal resolution and data distribution, and lower data cost--thus providing a complementary option for estimating crop yield information.
Remote sensing derived vegetation indices such as the Normalized Difference Vegetation Index (NDVI) provide measurable statistics of potential crop growth based on spectral reflectance and can be further associated with actual yield. Utilizing satellite remote sensing products, such as daily NDVI derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) at 250 m pixel size, crop yield estimation can be performed at a very fine spatial resolution. This study therefore examined the potential of these daily NDVI products for agricultural studies and crop yield assessment. A regression-based approach was proposed to estimate annual corn yield from changes in the MODIS daily NDVI time series. The relationship between daily NDVI and corn yield was well defined and established, and as changes in corn phenology and yield were directly reflected by changes in NDVI within the growing season, these two entities were combined to develop a relational model. The model was trained using 15 years (2000-2014) of historical NDVI and county-level corn yield data for four major corn-producing states: Kansas, Nebraska, Iowa, and Indiana, representing four climatic regions (South, West North Central, East North Central, and Central, respectively) within the U.S. Corn Belt. The model's goodness of fit was high, with a coefficient of determination R2 > 0.81. Similarly, using 2015 yield data for validation, an average accuracy of 92% demonstrated the model's performance in estimating corn yield at the county level. Besides county-level corn yield estimation, the derived model was also accurate enough to estimate yield at finer spatial resolution (field level). The model's assessment accuracy was evaluated using randomly selected field-level corn yields within the study area for 2014, 2015, and 2016.
A total of over 120 plot-level corn yield records were used for validation, and the overall average accuracy was 87%, which statistically justified the model's capability to estimate plot-level corn yield. Additionally, the proposed model was applied to impact estimation by examining changes in corn yield due to flood events during the growing season. Using a 2011 Missouri River flood event as a case study, a field-level map of flood impact on corn yield throughout the flooded regions was produced, and an overall agreement of over 82.2% was achieved when compared with the reference impact map. A future direction of this dissertation research is to examine other major crops outside the Corn Belt region of the U.S.
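The regression-based yield model in this record can be sketched in miniature: below, a synthetic season-integrated NDVI predictor is regressed against county yields with ordinary least squares. The predictor, coefficients, and data are all invented for illustration; the dissertation's actual model uses daily MODIS NDVI time series.

```python
import numpy as np

# Hypothetical stand-in: regress county-level corn yield (Mg/ha) on a
# season-integrated NDVI predictor with ordinary least squares.
rng = np.random.default_rng(0)
ndvi_integral = rng.uniform(60, 120, size=50)   # synthetic seasonal NDVI sums
true_yield = 0.08 * ndvi_integral + 2.0         # assumed linear relationship
obs_yield = true_yield + rng.normal(0, 0.2, size=50)  # add observation noise

# OLS fit: yield = b0 + b1 * NDVI_integral
X = np.column_stack([np.ones_like(ndvi_integral), ndvi_integral])
b0, b1 = np.linalg.lstsq(X, obs_yield, rcond=None)[0]
pred = b0 + b1 * ndvi_integral
r2 = 1 - np.sum((obs_yield - pred) ** 2) / np.sum((obs_yield - obs_yield.mean()) ** 2)
```

On this synthetic data the fit recovers the assumed slope and a high coefficient of determination, mirroring the kind of R2 the record reports for the real model.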
Ray Effect Mitigation Through Reference Frame Rotation
Tencer, John
2016-05-01
The discrete ordinates method is a popular and versatile technique for solving the radiative transport equation; a major drawback is the presence of ray effects. Mitigating ray effects can yield significantly more accurate results and enhanced numerical stability for combined-mode codes. Moreover, when ray effects are present, the solution is highly dependent upon the relative orientation of the geometry and the global reference frame, which is an undesirable property. A novel ray effect mitigation technique is proposed: averaging the computed solution over various reference frame orientations.
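The averaging idea can be illustrated with a toy angular quadrature rather than a full transport solver: a coarse set of discrete directions gives an orientation-dependent estimate of an angular integral, and averaging that estimate over rotated reference frames recovers the accuracy of a much finer quadrature. The source function and direction counts below are illustrative only, not taken from the report.

```python
import numpy as np

def angular_mean(g, n_dirs, offset=0.0):
    """Estimate the mean of g over all directions using n_dirs equally
    spaced discrete ordinates, rotated by `offset` (radians)."""
    theta = offset + 2 * np.pi * np.arange(n_dirs) / n_dirs
    return g(theta).mean()

g = lambda th: np.exp(np.cos(th))     # a smooth angular source term
true_val = 1.2660658777520084         # exact angular mean, I0(1)

single = angular_mean(g, 4)           # one coarse 4-direction orientation
offsets = np.linspace(0, 2 * np.pi / 4, 16, endpoint=False)
averaged = np.mean([angular_mean(g, 4, o) for o in offsets])
```

Averaging the 4-direction estimate over 16 rotated frames is equivalent to a 64-direction quadrature here, so the orientation-dependent error essentially vanishes.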
A High-Performance Parallel Implementation of the Certified Reduced Basis Method
2010-12-15
point of view of model reduction due to the "curse of dimensionality". We consider transient thermal conduction in a three-dimensional "Swiss cheese"... "Swiss cheese" problem (see Figure 7a) there are 54 unique ordered pairs in I. A histogram of 〈δµ〉 values computed for the ntrain = 10^6 case is given in... our primal-dual RB method yields a very fast and accurate output approximation for the "Swiss cheese" problem. Our goal in this final subsection is
Spatial Statistical Data Fusion for Remote Sensing Applications
NASA Technical Reports Server (NTRS)
Nguyen, Hai
2010-01-01
Data fusion is the process of combining information from heterogeneous sources into a single composite picture of the relevant process, such that the composite picture is generally more accurate and complete than that derived from any single source alone. Data collection is often incomplete, sparse, and yields incompatible information. Fusion techniques can make optimal use of such data. When investment in data collection is high, fusion gives the best return. Our study uses data from two satellites: (1) Multiangle Imaging SpectroRadiometer (MISR), (2) Moderate Resolution Imaging Spectroradiometer (MODIS).
NASA Astrophysics Data System (ADS)
Reichert, Andreas; Rettinger, Markus; Sussmann, Ralf
2016-09-01
Quantitative knowledge of water vapor absorption is crucial for accurate climate simulations. An open science question in this context concerns the strength of the water vapor continuum in the near infrared (NIR) at atmospheric temperatures, which is still to be quantified by measurements. This issue can be addressed with radiative closure experiments using solar absorption spectra. However, the spectra used for water vapor continuum quantification have to be radiometrically calibrated. We present for the first time a method that yields sufficient calibration accuracy for NIR water vapor continuum quantification in an atmospheric closure experiment. Our method combines the Langley method with spectral radiance measurements of a high-temperature blackbody calibration source (< 2000 K). The calibration scheme is demonstrated in the spectral range 2500 to 7800 cm-1, but minor modifications to the method enable calibration also throughout the remainder of the NIR spectral range. The resulting uncertainty (2σ) excluding the contribution due to inaccuracies in the extra-atmospheric solar spectrum (ESS) is below 1 % in window regions and up to 1.7 % within absorption bands. The overall radiometric accuracy of the calibration depends on the ESS uncertainty, on which at present no firm consensus has been reached in the NIR. However, as is shown in the companion publication Reichert and Sussmann (2016), ESS uncertainty is only of minor importance for the specific aim of this study, i.e., the quantification of the water vapor continuum in a closure experiment. The calibration uncertainty estimate is substantiated by the investigation of calibration self-consistency, which yields compatible results within the estimated errors for 91.1 % of the 2500 to 7800 cm-1 range. Additionally, a comparison of a set of calibrated spectra to radiative transfer model calculations yields consistent results within the estimated errors for 97.7 % of the spectral range.
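The Langley component of the calibration scheme described above can be sketched with synthetic, noise-free data (the optical depth and signal values are hypothetical): regressing the log of the measured solar signal against airmass extrapolates to the extra-atmospheric value V0, which anchors the radiometric scale.

```python
import numpy as np

# Hypothetical Langley plot: Beer-Lambert attenuation V = V0 * exp(-tau * m)
# observed over a range of airmasses m, here without noise.
tau_true, v0_true = 0.12, 1000.0
airmass = np.linspace(1.0, 5.0, 20)
v = v0_true * np.exp(-tau_true * airmass)

# Linear fit in log space: ln V = ln V0 - tau * m
slope, intercept = np.polyfit(airmass, np.log(v), 1)
v0_est, tau_est = np.exp(intercept), -slope
```

With real spectra the extrapolation must contend with atmospheric variability, which is why the paper combines the Langley method with blackbody radiance measurements.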
Sun, Ye; Tao, Jing; Zhang, Geoff G Z; Yu, Lian
2010-09-01
A previous method for measuring solubilities of crystalline drugs in polymers has been improved to enable longer equilibration and used to survey the solubilities of indomethacin (IMC) and nifedipine (NIF) in two homopolymers [polyvinyl pyrrolidone (PVP) and polyvinyl acetate (PVAc)] and their copolymer (PVP/VA). These data are important for understanding the stability of amorphous drug-polymer dispersions, a strategy actively explored for delivering poorly soluble drugs. Measuring solubilities in polymers is difficult because their high viscosities impede the attainment of solubility equilibrium. In this method, a drug-polymer mixture prepared by cryo-milling is annealed at different temperatures and analyzed by differential scanning calorimetry to determine whether undissolved crystals remain, and thus the upper and lower bounds of the equilibrium solution temperature. The new annealing method yielded results consistent with those obtained with the previous scanning method at relatively high temperatures, but slightly revised the previous results at lower temperatures. It also lowered the temperature of measurement closer to the glass transition temperature. For D-mannitol and IMC dissolving in PVP, the polymer's molecular weight has little effect on the weight-based solubility. For IMC and NIF, the dissolving powers of the polymers follow the order PVP > PVP/VA > PVAc. In each polymer studied, NIF is less soluble than IMC. The activities of IMC and NIF dissolved in various polymers are reasonably well fitted by the Flory-Huggins model, yielding the relevant drug-polymer interaction parameters. The new annealing method yields more accurate data than the previous scanning method when solubility equilibrium is slow to achieve; in practice, the two methods can be combined for efficiency. The measured solubilities are not readily anticipated, which underscores the importance of accurate experimental data for developing predictive models.
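The Flory-Huggins fit mentioned in this record reduces to a one-parameter least-squares problem. The activity data below are synthetic, and the molar-volume ratio m and interaction parameter chi are illustrative values, not measurements from the study.

```python
import numpy as np

# Flory-Huggins activity of a drug at volume fraction phi in a polymer:
#   ln a = ln(phi) + (1 - 1/m) * (1 - phi) + chi * (1 - phi)**2
# where m is the polymer/drug molar volume ratio (hypothetical here).
m, chi_true = 100.0, 0.5
phi = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
ln_a = np.log(phi) + (1 - 1 / m) * (1 - phi) + chi_true * (1 - phi) ** 2

# Solve for chi by least squares on the residual term chi * (1 - phi)**2.
resid = ln_a - np.log(phi) - (1 - 1 / m) * (1 - phi)
chi_fit = np.sum(resid * (1 - phi) ** 2) / np.sum((1 - phi) ** 4)
```

On noise-free synthetic activities the fit recovers the assumed interaction parameter exactly; with measured activities, the residual scatter quantifies how well the model describes the drug-polymer pair.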
Adnan, Adnan A.; Jibrin, Jibrin M.; Kamara, Alpha Y.; Abdulrahman, Bassam L.; Shaibu, Abdulwahab S.; Garba, Ismail I.
2017-01-01
Field trials were carried out in the Sudan Savanna of Nigeria to assess the usefulness of the CERES-Maize crop model as a decision support tool for optimizing maize production through manipulation of planting dates. The calibration experiments comprised 20 maize varieties planted during the dry and rainy seasons of 2014 and 2015 at Bayero University Kano and Audu Bako College of Agriculture Dambatta. The trials for model evaluation were conducted in 16 different farmer fields across the Sudan (Bunkure and Garun-Mallam) and Northern Guinea (Tudun-Wada and Lere) Savannas using two of the calibrated varieties under four different sowing dates. The model accurately predicted grain yield, harvest index, and biomass of both varieties, with low RMSE values (below 5% of the mean), high d-index (above 0.8), and high r-square (above 0.9) for the calibration trials. The time series data (tops weight, stem and leaf dry weights) were also predicted with high accuracy (RMSEn above 70%, d-index above 0.88). Similar results were observed for the evaluation trials, where all variables were simulated with high accuracy; estimation efficiency (EF) values above 0.8 were observed for all evaluation parameters. Seasonal and sensitivity analyses on Typic Plinthiustalfs and Plinthic Kanhaplustults in the Sudan and Northern Guinea Savannas were conducted. Results showed that planting extra-early maize varieties in late July and early maize in mid-June produces the highest grain yields in the Sudan Savanna. In the Northern Guinea Savanna, planting extra-early maize in mid-July and early maize in late July produced the highest grain yields. Delaying planting until mid-August lowered yields in both agro-ecologies: in the Sudan Savanna, grain yield fell by 39.2% for extra-early maize and 74.4% for early maize.
In the Northern Guinea Savanna, however, delaying planting to mid-August resulted in yield reductions of 66.9% and 94.3% for extra-early and early maize, respectively. PMID:28702039
Benchmarking Attosecond Physics with Atomic Hydrogen
2015-05-25
theoretical simulations are available in this regime. We provided accurate reference data on the photoionization yield and the CEP-dependent... this difficulty. This experiment claimed to show that, contrary to current understanding, the photoionization of an atomic electron is not an... photoion yield and transferable intensity calibration. The dependence of photoionization probability on laser intensity is one of the most
How Do Various Maize Crop Models Vary in Their Responses to Climate Change Factors?
NASA Technical Reports Server (NTRS)
Bassu, Simona; Brisson, Nadine; Grassini, Patricio; Durand, Jean-Louis; Boote, Kenneth; Lizaso, Jon; Jones, James W.; Rosenzweig, Cynthia; Ruane, Alex C.; Adam, Myriam;
2014-01-01
Potential consequences of climate change on crop production can be studied using mechanistic crop simulation models. While a broad variety of maize simulation models exist, it is not known whether different models diverge on grain yield responses to changes in climatic factors, or whether they agree in their general trends related to phenology, growth, and yield. With the goal of analyzing the sensitivity of simulated yields to changes in temperature and atmospheric carbon dioxide concentrations [CO2], we present the largest maize crop model intercomparison to date, including 23 different models. These models were evaluated for four locations representing a wide range of maize production conditions in the world: Lusignan (France), Ames (USA), Rio Verde (Brazil) and Morogoro (Tanzania). While individual models differed considerably in absolute yield simulation at the four sites, an ensemble of a minimum number of models was able to simulate absolute yields accurately at the four sites even with limited calibration data, suggesting that using an ensemble of models has merit. Temperature increase had a strong negative influence on modeled yield, with a response of roughly -0.5 Mg ha-1 per °C. Doubling [CO2] from 360 to 720 μmol mol-1 increased grain yield by 7.5% on average across models and sites. Temperature would therefore be the main factor altering maize yields at the end of this century. Furthermore, there was large uncertainty in the yield response to [CO2] among models. Model responses to temperature and [CO2] did not differ whether models were simulated with low or with high levels of calibration information.
How do various maize crop models vary in their responses to climate change factors?
Bassu, Simona; Brisson, Nadine; Durand, Jean-Louis; Boote, Kenneth; Lizaso, Jon; Jones, James W; Rosenzweig, Cynthia; Ruane, Alex C; Adam, Myriam; Baron, Christian; Basso, Bruno; Biernath, Christian; Boogaard, Hendrik; Conijn, Sjaak; Corbeels, Marc; Deryng, Delphine; De Sanctis, Giacomo; Gayler, Sebastian; Grassini, Patricio; Hatfield, Jerry; Hoek, Steven; Izaurralde, Cesar; Jongschaap, Raymond; Kemanian, Armen R; Kersebaum, K Christian; Kim, Soo-Hyung; Kumar, Naresh S; Makowski, David; Müller, Christoph; Nendel, Claas; Priesack, Eckart; Pravia, Maria Virginia; Sau, Federico; Shcherbak, Iurii; Tao, Fulu; Teixeira, Edmar; Timlin, Dennis; Waha, Katharina
2014-07-01
Potential consequences of climate change on crop production can be studied using mechanistic crop simulation models. While a broad variety of maize simulation models exist, it is not known whether different models diverge on grain yield responses to changes in climatic factors, or whether they agree in their general trends related to phenology, growth, and yield. With the goal of analyzing the sensitivity of simulated yields to changes in temperature and atmospheric carbon dioxide concentrations [CO2], we present the largest maize crop model intercomparison to date, including 23 different models. These models were evaluated for four locations representing a wide range of maize production conditions in the world: Lusignan (France), Ames (USA), Rio Verde (Brazil) and Morogoro (Tanzania). While individual models differed considerably in absolute yield simulation at the four sites, an ensemble of a minimum number of models was able to simulate absolute yields accurately at the four sites even with limited calibration data, suggesting that using an ensemble of models has merit. Temperature increase had a strong negative influence on modeled yield, with a response of roughly -0.5 Mg ha(-1) per °C. Doubling [CO2] from 360 to 720 μmol mol(-1) increased grain yield by 7.5% on average across models and sites. Temperature would therefore be the main factor altering maize yields at the end of this century. Furthermore, there was large uncertainty in the yield response to [CO2] among models. Model responses to temperature and [CO2] did not differ whether models were simulated with low or with high levels of calibration information. © 2014 John Wiley & Sons Ltd.
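The ensemble finding in this intercomparison can be illustrated with a deterministic toy example: individual "models" with spread-out biases each miss an observed yield by more than their multi-model mean does. All numbers below are synthetic, not taken from the study.

```python
import numpy as np

# Toy ensemble: 23 "models" whose simulated yields carry evenly spread
# biases around an observed yield of 10 Mg/ha (all values synthetic).
obs = 10.0
models = obs + np.linspace(-2.0, 2.5, 23)   # individual model biases, Mg/ha

ensemble_err = abs(models.mean() - obs)      # error of the multi-model mean
mean_single_err = np.abs(models - obs).mean()  # typical single-model error
```

Because the biases largely cancel, the ensemble-mean error (0.25 Mg/ha here) is far smaller than the average single-model error, which is the qualitative behavior the intercomparison reports.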
NASA Technical Reports Server (NTRS)
Jensen, J. R.; Tinney, L. R.; Estes, J. E.
1975-01-01
Cropland inventories utilizing high altitude and Landsat imagery were conducted in Kern County, California. In terms of overall mean relative and absolute inventory accuracy, a Landsat multidate analysis yielded the best results, i.e., 98% accuracy. The 1:125,000 CIR high-altitude inventory is a serious alternative that can be very accurate (97% or more) if imagery is available for a specific study area. The operational remote sensing cropland inventories documented in this study are considered cost-effective: compared to conventional survey costs of $62-66 per 10,000 acres, the Landsat and high-altitude inventories required only 3-5% of this amount, i.e., $1.97-2.98.
Burgess, Helen J.; Wyatt, James K.; Park, Margaret; Fogg, Louis F.
2015-01-01
Study Objectives: There is a need for the accurate assessment of circadian phase outside of the clinic/laboratory, particularly with the gold standard dim light melatonin onset (DLMO). We tested a novel kit designed to assist in saliva sampling at home for later determination of the DLMO. The home kit includes objective measures of compliance to the requirements for dim light and half-hourly saliva sampling. Design: Participants were randomized to one of two 10-day protocols. Each protocol consisted of two back-to-back home and laboratory phase assessments in counterbalanced order, separated by a 5-day break. Setting: Laboratory or participants' homes. Participants: Thirty-five healthy adults, age 21–62 y. Interventions: N/A. Measurements and Results: Most participants received at least one 30-sec epoch of light > 50 lux during the home phase assessments (average light intensity 4.5 lux), but on average for < 9 min of the required 8.5 h. Most participants collected every saliva sample within 5 min of the scheduled time. Ninety-two percent of home DLMOs were not affected by light > 50 lux or sampling errors. There was no significant difference between the home and laboratory DLMOs (P > 0.05); on average the home DLMOs occurred 9.6 min before the laboratory DLMOs. The home DLMOs were highly correlated with the laboratory DLMOs (r = 0.91, P < 0.001). Conclusions: Participants were reasonably compliant to the home phase assessment procedures. The good agreement between the home and laboratory dim light melatonin onsets (DLMOs) demonstrates that including objective measures of light exposure and sample timing during home saliva sampling can lead to accurate home DLMOs. Clinical Trial Registration: Circadian Phase Assessments at Home, http://clinicaltrials.gov/show/NCT01487252, NCT01487252. Citation: Burgess HJ, Wyatt JK, Park M, Fogg LF. Home circadian phase assessments with measures of compliance yield accurate dim light melatonin onsets. SLEEP 2015;38(6):889–897. 
PMID:25409110
NASA Astrophysics Data System (ADS)
Bouman, C.; Lloyd, N. S.; Schwieters, J.
2011-12-01
The accurate and precise determination of uranium isotopes is challenging because of the large dynamic range posed by the U isotope abundances and the limited available sample material. Various mass spectrometric techniques are used for the measurement of U isotopes, of which TIMS is the most accepted and accurate. Multicollector inductively coupled plasma mass spectrometry (MC-ICPMS) can offer higher productivity than TIMS but is traditionally limited by low efficiency of sample utilisation. This contribution will discuss progress in MC-ICPMS for detecting 234U, 235U, 236U and 238U in various uranium reference materials from IRMM and NBL. The Thermo Scientific NEPTUNE Plus with Jet Interface offers a modified dry-plasma ICP interface using a large interface pump combined with a special set of sample and skimmer cones, giving very high sensitivity for all elements across the mass range. For uranium, an ion yield of > 3% was reported previously [1]. The NEPTUNE Plus also offers multi-ion counting using discrete dynode electron multipliers, as well as two high-abundance-sensitivity filters to discriminate against peak tailing effects on 234U and 236U originating from the major uranium beams. These improvements in sensitivity and dynamic range allow accurate measurements of 234U, 235U and 236U abundances on very small samples and at low concentration. In our approach, the minor isotopes 234U and 236U were detected on ion counters with high-abundance-sensitivity filters, whereas 235U and 238U were detected on Faraday cups, using a high-gain current amplifier (10^12 Ω) for 235U. Precisions and accuracies for 234U and 236U were down to ~1%. For 235U, sub-permil levels were reached.
Tenon, Mathieu; Feuillère, Nicolas; Roller, Marc; Birtić, Simona
2017-04-15
Yucca GRAS-labelled saponins have been and are increasingly used in the food/feed, pharmaceutical and cosmetic industries. Techniques presently used for Yucca steroidal saponin quantification are either inaccurate and misleading, or accurate but time consuming and cost prohibitive. The method reported here addresses all of the above challenges. The HPLC/ELSD technique is an accurate and reliable method that yields results of appropriate repeatability and reproducibility, and it neither over- nor under-estimates levels of steroidal saponins. The HPLC/ELSD method does not require a pure standard of each saponin to quantify the group of steroidal saponins. The method is a time- and cost-effective technique suitable for routine industrial analyses. HPLC/ELSD yields a saponin fingerprint specific to the plant species; as the method is capable of distinguishing saponin profiles from taxonomically distant species, it can unravel plant adulteration issues. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
Absolute dimensions and masses of eclipsing binaries. V. IQ Persei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lacy, C.H.; Frueh, M.L.
1985-08-01
New photometric and spectroscopic observations of the 1.7 day eclipsing binary IQ Persei (B8 + A6) have been analyzed to yield very accurate fundamental properties of the system. Reticon spectroscopic observations obtained at McDonald Observatory were used to determine accurate radial velocities of both stars in this slightly eccentric, large light-ratio binary. A new set of VR light curves obtained at McDonald Observatory were analyzed by synthesis techniques, and previously published UBV light curves were reanalyzed to yield accurate photometric orbits. Orbital parameters derived from both sets of photometric observations are in excellent agreement. The absolute dimensions, masses, luminosities, and apsidal motion period (140 yr) derived from these observations agree well with the predictions of theoretical stellar evolution models. The A6 secondary is still very close to the zero-age main sequence. The B8 primary is about one-third of the way through its main-sequence evolution. 27 references.
The Importance of Juvenile Root Traits for Crop Yields
NASA Astrophysics Data System (ADS)
White, Philip; Adu, Michael; Broadley, Martin; Brown, Lawrie; Dupuy, Lionel; George, Timothy; Graham, Neil; Hammond, John; Hayden, Rory; Neugebauer, Konrad; Nightingale, Mark; Ramsay, Gavin; Thomas, Catherine; Thompson, Jacqueline; Wishart, Jane; Wright, Gladys
2014-05-01
Genetic variation in root system architecture (RSA) is an under-exploited breeding resource. This is partly a consequence of difficulties in the rapid and accurate assessment of subterranean root systems. However, although characterising the root systems of large plants in the field is both time-consuming and labour-intensive, high-throughput (HTP) screens of root systems of juvenile plants can be performed in the field, glasshouse or laboratory. It is hypothesised that improving the root systems of juvenile plants can accelerate access to water and essential mineral elements, leading to rapid crop establishment and, consequently, greater yields. This presentation will illustrate how aspects of the juvenile root systems of potato (Solanum tuberosum L.) and oilseed rape (OSR; Brassica napus L.) correlate with crop yields and examine the reasons for such correlations. It will first describe the significant positive relationships between early root system development, phosphorus acquisition, canopy establishment and eventual yield among potato genotypes. It will report the development of a glasshouse assay for root system architecture (RSA) of juvenile potato plants, the correlations between root system architectures measured in the glasshouse and field, and the relationships between aspects of the juvenile root system and crop yields under drought conditions. It will then describe the development of HTP systems for assaying RSA of OSR seedlings, the identification of genetic loci affecting RSA in OSR, the development of mathematical models describing resource acquisition by OSR, and the correlations between root traits recorded in the HTP systems and yields of OSR in the field.
The origin and evolution of safe-yield policies in the Kansas groundwater management districts
Sophocleous, M.
2000-01-01
The management of groundwater resources in Kansas continues to evolve. Declines in the High Plains aquifer led to the establishment of groundwater management districts in the mid-1970s, and reduced streamflows prompted the enactment of minimum desirable streamflow standards in the mid-1980s. Nonetheless, groundwater levels and streamflows continued to decline, although at reduced rates compared to pre-mid-1980s rates. As a result, "safe-yield" policies were revised to take into account natural groundwater discharge in the form of stream baseflow. These policies, although a step in the right direction, are deficient in several ways. In addition to the need for more accurate recharge data, pumping-induced streamflow depletion, natural stream losses, and groundwater evapotranspiration need to be accounted for in the revised safe-yield policies. Furthermore, the choice of the 90% flow-duration statistic as a measure of baseflow needs to be reevaluated, as it significantly underestimates mean baseflow estimated from baseflow separation computer programs; moreover, baseflow estimation needs to be refined and validated. © 2000 International Association for Mathematical Geology.
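The point about the 90% flow-duration statistic can be demonstrated numerically: for a right-skewed flow series (a synthetic lognormal one here, not Kansas data), the flow exceeded 90% of the time sits far below the mean of the same series, which is why Q90 tends to understate mean baseflow.

```python
import numpy as np

# Synthetic daily baseflow series (m^3/s): lognormal, i.e. right-skewed,
# as streamflow distributions typically are.
rng = np.random.default_rng(1)
baseflow = rng.lognormal(mean=1.0, sigma=0.6, size=3650)  # ~10 years of days

# The flow exceeded 90% of the time is the 10th percentile of the series.
q90 = np.percentile(baseflow, 10)
mean_bf = baseflow.mean()
```

For skewed distributions the 10th percentile lies well below the mean, so adopting Q90 as "baseflow" systematically understates the discharge that safe-yield accounting should protect.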
NASA Astrophysics Data System (ADS)
Sankey, J. B.; Kreitler, J.; McVay, J.; Hawbaker, T. J.; Vaillant, N.; Lowe, S. E.
2014-12-01
Wildland fire is a primary threat to watersheds: it can impact water supply through increased sedimentation, declining water quality, and changes to the timing and amount of runoff, leading to increased risk from flood and sediment hazards. It is of great societal importance in the western USA and throughout the world to improve understanding of how changing fire frequency, extent, and location, in conjunction with fuel treatments, will affect watersheds and the ecosystem services they supply to communities. In this work we assess the utility of the InVEST Sediment Retention Model to accurately characterize the vulnerability of burned watersheds to erosion and sedimentation. The InVEST tools are GIS-based implementations of common process models, engineered for high-end computing to allow the faster simulation of larger landscapes and incorporation into decision-making. The InVEST Sediment Retention Model is based on common soil erosion models (e.g., RUSLE - the Revised Universal Soil Loss Equation); it determines which areas of the landscape contribute the greatest sediment loads to a hydrological network and, conversely, evaluates the ecosystem service of sediment retention on a watershed basis. We evaluate the accuracy and uncertainties of InVEST predictions of increased sedimentation after fire, using measured post-fire sedimentation rates available for many watersheds in different rainfall regimes throughout the western USA from an existing, large USGS database of post-fire sediment yield [synthesized in Moody J, Martin D (2009) Synthesis of sediment yields after wildland fire in different rainfall regimes in the western United States. International Journal of Wildland Fire 18: 96-115]. The ultimate goal of this work is to calibrate and implement the model to accurately predict variability in post-fire sediment yield as a function of future landscape heterogeneity predicted by wildfire simulations, and future landscape fuel treatment scenarios, within watersheds.
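The RUSLE family of models underlying the InVEST tool is multiplicative in its factors. As a minimal sketch (all factor values below are hypothetical, not calibrated values from this study), the core computation and its sensitivity to a post-fire increase in the cover-management factor C look like this:

```python
def rusle_soil_loss(R, K, LS, C, P):
    """Annual soil loss A (t/ha/yr) from the Revised Universal Soil
    Loss Equation: A = R * K * LS * C * P, where R is rainfall
    erosivity, K soil erodibility, LS slope length/steepness,
    C cover-management, and P support practice."""
    return R * K * LS * C * P

# Hypothetical pre-fire vs post-fire comparison for one pixel:
# burning removes vegetation, raising C and hence predicted erosion.
pre_fire = rusle_soil_loss(R=120.0, K=0.3, LS=1.5, C=0.05, P=1.0)
post_fire = rusle_soil_loss(R=120.0, K=0.3, LS=1.5, C=0.35, P=1.0)
```

In a GIS implementation the same product is evaluated per raster cell and routed to the stream network to obtain watershed-scale sediment loads.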
NASA Astrophysics Data System (ADS)
Kim, S.; Kim, J.; Prasad, A. K.; Stack, D. H.; El-Askary, H. M.; Kafatos, M.
2012-12-01
Like other ecosystems, agricultural productivity is substantially affected by climate factors. Therefore, accurate climatic data (i.e. precipitation, temperature, and radiation) are crucial for simulating crop yields. In order to understand and anticipate climate change and its impacts on agricultural productivity in the Southwestern United States, the WRF regional climate model (RCM) and the Agricultural Production Systems sIMulator (APSIM) were employed for simulating crop production. Nineteen years of WRF RCM output show that there is a strong dry bias during the warm season, especially in Arizona. Consequently, the APSIM crop model indicates very low crop yields in this region. We suspect that the coarse resolution of the reanalysis data could not resolve the relatively warm Sea Surface Temperature (SST) in the Gulf of California (GC), causing the SST to be up to 10 degrees lower than the climatology. In the Southwestern United States, a significant amount of precipitation is associated with the North American Monsoon (NAM). During the monsoon season, low-level moisture is advected to the Southwestern United States via the GC, which is known to be the dominant moisture source. Thus, high-resolution SST data in the GC are required for RCM simulations to accurately represent a reasonable amount of precipitation in the region, allowing reliable evaluation of the impacts on regional ecosystems. To evaluate the influence of SST on agriculture in the Southwestern U.S., two sets of numerical simulations were constructed: a control using the unresolved SST of the GC, and an experiment using daily updated SST data from the MODIS satellite sensor. The meteorological drivers from each of the 6-year RCM runs were provided as input to the APSIM model to determine the crop yield. Analyses of the simulated crop production, and the interannual variation of the meteorological drivers, demonstrate the influence of SST on crop yields in the Southwestern United States.
Payne, Courtney E; Wolfrum, Edward J
2015-01-01
Obtaining accurate chemical composition and reactivity (measures of carbohydrate release and yield) information for biomass feedstocks in a timely manner is necessary for the commercialization of biofuels. Our objective was to use near-infrared (NIR) spectroscopy and partial least squares (PLS) multivariate analysis to develop calibration models to predict the feedstock composition and the release and yield of soluble carbohydrates generated by a bench-scale dilute acid pretreatment and enzymatic hydrolysis assay. Major feedstocks included in the calibration models are corn stover, sorghum, switchgrass, perennial cool season grasses, rice straw, and miscanthus. We present individual model statistics to demonstrate model performance and validation samples to more accurately measure predictive quality of the models. The PLS-2 model for composition predicts glucan, xylan, lignin, and ash (wt%) with uncertainties similar to primary measurement methods. A PLS-2 model was developed to predict glucose and xylose release following pretreatment and enzymatic hydrolysis. An additional PLS-2 model was developed to predict glucan and xylan yield. PLS-1 models were developed to predict the sum of glucose/glucan and xylose/xylan for release and yield (grams per gram). The release and yield models have higher uncertainties than the primary methods used to develop the models. It is possible to build effective multispecies feedstock models for composition, as well as carbohydrate release and yield. The model for composition is useful for predicting glucan, xylan, lignin, and ash with good uncertainties. The release and yield models have higher uncertainties; however, these models are useful for rapidly screening sample populations to identify unusual samples.
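The calibration step maps spectra to measured composition. The sketch below uses ordinary least squares on made-up absorbance and glucan numbers purely to show the shape of the problem; the actual work used PLS models (which handle the strong collinearity of full NIR spectra) on hundreds of wavelengths:

```python
import numpy as np

# Toy NIR "spectra": 6 samples x 3 bands (hypothetical absorbances),
# paired with lab-measured glucan content (wt%).
X = np.array([[0.82, 0.40, 0.11],
              [0.75, 0.45, 0.13],
              [0.90, 0.35, 0.10],
              [0.60, 0.55, 0.20],
              [0.70, 0.50, 0.15],
              [0.85, 0.38, 0.12]])
y = np.array([35.1, 33.4, 36.8, 29.5, 31.9, 35.7])

# Add an intercept column and fit by least squares; PLS (e.g.
# scikit-learn's PLSRegression) would replace this step in practice.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

Validation samples held out of the fit, as in the abstract, are what actually establish the predictive quality of such a model.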
Rapid, Reliable Shape Setting of Superelastic Nitinol for Prototyping Robots
Gilbert, Hunter B.; Webster, Robert J.
2016-01-01
Shape setting Nitinol tubes and wires in a typical laboratory setting for use in superelastic robots is challenging. Obtaining samples that remain superelastic and exhibit desired precurvatures currently requires many iterations, which is time consuming and consumes a substantial amount of Nitinol. To provide a more accurate and reliable method of shape setting, in this paper we propose an electrical technique that uses Joule heating to attain the necessary shape setting temperatures. The resulting high power heating prevents unintended aging of the material and yields consistent and accurate results for the rapid creation of prototypes. We present a complete algorithm and system together with an experimental analysis of temperature regulation. We experimentally validate the approach on Nitinol tubes that are shape set into planar curves. We also demonstrate the feasibility of creating general space curves by shape setting a helical tube. The system demonstrates a mean absolute temperature error of 10°C. PMID:27648473
Cloud screening Coastal Zone Color Scanner images using channel 5
NASA Technical Reports Server (NTRS)
Eckstein, B. A.; Simpson, J. J.
1991-01-01
Clouds are removed from Coastal Zone Color Scanner (CZCS) data using channel 5. Instrumentation problems require pre-processing of channel 5 before an intelligent cloud-screening algorithm can be used. For example, at intervals of about 16 lines, the sensor records anomalously low radiances. Moreover, the calibration equation yields negative radiances when the sensor records zero counts, and pixels corrupted by electronic overshoot must also be excluded. The remaining pixels may then be used in conjunction with the procedure of Simpson and Humphrey to determine the CZCS cloud mask. These results plus in situ observations of phytoplankton pigment concentration show that pre-processing and proper cloud-screening of CZCS data are necessary for accurate satellite-derived pigment concentrations. This is especially true in the coastal margins, where pigment content is high and image distortion associated with electronic overshoot is also present. The pre-processing algorithm is critical to obtaining accurate global estimates of pigment from spacecraft data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judd, R.C.; Caldwell, H.D.
1985-01-01
The objective of this study was to determine if in-gel chloramine-T radioiodination adequately labels OM proteins to allow for accurate and precise structural comparison of these molecules. Therefore, intrinsically ¹⁴C-amino-acid-labeled proteins and ¹²⁵I-labeled proteins were cleaved with two endopeptidic reagents and the peptide fragments separated by HPLC. A comparison of retention times of the fragments, as determined by differential radiation counting, thus indicated whether ¹²⁵I-labeling identified all of the peptide peaks seen in the ¹⁴C-labeled proteins. Results demonstrated that radioiodination yields complete and accurate information about the primary structure of outer membrane proteins. In addition, it permits the use of extremely small amounts of protein, allowing for method optimization and multiple separations to ensure reproducibility.
Using an analytical geometry method to improve tiltmeter data presentation
Su, W.-J.
2000-01-01
The tiltmeter is a useful tool for geologic and geotechnical applications. To obtain full benefit from the tiltmeter, easy and accurate data presentations should be used. Unfortunately, the most commonly used method for tilt data reduction now may yield inaccurate and low-resolution results. This article describes a simple, accurate, and high-resolution approach developed at the Illinois State Geological Survey for data reduction and presentation. The orientation of tiltplates is determined first by using a trigonometric relationship, followed by a matrix transformation, to obtain the true amount of rotation change of the tiltplate at any given time. The mathematical derivations used for the determination and transformation are then coded into an integrated PC application by adapting the capabilities of commercial spreadsheet, database, and graphics software. Examples of data presentation from tiltmeter applications in studies of landfill covers, characterizations of mine subsidence, and investigations of slope stability are also discussed.
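The matrix-transformation step described above can be sketched as a plane rotation of the raw tiltplate readings into geographic components. The `plate_rotation` helper and its azimuth convention are hypothetical illustrations, not the survey's actual code:

```python
import numpy as np

def plate_rotation(tilt_x_deg, tilt_y_deg, azimuth_deg):
    """Rotate raw x/y tilt readings (degrees) into north/east
    components, given the tiltplate azimuth determined beforehand
    from the trigonometric orientation step. Sign and axis
    conventions here are assumptions for illustration."""
    a = np.radians(azimuth_deg)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    return R @ np.array([tilt_x_deg, tilt_y_deg])

# A plate whose x-axis points due east (azimuth 90 deg): its raw
# x-tilt of 0.5 deg maps entirely onto the east component.
north, east = plate_rotation(0.5, 0.0, 90.0)
```

Chaining this transform over successive readings gives the true rotation change of the plate at any given time, which is the quantity the article's spreadsheet/database application plots.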
Satellite-based assessment of grassland yields
NASA Astrophysics Data System (ADS)
Grant, K.; Siegmund, R.; Wagner, M.; Hartmann, S.
2015-04-01
Cutting date and frequency are important parameters determining grassland yields, in addition to the effects of weather, soil conditions, plant composition and fertilisation. Because accurate and area-wide data on grassland yields are currently not available, cutting frequency can be used to estimate yields. In this project, a method to detect cutting dates via surface changes in radar images is developed. The combination of this method with a grassland yield model will result in more reliable and region-wide estimates of grassland yields. For the test phase of the monitoring project, a study area situated southeast of Munich, Germany, was chosen due to its high density of managed grassland. To detect grassland cutting, robust amplitude change detection techniques are used that evaluate radar amplitude or backscatter statistics before and after the cutting event. CosmoSkyMed and Sentinel-1A data were analysed. All detected cuts were verified against in-situ measurements recorded in a GIS database. Although the SAR systems had various acquisition geometries, the number of detected grassland cuts was quite similar. Of 154 tested grassland plots, covering in total 436 ha, 116 and 111 cuts were detected using CosmoSkyMed and Sentinel-1A radar data, respectively. Further improvement of the radar data processing, as well as additional analyses with higher sample numbers and wider land surface coverage, will follow to optimise the method and to validate and generalise the results of this feasibility study. The automation of this method will then allow for an area-wide and cost-efficient cutting date detection service improving grassland yield models.
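A minimal sketch of amplitude change detection, assuming a simple mean-backscatter difference with a made-up decision threshold (the project's actual statistics are more robust than a plain mean, and the threshold would be calibrated against the in-situ GIS records):

```python
import numpy as np

def detect_cut(backscatter_before, backscatter_after, threshold_db=1.5):
    """Flag a grassland cut when the mean radar backscatter (dB) of a
    plot changes by more than threshold_db between two acquisitions.
    threshold_db = 1.5 is a hypothetical value for illustration."""
    change = np.mean(backscatter_after) - np.mean(backscatter_before)
    return abs(change) > threshold_db

# Toy per-pixel amplitudes (dB) for one plot before and after mowing:
# the smoother cut surface scatters differently, shifting the mean.
cut = detect_cut(np.array([-12.0, -11.5, -12.3]),
                 np.array([-9.8, -10.1, -9.5]))
```

Running this per plot and per acquisition pair yields the cutting-date series that feeds the grassland yield model.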
Anomalous dissipation and kinetic-energy distribution in pipes at very high Reynolds numbers.
Chen, Xi; Wei, Bo-Bo; Hussain, Fazle; She, Zhen-Su
2016-01-01
A symmetry-based theory is developed for the description of (streamwise) kinetic energy K in turbulent pipes at extremely high Reynolds numbers (Re's). The theory assumes a mesolayer with continual deformation of wall-attached eddies which introduce an anomalous dissipation, breaking the exact balance between production and dissipation. An outer peak of K is predicted above a critical Re of 10^{4}, in good agreement with experimental data. The theory offers an alternative explanation for the recently discovered logarithmic distribution of K. The concept of anomalous dissipation is further supported by a significant modification of the k-ω equation, yielding an accurate prediction of the entire K profile.
NASA Astrophysics Data System (ADS)
Jain, M.; Singh, B.; Srivastava, A.; Lobell, D. B.
2015-12-01
Food security will be challenged over the upcoming decades due to increased food demand, natural resource degradation, and climate change. In order to identify potential solutions to increase food security in the face of these changes, tools that can rapidly and accurately assess farm productivity are needed. With this aim, we have developed generalizable methods to map crop yields at the field scale using a combination of satellite imagery and crop models, and implement this approach within Google Earth Engine. We use these methods to examine wheat yield trends in Northern India, which provides over 15% of the global wheat supply and where over 80% of farmers rely on wheat as a staple food source. In addition, we identify the extent to which farmers are shifting sow date in response to heat stress, and how well shifting sow date reduces the negative impacts of heat stress on yield. To identify local-level decision-making, we map wheat sow date and yield at a high spatial resolution (30 m) using Landsat satellite imagery from 1980 to the present. This unique dataset allows us to examine sow date decisions at the field scale over 30 years, and by relating these decisions to weather experienced over the same time period, we can identify how farmers learn and adapt cropping decisions based on weather through time.
Using artificial neural network and satellite data to predict rice yield in Bangladesh
NASA Astrophysics Data System (ADS)
Akhand, Kawsar; Nizamuddin, Mohammad; Roytman, Leonid; Kogan, Felix; Goldberg, Mitch
2015-09-01
Rice production in Bangladesh is a crucial part of the national economy, providing about 70 percent of an average citizen's total calorie intake. The demand for rice rises constantly as the population of Bangladesh grows each year, while the growing population also reduces the land available for cultivation. In addition, Bangladesh faces production constraints such as drought, flooding, salinity, lack of irrigation facilities and lack of modern technology. To maintain self-sufficiency in rice, Bangladesh will have to continue to expand rice production by increasing yield at a rate at least equal to the population growth until the demand for rice stabilizes. Accurate rice yield prediction is one of the most important challenges in managing the supply and demand of rice, as well as in decision-making processes. An Artificial Neural Network (ANN) is used to construct a model to predict Aus rice yield in Bangladesh. Advanced Very High Resolution Radiometer (AVHRR)-based remote sensing vegetation health (VH) indices (the Vegetation Condition Index (VCI) and the Temperature Condition Index (TCI)) are used as input variables, and official statistics of Aus rice yield are used as the target variable for the ANN prediction model. The result obtained with the ANN method is encouraging, and the error of prediction is less than 10%. Therefore, such predictions can play an important role in planning and storing sufficient rice to face any future uncertainty.
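The ANN mapping from VH indices to yield can be sketched as a one-hidden-layer forward pass. The weights below are illustrative placeholders, not trained values from the study; a real model would fit them against the official yield statistics:

```python
import math

def mlp_predict(vci, tci, W1, b1, W2, b2):
    """Forward pass of a tiny one-hidden-layer network mapping
    (VCI, TCI) to a predicted yield (t/ha); tanh hidden units."""
    hidden = [math.tanh(w[0] * vci + w[1] * tci + b)
              for w, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2

# Hypothetical weights for two hidden units (not trained values).
W1 = [(0.8, 0.4), (-0.3, 0.9)]
b1 = [0.1, -0.2]
W2 = [1.5, 0.7]
b2 = 2.0

# VCI/TCI are scaled 0-1; a healthy-vegetation, low-heat-stress pixel:
yield_pred = mlp_predict(0.6, 0.5, W1, b1, W2, b2)
```

Training (e.g. by backpropagation over historical index/yield pairs) is what turns this structure into the <10%-error predictor the abstract reports.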
Chain Ends and the Ultimate Tensile Strength of Polyethylene Fibers
NASA Astrophysics Data System (ADS)
O'Connor, Thomas C.; Robbins, Mark O.
Determining the tensile yield mechanisms of oriented polymer fibers remains a challenging problem in polymer mechanics. By maximizing the alignment and crystallinity of polyethylene (PE) fibers, tensile strengths σ ~ 6-7 GPa have been achieved. While impressive, first-principles calculations predict carbon backbone bonds would allow strengths four times higher (σ ~ 20 GPa) before breaking. The reduction in strength is caused by crystal defects like chain ends, which allow fibers to yield by chain slip in addition to bond breaking. We use large-scale molecular dynamics (MD) simulations to determine the tensile yield mechanism of orthorhombic PE crystals with finite chains spanning 10^2-10^4 carbons in length. The yield stress σy saturates for long chains at ~6.3 GPa, agreeing well with experiments. Chains do not break but always yield by slip, after nucleation of 1D dislocations at chain ends. Dislocations are accurately described by a Frenkel-Kontorova model, parametrized by the mechanical properties of an ideal crystal. We compute a dislocation core size ξ = 25.24 Å and determine the high and low strain rate limits of σy. Our results suggest characterizing such 1D dislocations is an efficient method for predicting fiber strength. This research was performed within the Center for Materials in Extreme Dynamic Environments (CMEDE) under the Hopkins Extreme Materials Institute at Johns Hopkins University. Financial support was provided by Grant W911NF-12-2-0022.
Bernard R. Parresol; Steven C. Stedman
2004-01-01
The accuracy of forest growth and yield forecasts affects the quality of forest management decisions (Rauscher et al. 2000). Users of growth and yield models want assurance that model outputs are reasonable and mimic local/regional forest structure and composition and accurately reflect the influences of stand dynamics such as competition and disturbance. As such,...
Accurate paleointensities - the multi-method approach
NASA Astrophysics Data System (ADS)
de Groot, Lennart
2016-04-01
The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements, and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed - the pseudo-Thellier protocol - which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.
Helbling, Damian E; Hammes, Frederik; Egli, Thomas; Kohler, Hans-Peter E
2014-02-01
The fundamentals of growth-linked biodegradation occurring at low substrate concentrations are poorly understood. Substrate utilization kinetics and microbial growth yields are two critically important process parameters that can be influenced by low substrate concentrations. Standard biodegradation tests aimed at measuring these parameters generally ignore the ubiquitous occurrence of assimilable organic carbon (AOC) in experimental systems which can be present at concentrations exceeding the concentration of the target substrate. The occurrence of AOC effectively makes biodegradation assays conducted at low substrate concentrations mixed-substrate assays, which can have profound effects on observed substrate utilization kinetics and microbial growth yields. In this work, we introduce a novel methodology for investigating biodegradation at low concentrations by restricting AOC in our experiments. We modified an existing method designed to measure trace concentrations of AOC in water samples and applied it to systems in which pure bacterial strains were growing on pesticide substrates between 0.01 and 50 mg liter(-1). We simultaneously measured substrate concentrations by means of high-performance liquid chromatography with UV detection (HPLC-UV) or mass spectrometry (MS) and cell densities by means of flow cytometry. Our data demonstrate that substrate utilization kinetic parameters estimated from high-concentration experiments can be used to predict substrate utilization at low concentrations under AOC-restricted conditions. Further, restricting AOC in our experiments enabled accurate and direct measurement of microbial growth yields at environmentally relevant concentrations for the first time. These are critical measurements for evaluating the degradation potential of natural or engineered remediation systems. Our work provides novel insights into the kinetics of biodegradation processes and growth yields at low substrate concentrations.
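Substrate utilization kinetics and growth yields of the kind measured here are commonly described with a Monod rate law coupled to a yield coefficient; the abstract does not name a specific model, so the sketch below, with hypothetical parameter values, is only an illustration of how the two parameters interact in a batch experiment:

```python
def monod_rate(S, mu_max, Ks):
    """Specific growth rate (1/h) under Monod kinetics."""
    return mu_max * S / (Ks + S)

def simulate_batch(S0, X0, mu_max, Ks, Y, dt=0.01, t_end=10.0):
    """Euler integration of substrate S and biomass X in a batch
    system. Y is the growth yield (biomass formed per substrate
    consumed). All parameter values used below are hypothetical."""
    S, X = S0, X0
    t = 0.0
    while t < t_end and S > 1e-9:
        dX = monod_rate(S, mu_max, Ks) * X * dt
        X += dX
        S = max(S - dX / Y, 0.0)  # substrate consumed per unit growth
        t += dt
    return S, X

# Toy run: 1.0 mg/L substrate, small inoculum, Y = 0.4.
S_end, X_end = simulate_batch(S0=1.0, X0=0.01, mu_max=0.5, Ks=0.1, Y=0.4)
```

The yield coefficient is what flow-cytometric cell counts plus HPLC/MS substrate measurements pin down directly; the point of the AOC-restricted assay is that no unmeasured background carbon inflates X beyond what the target substrate can account for.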
Simard, Valérie; Bernier, Annie; Bélanger, Marie-Ève; Carrier, Julie
2013-06-01
To investigate relations between children's attachment and sleep, using objective and subjective sleep measures. Secondarily, to identify the most accurate actigraphy algorithm for toddlers. 55 mother-child dyads took part in the Strange Situation Procedure (18 months) to assess attachment. At 2 years, children wore an Actiwatch for a 72-hr period, and their mothers completed a sleep diary. The high sensitivity (80) and smoothed actigraphy algorithms provided the most plausible sleep data. Maternal diaries yielded longer estimated sleep duration and shorter wake duration at night and showed poor agreement with actigraphy. More resistant attachment behavior was not associated with actigraphy-assessed sleep, but was associated with longer nocturnal wake duration as estimated by mothers, and with a reduced actigraphy-diary discrepancy. Mothers of children with resistant attachment are more aware of their child's nocturnal awakenings. Researchers and clinicians should select the best sleep measurement method for their specific needs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schell, Daniel J.; Dowe, Nancy; Chapeaux, Alexandre
This study explored integrated conversion of corn stover to ethanol and highlights techniques for accurate yield calculations. Acid pretreated corn stover (PCS) produced in a pilot-scale reactor was enzymatically hydrolyzed and the resulting sugars were fermented to ethanol by the glucose–xylose fermenting bacteria, Zymomonas mobilis 8b. The calculations account for high solids operation and oligomeric sugars produced during pretreatment, enzymatic hydrolysis, and fermentation, which, if not accounted for, leads to overestimating ethanol yields. The calculations are illustrated for enzymatic hydrolysis and fermentation of PCS at 17.5% and 20.0% total solids achieving 80.1% and 77.9% conversion of cellulose and xylan to ethanol and ethanol titers of 63 g/L and 69 g/L, respectively. In the future, these techniques, including the TEA results, will be applied to fully integrated pilot-scale runs.
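The yield-accounting point can be illustrated with a toy calculation: including oligomeric sugars in the potential-sugar pool lowers (corrects) the reported percent of theoretical yield. All masses below are made up, and the single 0.511 g-ethanol/g-sugar stoichiometric factor applied uniformly on a sugar-equivalent basis is a simplification of the paper's full calculation:

```python
def ethanol_yield_percent(ethanol_g, glucose_g, xylose_g,
                          glucan_oligomer_g=0.0, xylan_oligomer_g=0.0):
    """Percent of theoretical ethanol yield. Oligomeric sugars
    (present but not fermented) are added to the potential-sugar
    pool on a sugar-equivalent basis; omitting them overstates
    the yield. 0.511 g ethanol per g sugar is the stoichiometric
    maximum for hexose/pentose fermentation."""
    total_sugar = (glucose_g + xylose_g
                   + glucan_oligomer_g + xylan_oligomer_g)
    theoretical = 0.511 * total_sugar
    return 100.0 * ethanol_g / theoretical

# Hypothetical masses from one batch (grams):
naive = ethanol_yield_percent(40.0, 60.0, 40.0)
corrected = ethanol_yield_percent(40.0, 60.0, 40.0,
                                  glucan_oligomer_g=5.0,
                                  xylan_oligomer_g=5.0)
```

The gap between `naive` and `corrected` is exactly the overestimate the abstract warns about when oligomers are ignored.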
Diffusion kinetics of the glucose/glucose oxidase system in swift heavy ion track-based biosensors
NASA Astrophysics Data System (ADS)
Fink, Dietmar; Vacik, Jiri; Hnatowicz, V.; Muñoz Hernandez, G.; Garcia Arrelano, H.; Alfonta, Lital; Kiv, Arik
2017-05-01
For an understanding of the diffusion kinetics and their optimization in swift heavy ion track-based biosensors, a diffusion simulation was recently performed. This simulation aimed at yielding the degree of enrichment of the enzymatic reaction products in the highly confined space of the etched ion tracks. A set of curves was obtained for the description of such sensors that depend only on the ratio of the diffusion coefficient of the products to that of the analyte within the tracks. As neither of these two diffusion coefficients was hitherto accurately known, the present work was undertaken. The results of this paper allow one to quantify the previous simulation and hence yield realistic predictions of glucose-based biosensors. On this occasion, the influence of the etched track radius on the diffusion coefficients was also measured and compared with earlier predictions.
Phenotyping for drought tolerance of crops in the genomics era
Tuberosa, Roberto
2012-01-01
Improving crop yields under water-limited conditions is the most daunting challenge faced by breeders. To this end, accurate, relevant phenotyping plays an increasingly pivotal role in the selection of drought-resilient genotypes and, more generally, in a meaningful dissection of the quantitative genetic landscape that underlies the adaptive response of crops to drought. A major and universally recognized obstacle to a more effective translation of the results produced by drought-related studies into improved cultivars is the difficulty of phenotyping properly in a high-throughput fashion in order to identify the quantitative trait loci that govern yield and related traits across different water regimes. This review provides basic principles and a broad set of references useful for the management of phenotyping practices for the study and genetic dissection of drought tolerance and, ultimately, for the release of drought-tolerant cultivars. PMID:23049510
Quantitative self-assembly prediction yields targeted nanomedicines
NASA Astrophysics Data System (ADS)
Shamay, Yosi; Shah, Janki; Işık, Mehtap; Mizrachi, Aviram; Leibold, Josef; Tschaharganeh, Darjus F.; Roxbury, Daniel; Budhathoki-Uprety, Januka; Nawaly, Karla; Sugarman, James L.; Baut, Emily; Neiman, Michelle R.; Dacek, Megan; Ganesh, Kripa S.; Johnson, Darren C.; Sridharan, Ramya; Chu, Karen L.; Rajasekhar, Vinagolu K.; Lowe, Scott W.; Chodera, John D.; Heller, Daniel A.
2018-02-01
Development of targeted nanoparticle drug carriers often requires complex synthetic schemes involving both supramolecular self-assembly and chemical modification. These processes are generally difficult to predict, execute, and control. We describe herein a targeted drug delivery system that is accurately and quantitatively predicted to self-assemble into nanoparticles based on the molecular structures of precursor molecules, which are the drugs themselves. The drugs assemble with the aid of sulfated indocyanines into particles with ultrahigh drug loadings of up to 90%. We devised quantitative structure-nanoparticle assembly prediction (QSNAP) models to identify and validate electrotopological molecular descriptors as highly predictive indicators of nano-assembly and nanoparticle size. The resulting nanoparticles selectively targeted kinase inhibitors to caveolin-1-expressing human colon cancer and autochthonous liver cancer models to yield striking therapeutic effects while avoiding pERK inhibition in healthy skin. This finding enables the computational design of nanomedicines based on quantitative models for drug payload selection.
Computed potential energy surfaces for chemical reactions
NASA Technical Reports Server (NTRS)
Walch, Stephen P.; Levin, Eugene
1993-01-01
A new global potential energy surface (PES) is being generated for O(3P) + H2 → OH + H. This surface is being fit using the rotated Morse oscillator method, which was used to fit the previous POL-CI surface. The new surface is expected to be more accurate and also includes a much more complete sampling of bent geometries. A new study has been undertaken of the reaction N + O2 → NO + O. The new studies have focused on the region of the surface near a possible minimum corresponding to the peroxy form of NOO. A large portion of the PES for this second reaction has been mapped out. Since state-to-state cross sections for the reaction are important in the chemistry of high-temperature air, these studies will probably be extended to permit generation of a new global potential for this reaction.
ANSYS Modeling of Hydrostatic Stress Effects
NASA Technical Reports Server (NTRS)
Allen, Phillip A.
1999-01-01
Classical metal plasticity theory assumes that hydrostatic pressure has no effect on the yield and postyield behavior of metals. Plasticity textbooks, from the earliest to the most modern, infer that there is no hydrostatic effect on the yielding of metals, and even modern finite element programs direct the user to assume the same. The object of this study is to use the von Mises and Drucker-Prager failure theory constitutive models in the finite element program ANSYS to see how well they model conditions of varying hydrostatic pressure. Data are presented for notched round bar (NRB) and "L"-shaped tensile specimens. Similar results from finite element models in ABAQUS are shown for comparison. It is shown that when dealing with geometries having a high hydrostatic stress influence, constitutive models that have a functional dependence on hydrostatic stress are more accurate in predicting material behavior than those that are independent of hydrostatic stress.
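The contrast between the two constitutive models reduces to whether the yield function depends on the first stress invariant I1 (three times the hydrostatic stress). A sketch in terms of principal stresses, with a made-up pressure-sensitivity parameter `alpha`:

```python
import math

def von_mises(s1, s2, s3):
    """von Mises equivalent stress: deviatoric only, so it is
    unchanged by adding equal hydrostatic stress to all axes."""
    return math.sqrt(0.5 * ((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2))

def drucker_prager(s1, s2, s3, alpha):
    """Drucker-Prager effective stress: adds a hydrostatic term
    alpha * I1 to the deviatoric part. alpha = 0.2 below is a
    hypothetical material parameter, not a fitted value."""
    I1 = s1 + s2 + s3
    return von_mises(s1, s2, s3) + alpha * I1

# Same deviatoric state, but the second adds 100 MPa hydrostatic
# stress: von Mises cannot tell them apart, Drucker-Prager can.
a = drucker_prager(100.0, 0.0, 0.0, alpha=0.2)
b = drucker_prager(200.0, 100.0, 100.0, alpha=0.2)
```

This invariance of von Mises under hydrostatic shifts is exactly why it underpredicts the behavior of the notched specimens, whose geometry raises the hydrostatic component of the stress state.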
Tubuxin, Bayaer; Rahimzadeh-Bajgiran, Parinaz; Ginnan, Yusaku; Hosoi, Fumiki; Omasa, Kenji
2015-01-01
This paper illustrates the possibility of measuring chlorophyll (Chl) content and Chl fluorescence parameters by the solar-induced Chl fluorescence (SIF) method using the Fraunhofer line depth (FLD) principle, and compares the results with the standard measurement methods. A high-spectral resolution HR2000+ and an ordinary USB4000 spectrometer were used to measure leaf reflectance under solar and artificial light, respectively, to estimate Chl fluorescence. Using leaves of Capsicum annuum cv. ‘Sven’ (paprika), the relationships between the Chl content and the steady-state Chl fluorescence near the oxygen absorption bands O2B (686nm) and O2A (760nm), measured under artificial and solar light at different growing stages of leaves, were evaluated. The Chl fluorescence yield ratios (ΦF686nm/ΦF760nm) obtained from both methods correlated well with the Chl content (steady-state solar light: R2 = 0.73; artificial light: R2 = 0.94). The SIF method was less accurate for Chl content estimation when Chl content was high. The steady-state solar-induced Chl fluorescence yield ratio correlated very well with the artificial-light-induced one (R2 = 0.84). A new methodology is then presented to estimate the photochemical yield of photosystem II (ΦPSII) from the SIF measurements, which was verified against the standard Chl fluorescence measurement method (pulse-amplitude modulated method). The high coefficient of determination (R2 = 0.74) between the ΦPSII of the two methods shows that photosynthesis process parameters can be successfully estimated using the presented methodology. PMID:26071530
Unmanned aerial systems-based remote sensing for monitoring sorghum growth and development
Shafian, Sanaz; Schnell, Ronnie; Bagavathiannan, Muthukumar; Valasek, John; Shi, Yeyin; Olsenholler, Jeff
2018-01-01
Unmanned Aerial Vehicles and Systems (UAV or UAS) have become increasingly popular in recent years for agricultural research applications. UAS are capable of acquiring images with high spatial and temporal resolutions that are ideal for applications in agriculture. The objective of this study was to evaluate the performance of a UAS-based remote sensing system for quantification of crop growth parameters of sorghum (Sorghum bicolor L.) including leaf area index (LAI), fractional vegetation cover (fc) and yield. The study was conducted at the Texas A&M Research Farm near College Station, Texas, United States. A fixed-wing UAS equipped with a multispectral sensor was used to collect image data during the 2016 growing season (April–October). Flight missions were successfully carried out at 50 days after planting (DAP; 25 May), 66 DAP (10 June) and 74 DAP (18 June). These flight missions provided image data covering the middle growth period of sorghum with a spatial resolution of approximately 6.5 cm. Field measurements of LAI and fc were also collected. Four vegetation indices were calculated using the UAS images. Among those indices, the normalized difference vegetation index (NDVI) showed the highest correlation with LAI, fc and yield, with R2 values of 0.91, 0.89 and 0.58 respectively. Empirical relationships between NDVI and LAI and between NDVI and fc were validated and proved to be accurate for estimating LAI and fc from UAS-derived NDVI values. NDVI determined from UAS imagery acquired during the flowering stage (74 DAP) was found to be the most highly correlated with final grain yield. The observed high correlations between UAS-derived NDVI and the crop growth parameters (fc, LAI and grain yield) suggest the applicability of UAS for within-season data collection of agricultural crops such as sorghum. PMID:29715311
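The core computation described here is simple: NDVI = (NIR - red)/(NIR + red) per pixel, followed by an empirical regression against ground-truth LAI. A minimal sketch, with invented reflectance and LAI values (not the study's data):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Illustrative band reflectances and LAI ground truth for five plots:
nir = np.array([0.45, 0.50, 0.55, 0.60, 0.65])
red = np.array([0.10, 0.08, 0.06, 0.05, 0.04])
lai = np.array([1.8, 2.4, 3.1, 3.6, 4.2])

v = ndvi(nir, red)
# Empirical linear relationship NDVI -> LAI via least squares:
slope, intercept = np.polyfit(v, lai, 1)
predicted = slope * v + intercept
r2 = 1 - np.sum((lai - predicted) ** 2) / np.sum((lai - lai.mean()) ** 2)
print(f"R^2 = {r2:.2f}")
```

The study's reported R2 values (0.91 for LAI, 0.89 for fc) come from exactly this kind of fit, applied to the UAS-derived NDVI maps.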
Aleza, Koutchoukalo; Villamor, Grace B.; Nyarko, Benjamin Kofi; Wala, Kperkouma; Akpagana, Koffi
2018-01-01
Vitellaria paradoxa (Gaertn C. F.), or shea tree, remains one of the most valuable trees for farmers in the Atacora district of northern Benin, where rural communities depend on shea products for both food and income. To optimize productivity and management of shea agroforestry systems, or "parklands," accurate and up-to-date data are needed. For this purpose, we monitored 120 fruiting shea trees for two years under three land-use scenarios and different soil groups in Atacora, coupled with a farm household survey to elicit information on decision making and management practices. To examine the local pattern of shea tree productivity and relationships between morphological factors and yields, we used a randomized branch sampling method and applied a regression analysis to build a shea yield model based on dendrometric, soil and land-use variables. We also compared potential shea yields based on farm household socio-economic characteristics and management practices derived from the survey data. Soil and land-use variables were the most important determinants of shea fruit yield. In terms of land use, shea trees growing on farmland plots exhibited the highest yields (i.e., fruit quantity and mass), while trees growing on Lixisols performed better than those of the other soil group. Contrary to our expectations, dendrometric parameters had weak relationships with fruit yield regardless of land use and soil group. There is inter-annual variability in fruit yield in both soil groups and land-use types. In addition to the observed inter-annual yield variability, there was a high degree of variability in production among individual shea trees. Furthermore, household socioeconomic characteristics such as road accessibility, landholding size, and gross annual income influence shea fruit yield. The use of fallow areas is an important land management practice in the study area that influences both conservation and shea yield. PMID:29346406
Autonomic Closure for Turbulent Flows Using Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Doronina, Olga; Christopher, Jason; Hamlington, Peter; Dahm, Werner
2017-11-01
Autonomic closure is a new technique for achieving fully adaptive and physically accurate closure of coarse-grained turbulent flow governing equations, such as those solved in large eddy simulations (LES). Although autonomic closure has been shown in recent a priori tests to more accurately represent unclosed terms than do dynamic versions of traditional LES models, the computational cost of the approach makes it challenging to implement for simulations of practical turbulent flows at realistically high Reynolds numbers. The optimization step used in the approach introduces large matrices that must be inverted and is highly memory intensive. In order to reduce memory requirements, here we propose to use approximate Bayesian computation (ABC) in place of the optimization step, thereby yielding a computationally-efficient implementation of autonomic closure that trades memory-intensive for processor-intensive computations. The latter challenge can be overcome as co-processors such as general purpose graphical processing units become increasingly available on current generation petascale and exascale supercomputers. In this work, we outline the formulation of ABC-enabled autonomic closure and present initial results demonstrating the accuracy and computational cost of the approach.
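The substitution the authors propose, replacing an optimization step with approximate Bayesian computation, can be illustrated with the simplest ABC variant, rejection sampling. The toy inference problem below (recovering a Gaussian mean from a sample-mean summary statistic) is ours for illustration only, not the turbulence closure problem itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_rejection(observed, simulate, prior_sample, n_draws=20000, eps=0.1):
    """Approximate Bayesian computation by rejection: keep parameter draws
    whose simulated summary statistic lies within eps of the observed one."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if abs(simulate(theta) - observed) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy problem: infer the mean of a Gaussian from its sample-mean summary.
true_mean = 1.5
data = rng.normal(true_mean, 1.0, size=100)
observed_stat = data.mean()

posterior = abc_rejection(
    observed_stat,
    simulate=lambda m: rng.normal(m, 1.0, size=100).mean(),
    prior_sample=lambda: rng.uniform(-5, 5),
)
print(len(posterior), posterior.mean())
```

Note the memory/processor trade-off the abstract mentions: nothing is stored beyond the accepted draws (no large matrices to invert), but many forward simulations are required, which is what maps well onto GPU co-processors.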
Reflection full-waveform inversion using a modified phase misfit function
NASA Astrophysics Data System (ADS)
Cui, Chao; Huang, Jian-Ping; Li, Zhen-Chun; Liao, Wen-Yuan; Guan, Zhe
2017-09-01
Reflection full-waveform inversion (RFWI) updates the low- and high-wavenumber components, and yields more accurate initial models compared with conventional full-waveform inversion (FWI). However, there is strong nonlinearity in conventional RFWI because of the lack of low-frequency data and the complexity of the amplitude. The separation of phase and amplitude information makes RFWI more linear. Traditional phase-calculation methods suffer from severe phase wrapping. To solve this problem, we propose a modified phase-calculation method that uses the phase-envelope data to obtain the pseudo-phase information. Then, we establish a pseudo-phase-information-based objective function for RFWI, with the corresponding source and gradient terms. Numerical tests verify that the proposed calculation method using the phase-envelope data guarantees the stability and accuracy of the phase information and the convergence of the objective function. The application to a portion of the Sigsbee2A model and comparison with inversion results of the improved RFWI and conventional FWI methods verify that the pseudo-phase-based RFWI produces a highly accurate and efficient velocity model. Moreover, the proposed method is robust to noise and high-frequency data.
Instrumentation for Studies of Electron Emission and Charging From Insulators
NASA Technical Reports Server (NTRS)
Thomson, C. D.; Zavyalov, V.; Dennison, J. R.
2004-01-01
Making measurements of electron emission properties of insulators is difficult since insulators can charge either negatively or positively under charged-particle bombardment. In addition, high incident energies or high fluences can result in modification of a material's conductivity, bulk and surface charge profile, structural makeup through bond breaking and defect creation, and emission properties. We discuss here some of the charging difficulties associated with making insulator-yield measurements and review the methods used in previous studies of electron emission from insulators. We present work undertaken by our group to make consistent and accurate measurements of the electron/ion yield properties for numerous thin-film and thick insulator materials using innovative instrumentation and techniques. We also summarize some of the necessary instrumentation developed for this purpose including fast-response, low-noise, high-sensitivity ammeters; signal isolation and interface to standard computer data acquisition apparatus using opto-isolation, sample-and-hold, and boxcar integration techniques; computer control, automation and timing using Labview software; a multiple sample carousel; a pulsed, compact, low-energy, charge neutralization electron flood gun; and pulsed visible and UV light neutralization sources. This work is supported through funding from the NASA Space Environments and Effects Program and the NASA Graduate Research Fellowship Program.
Sentence Recall by Children With SLI Across Two Nonmainstream Dialects of English
McDonald, Janet L.; Seidel, Christy M.; Hegarty, Michael
2016-01-01
Purpose The inability to accurately recall sentences has proven to be a clinical marker of specific language impairment (SLI); this task yields moderate-to-high levels of sensitivity and specificity. However, it is not yet known if these results hold for speakers of dialects whose nonmainstream grammatical productions overlap with those that are produced at high rates by children with SLI. Method Using matched groups of 70 African American English speakers and 36 Southern White English speakers and dialect-strategic scoring, we examined children's sentence recall abilities as a function of their dialect and clinical status (SLI vs. typically developing [TD]). Results For both dialects, the SLI group earned lower sentence recall scores than the TD group with sensitivity and specificity values ranging from .80 to .94, depending on the analysis. Children with SLI, as compared with TD controls, manifested lower levels of verbatim recall, more ungrammatical recalls when the recall was not exact, and higher levels of error on targeted functional categories, especially those marking tense. Conclusion When matched groups are examined and dialect-strategic scoring is used, sentence recall yields moderate-to-high levels of diagnostic accuracy to identify SLI within speakers of nonmainstream dialects of English. PMID:26501934
Experimental study on the dynamic mechanical behaviors of polycarbonate
NASA Astrophysics Data System (ADS)
Zhang, Wei; Gao, Yubo; Cai, Xuanming; Ye, Nan; Huang, Wei; Hypervelocity Impact Research Center Team
2015-06-01
Polycarbonate (PC) is a widely used engineering material in the aerospace field, since it has excellent mechanical and optical properties. In the present study, both compression and tensile tests of PC were conducted at high strain rates using a split Hopkinson pressure bar. A high-speed camera and the 2D digital speckle correlation method (DIC) were used to analyze the dynamic deformation behavior of PC. Meanwhile, a plate impact experiment was carried out in a single-stage gas gun to measure the equation of state of PC, using asymmetric impact technology, manganin gauges, PVDF gauges and electromagnetic particle velocity gauges. The results indicate that the yield stress of PC increased with strain rate. Strain softening occurred once the stress exceeded the yield point, except in the tensile tests at strain rates of 1076 s-1 and 1279 s-1. By comparison with the 2D-DIC results, the ZWT model can describe the constitutive behavior of PC accurately at different strain rates. Finally, the D-u Hugoniot curve of polycarbonate at high pressure was fitted by the least squares method, and the results agreed more closely with those of Carter and Marsh than with other previous data.
NASA Technical Reports Server (NTRS)
George, Kerry; Wu, Honglu; Willingham, Veronica; Cucinotta, Francis A.
2002-01-01
High-LET radiation is more efficient in producing complex-type chromosome exchanges than sparsely ionizing radiation, and this can potentially be used as a biomarker of radiation quality. To investigate if complex chromosome exchanges are induced by the high-LET component of space radiation exposure, damage was assessed in astronauts' blood lymphocytes before and after long duration missions of 3-4 months. The frequency of simple translocations increased significantly for most of the crewmembers studied. However, there were few complex exchanges detected and only one crewmember had a significant increase after flight. It has been suggested that the yield of complex chromosome damage could be underestimated when analyzing metaphase cells collected at one time point after irradiation, and analysis of chemically-induced PCC may be more accurate since problems with complicated cell-cycle delays are avoided. However, in this case the yields of chromosome damage were similar for metaphase and PCC analysis of astronauts' lymphocytes. It appears that the use of complex-type exchanges as biomarker of radiation quality in vivo after low-dose chronic exposure in mixed radiation fields is hampered by statistical uncertainties.
NASA Astrophysics Data System (ADS)
Adushkin, V. V.
A statistical procedure is described for estimating the yields of underground nuclear tests at the former Soviet Semipalatinsk test site using the peak amplitudes of short-period surface waves observed at near-regional distances (Δ < 150 km) from these explosions. This methodology is then applied to data recorded from a large sample of the Semipalatinsk explosions, including the Soviet JVE explosion of September 14, 1988, and it is demonstrated that it provides seismic estimates of explosion yield which are typically within 20% of the yields determined for these same explosions using more accurate, non-seismic techniques based on near-source observations.
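Amplitude-based yield estimation of this kind is typically a log-log regression calibrated on events of known yield, then inverted for new events. A hedged sketch of that procedure follows; the calibration constants and yields below are invented for illustration and are not the Semipalatinsk data.

```python
import numpy as np

# Hypothetical calibration set: known yields Y (kt) and peak short-period
# surface-wave amplitudes A at near-regional distance, with small lognormal
# scatter standing in for path and site effects.
yields_kt = np.array([2.0, 10.0, 20.0, 50.0, 100.0, 150.0])
amplitudes = 0.8 * yields_kt ** 0.9 * np.exp(
    np.random.default_rng(1).normal(0, 0.05, 6))

# Fit log10(A) = a + b * log10(Y) by least squares ...
b, a = np.polyfit(np.log10(yields_kt), np.log10(amplitudes), 1)

# ... then invert the fit to estimate the yield of a new event.
def estimate_yield(amp):
    return 10 ** ((np.log10(amp) - a) / b)

new_amp = 0.8 * 75.0 ** 0.9   # event whose "true" yield is 75 kt
print(f"estimated yield: {estimate_yield(new_amp):.1f} kt")
```

The paper's 20% typical accuracy corresponds to the scatter of such a calibration about the fitted line; the tighter the amplitude scatter at fixed yield, the tighter the inverted estimates.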
Analysis of electrophoresis performance
NASA Technical Reports Server (NTRS)
Roberts, Glyn O.
1988-01-01
A flexible, efficient computer code is being developed to simulate electrophoretic separation phenomena in either a cylindrical or a rectangular geometry. The code will compute the evolution in time of the concentrations of an arbitrary number of chemical species, and of the temperature, pH distribution, conductivity, electric field, and fluid motion. Use of nonuniform meshes and fast, accurate implicit time-stepping will yield accurate answers at economical cost.
Rohling, Heide; Sihver, Lembit; Priegnitz, Marlen; Enghardt, Wolfgang; Fiedler, Fine
2013-09-21
For quality assurance in particle therapy, a non-invasive, in vivo range verification is highly desired. Particle therapy positron-emission-tomography (PT-PET) is the only clinically proven method up to now for this purpose. It makes use of the β(+)-activity produced during the irradiation by the nuclear fragmentation processes between the therapeutic beam and the irradiated tissue. Since a direct comparison of β(+)-activity and dose is not feasible, a simulation of the expected β(+)-activity distribution is required. For this reason it is essential to have a quantitatively reliable code for the simulation of the yields of the β(+)-emitting nuclei at every position of the beam path. In this paper results of the three-dimensional Monte-Carlo simulation codes PHITS, GEANT4, and the one-dimensional deterministic simulation code HIBRAC are compared to measurements of the yields of the most abundant β(+)-emitting nuclei for carbon, lithium, helium, and proton beams. In general, PHITS underestimates the yields of positron-emitters. With GEANT4 the overall most accurate results are obtained. HIBRAC and GEANT4 provide comparable results for carbon and proton beams. HIBRAC is considered as a good candidate for the implementation to clinical routine PT-PET.
Digital simulation of an arbitrary stationary stochastic process by spectral representation.
Yura, Harold T; Hanson, Steen G
2011-04-01
In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited to autoregressive and/or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does not conserve the power spectra exactly, it yields highly accurate numerical results for a wide range of probability distributions and target power spectra that are sufficient for system simulation purposes and can thus be regarded as an accurate engineering approximation, which can be used for a wide range of practical applications. A sufficiency condition is presented regarding the range of parameter values where a single application of the inverse transform method yields satisfactory agreement between the simulated and target power spectra, and a series of examples relevant for the optics community are presented and discussed. Outside this parameter range the agreement gracefully degrades but does not distort in shape. Although we demonstrate the method here focusing on stationary random processes, we see no reason why the method could not be extended to simulate non-stationary random processes. © 2011 Optical Society of America
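The two-step recipe described here — colour white Gaussian noise to the target spectrum in the frequency domain, then push it through the Gaussian CDF and the inverse CDF of the desired marginal — can be sketched directly. The low-pass target spectrum and the Exp(1) marginal below are illustrative choices, not the paper's examples.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)
n = 4096

# 1) White Gaussian noise, coloured in the frequency domain to a target
#    (here Lorentzian low-pass) power spectrum.
white = rng.standard_normal(n)
f = np.fft.rfftfreq(n)
target_psd = 1.0 / (1.0 + (f / 0.05) ** 2)
colored = np.fft.irfft(np.fft.rfft(white) * np.sqrt(target_psd), n)
colored /= colored.std()            # renormalise to unit variance

# 2) Single inverse-transform step: Gaussian -> uniform via the normal CDF,
#    then uniform -> desired marginal (here exponential with mean 1).
gauss_cdf = np.array([0.5 * (1 + erf(x / sqrt(2))) for x in colored])
samples = -np.log(1.0 - gauss_cdf)  # inverse CDF of Exp(1)

print(samples.mean())
```

The point-wise mapping in step 2 preserves the rank ordering of the coloured noise, which is why the output keeps (approximately, per the paper's sufficiency condition) the target spectral shape while acquiring the exponential marginal.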
Childs, Bronwen A; Pugliese, Brenna R; Carballo, Cristina T; Miranda, Daniel L; Brainerd, Elizabeth L; Kirker-Head, Carl A
2017-07-20
X-ray reconstruction of moving morphology (XROMM) uses biplanar videoradiography and computed tomography (CT) scanning to capture three-dimensional (3D) bone motion. In XROMM, morphologically accurate 3D bone models derived from CT are animated with motion from videoradiography, yielding a highly accurate and precise reconstruction of skeletal kinematics. We employ this motion analysis technique to characterize metacarpophalangeal joint (MCPJ) motion in the absence and presence of protective legwear in a healthy pony. Our in vivo marker tracking precision was 0.09 mm for walk and trot, and 0.10 mm during jump down exercises. We report MCPJ maximum extension (walk: -27.70 ± 2.78° [standard deviation]; trot: -33.84 ± 4.94°), abduction/adduction (walk: 0.04 ± 0.24°; trot: -0.23 ± 0.35°) and external/internal rotations (walk: 0.30 ± 0.32°; trot: -0.49 ± 1.05°) indicating that the MCPJ in this pony is a stable hinge joint with negligible extra-sagittal rotations. No substantial change in MCPJ maximum extension angles or vertical ground reaction forces (GRFv) were observed upon application of legwear during jump down exercise. Neoprene boot application yielded -65.20 ± 2.06° extension (GRFv = 11.97 ± 0.67 N/kg) and fleece polo wrap application yielded -64.23 ± 1.68° extension (GRFv = 11.36 ± 1.66 N/kg), when compared to naked control (-66.11 ± 0.96°; GRFv = 12.02 ± 0.53 N/kg). Collectively, this proof of concept study illustrates the benefits and practical limitations of using XROMM to document equine MCPJ kinematics in the presence and absence of legwear.
Modelling the effect of shear strength on isentropic compression experiments
NASA Astrophysics Data System (ADS)
Thomson, Stuart; Howell, Peter; Ockendon, John; Ockendon, Hilary
2017-01-01
Isentropic compression experiments (ICE) are a way of obtaining equation of state information for metals undergoing violent plastic deformation. In a typical experiment, millimetre-thick metal samples are subjected to pressures on the order of 10 to 100 GPa, while the yield strength of the material can be as low as 0.01 GPa. The analysis of such experiments has so far neglected the effect of shear strength, instead treating the highly plasticised metal as an inviscid compressible fluid. However, making this approximation belies the basic elastic nature of a solid object. A more accurate method should strive to incorporate the small but measurable effects of shear strength. Here we present a one-dimensional mathematical model for elastoplasticity at high stress which allows for both compressibility and the shear strength of the material. In the limit of zero yield stress this model reproduces the hydrodynamic models currently used to analyse ICEs. Numerical solutions of the governing equations are then presented for problems relevant to ICEs in order to investigate the effects of shear strength compared with a model based purely on hydrodynamics.
Barkla, Bronwyn J.
2016-01-01
Modern day agriculture practice is narrowing the genetic diversity in our food supply. This may compromise the ability to obtain high yield under extreme climactic conditions, threatening food security for a rapidly growing world population. To identify genetic diversity, tolerance mechanisms of cultivars, landraces and wild relatives of major crops can be identified and ultimately exploited for yield improvement. Quantitative proteomics allows for the identification of proteins that may contribute to tolerance mechanisms by directly comparing protein abundance under stress conditions between genotypes differing in their stress responses. In this review, a summary is provided of the data accumulated from quantitative proteomic comparisons of crop genotypes/cultivars which present different stress tolerance responses when exposed to various abiotic stress conditions, including drought, salinity, high/low temperature, nutrient deficiency and UV-B irradiation. This field of research aims to identify molecular features that can be developed as biomarkers for crop improvement, however without accurate phenotyping, careful experimental design, statistical robustness and appropriate biomarker validation and verification it will be challenging to deliver what is promised. PMID:28248236
Tekin, Eylul; Roediger, Henry L
2017-01-01
Researchers use a wide range of confidence scales when measuring the relationship between confidence and accuracy in reports from memory, with the highest number usually representing the greatest confidence (e.g., 4-point, 20-point, and 100-point scales). The assumption seems to be that the range of the scale has little bearing on the confidence-accuracy relationship. In two old/new recognition experiments, we directly investigated this assumption using word lists (Experiment 1) and faces (Experiment 2) by employing 4-, 5-, 20-, and 100-point scales. Using confidence-accuracy characteristic (CAC) plots, we asked whether confidence ratings would yield similar CAC plots, indicating comparability in use of the scales. For the comparisons, we divided 100-point and 20-point scales into bins of either four or five and asked, for example, whether confidence ratings of 4, 16-20, and 76-100 would yield similar values. The results show that, for both types of material, the different scales yield similar CAC plots. Notably, when subjects express high confidence, regardless of which scale they use, they are likely to be very accurate (even though they studied 100 words and 50 faces in each list in 2 experiments). The scales seem convertible from one to the other, and choice of scale range probably does not affect research into the relationship between confidence and accuracy. High confidence indicates high accuracy in recognition in the present experiments.
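The scale-comparison logic described above — collapsing, say, a 100-point scale into four bins and computing accuracy within each bin — is the core of a CAC plot. A minimal sketch with invented ratings (not the experiments' data):

```python
import numpy as np

def cac_points(confidence, correct, bin_edges):
    """Confidence-accuracy characteristic: mean accuracy within each
    confidence bin (right-inclusive bins, as in 76-100 on a 100-point scale)."""
    confidence = np.asarray(confidence)
    correct = np.asarray(correct, dtype=float)
    accuracies = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (confidence > lo) & (confidence <= hi)
        accuracies.append(correct[mask].mean() if mask.any() else np.nan)
    return accuracies

# Hypothetical 100-point ratings collapsed into four bins, making them
# directly comparable to the four levels of a 4-point scale:
conf = [95, 88, 80, 60, 55, 30, 20, 10]
hits = [1, 1, 1, 1, 0, 1, 0, 0]
pts = cac_points(conf, hits, [0, 25, 50, 75, 100])
print(pts)
```

Plotting such per-bin accuracies for each scale side by side is what lets the authors conclude the scales are effectively convertible.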
NASA Astrophysics Data System (ADS)
Wang, Fangzhou; Chen, Wanjun; Wang, Zeheng; Sun, Ruize; Wei, Jin; Li, Xuan; Shi, Yijun; Jin, Xiaosheng; Xu, Xiaorui; Chen, Nan; Zhou, Qi; Zhang, Bo
2017-05-01
To achieve a uniform low turn-on voltage and high reverse blocking capability, an AlGaN/GaN power field-effect rectifier with a trench heterojunction anode (THA-FER) is proposed and investigated in this work, which is based entirely on simulation and includes no experimental results. The turn-on voltage VT saturates at a low value when the trench height (HT) exceeds 300 nm, confirming that VT can be controlled accurately without precisely controlling HT in the THA-FER. Meanwhile, the high-HT anode reduces the reverse leakage current and yields a high breakdown voltage (VB). A superior Baliga's figure of merit (BFOM = VB2/Ron,sp, where Ron,sp is the specific on-resistance) of 1228 MW/cm2 shows that the THA-FER meets the demands of high-efficiency GaN power applications.
NASA Astrophysics Data System (ADS)
Zhang, M.; Nunes, V. D.; Burbey, T. J.; Borggaard, J.
2012-12-01
More than 1.5 m of subsidence has been observed in Las Vegas Valley since 1935 as a result of groundwater pumping that commenced in 1905 (Bell, 2002). The compaction of the aquifer system has led to several large subsidence bowls and deleterious earth fissures. The highly heterogeneous aquifer system, with its variably thick interbeds, makes predicting the magnitude and location of subsidence extremely difficult. Several numerical groundwater flow models of the Las Vegas basin have been previously developed; however, none of them has been able to accurately simulate the observed subsidence patterns or magnitudes because of inadequate parameterization. To better manage groundwater resources and predict future subsidence, we have updated and developed a more accurate groundwater management model for Las Vegas Valley by developing a new adjoint parameter estimation (APE) package that is used in conjunction with UCODE along with MODFLOW and the SUB (subsidence) and HFB (horizontal flow barrier) packages. The APE package is used with UCODE to automatically identify suitable parameter zonations and inversely calculate parameter values from hydraulic head and subsidence measurements, which are highly sensitive to both elastic (Ske) and inelastic (Skv) storage coefficients. With the advent of InSAR (interferometric synthetic aperture radar), distributed spatial and temporal subsidence measurements can be obtained, which greatly enhance the accuracy of parameter estimation. This automation process can remove user bias and provide a far more accurate and robust parameter zonation distribution. The outcome of this work is a more accurate and powerful tool for managing groundwater resources in Las Vegas Valley.
Hwang, Hamish; Marsh, Ian; Doyle, Jason
2014-01-01
Background Acute cholecystitis is one of the most common diseases requiring emergency surgery. Ultrasonography is an accurate test for cholelithiasis but has a high false-negative rate for acute cholecystitis. The Murphy sign and laboratory tests performed independently are also not particularly accurate. This study was designed to review the accuracy of ultrasonography for diagnosing acute cholecystitis in a regional hospital. Methods We studied all emergency cholecystectomies performed over a 1-year period. All imaging studies were reviewed by a single radiologist, and all pathology was reviewed by a single pathologist. The reviewers were blinded to each other’s results. Results A total of 107 patients required an emergency cholecystectomy in the study period; 83 of them underwent ultrasonography. Interradiologist agreement was 92% for ultrasonography. For cholelithiasis, ultrasonography had 100% sensitivity, 18% specificity, 81% positive predictive value (PPV) and 100% negative predictive value (NPV). For acute cholecystitis, it had 54% sensitivity, 81% specificity, 85% PPV and 47% NPV. All patients had chronic cholecystitis and 67% had acute cholecystitis on histology. When combined with positive Murphy sign and elevated neutrophil count, an ultrasound showing cholelithiasis or acute cholecystitis yielded a sensitivity of 74%, specificity of 62%, PPV of 80% and NPV of 53% for the diagnosis of acute cholecystitis. Conclusion Ultrasonography alone has a high rate of false-negative studies for acute cholecystitis. However, a higher rate of accurate diagnosis can be achieved using a triad of positive Murphy sign, elevated neutrophil count and an ultrasound showing cholelithiasis or cholecystitis. PMID:24869607
A Novel Image Compression Algorithm for High Resolution 3D Reconstruction
NASA Astrophysics Data System (ADS)
Siddeq, M. M.; Rodrigues, M. A.
2014-06-01
This research presents a novel algorithm to compress high-resolution images for accurate structured-light 3D reconstruction. Structured-light images contain a pattern of light and shadows projected on the surface of the object, captured by the sensor at very high resolution. Our algorithm is concerned with compressing such images to a high degree, with minimum loss, without adversely affecting 3D reconstruction. The compression algorithm starts with a single-level discrete wavelet transform (DWT) that decomposes an image into four sub-bands. The LL sub-band is transformed by DCT, yielding a DC-matrix and an AC-matrix. The Minimize-Matrix-Size algorithm is used to compress the AC-matrix, while a DWT is applied again to the DC-matrix, resulting in LL2, HL2, LH2, and HH2 sub-bands. The LL2 sub-band is transformed by DCT, while the Minimize-Matrix-Size algorithm is applied to the other sub-bands. The proposed algorithm has been tested with images of different sizes within a 3D reconstruction scenario. It is demonstrated to be more effective than JPEG2000 and JPEG, achieving higher compression rates with equivalent perceived quality and the ability to reconstruct the 3D models more accurately.
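The pipeline above opens with a single-level 2D DWT that splits the image into LL/HL/LH/HH sub-bands. A didactic single-level Haar decomposition in pure Python illustrates that first step on a tiny image; it is a sketch of the sub-band split only, not the authors' codec (which goes on to apply DCT and the Minimize-Matrix-Size algorithm):

```python
def haar_dwt2(img):
    """Decompose a 2D list (even dimensions) into (LL, HL, LH, HH) sub-bands
    with an unnormalized single-level Haar transform."""
    h, w = len(img), len(img[0])
    rows = []
    for r in img:  # 1D transform along rows: pairwise averages, then differences
        avg = [(r[2 * i] + r[2 * i + 1]) / 2 for i in range(w // 2)]
        dif = [(r[2 * i] - r[2 * i + 1]) / 2 for i in range(w // 2)]
        rows.append(avg + dif)
    out_cols = []
    for c in zip(*rows):  # same 1D transform along columns
        avg = [(c[2 * i] + c[2 * i + 1]) / 2 for i in range(h // 2)]
        dif = [(c[2 * i] - c[2 * i + 1]) / 2 for i in range(h // 2)]
        out_cols.append(avg + dif)
    t = [list(r) for r in zip(*out_cols)]
    hh2, hw2 = h // 2, w // 2
    ll = [row[:hw2] for row in t[:hh2]]   # low-pass in both directions
    hl = [row[hw2:] for row in t[:hh2]]
    lh = [row[:hw2] for row in t[hh2:]]
    hh = [row[hw2:] for row in t[hh2:]]
    return ll, hl, lh, hh

img = [[10, 10, 20, 20],
       [10, 10, 20, 20],
       [30, 30, 40, 40],
       [30, 30, 40, 40]]
ll, hl, lh, hh = haar_dwt2(img)
```

On this piecewise-constant test image all detail sub-bands vanish and LL is a half-resolution copy, which is exactly why most of the coding effort then concentrates on the LL branch.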
An evaluation of the lamb vision system as a predictor of lamb carcass red meat yield percentage.
Brady, A S; Belk, K E; LeValley, S B; Dalsted, N L; Scanga, J A; Tatum, J D; Smith, G C
2003-06-01
An objective method for predicting red meat yield in lamb carcasses is needed to accurately assess true carcass value. This study was performed to evaluate the ability of the lamb vision system (LVS; Research Management Systems USA, Fort Collins, CO) to predict fabrication yields of lamb carcasses. Lamb carcasses (n = 246) were evaluated using LVS and hot carcass weight (HCW), as well as by USDA expert and on-line graders, before fabrication of carcass sides to either bone-in or boneless cuts. On-line whole number, expert whole-number, and expert nearest-tenth USDA yield grades and LVS + HCW estimates accounted for 53, 52, 58, and 60%, respectively, of the observed variability in boneless, saleable meat yields, and accounted for 56, 57, 62, and 62%, respectively, of the variation in bone-in, saleable meat yields. The LVS + HCW system predicted 77, 65, 70, and 87% of the variation in weights of boneless shoulders, racks, loins, and legs, respectively, and 85, 72, 75, and 86% of the variation in weights of bone-in shoulders, racks, loins, and legs, respectively. Addition of longissimus muscle area (REA), adjusted fat thickness (AFT), or both REA and AFT to LVS + HCW models resulted in improved prediction of boneless saleable meat yields by 5, 3, and 5 percentage points, respectively. Bone-in, saleable meat yield estimations were improved in predictive accuracy by 7.7, 6.6, and 10.1 percentage points, and in precision, when REA alone, AFT alone, or both REA and AFT, respectively, were added to the LVS + HCW output models. Use of LVS + HCW to predict boneless red meat yields of lamb carcasses was more accurate than use of current on-line whole-number, expert whole-number, or expert nearest-tenth USDA yield grades. Thus, LVS + HCW output, when used alone or in combination with AFT and/or REA, improved on-line estimation of boneless cut yields from lamb carcasses. 
The ability of LVS + HCW to predict yields of wholesale cuts suggests that LVS could be used as an objective means for pricing carcasses in a value-based marketing system.
van Schie, H T; Bakker, E M; Jonker, A M; van Weeren, P R
2001-07-01
To evaluate the effectiveness of computerized discrimination between structure-related and non-structure-related echoes in ultrasonographic images for quantitative evaluation of tendon structural integrity in horses. 4 superficial digital flexor tendons (2 damaged tendons, 2 normal tendons). Transverse ultrasonographic images that precisely matched histologic sections were obtained in fixed steps along the long axis of each tendon. Distribution, intensity, and delineation of structure-related echoes, quantitatively expressed as the correlation ratio and steadiness ratio, were compared with histologic findings in tissue that was normal or had necrosis, early granulation, late granulation, early fibrosis, or inferior repair. In normal tendon, the even distribution of structure-related echoes with high intensity and sharp delineation yielded a high correlation ratio and steadiness ratio. In areas of necrosis, collapsed endotendon septa yielded solid but blurred structure-related echoes (high correlation ratio, low steadiness ratio). In early granulation tissue, complete lack of organization caused zero values for both ratios. In late granulation tissue, reorganization and swollen endotendon septa yielded poorly delineated structure-related echoes (high correlation ratio, low steadiness ratio). In early fibrosis, rearrangement of bundles resulted in a normal correlation ratio and a slightly low steadiness ratio. In inferior repair, the almost complete lack of structural reorganization resulted in heterogeneous, poorly delineated, low-intensity echoes (low correlation ratio and steadiness ratio). The combination of correlation ratio and steadiness ratio accurately reflects histopathologic findings, making computerized correlation of ultrasonographic images an efficient tool for quantitative evaluation of tendon structural integrity.
Energy balance and mass conservation in reduced order models of fluid flows
NASA Astrophysics Data System (ADS)
Mohebujjaman, Muhammad; Rebholz, Leo G.; Xie, Xuping; Iliescu, Traian
2017-10-01
In this paper, we investigate theoretically and computationally the conservation properties of reduced order models (ROMs) for fluid flows. Specifically, we investigate whether the ROMs satisfy the same (or similar) energy balance and mass conservation as those satisfied by the Navier-Stokes equations. All of our theoretical findings are illustrated and tested in numerical simulations of a 2D flow past a circular cylinder at a Reynolds number Re = 100. First, we investigate the ROM energy balance. We show that using the snapshot average for the centering trajectory (which is a popular treatment of nonhomogeneous boundary conditions in ROMs) yields an incorrect energy balance. Then, we propose a new approach, in which we replace the snapshot average with the Stokes extension. Theoretically, the Stokes extension produces an accurate energy balance. Numerically, the Stokes extension yields more accurate results than the standard snapshot average, especially for longer time intervals. Our second contribution centers around ROM mass conservation. We consider ROMs created using two types of finite elements: the standard Taylor-Hood (TH) element, which satisfies the mass conservation weakly, and the Scott-Vogelius (SV) element, which satisfies the mass conservation pointwise. Theoretically, the error estimates for the SV-ROM are sharper than those for the TH-ROM. Numerically, the SV-ROM yields significantly more accurate results, especially for coarser meshes and longer time intervals.
Field Comparison between Sling Psychrometer and Meteorological Measuring Set AN/TMQ-22
A field comparison was conducted between the AN/TMQ-22 and the ML-224 Sling Psychrometer. From a series of independent tests designed to minimize error, it was concluded that the AN/TMQ-22 yielded the more accurate dew point reading. The average relative humidity error using the sling psychrometer was +9%, while the AN/TMQ-22 was accurate to within ±2%. Even with careful measurement, the sling psychrometer yielded a +4% error.
Accurate ab initio Quartic Force Fields of Cyclic and Bent HC2N Isomers
NASA Technical Reports Server (NTRS)
Inostroza, Natalia; Huang, Xinchuan; Lee, Timothy J.
2012-01-01
Highly correlated ab initio quartic force fields (QFFs) are used to calculate the equilibrium structures and predict the spectroscopic parameters of three HC2N isomers: specifically, the quasilinear triplet ground state and the lowest cyclic and bent singlet isomers. Correlation effects were treated extensively using the singles and doubles coupled-cluster method with a perturbational estimate of the effects of connected triple excitations, denoted CCSD(T). Dunning's correlation-consistent basis sets cc-pVXZ, X = 3, 4, 5, were used, together with a three-point formula for extrapolation to the one-particle basis set limit. Core-correlation and scalar relativistic corrections were also included to yield highly accurate QFFs. The QFFs were used together with second-order perturbation theory (with proper treatment of Fermi resonances) and variational methods to solve the nuclear Schrödinger equation. The quasilinear nature of the triplet isomer is problematic, and it is concluded that a QFF is not adequate to describe properly all of its fundamental vibrational frequencies and spectroscopic constants (though some constants not dependent on the bending motion are well reproduced by perturbation theory). On the other hand, this procedure (a QFF together with either perturbation theory or variational methods) leads to highly accurate fundamental vibrational frequencies and spectroscopic constants for the cyclic and bent singlet isomers of HC2N. All three isomers possess significant dipole moments: 3.05, 3.06, and 1.71 D for the quasilinear triplet, the cyclic singlet, and the bent singlet isomers, respectively. It is concluded that the spectroscopic constants determined for the cyclic and bent singlet isomers are the most accurate available, and it is hoped that they will be useful in the interpretation of high-resolution astronomical observations or laboratory experiments.
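The abstract cites "a three-point formula" for extrapolating the cc-pVTZ/QZ/5Z energies to the one-particle basis set limit. The exponential form E(X) = E_CBS + B·r^X is one common three-point choice and is assumed here for illustration; the authors' actual formula may differ:

```python
def cbs_extrapolate(e3, e4, e5):
    """Solve E(X) = E_CBS + B*r**X through three consecutive energies
    (X = 3, 4, 5) and return the complete-basis-set limit E_CBS.
    Closed form: E_CBS = (E3*E5 - E4^2) / (E3 + E5 - 2*E4)."""
    return (e3 * e5 - e4 ** 2) / (e3 + e5 - 2.0 * e4)

# Synthetic energies decaying geometrically toward -76.0 hartree
e_cbs_true, B, r = -76.0, 0.5, 0.4
e3, e4, e5 = (e_cbs_true + B * r ** x for x in (3, 4, 5))
e_cbs = cbs_extrapolate(e3, e4, e5)
```

For exactly geometric convergence the closed form recovers the limit; with real CCSD(T) energies it is only as good as the assumed convergence model.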
Are artificial opals non-close-packed fcc structures?
NASA Astrophysics Data System (ADS)
García-Santamaría, F.; Braun, P. V.
2007-06-01
The authors report a simple experimental method to accurately measure the volume fraction of artificial opals. The results are modeled using several approaches, and some of the most commonly used ones are found to yield very inaccurate results. Both finite-size and substrate effects play an important role in calculations of the volume fraction. The experimental results show that the interstitial pore volume is 4%-15% larger than expected for close-packed structures. Consequently, calculations in previous work relating the amount of material synthesized in the opal interstices to the optical properties may need revision, especially in the case of high-refractive-index materials.
Temperature-Dependent Kinetic Model for Nitrogen-Limited Wine Fermentations▿
Coleman, Matthew C.; Fish, Russell; Block, David E.
2007-01-01
A physical and mathematical model for wine fermentation kinetics was adapted to include the influence of temperature, perhaps the most critical factor influencing fermentation kinetics. The model was based on flask-scale white wine fermentations at different temperatures (11 to 35°C) and different initial concentrations of sugar (265 to 300 g/liter) and nitrogen (70 to 350 mg N/liter). The results show that fermentation temperature and inadequate levels of nitrogen will cause stuck or sluggish fermentations. Model parameters representing cell growth rate, sugar utilization rate, and the inactivation rate of cells in the presence of ethanol are highly temperature dependent. All other variables (yield coefficient of cell mass to utilized nitrogen, yield coefficient of ethanol to utilized sugar, Monod constant for nitrogen-limited growth, and Michaelis-Menten-type constant for sugar transport) were determined to vary insignificantly with temperature. The resulting mathematical model accurately predicts the observed wine fermentation kinetics with respect to different temperatures and different initial conditions, including data from fermentations not used for model development. This is the first wine fermentation model that accurately predicts a transition from sluggish to normal to stuck fermentations as temperature increases from 11 to 35°C. Furthermore, this comprehensive model provides insight into combined effects of time, temperature, and ethanol concentration on yeast (Saccharomyces cerevisiae) activity and physiology. PMID:17616615
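The model described above couples Monod nitrogen-limited growth, Michaelis-Menten-type sugar transport, and ethanol-dependent cell inactivation. A minimal Euler integration sketches that structure; every rate constant and yield coefficient below is an invented placeholder, not a fitted parameter from the study:

```python
def ferment(hours=200.0, dt=0.01):
    """Toy wine-fermentation kinetics: returns (biomass, nitrogen, sugar,
    ethanol) in g/L after `hours` of simulated fermentation."""
    X, N, S, E = 0.2, 0.3, 265.0, 0.0   # biomass, nitrogen, sugar, ethanol (g/L)
    mu_max, K_N = 0.3, 0.01              # max growth rate (1/h), Monod constant
    v_max, K_S = 1.0, 10.0               # sugar uptake (g/g/h), M-M constant
    Y_XN, Y_ES = 25.0, 0.47              # biomass/N and ethanol/sugar yields
    k_d = 0.001                          # ethanol-linked inactivation constant
    t = 0.0
    while t < hours:
        mu = mu_max * N / (K_N + N)              # nitrogen-limited growth
        growth = mu * X
        uptake = v_max * X * S / (K_S + S)       # M-M-type sugar transport
        X += dt * (growth - k_d * E * X)         # growth minus ethanol kill-off
        N = max(0.0, N - dt * growth / Y_XN)
        S = max(0.0, S - dt * uptake)
        E += dt * Y_ES * uptake
        t += dt
    return X, N, S, E

X, N, S, E = ferment()
```

A "stuck" fermentation appears in this toy model when nitrogen runs out before the sugar does, capping biomass and hence the sugar uptake rate.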
Meyer, Andreas L S; Wiens, John J
2018-01-01
Estimates of diversification rates are invaluable for many macroevolutionary studies. Recently, an approach called BAMM (Bayesian Analysis of Macro-evolutionary Mixtures) has become widely used for estimating diversification rates and rate shifts. At the same time, several articles have concluded that estimates of net diversification rates from the method-of-moments (MS) estimators are inaccurate. Yet, no studies have compared the ability of these two methods to accurately estimate clade diversification rates. Here, we use simulations to compare their performance. We found that BAMM yielded relatively weak relationships between true and estimated diversification rates. This occurred because BAMM underestimated the number of rate shifts across each tree and assigned high rates to small clades with low rates. Errors in both speciation and extinction rates contributed to these errors, showing that using BAMM to estimate only speciation rates is also problematic. In contrast, the MS estimators (particularly using stem group ages) yielded stronger relationships between true and estimated diversification rates, by roughly twofold. Furthermore, the MS approach remained relatively accurate when diversification rates were heterogeneous within clades, despite the widespread assumption that it requires constant rates within clades. Overall, we caution that BAMM may be problematic for estimating diversification rates and rate shifts. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
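The method-of-moments estimators evaluated above reduce, in the zero-extinction case, to simple logarithmic formulas in clade size and age. A sketch of the stem- and crown-age forms follows; the clade numbers are illustrative, not drawn from the simulations:

```python
import math

def mom_rate_stem(n_species, stem_age_myr):
    """Net diversification rate from stem age (zero relative extinction):
    r = ln(n) / t."""
    return math.log(n_species) / stem_age_myr

def mom_rate_crown(n_species, crown_age_myr):
    """Crown-age form, conditioning on the two initial crown lineages:
    r = (ln(n) - ln(2)) / t."""
    return (math.log(n_species) - math.log(2.0)) / crown_age_myr

r_stem = mom_rate_stem(100, 10.0)    # a clade of 100 species, 10 Myr stem age
r_crown = mom_rate_crown(100, 8.0)
```

Published forms also accept a relative-extinction fraction; the zero-extinction case shown here is the simplest instance of the family of estimators.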
Payne, Courtney E.; Wolfrum, Edward J.
2015-03-12
Obtaining accurate chemical composition and reactivity (measures of carbohydrate release and yield) information for biomass feedstocks in a timely manner is necessary for the commercialization of biofuels. Our objective was to use near-infrared (NIR) spectroscopy and partial least squares (PLS) multivariate analysis to develop calibration models to predict the feedstock composition and the release and yield of soluble carbohydrates generated by a bench-scale dilute acid pretreatment and enzymatic hydrolysis assay. Major feedstocks included in the calibration models are corn stover, sorghum, switchgrass, perennial cool season grasses, rice straw, and miscanthus. We present individual model statistics to demonstrate model performance and validation samples to more accurately measure the predictive quality of the models. The PLS-2 model for composition predicts glucan, xylan, lignin, and ash (wt%) with uncertainties similar to primary measurement methods. A PLS-2 model was developed to predict glucose and xylose release following pretreatment and enzymatic hydrolysis. An additional PLS-2 model was developed to predict glucan and xylan yield. PLS-1 models were developed to predict the sum of glucose/glucan and xylose/xylan for release and yield (grams per gram). The release and yield models have higher uncertainties than the primary methods used to develop the models. In conclusion, it is possible to build effective multispecies feedstock models for composition, as well as carbohydrate release and yield. The model for composition is useful for predicting glucan, xylan, lignin, and ash with good uncertainties. The release and yield models have higher uncertainties; however, these models are useful for rapidly screening sample populations to identify unusual samples.
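The calibration approach above regresses composition and yield values on NIR spectra with PLS. A one-latent-variable PLS-1 fit in pure Python illustrates the core of the method on a toy dataset; real NIR models use many components, mean-centering, and preprocessing, and the "spectra" here are invented:

```python
def pls1_one_component(X, y):
    """Fit a single-latent-variable PLS-1 model. Returns (w, b) so a new
    sample x is predicted as b * dot(x, w)."""
    n, p = len(X), len(X[0])
    # weight vector proportional to X^T y (direction of max covariance with y)
    w = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    # scores t = X w, then regress y on the scores
    t = [sum(X[i][j] * w[j] for j in range(p)) for i in range(n)]
    tt = sum(v * v for v in t)
    b = sum(t[i] * y[i] for i in range(n)) / tt
    return w, b

# toy "spectra": the response is carried entirely by the first channel
X = [[1.0, 0.0], [2.0, 0.0], [3.0, 0.0]]
y = [2.0, 4.0, 6.0]
w, b = pls1_one_component(X, y)
pred = b * sum(x * wj for x, wj in zip([4.0, 0.0], w))
```

PLS-2 extends the same idea to predict several responses (e.g. glucan, xylan, lignin, ash) from one set of latent variables.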
On the analysis of Canadian Holstein dairy cow lactation curves using standard growth functions.
López, S; France, J; Odongo, N E; McBride, R A; Kebreab, E; AlZahal, O; McBride, B W; Dijkstra, J
2015-04-01
Six classical growth functions (monomolecular, Schumacher, Gompertz, logistic, Richards, and Morgan) were fitted to individual and average (by parity) cumulative milk production curves of Canadian Holstein dairy cows. The data analyzed consisted of approximately 91,000 daily milk yield records corresponding to 122 first, 99 second, and 92 third parity individual lactation curves. The functions were fitted using nonlinear regression procedures, and their performance was assessed using goodness-of-fit statistics (coefficient of determination, residual mean squares, Akaike information criterion, and the correlation and concordance coefficients between observed and adjusted milk yields at several days in milk). Overall, all the growth functions evaluated showed an acceptable fit to the cumulative milk production curves, with the Richards equation ranking first (smallest Akaike information criterion) followed by the Morgan equation. Differences among the functions in their goodness-of-fit were enlarged when fitted to average curves by parity, where the sigmoidal functions with a variable point of inflection (Richards and Morgan) outperformed the other 4 equations. All the functions provided satisfactory predictions of milk yield (calculated from the first derivative of the functions) at different lactation stages, from early to late lactation. The Richards and Morgan equations provided the most accurate estimates of peak yield and total milk production per 305-d lactation, whereas the least accurate estimates were obtained with the logistic equation. In conclusion, classical growth functions (especially sigmoidal functions with a variable point of inflection) proved to be feasible alternatives to fit cumulative milk production curves of dairy cows, resulting in suitable statistical performance and accurate estimates of lactation traits. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
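As described above, daily milk yield is recovered as the first derivative of the fitted cumulative curve. The Gompertz function (one of the six fitted) makes this concrete; the parameter values below are illustrative placeholders, not fitted Holstein values:

```python
import math

def gompertz(t, A, b, k):
    """Cumulative milk yield (kg) at day t: Y(t) = A * exp(-b * exp(-k*t))."""
    return A * math.exp(-b * math.exp(-k * t))

def gompertz_daily(t, A, b, k):
    """Daily yield dY/dt, obtained analytically from the cumulative curve."""
    return A * b * k * math.exp(-k * t) * math.exp(-b * math.exp(-k * t))

A, b, k = 9000.0, 4.0, 0.015        # asymptotic yield (kg), shape, rate
total_305 = gompertz(305.0, A, b, k)  # estimated 305-d lactation total
peak_day = math.log(b) / k            # inflection point = day of peak yield
peak_yield = gompertz_daily(peak_day, A, b, k)   # equals A*k/e analytically
```

The fixed inflection of the Gompertz curve is exactly the limitation the abstract notes: functions with a variable inflection point (Richards, Morgan) fit average curves better.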
Hydrogeological Controls on Regional-Scale Indirect Nitrous Oxide Emission Factors for Rivers.
Cooper, Richard J; Wexler, Sarah K; Adams, Christopher A; Hiscock, Kevin M
2017-09-19
Indirect nitrous oxide (N2O) emissions from rivers are currently derived using poorly constrained default IPCC emission factors (EF5r), which yield unreliable flux estimates. Here, we demonstrate how hydrogeological conditions can be used to develop more refined regional-scale EF5r estimates required for compiling accurate national greenhouse gas inventories. Focusing on three UK river catchments with contrasting bedrock and superficial geologies, N2O and nitrate (NO3-) concentrations were analyzed in 651 river water samples collected from 2011 to 2013. Unconfined Cretaceous Chalk bedrock regions yielded the highest median N2O-N concentration (3.0 μg L-1), EF5r (0.00036), and N2O-N flux (10.8 kg ha-1 a-1). Conversely, regions of bedrock confined by glacial deposits yielded significantly lower median N2O-N concentration (0.8 μg L-1), EF5r (0.00016), and N2O-N flux (2.6 kg ha-1 a-1), regardless of bedrock type. Bedrock permeability is an important control in regions where groundwater is unconfined, with a high N2O yield from high-permeability chalk contrasting with significantly lower median N2O-N concentration (0.7 μg L-1), EF5r (0.00020), and N2O-N flux (2.0 kg ha-1 a-1) on lower-permeability unconfined Jurassic mudstone. The evidence presented here demonstrates that EF5r can be differentiated by hydrogeological conditions, and thus provides a valuable proxy for generating improved regional-scale N2O emission estimates.
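EF5r is defined as the ratio of dissolved N2O-N to NO3-N in the river water. Using the chalk-catchment medians quoted above (N2O-N = 3.0 μg/L at EF5r = 0.00036), the implied NO3-N can be back-calculated for illustration; it is not a value reported in the abstract:

```python
def ef5r(n2o_n, no3_n):
    """Indirect emission factor: dissolved N2O-N / NO3-N (same units)."""
    return n2o_n / no3_n

# back-calculate the NO3-N consistent with the quoted chalk medians
implied_no3_n = 3.0 / 0.00036     # ug/L, i.e. roughly 8.3 mg N/L
factor = ef5r(3.0, implied_no3_n)
```

Because the factor is a simple concentration ratio, catchment-scale differences in either N2O production or nitrate loading shift EF5r directly, which is why hydrogeology is an effective stratifier.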
Hunt, Alison C; Ek, Mattias; Schönbächler, Maria
2017-12-01
This study presents a new measurement procedure for the isolation of Pt from iron meteorite samples. The method also allows for the separation of Pd from the same sample aliquot. The separation entails a two-stage anion-exchange procedure. In the first stage, Pt and Pd are separated from each other and from major matrix constituents including Fe and Ni. In the second stage, Ir is reduced with ascorbic acid and eluted from the column before Pt collection. Platinum yields for the total procedure were typically 50-70%. After purification, high-precision Pt isotope determinations were performed by multi-collector ICP-MS. The precision of the new method was assessed using the IIAB iron meteorite North Chile. Replicate analyses of multiple digestions of this material yielded an intermediate precision for the measurement results of 0.73 for ε192Pt, 0.15 for ε194Pt, and 0.09 for ε196Pt (2 standard deviations). The NIST SRM 3140 Pt solution reference material was passed through the measurement procedure and yielded an isotopic composition identical to that of the unprocessed Pt reference material, indicating that the new technique is unbiased within the limit of the estimated uncertainties. Data for three iron meteorites support that Pt isotope variations in these samples are due to exposure to galactic cosmic rays in space.
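The epsilon notation used for the Pt isotope results above expresses the relative deviation of a sample isotope ratio from the standard ratio in parts per 10,000. The ratio values below are invented purely for illustration:

```python
def epsilon(r_sample, r_standard):
    """Epsilon units: (R_sample / R_standard - 1) * 10^4."""
    return (r_sample / r_standard - 1.0) * 1e4

# a sample ratio offset from the standard by +1 epsilon unit (illustrative)
r_std = 0.00598
eps = epsilon(r_std * 1.0001, r_std)
```

An intermediate precision of 0.73 ε for ε192Pt therefore corresponds to resolving relative ratio differences of better than one part in 10,000.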
2014-01-01
Controlling harmful algal blooms (HABs) with microbial algicides is cheap, efficient, and environmentally friendly. However, obtaining a high yield of algicidal microbes to meet the needs of field tests remains a challenge, since qualitative and quantitative analysis of algicidal compounds is difficult. In this study, we developed a protocol to increase the yields of both biomass and the algicidal compound produced by a novel algicidal actinomycete, Streptomyces alboflavus RPS, which kills Phaeocystis globosa. To overcome the difficulty of quantifying the algicidal compound, we chose the algicidal ratio as the index and used an artificial neural network to fit the data, which is appropriate for this nonlinear situation. In this protocol, we first determined five main influencing factors through single-factor experiments and generated the multifactorial experimental groups with a U15(155) uniform design table. Then, we used both a traditional quadratic polynomial stepwise regression model and an accurate, fully optimized BP neural network to simulate the fermentation. Optimized with a genetic algorithm and verified experimentally, the protocol increased the algicidal ratio of the fermentation broth by 16.90% and the dry mycelial weight by 69.27%. These results suggest that this newly developed approach is a viable and easy way to optimize fermentation conditions for algicidal microorganisms. PMID:24886410
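The optimization step above fits a surrogate model to the fermentation data and then searches it with a genetic algorithm. A toy elitist GA over a made-up quadratic response surface (standing in for the fitted BP-network model of algicidal ratio) sketches that search; nothing below uses the study's actual factors or data:

```python
import random

random.seed(0)

def fitness(x):
    """Toy response surface with its optimum at x = (0.3, 0.7)."""
    return 1.0 - (x[0] - 0.3) ** 2 - (x[1] - 0.7) ** 2

def evolve(pop_size=40, generations=60, sigma=0.05):
    """Elitist GA: keep the top half, refill with mutated copies of parents."""
    pop = [[random.random(), random.random()] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = [[min(1.0, max(0.0, g + random.gauss(0.0, sigma)))
                     for g in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the parents are carried over unchanged each generation, the best fitness is non-decreasing, which makes even this minimal scheme converge reliably on a smooth surface.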
Cai, Guanjing; Zheng, Wei; Yang, Xujun; Zhang, Bangzhou; Zheng, Tianling
2014-05-24
Wang, Ying; Wu, Rong Jun; Guo, Zhao Bing
2016-05-01
Based on modeled actual evapotranspiration from the NOAH land surface model, the temporal and spatial variations of actual evapotranspiration were analyzed for the Huang-Huai-Hai region for 2002-2010. In addition, an agricultural drought index, the drought severity index (DSI), was constructed by incorporating the MOD17 potential evapotranspiration and MOD13 NDVI products. Furthermore, the applicability of the established DSI in this region was investigated against the Palmer drought severity index (PDSI), the yield reduction rate of winter wheat, and drought severity data. The results showed that the annual average actual evapotranspiration within the survey region increased from the northwest to the southeast, with a maximum of 800-900 mm in the southeast and a minimum of less than 300 mm in the northwest. The DSI and PDSI were positively correlated (R2 = 0.61) and highly concordant in their trends: both reached their lowest points (-0.61 and -1.33) in 2002 and peaked (0.81 and 0.92) in 2003. The correlation between DSI and the yield reduction rate of winter wheat (R2 = 0.43) was more significant than that between PDSI and the yield reduction rate (R2 = 0.06). The DSI thus resolved the drought pattern at high spatial resolution and could reflect regional agricultural drought severity and intensity more accurately.
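A DSI of the kind described above is commonly built by standardizing the ET/PET ratio and NDVI anomalies, summing them, and re-standardizing; that recipe is assumed here for illustration, and the operational index construction may differ in detail:

```python
from statistics import mean, stdev

def zscores(xs):
    """Standardize a series to zero mean and unit (sample) std deviation."""
    m, s = mean(xs), stdev(xs)
    return [(x - m) / s for x in xs]

def dsi(et, pet, ndvi):
    """Drought severity index per year: negative = drier than normal."""
    z_ratio = zscores([e / p for e, p in zip(et, pet)])  # water-balance anomaly
    z_ndvi = zscores(ndvi)                               # vegetation anomaly
    combined = [a + b for a, b in zip(z_ratio, z_ndvi)]
    return zscores(combined)

# three illustrative years: a dry year, a wet year, and a normal year
out = dsi(et=[300.0, 500.0, 400.0], pet=[1000.0, 1000.0, 1000.0],
          ndvi=[0.30, 0.60, 0.45])
```

Combining a water-balance term with a vegetation term is what lets the index track agricultural (crop-impact) drought rather than meteorological drought alone.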
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reich, N.H.; van Sark, W.G.J.H.M.; Turkenburg, W.C.
2010-08-15
In this paper, we show that photovoltaic (PV) energy yields can be simulated using standard rendering and ray-tracing features of computer-aided design (CAD) software. To this end, three-dimensional (3-D) scenes are ray-traced in CAD. The PV power output is then modeled by translating the irradiance intensity data of rendered images back into numerical data. To ensure accurate results, the solar irradiation data used as input were compared to numerical data obtained from rendered images, showing excellent agreement. As expected, the ray-tracing precision of the CAD software also proves to be very high. To demonstrate PV energy yield simulations using this innovative concept, solar radiation time-course data for a few days were modeled in 3-D to simulate the distributions of irradiance incident on flat, single- and double-bend shapes and on a PV-powered computer mouse located on a window sill. Comparisons of measured to simulated PV output of the mouse show that simulation accuracies can be very high in practice as well. Theoretically, this concept has great potential, as it can be adapted to suit a wide range of solar energy applications, such as sun-tracking and concentrator systems, building-integrated PV (BIPV), or product-integrated PV (PIPV). However, graphical user interfaces for 'CAD-PV' software tools are not yet available.
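Once a rendered image has been mapped back to per-pixel irradiance, as described above, the PV power estimate is just irradiance integrated over the illuminated area times a conversion efficiency. A toy version of that final step follows; the pixel values and efficiency are illustrative, not the paper's measured data:

```python
def pv_power(irradiance_map, pixel_area_m2, efficiency):
    """Sum per-pixel irradiance (W/m^2) * pixel area * efficiency -> watts."""
    return sum(sum(row) for row in irradiance_map) * pixel_area_m2 * efficiency

# a 2x2 patch of irradiance values recovered from a rendered image (W/m^2)
pixels = [[800.0, 820.0],
          [790.0, 810.0]]
power = pv_power(pixels, 1e-4, 0.15)   # 1 cm^2 pixels, 15% efficient cell
```

The per-pixel formulation is what makes the approach work for curved or partially shaded surfaces: each pixel carries its own ray-traced irradiance.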
Hyperspectral sensing to detect the impact of herbicide drift on cotton growth and yield
NASA Astrophysics Data System (ADS)
Suarez, L. A.; Apan, A.; Werth, J.
2016-10-01
Yield loss in crops is often associated with plant disease or external factors such as environment, water supply and nutrient availability. Improper agricultural practices can also introduce risks into the equation. Herbicide drift can be a combination of improper practices and environmental conditions which can create a potential yield loss. As traditional assessment of plant damage is often imprecise and time consuming, the ability of remote and proximal sensing techniques to monitor various bio-chemical alterations in the plant may offer a faster, non-destructive and reliable approach to predict yield loss caused by herbicide drift. This paper examines the prediction capabilities of partial least squares regression (PLS-R) models for estimating yield. Models were constructed with hyperspectral data of a cotton crop sprayed with three simulated doses of the phenoxy herbicide 2,4-D at three different growth stages. Fibre quality, photosynthesis, conductance, and two main hormones, indole acetic acid (IAA) and abscisic acid (ABA), were also analysed. Spearman correlations showed that all of these variables except fibre quality and ABA were strongly affected by the chemical. Four PLS-R models for predicting yield were developed according to four timings of data collection: 2, 7, 14 and 28 days after exposure (DAE). As indicated by the model performance, the analysis revealed that 7 DAE was the best time for data collection purposes (RMSEP = 2.6 and R2 = 0.88), followed by 28 DAE (RMSEP = 3.2 and R2 = 0.84). In summary, the results of this study show that it is possible to accurately predict yield after a simulated herbicide drift of 2,4-D on a cotton crop, through the analysis of hyperspectral data, thereby providing a reliable, effective and non-destructive alternative based on the internal response of the cotton leaves.
NASA Astrophysics Data System (ADS)
Draper, D. C.; Farmer, D. K.; Desyaterik, Y.; Fry, J. L.
2015-11-01
The effect of NO2 on secondary organic aerosol (SOA) formation from ozonolysis of α-pinene, β-pinene, Δ3-carene, and limonene was investigated using a dark flow-through reaction chamber. SOA mass yields were calculated for each monoterpene from ozonolysis with varying NO2 concentrations. Kinetics modeling of the first-generation gas-phase chemistry suggests that differences in observed aerosol yields for different NO2 concentrations are consistent with NO3 formation and subsequent competition between O3 and NO3 to oxidize each monoterpene. α-Pinene was the only monoterpene studied that showed a systematic decrease in both aerosol number concentration and mass concentration with increasing [NO2]. β-Pinene and Δ3-carene produced fewer particles at higher [NO2], but both retained moderate mass yields. Limonene exhibited both higher number concentrations and greater mass concentrations at higher [NO2]. SOA from each experiment was collected and analyzed by HPLC-ESI-MS, enabling comparisons between product distributions for each system. In general, the systems influenced by NO3 oxidation contained more high molecular weight products (MW > 400 amu), suggesting the importance of oligomerization mechanisms in NO3-initiated SOA formation. α-Pinene, which showed anomalously low aerosol mass yields in the presence of NO2, showed no increase in these oligomer peaks, suggesting that lack of oligomer formation is a likely cause of α-pinene's near 0 % yields with NO3. Through direct comparisons of mixed-oxidant systems, this work suggests that NO3 is likely to dominate nighttime oxidation pathways in most regions with both biogenic and anthropogenic influences. Therefore, accurately constraining SOA yields from NO3 oxidation, which vary substantially with the volatile organic compound precursor, is essential in predicting nighttime aerosol production.
Ensembles modeling approach to study Climate Change impacts on Wheat
NASA Astrophysics Data System (ADS)
Ahmed, Mukhtar; Claudio, Stöckle O.; Nelson, Roger; Higgins, Stewart
2017-04-01
Simulations of crop yield under climate variability are subject to uncertainties, and quantification of such uncertainties is essential for effective use of projected results in adaptation and mitigation strategies. In this study we evaluated the uncertainties related to crop-climate models using five crop growth simulation models (CropSyst, APSIM, DSSAT, STICS and EPIC) and 14 general circulation models (GCMs) for two representative concentration pathways (RCP4.5 and RCP8.5, i.e. radiative forcing of 4.5 and 8.5 W m-2) in the Pacific Northwest (PNW), USA. The aim was to assess how accurately different process-based crop models could estimate winter wheat growth, development and yield. First, all models were calibrated for high-rainfall, medium-rainfall, low-rainfall and irrigated sites in the PNW using 1979-2010 as the baseline period. Response variables were related to farm management and soil properties, and included crop phenology, leaf area index (LAI), biomass and grain yield of winter wheat. All five models were run from 2000 to 2100 using the 14 GCMs and 2 RCPs to evaluate the effect of future climate (rainfall, temperature and CO2) on winter wheat phenology, LAI, biomass, grain yield and harvest index. Simulated time to flowering and maturity was reduced in all models except EPIC, with some level of uncertainty. All models generally predicted an increase in biomass and grain yield under elevated CO2, but this effect was more prominent under rainfed conditions than under irrigation. However, there was uncertainty in the simulation of crop phenology, biomass and grain yield under the 14 GCMs during the three prediction periods (2030, 2050 and 2070). We concluded that to improve accuracy and consistency in simulating wheat growth dynamics and yield under a changing climate, a multimodel ensemble approach should be used.
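The multimodel ensemble logic above (5 crop models x 14 GCMs, with uncertainty partitioned across the two axes) reduces to simple array operations. A sketch with illustrative numbers; the real study would fill the array with simulated winter-wheat yields per scenario and period.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative yields (t/ha) for 5 crop models x 14 GCMs under one RCP
# and one prediction period; values here are made up for demonstration.
yields = 5.0 + rng.normal(scale=0.4, size=(5, 14))

ens_mean = float(yields.mean())                       # ensemble mean yield
crop_model_spread = float(yields.mean(axis=1).std())  # spread across crop models
gcm_spread = float(yields.mean(axis=0).std())         # spread across GCMs
```

Comparing `crop_model_spread` against `gcm_spread` is one common way to see whether crop-model structure or climate forcing dominates the projection uncertainty.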
The yield and decay coefficients of exoelectrogenic bacteria in bioelectrochemical systems.
Wilson, Erica L; Kim, Younggy
2016-05-01
In conventional wastewater treatment, waste sludge management and disposal contribute a major portion of the treatment cost. Bioelectrochemical systems, a potential alternative for future wastewater treatment and resource recovery, are expected to produce small amounts of waste sludge because exoelectrogenic bacteria grow by anaerobic respiration and form highly populated biofilms on bioanode surfaces. While waste sludge production is governed by the yield and decay coefficients, no previous study has quantified these kinetic constants for exoelectrogenic bacteria. For yield coefficient estimation, we modified McCarty's free energy-based model by using the bioanode potential for the free energy of the electron acceptor reaction. The estimated true yield coefficient ranged from 0.1 to 0.3 g-VSS (volatile suspended solids) g-COD(-1) (chemical oxygen demand), which is similar to that of most anaerobic microorganisms. The yield coefficient was sensitively affected by the bioanode potential and pH, while the substrate and bicarbonate concentrations had relatively minor effects. In lab-scale experiments using microbial electrolysis cells, the observed yield coefficient (including the effect of cell decay) was found to be 0.020 ± 0.008 g-VSS g-COD(-1), which is an order of magnitude smaller than the theoretical estimation. Based on the difference between the theoretical and experimental results, the decay coefficient was approximated to be 0.013 ± 0.002 d(-1). These findings indicate that bioelectrochemical systems have potential for future wastewater treatment with reduced waste sludge as well as for resource recovery. The kinetic constants obtained here will also allow accurate estimation of wastewater treatment performance in bioelectrochemical systems. Copyright © 2016 Elsevier Ltd. All rights reserved.
Surface flow measurements from drones
NASA Astrophysics Data System (ADS)
Tauro, Flavia; Porfiri, Maurizio; Grimaldi, Salvatore
2016-09-01
Drones are transforming the way we sense and interact with the environment. However, despite their increased capabilities, the use of drones in the geophysical sciences usually focuses on image acquisition for generating high-resolution maps. Motivated by the increasing demand for innovative and high-performance geophysical observational methodologies, we posit the integration of drone technology and optical sensing toward a quantitative characterization of surface flow phenomena. We demonstrate that a recreational drone can be used to yield accurate surface flow maps of sub-meter water bodies. Specifically, the drone's vibrations do not hinder surface flow observations, and velocity measurements are in agreement with traditional techniques. This first instance of quantitative water flow sensing from a flying drone paves the way to novel observations of the environment.
Performance and Results for Quartz Detector for the SuperHMS Spectrometer at Hall C Jefferson Lab
NASA Astrophysics Data System (ADS)
Griego, Benjamin F., Jr.
A quartz detector has been constructed to be part of the trigger system for the Super High Momentum Spectrometer (SHMS). The SHMS will play a pivotal role in carrying out the 12 GeV physics program at Hall C, Jefferson Lab. The quartz hodoscope consists of twenty-one fused silica bars. Each bar is 125 cm long, 5.5 cm wide, 2.5 cm thick, and is viewed by a UV-sensitive PMT on each end. The quartz hodoscope's task is to provide clean detection of charged particles, a high level of background suppression, and an accurate tracking efficiency determination. Initial test results of the quartz detectors, including light yield and position resolution, will be presented.
NASA Astrophysics Data System (ADS)
Mitilineos, Stelios A.; Argyreas, Nick D.; Thomopoulos, Stelios C. A.
2009-05-01
A fusion-based localization technique for location-based services in indoor environments is introduced herein, based on ultrasound time-of-arrival measurements from multiple off-the-shelf range-estimating sensors which are used in a market-available localization system. In-situ field measurement results indicated that the respective off-the-shelf system was unable to estimate position in most cases, while the underlying sensors are of low quality and yield highly inaccurate range and position estimates. An extensive analysis is performed and a model of the sensor performance characteristics is established. A low-complexity but accurate sensor fusion and localization technique is then developed, which consists of evaluating multiple sensor measurements and selecting the one that is considered most accurate based on the underlying sensor model. Optimality, in the sense of a genie selecting the optimum sensor, is subsequently evaluated and compared to the proposed technique. The experimental results indicate that the proposed fusion method exhibits near-optimal performance and, albeit theoretically suboptimal, largely overcomes most flaws of the underlying single-sensor system, resulting in a localization system of increased accuracy, robustness and availability.
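The selection-based fusion step described above (evaluate several sensor readings, keep the one the sensor model deems most accurate) can be sketched as follows. The error model here, a fixed per-sensor standard deviation, is our simplification; the paper builds a richer empirical model of each sensor's behaviour.

```python
def select_measurement(readings, model_std):
    """Given range readings keyed by sensor id and a per-sensor error
    model (predicted standard deviation, in metres), return the id and
    reading of the sensor the model considers most accurate."""
    best_sensor = min(readings, key=lambda sid: model_std[sid])
    return best_sensor, readings[best_sensor]

# Illustrative readings (metres) and modelled errors for three sensors
readings = {"s1": 4.92, "s2": 5.40, "s3": 4.15}
model_std = {"s1": 0.05, "s2": 0.30, "s3": 0.80}
sensor, range_est = select_measurement(readings, model_std)
```

A genie selector (the optimality benchmark in the abstract) would instead pick the reading closest to the unknown true range; comparing the two quantifies how much the model-based selection loses.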
A fully convolutional network for weed mapping of unmanned aerial vehicle (UAV) imagery.
Huang, Huasheng; Deng, Jizhong; Lan, Yubin; Yang, Aqing; Deng, Xiaoling; Zhang, Lei
2018-01-01
Appropriate site-specific weed management (SSWM) is crucial to ensure crop yields. Within SSWM of large-scale areas, remote sensing is a key technology for providing accurate weed distribution information. Compared with satellite and piloted-aircraft remote sensing, an unmanned aerial vehicle (UAV) is capable of capturing high-spatial-resolution imagery, which provides more detailed information for weed mapping. The objective of this paper is to generate an accurate weed cover map based on UAV imagery. The UAV RGB imagery was collected in October 2017 over a rice field located in South China. A fully convolutional network (FCN) method was proposed for weed mapping of the collected imagery. Transfer learning was used to improve generalization capability, and skip architecture was applied to increase the prediction accuracy. The performance of the FCN architecture was then compared with a patch-based CNN algorithm and a pixel-based CNN method. Experimental results showed that our FCN method outperformed the others, both in terms of accuracy and efficiency. The overall accuracy of the FCN approach was up to 0.935 and the accuracy for weed recognition was 0.883, which means that this algorithm is capable of generating accurate weed cover maps for the evaluated UAV imagery.
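The two accuracy figures quoted above (overall accuracy and per-class weed-recognition accuracy) are the usual confusion-matrix quantities. A generic helper, not the authors' code, with made-up pixel counts:

```python
import numpy as np

def accuracies(conf):
    """Overall accuracy and per-class (producer's) accuracy from a
    confusion matrix with rows = true class, cols = predicted class."""
    conf = np.asarray(conf, dtype=float)
    overall = np.trace(conf) / conf.sum()
    per_class = np.diag(conf) / conf.sum(axis=1)
    return overall, per_class

# Illustrative 2-class (row/col 0 = crop, 1 = weed) pixel counts
overall, per_class = accuracies([[900, 40],
                                 [60, 500]])
```

Here `per_class[1]` plays the role of the weed-recognition accuracy reported in the abstract.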
NASA Astrophysics Data System (ADS)
Clancy, Michael; Belli, Antonio; Davies, David; Lucas, Samuel J. E.; Su, Zhangjie; Dehghani, Hamid
2015-07-01
The subject of superficial contamination and signal origins remains a widely debated topic in the field of near-infrared spectroscopy (NIRS), yet the concept of using the technology to monitor an injured brain in a clinical setting poses additional challenges concerning the quantitative accuracy of recovered parameters. Using high-density diffuse optical tomography probes, quantitatively accurate parameters from different layers (skin, bone and brain) can be recovered from subject-specific reconstruction models. This study assesses the use of registered atlas models for situations where subject-specific models are not available. Data simulated from subject-specific models were reconstructed using eight registered atlas models, implementing a regional (layered) parameter recovery in NIRFAST. A 3-region recovery based on the atlas model yielded recovered brain saturation values that were accurate to within 4.6% (percentage error) of the simulated values, validating the technique. The recovered saturations in the superficial regions were not quantitatively accurate. These findings highlight differences in superficial (skin and bone) layer thickness between the subject and atlas models. This layer thickness mismatch was propagated through the reconstruction process, decreasing the parameter accuracy.
USDA-ARS?s Scientific Manuscript database
This study assesses the ability of 21 crop models to capture the impact of elevated CO2 concentration ([CO2]) on maize yield and water use as measured in a 2-year Free Air Carbon dioxide Enrichment experiment conducted at the Thünen Institute in Braunschweig, Germany (Manderscheid et al. 2014). D...
Accurate FRET Measurements within Single Diffusing Biomolecules Using Alternating-Laser Excitation
Lee, Nam Ki; Kapanidis, Achillefs N.; Wang, You; Michalet, Xavier; Mukhopadhyay, Jayanta; Ebright, Richard H.; Weiss, Shimon
2005-01-01
Fluorescence resonance energy transfer (FRET) between a donor (D) and an acceptor (A) at the single-molecule level currently provides qualitative information about distance, and quantitative information about kinetics of distance changes. Here, we used the sorting ability of confocal microscopy equipped with alternating-laser excitation (ALEX) to measure accurate FRET efficiencies and distances from single molecules, using corrections that account for cross-talk terms that contaminate the FRET-induced signal, and for differences in the detection efficiency and quantum yield of the probes. ALEX yields accurate FRET independent of instrumental factors, such as excitation intensity or detector alignment. Using DNA fragments, we showed that ALEX-based distances agree well with predictions from a cylindrical model of DNA; ALEX-based distances fit better to theory than distances obtained at the ensemble level. Distance measurements within transcription complexes agreed well with ensemble-FRET measurements, and with structural models based on ensemble-FRET and x-ray crystallography. ALEX can benefit structural analysis of biomolecules, especially when such molecules are inaccessible to conventional structural methods due to heterogeneity or transient nature. PMID:15653725
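The correction scheme described above (subtract the cross-talk terms contaminating the FRET channel, then apply a gamma factor for detection-efficiency and quantum-yield differences) has the following general shape. This is a sketch in the spirit of the ALEX approach; the coefficient names are ours, not the paper's notation, and real analyses calibrate the leakage, direct-excitation and gamma terms from the data.

```python
def fret_efficiency(f_dex_dem, f_dex_aem, f_aex_aem,
                    leakage, direct_exc, gamma):
    """Corrected FRET efficiency from three photon streams:
    f_dex_dem -- donor excitation, donor emission
    f_dex_aem -- donor excitation, acceptor emission (FRET channel)
    f_aex_aem -- acceptor excitation, acceptor emission (ALEX channel)
    The FRET channel is purged of donor leakage and acceptor direct
    excitation, then gamma rescales the donor signal."""
    f_fret = f_dex_aem - leakage * f_dex_dem - direct_exc * f_aex_aem
    return f_fret / (f_fret + gamma * f_dex_dem)
```

With no cross-talk and gamma = 1, the expression reduces to the uncorrected proximity ratio.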
Satellite-based assessment of yield variation and its determinants in smallholder African systems
Lobell, David B.
2017-01-01
The emergence of satellite sensors that can routinely observe millions of individual smallholder farms raises possibilities for monitoring and understanding agricultural productivity in many regions of the world. Here we demonstrate the potential to track smallholder maize yield variation in western Kenya, using a combination of 1-m Terra Bella imagery and intensive field sampling on thousands of fields over 2 y. We find that agreement between satellite-based and traditional field survey-based yield estimates depends significantly on the quality of the field-based measures, with agreement highest (R2 up to 0.4) when using precise field measures of plot area and when using larger fields for which rounding errors are smaller. We further show that satellite-based measures are able to detect positive yield responses to fertilizer and hybrid seed inputs and that the inferred responses are statistically indistinguishable from estimates based on survey-based yields. These results suggest that high-resolution satellite imagery can be used to make predictions of smallholder agricultural productivity that are roughly as accurate as the survey-based measures traditionally used in research and policy applications, and they indicate a substantial near-term potential to quickly generate useful datasets on productivity in smallholder systems, even with minimal or no field training data. Such datasets could rapidly accelerate learning about which interventions in smallholder systems have the most positive impact, thus enabling more rapid transformation of rural livelihoods. PMID:28202728
Mechanical and hydraulic properties of Nankai accretionary prism sediments: Effect of stress path
NASA Astrophysics Data System (ADS)
Kitajima, Hiroko; Chester, Frederick M.; Biscontin, Giovanna
2012-10-01
We have conducted triaxial deformation experiments along different loading paths on prism sediments from the Nankai Trough. Different load paths of isotropic loading, uniaxial strain loading, triaxial compression (at constant confining pressure, Pc), undrained Pc reduction, drained Pc reduction, and triaxial unloading at constant Pc were used to understand the evolution of mechanical and hydraulic properties under the complicated stress states and loading histories in accretionary subduction zones. Five deformation experiments were conducted on three sediment core samples from the Nankai prism, specifically from older accreted sediments at the forearc basin, underthrust slope sediments beneath the megasplay fault, and overthrust Upper Shikoku Basin sediments along the frontal thrust. Yield envelopes for each sample were constructed based on the stress paths of Pc reduction using the modified Cam-clay model, and in situ stress states of the prism were constrained using the results from the other load paths and accounting for horizontal stress. Results suggest that the sediments in the vicinity of the megasplay fault and frontal thrust are highly overconsolidated, and thus likely to deform in a brittle rather than a ductile manner. The porosity of the sediments decreases as the yield envelope expands, while the reduction in permeability depends mainly on the effective mean stress before yield and the differential stress after yield. An improved understanding of sediment yield strength and hydromechanical properties along different load paths is necessary to accurately treat the coupling of deformation and fluid flow in accretionary subduction zones.
NASA Astrophysics Data System (ADS)
Davenport, F., IV; Harrison, L.; Shukla, S.; Husak, G. J.; Funk, C. C.
2017-12-01
We evaluate the predictive accuracy of an ensemble of empirical model specifications that use earth observation data to predict sub-national grain yields in Mexico and East Africa. Products that are actively used for seasonal drought monitoring are tested as yield predictors. Our research is driven by the fact that East Africa is a region where decisions regarding agricultural production are critical to preventing the loss of economic livelihoods and human life. Regional grain yield forecasts can be used to anticipate availability and prices of key staples, which in turn can inform decisions about targeting humanitarian response such as food aid. Our objective is to identify, for a given region, grain, and time of year, what type of model and/or earth observation can most accurately predict end-of-season yields. We fit a set of models to county-level panel data from Mexico, Kenya, Sudan, South Sudan, and Somalia. We then examine out-of-sample predictive accuracy using various linear and non-linear models that incorporate spatially and time-varying coefficients. We compare accuracy within and across models that use predictor variables from remotely sensed measures of precipitation, temperature, soil moisture, and other land surface processes. We also examine at what point in the season a given model or product is most useful for prediction. Finally, we compare predictive accuracy across a variety of agricultural regimes, including high-intensity irrigated commercial agriculture and rain-fed subsistence farms.
Moore, K L; Mrode, R; Coffey, M P
2017-10-01
Visual image analysis (VIA) of carcass traits provides the opportunity to estimate carcass primal cut yields on large numbers of slaughter animals. This allows carcases to be better differentiated and farmers to be paid based on the primal cut yields. It also enables more accurate genetic selection, because the high volume of data allows breeders to breed cattle that better meet abattoir specifications and market requirements. In order to implement genetic evaluations for VIA primal cut yields, genetic parameters must first be estimated, and that was the aim of this study. Slaughter records for VIA carcass traits from the UK prime slaughter population were available from two processing plants. After edits, there were 17 765 VIA carcass records for six primal cut traits, carcass weight, and the EUROP conformation and fat class grades. Heritability estimates after traits were adjusted for age ranged from 0.32 (0.03) for EUROP fat to 0.46 (0.03) for VIA Topside primal cut yield. Adjusting the VIA primal cut yields for carcass weight reduced the heritability estimates, with estimates for primal cut yields ranging from 0.23 (0.03) for Fillet to 0.29 (0.03) for Knuckle. Genetic correlations between VIA primal cut yields adjusted for carcass weight were strong, ranging from 0.40 (0.06) between Fillet and Striploin to 0.92 (0.02) between Topside and Silverside. EUROP conformation was also positively correlated with the VIA primal cuts, with genetic correlation estimates ranging from 0.59 to 0.84, whereas EUROP fat was estimated to have moderate negative correlations with primal cut yields, with estimates ranging from -0.11 to -0.46. Based on these genetic parameter estimates, genetic evaluation of VIA primal cut yields can be undertaken to allow the UK beef industry to select carcases that better meet abattoir specifications and market requirements.
Chandran, S; Parker, F; Lontos, S; Vaughan, R; Efthymiou, M
2015-12-01
Polyps identified at colonoscopy are predominantly diminutive (<5 mm), with a small risk (<1%) of high-grade dysplasia or carcinoma; however, the cost of histological assessment is substantial. The aim of this study was to determine whether prediction of colonoscopy surveillance intervals based on real-time endoscopic assessment of polyp histology is accurate and cost effective. A prospective cohort study was conducted across a tertiary care and a private community hospital. Ninety-four patients underwent colonoscopy and polypectomy of diminutive (≤5 mm) polyps from October 2012 to July 2013, yielding a total of 159 polyps. Polyps were examined and classified according to the Sano-Emura classification system. The endoscopic assessment (optical diagnosis) of polyp histology was used to predict appropriate colonoscopy surveillance intervals. The main outcome measure was the accuracy of optical diagnosis of diminutive colonic polyps against the gold standard of histological assessment. Optical diagnosis was correct in 105/108 (97.2%) adenomas. This yielded a sensitivity, specificity, and positive and negative predictive values (with 95% CI) of 97.2% (92.1-99.4%), 78.4% (64.7-88.7%), 90.5% (83.7-95.2%) and 93% (80.9-98.5%), respectively. Ninety-two (98%) patients were correctly triaged to their repeat surveillance colonoscopy. Based on these findings, a cut-and-discard approach would have resulted in a saving of $319.77 per patient. Endoscopists within a tertiary care setting can accurately predict diminutive polyp histology and confer an appropriate surveillance interval with an associated financial benefit to the healthcare system. However, limitations to its application in the community setting exist, which may improve with further training and high-definition colonoscopes. © 2015 Royal Australasian College of Physicians.
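The four reported rates follow from a single 2x2 table. With cell counts reconstructed to be consistent with the quoted percentages and the 159-polyp total (TP = 105, FN = 3, FP = 11, TN = 40; our arithmetic reconstruction, not figures stated in the abstract), the standard formulas reproduce them:

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, PPV and NPV from 2x2 table counts."""
    return (tp / (tp + fn),   # sensitivity: correct among true positives
            tn / (tn + fp),   # specificity: correct among true negatives
            tp / (tp + fp),   # positive predictive value
            tn / (tn + fn))   # negative predictive value

sens, spec, ppv, npv = diagnostic_metrics(tp=105, fn=3, fp=11, tn=40)
```

Note the same TP and FN appear in the "105/108 adenomas correct" statement, which is just the sensitivity written as a fraction.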
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zerouali, K; Aubry, J; Doucet, R
2016-06-15
Purpose: To implement the new EBT-XD Gafchromic films for accurate dosimetric and geometric validation in stereotactic radiosurgery (SRS) and stereotactic body radiation therapy (SBRT) CyberKnife (CK) patient-specific QA. Methods: Film calibration was performed using triple-channel film analysis on an Epson 10000XL scanner. Calibration films were irradiated using a Varian Clinac 21EX flattened beam (0 to 20 Gy) to ensure sufficient dose homogeneity. Films were scanned at a resolution of 0.3 mm, 24 hours post irradiation, following a well-defined protocol. A set of 12 QAs was performed for several types of CK plans: trigeminal neuralgia, brain metastasis, and prostate and lung tumors. A custom-made insert for the CK head phantom was manufactured to yield an accurate measured-to-calculated dose registration. When the high-dose region was large enough, absolute dose was also measured with an ionization chamber. Dose calculation was performed using the MultiPlan ray-tracing algorithm for all cases, since the phantom is mostly made from near-water-equivalent plastic. Results: Good agreement (<2%) was found between the dose to the chamber and the film when a chamber measurement was possible. The average dose difference and standard deviation between film measurements and TPS calculations were respectively 1.75% and 3%. The geometric accuracy was estimated to be <1 mm, combining robot positioning uncertainty and film registration to calculated dose. Conclusion: Patient-specific QA measurements using EBT-XD films yielded a full 2D dose plane with high spatial resolution and acceptable dose accuracy. This method is particularly promising for trigeminal neuralgia plan QA, where the positioning of the spatial dose distribution is equally or more important than the absolute delivered dose for achieving clinical goals.
Shaw, Patricia; Zhang, Vivien; Metallinos-Katsaras, Elizabeth
2009-02-01
The objective of this study was to examine the quantity and accuracy of dietary supplement (DS) information through magazines with high adolescent readership. Eight (8) magazines (3 teen and 5 adult with high teen readership) were selected. A content analysis for DS was conducted on advertisements and editorials (i.e., articles, advice columns, and bulletins). Noted claims/cautions regarding DS were evaluated for accuracy using Medlineplus.gov and Naturaldatabase.com. Claims for dietary supplements with three or more types of ingredients and those in advertisements were not evaluated. Advertisements were evaluated with respect to size, referenced research, testimonials, and Dietary Supplement Health and Education Act of 1994 (DSHEA) warning visibility. Eighty-eight (88) issues from eight magazines yielded 238 DS references. Fifty (50) issues from five magazines contained no DS reference. Among teen magazines, seven DS references were found: five in the editorials and two in advertisements. In adult magazines, 231 DS references were found: 139 in editorials and 92 in advertisements. Of the 88 claims evaluated, 15% were accurate, 23% were inconclusive, 3% were inaccurate, 5% were partially accurate, and 55% were unsubstantiated (i.e., not listed in reference databases). Of the 94 DS evaluated in advertisements, 43% were full page or more, 79% did not have a DSHEA warning visible, 46% referred to research, and 32% used testimonials. Teen magazines contain few references to DS, none accurate. Adult magazines that have a high teen readership contain a substantial amount of DS information with questionable accuracy, raising concerns that this information may increase the chances of inappropriate DS use by adolescents, thereby increasing the potential for unexpected effects or possible harm.
Hirschmann, J; Schoffelen, J M; Schnitzler, A; van Gerven, M A J
2017-10-01
To investigate the possibility of tremor detection based on deep brain activity, we re-analyzed recordings of local field potentials (LFPs) from the subthalamic nucleus in 10 PD patients (12 body sides) with spontaneously fluctuating rest tremor. Power in several frequency bands was estimated and used as input to Hidden Markov Models (HMMs), which classified short data segments as either tremor-free rest or rest tremor. HMMs were compared to direct threshold application to individual power features. Applying a threshold directly to band-limited power was insufficient for tremor detection (mean area under the curve [AUC] of the receiver operating characteristic: 0.64, SD: 0.19). Multi-feature HMMs, in contrast, allowed for accurate detection (mean AUC: 0.82, SD: 0.15), using four power features obtained from a single contact pair. Within-patient training yielded better accuracy than across-patient training (0.84 vs. 0.78, p = 0.03), yet tremor could often be detected accurately with either approach. High-frequency oscillations (>200 Hz) were the best-performing individual feature. LFP-based markers of tremor are robust enough to allow for accurate tremor detection in short data segments, provided that appropriate statistical models are used. LFP-based markers of tremor could be useful control signals for closed-loop deep brain stimulation. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
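Classifying short segments as tremor-free rest versus rest tremor from band-power features is, at its core, HMM state decoding. A self-contained Viterbi sketch on a single synthetic power feature with assumed state means; the study used multi-feature HMMs trained on real LFP power, which this does not reproduce.

```python
import numpy as np

def viterbi(log_b, log_A, log_pi):
    """Most likely state path given per-frame state log-likelihoods
    log_b[t, s], log transition matrix log_A and log prior log_pi."""
    T, S = log_b.shape
    delta = log_pi + log_b[0]
    psi = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        trans = delta[:, None] + log_A     # trans[i, j]: from state i to j
        psi[t] = trans.argmax(axis=0)
        delta = trans.max(axis=0) + log_b[t]
    path = np.empty(T, dtype=int)
    path[-1] = int(delta.argmax())
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path

rng = np.random.default_rng(2)
# Synthetic band power: 50 tremor-free frames, then 50 tremor frames
x = np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(3.0, 1.0, 50)])
true_states = np.repeat([0, 1], 50)

mu = np.array([0.0, 3.0])                  # per-state means (assumed known)
log_b = -0.5 * (x[:, None] - mu) ** 2      # Gaussian log-likelihood, unit var
log_A = np.log([[0.95, 0.05], [0.05, 0.95]])  # sticky transitions
log_pi = np.log([0.5, 0.5])

path = viterbi(np.asarray(log_b), np.asarray(log_A), np.asarray(log_pi))
accuracy = float((path == true_states).mean())
```

The sticky transition matrix is what gives the HMM its edge over a per-frame threshold: isolated noisy frames are smoothed away by the temporal model.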
NASA Technical Reports Server (NTRS)
1978-01-01
The author has identified the following significant results. LACIE acreage estimates were in close agreement with SRS estimates, and an operational system with a 14-day LANDSAT data turnaround could have produced an accurate acreage estimate (one which satisfied the 90/90 criterion) 1 1/2 to 2 months before harvest. Low yield estimates, resulting from agrometeorological conditions not taken into account in the yield models, caused production estimates to be correspondingly low. However, both yield and production estimates satisfied the LACIE 90/90 criterion for winter wheat in the yardstick region.
Staged Z-pinch Experiments at the 1MA Zebra pulsed-power generator: Neutron measurements
NASA Astrophysics Data System (ADS)
Ruskov, Emil; Darling, T.; Glebov, V.; Wessel, F. J.; Anderson, A.; Beg, F.; Conti, F.; Covington, A.; Dutra, E.; Narkis, J.; Rahman, H.; Ross, M.; Valenzuela, J.
2017-10-01
We report on neutron measurements from the latest Staged Z-pinch experiments at the 1 MA Zebra pulsed-power generator. In these experiments a hollow gas liner of argon or krypton, injected into the 1 cm anode-cathode gap, compresses a deuterium plasma target of varying density. An axial magnetic field Bz ≤ 2 kG, applied throughout the pinch region, stabilizes the Rayleigh-Taylor instability. The standard silver-activation diagnostics and four plastic-scintillator neutron time-of-flight (nTOF) detectors are augmented with a large-area (~1400 cm²) liquid scintillator detector to which fast gated Photek photomultipliers are attached. Sample data from these neutron diagnostic systems are presented. Consistently high neutron yields Y_DD > 10⁹ are measured, with a highest yield of 2.6 × 10⁹. A pair of horizontally and vertically placed plastic-scintillator nTOFs suggests an isotropic, i.e., thermonuclear, origin of the neutrons produced. nTOF data from the liquid scintillator detector were cross-calibrated with the silver-activation detector and can be used for accurate calculation of the neutron yield. Funded by the Advanced Research Projects Agency - Energy, under Grant Number DE-AR0000569.
Battaglia, A; Bianchini, E; Carey, J C
1999-01-01
The Consensus Conference of the American College of Medical Genetics has established guidelines regarding the evaluation of patients with mental retardation (MR) [Curry et al., Am. J. Med. Genet. 72:468-477, 1997]. They emphasized the high diagnostic utility of cytogenetic studies and of neuroimaging in certain clinical settings. However, data on the diagnostic yield of these studies in well-characterized populations of individuals with MR are scant. Majnemer and Shevell [J. Pediatr. 127:193-199, 1995] attained a diagnostic yield of 63%. However, this study included only 60 patients and the classification included pathogenetic and causal groups. The Stella Maris Institute has systematically evaluated patients with developmental delay (DD)/MR and performed various laboratory studies and neuroimaging in almost all patients. We report a retrospective analysis of the diagnostic yield of 120 consecutive patients observed at our Institute during the first 6 months of 1996. There were 77 males and 43 females; 47 were mildly delayed (IQ 70-50), 31 were moderately delayed (IQ 50-35), and 42 were severely delayed (IQ 35-20). Diagnostic studies (history, physical examination, standard cytogenetics, fragile X testing, molecular studies, electroencephalography, electromyography, nerve conduction studies, neuroimaging, and metabolic screening tests) yielded a causal diagnosis in 50 (41.6%) and a pathogenetic diagnosis in 47 (39.2%) of the 120 patients. Causal categories included chromosomal abnormalities (14), Fra(X) syndromes (4), known MCA/MR syndromes (19), fetal environmental syndromes (1), neurometabolic disorders (3), neurocutaneous disorders (3), hypoxic-ischemic encephalopathy (3), other encephalopathies (1), and congenital bilateral perisylvian syndrome (2). Pathogenetic categories included idiopathic MCA/MR syndromes (35), epileptic syndromes (10), and isolated lissencephaly sequence (2). Diagnostic yield did not differ across categories or degrees of DD. 
Our results, while confirming the diagnostic utility of cytogenetic/molecular genetic, and neuroimaging studies, suggest the usefulness of accurate electroencephalogram recordings, and stress the importance of a thorough physical examination. Referral to a university child neurology and psychiatry service, where a comprehensive assessment with a selected battery of investigations is possible, yields etiologic findings in a high percentage of DD/MR patients, with important implications for management, prognosis and recurrence risk estimate.
Plastic flow modeling in glassy polymers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clements, Brad
2010-12-13
Glassy amorphous and semi-crystalline polymers exhibit strongly rate-, temperature-, and pressure-dependent yield. As a rule of thumb, in uniaxial compression experiments the yield stress increases with the loading rate and applied pressure, and decreases as the temperature increases. Moreover, by varying the loading state itself, complex yield behavior can be observed. One example that illustrates this complexity is that most polymers in their glassy regimes (i.e., when the temperature is below their characteristic glass transition temperature) exhibit very pronounced yield in their uniaxial-stress stress-strain response but very nebulous yield in their uniaxial-strain response. In uniaxial compression, a prototypical glassy-polymer stress-strain curve has a stress plateau, often followed by softening, and upon further straining, a hardening response. Uniaxial compression experiments of this type are typically done at rates of 10⁻⁵ s⁻¹ up to about 1 s⁻¹. At still higher rates, say at several thousand per second as determined from Split Hopkinson Pressure Bar experiments, the yield can again be measured and is consistent with the above rule of thumb. One might expect that these two sets of experiments should allow for a successful extrapolation to yet higher rates. A standard means to probe high rates (on the order of 10⁵-10⁷ s⁻¹) is to use a uniaxial-strain plate impact experiment. It is well known that in plate impact experiments on metals the yield stress is manifested in a well-defined Hugoniot Elastic Limit (HEL). In contrast, however, when plate impact experiments are done on glassy polymers, the HEL is arguably not observed, let alone observed at the stress estimated by extrapolating from the lower-strain-rate experiments. One might argue that polymer yield is still active but somehow masked by the experiment. After reviewing relevant experiments, we attempt to address this issue. 
We begin by first presenting our recently developed glassy polymer model. While polymers are well known for their non-equilibrium deviatoric behavior, we have found the need to incorporate both equilibrium and non-equilibrium volumetric behavior into our theory. Experimental evidence supporting the notion of non-equilibrium volumetric behavior will be summarized. Our polymer yield model accurately captures the stress plateau, softening, and hardening, and its yield stress predictions agree well with measured values for several glassy polymers including PMMA, PC, and an epoxy resin. We then apply our theory to plate impact experiments in an attempt to address the questions associated with high-rate polymer yield in uniaxial-strain configurations.
Valero, Enrique; Adán, Antonio; Cerrada, Carlos
2012-01-01
In this paper we present a method that automatically yields Boundary Representation Models (B-rep) for indoors after processing dense point clouds collected by laser scanners from key locations through an existing facility. Our objective is particularly focused on providing single models which contain the shape, location and relationship of primitive structural elements of inhabited scenarios such as walls, ceilings and floors. We propose a discretization of the space in order to accurately segment the 3D data and generate complete B-rep models of indoors in which faces, edges and vertices are coherently connected. The approach has been tested in real scenarios with data coming from laser scanners yielding promising results. We have deeply evaluated the results by analyzing how reliably these elements can be detected and how accurately they are modeled. PMID:23443369
NASA Technical Reports Server (NTRS)
Leonard, Regis F. (Editor); Bhasin, Kul B. (Editor)
1991-01-01
Consideration is given to MMICs for airborne phased arrays, monolithic GaAs integrated circuit millimeter wave imaging sensors, accurate design of multiport low-noise MMICs up to 20 GHz, an ultralinear low-noise amplifier technology for space communications, variable-gain MMIC module for space applications, a high-efficiency dual-band power amplifier for radar applications, a high-density circuit approach for low-cost MMIC circuits, coplanar SIMMWIC circuits, recent advances in monolithic phased arrays, and system-level integrated circuit development for phased-array antenna applications. Consideration is also given to performance enhancement in future communications satellites with MMIC technology insertion, application of Ka-band MMIC technology for an Orbiter/ACTS communications experiment, a space-based millimeter wave debris tracking radar, low-noise high-yield octave-band feedback amplifiers to 20 GHz, quasi-optical MESFET VCOs, and a high-dynamic-range mixer using novel balun structure.
High-temperature measurement by using a PCF-based Fabry-Perot interferometer
NASA Astrophysics Data System (ADS)
Xu, Lai-Cai; Deng, Ming; Duan, De-Wen; Wen, Wei-Ping; Han, Meng
2012-10-01
A new method for fabricating a fiber-optic Fabry-Perot interferometer (FPI) for high-temperature sensing is presented. The sensor is fabricated by fusion splicing a short section of endlessly single-mode photonic crystal fiber (ESM-PCF) to the cleaved end facet of a single-mode fiber (SMF) with an intentional complete collapse at the splice joint. This procedure not only provides an easier, faster, and cheaper technology for FPI sensors but also yields an FPI exhibiting an accurate and stable sinusoidal interference fringe with a relatively high signal-to-noise ratio (SNR). The high-temperature response of the FPI sensors was experimentally studied, and the results show that the sensor allows linear and stable measurement of temperatures up to 1100 °C with a sensitivity of ~39.1 nm/°C for a cavity length of 1377 µm, which makes it attractive for aeronautics and metallurgy applications.
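For a low-finesse two-beam FPI of the kind described, the cavity length follows directly from the wavelengths of two adjacent fringe peaks. The following is a minimal sketch of that standard FPI relation, not code from the paper:

```python
def fp_cavity_length_um(lam1_nm, lam2_nm, n=1.0):
    """Cavity length (in micrometers) of a low-finesse Fabry-Perot
    interferometer from the wavelengths (nm) of two adjacent fringe
    peaks: L = lam1 * lam2 / (2 * n * |lam2 - lam1|), with cavity
    refractive index n (n = 1 for an air gap)."""
    dlam = abs(lam2_nm - lam1_nm)
    return lam1_nm * lam2_nm / (2.0 * n * dlam) / 1000.0
```

Adjacent peaks satisfy 2nL = m·λ_m, so the formula is exact for consecutive fringe orders; e.g., a 1377 µm air cavity interrogated near 1550 nm has a fringe spacing of roughly 0.87 nm.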
NASA Astrophysics Data System (ADS)
Alrasyid, Harun; Safi, Fahrudin; Iranata, Data; Chen-Ou, Yu
2017-11-01
This research presents finite-element predictions of the shear behavior of high-strength reinforced concrete columns. Experimental data from nine half-scale high-strength reinforced concrete columns were selected. The columns used a specified concrete compressive strength of 70 MPa and specified yield strengths of longitudinal and transverse reinforcement of 685 and 785 MPa, respectively. The VecTor2 finite element software was used to simulate the shear-critical behavior of these columns. A combination of axial compression and monotonic lateral loading was applied in the simulations. It is demonstrated that VecTor2 provides accurate predictions of the load-deflection response up to peak load and captures similar behavior in the post-peak range. The shear strength predictions provided by VecTor2 are slightly conservative compared to the test results.
Image Alignment for Multiple Camera High Dynamic Range Microscopy.
Eastwood, Brian S; Childs, Elisabeth C
2012-01-09
This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.
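The key preprocessing step the paper's conclusion rests on, converting each camera's images to radiant power so that feature descriptors become exposure-invariant, can be sketched as follows. This is a simplified illustration assuming a linear sensor; `inv_response` stands in for the calibrated inverse camera response curve:

```python
import numpy as np

def to_radiant_power(img, exposure_s, inv_response=None):
    """Map recorded pixel values to relative radiant power,
    E = g(I) / exposure, where g inverts the camera response
    (identity for a linear sensor). In radiant-power space the
    exposure difference between cameras cancels, so descriptors
    computed on the two images become directly comparable."""
    g = inv_response if inv_response is not None else (lambda x: x)
    return g(np.asarray(img, dtype=float)) / exposure_s

# Two simulated exposures of the same scene (linear 8-bit sensor).
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 20000.0, size=(32, 32))   # relative radiance
short = np.clip(scene * 0.004, 0, 255)             # 4 ms exposure
long_ = np.clip(scene * 0.016, 0, 255)             # 16 ms exposure
E1 = to_radiant_power(short, 0.004)
E2 = to_radiant_power(long_, 0.016)
ok = (short < 255) & (long_ < 255)                 # unsaturated in both
```

Where neither image is saturated, `E1` and `E2` agree even though the raw intensities differ fourfold, which is why matching runs on the radiant-power images rather than the raw frames.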
Determination of Earth orientation using the Global Positioning System
NASA Technical Reports Server (NTRS)
Freedman, A. P.
1989-01-01
Modern spacecraft tracking and navigation require highly accurate Earth-orientation parameters. For near-real-time applications, errors in these quantities and their extrapolated values are a significant error source. A globally distributed network of high-precision receivers observing the full Global Positioning System (GPS) configuration of 18 or more satellites may be an efficient and economical method for the rapid determination of short-term variations in Earth orientation. A covariance analysis using the JPL Orbit Analysis and Simulation Software (OASIS) was performed to evaluate the errors associated with GPS measurements of Earth orientation. These GPS measurements appear to be highly competitive with those from other techniques and can potentially yield frequent and reliable centimeter-level Earth-orientation information while simultaneously allowing the oversubscribed Deep Space Network (DSN) antennas to be used more for direct project support.
EUS-guided biopsy for the diagnosis and classification of lymphoma.
Ribeiro, Afonso; Pereira, Denise; Escalón, Maricer P; Goodman, Mark; Byrne, Gerald E
2010-04-01
EUS-guided FNA and Tru-cut biopsy (TCB) are highly accurate in the diagnosis of lymphoma. Subclassification, however, may be difficult in low-grade non-Hodgkin lymphoma and Hodgkin lymphoma. To determine the yield of EUS-guided biopsy for classifying lymphoma based on the World Health Organization classification of tumors of hematopoietic lymphoid tissues. Retrospective study. Tertiary referral center. A total of 24 patients referred for EUS-guided biopsy who had a final diagnosis of lymphoma or "highly suspicious for lymphoma." EUS-guided FNA and TCB combined with flow cytometry (FC) analysis. The main outcome measurement was the lymphoma subclassification accuracy of EUS-guided biopsy. Twenty-four patients were included in this study. Twenty-three patients underwent EUS-FNA, and 1 patient had only TCB. Twenty-two underwent EUS-TCB combined with FNA. EUS correctly diagnosed lymphoma in 19 of 24 patients (79%), and subclassification was determined in 16 patients (66.6%). Flow cytometry correctly identified B-cell monoclonality in 95% (18 of 19). In 1 patient diagnosed as having marginal-zone lymphoma by EUS-FNA/FC only, the diagnosis was changed to hairy cell leukemia after a bone marrow biopsy was obtained. EUS had a lower yield in non-large B-cell lymphoma (only 9 of 15 cases [60%]) compared with large B-cell lymphoma (78%; P = .3, Fisher exact test). Limitations include the retrospective design and the small number of patients. EUS-guided biopsy has a lower yield for correctly classifying Hodgkin lymphoma and low-grade lymphoma compared with high-grade diffuse large B-cell lymphoma. Copyright 2010 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.
Tracy, Saoirse R; Gómez, José Fernández; Sturrock, Craig J; Wilson, Zoe A; Ferguson, Alison C
2017-01-01
Accurate floral staging is required to aid research into pollen and flower development, in particular male development. Pollen development is highly sensitive to stress and is critical for crop yields. Research into male development under environmental change is important to help target increased yields. This is hindered in monocots because the flower develops internally in the pseudostem. Floral staging studies therefore typically rely on destructive analysis, such as removal from the plant, fixation, staining, and sectioning. This time-consuming analysis prevents follow-up studies and analysis past the point of floral staging. This study focuses on using X-ray µCT scanning to provide quick and detailed non-destructive internal 3D phenotypic information for accurate staging of Arabidopsis thaliana L. and barley (Hordeum vulgare L.) flowers. X-ray µCT has previously relied on fixation methods for above-ground tissue; therefore, two contrast agents (Lugol's iodine and bismuth) were evaluated in Arabidopsis and barley in planta to circumvent this step. 3D models and 2D slices were generated from the X-ray µCT images, providing insightful information normally only available through destructive, time-consuming processes such as sectioning and microscopy. Barley growth and development was also monitored over three weeks by X-ray µCT to observe flower development in situ. By measuring spike size in the developing tillers, accurate non-destructive staging at the flower and anther stages could be performed; this staging was confirmed using traditional destructive microscopic analysis. The use of X-ray micro computed tomography (µCT) scanning of living plant tissue offers immense benefits for plant phenotyping, for successive developmental measurements and for accurate developmental timing for scientific measurements. 
Nevertheless, X-ray µCT remains underused in plant sciences, especially in above-ground organs, despite its unique potential in delivering detailed non-destructive internal 3D phenotypic information. This work represents a novel application of X-ray µCT that could enhance research undertaken in monocot species to enable effective non-destructive staging and developmental analysis for molecular genetic studies and to determine effects of stresses at particular growth stages.
Factors related to well yield in the fractured-bedrock aquifer of New Hampshire
Moore, Richard Bridge; Schwartz, Gregory E.; Clark, Stewart F.; Walsh, Gregory J.; Degnan, James R.
2002-01-01
The New Hampshire Bedrock Aquifer Assessment was designed to provide information that can be used by communities, industry, professional consultants, and other interests to evaluate the ground-water development potential of the fractured-bedrock aquifer in the State. The assessment was done at statewide, regional, and well-field scales to identify relations that potentially could increase the success in locating high-yield water supplies in the fractured-bedrock aquifer. Statewide, data were collected for well construction and yield information, bedrock lithology, surficial geology, lineaments, topography, and various derivatives of these basic data sets. Regionally, geologic, fracture, and lineament data were collected for the Pinardville and Windham quadrangles in New Hampshire. The regional scale of the study examined the degree to which predictive well-yield relations, developed as part of the statewide reconnaissance investigation, could be improved by use of quadrangle-scale geologic mapping. Beginning in 1984, water-well contractors in the State were required to report detailed information on newly constructed wells to the New Hampshire Department of Environmental Services (NHDES). The reports contain basic data on well construction, including six characteristics used in this study: well yield, well depth, well use, method of construction, date drilled, and depth to bedrock (or length of casing). The NHDES has determined accurate georeferenced locations for more than 20,000 wells reported since 1984. The availability of this large data set provided an opportunity for a statistical analysis of bedrock-well yields. Well yields in the database ranged from zero to greater than 500 gallons per minute (gal/min). Multivariate regression was used as the primary statistical method of analysis because it is the most efficient tool for predicting a single variable with many potentially independent variables. 
The dependent variable that was explored in this study was the natural logarithm (ln) of the reported well yield. One complication with using well yield as a dependent variable is that yield also is a function of demand. An innovative statistical technique that involves the use of instrumental variables was implemented to compensate for the effect of demand on well yield. Results of the multivariate-regression model show that a variety of factors are either positively or negatively related to well yields. Using instrumental variables, well depth is positively related to total well yield. Other factors that were found to be positively related to well yield include (1) distance to the nearest waterbody; (2) size of the drainage area upgradient of a well; (3) well location in swales or valley bottoms in the Massabesic Gneiss Complex and Breakfast Hill Granite; (4) well proximity to lineaments, identified using high-altitude (1:80,000-scale) aerial photography, which are correlated with the primary fracture direction (regional analysis); (5) use of a cable tool rig for well drilling; and (6) wells drilled for commercial or public supply. Factors negatively related to well yields include sites underlain by foliated plutons, sites on steep slopes, sites at high elevations, and sites on hilltops. Additionally, seven detailed geologic map units, identified during the detailed geologic mapping of the Pinardville and Windham quadrangles, were found to be positively or negatively related to well yields. Twenty-four geologic map units, depicted on the Bedrock Geologic Map of New Hampshire, also were found to be positively or negatively related to well yields. Maps or geographic information system (GIS) data sets identifying areas of various yield probabilities clearly display model results. 
Probability criteria developed in this investigation can be used to select areas where other techniques, such as geophysical techniques, can be applied to more closely identify potential drilling sites for high-yielding wells.
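The instrumental-variables technique mentioned above, used to remove demand effects from reported well yield, can be illustrated with a generic two-stage least-squares sketch on synthetic data. This is not the study's actual model or variables, just the standard 2SLS mechanics:

```python
import numpy as np

def two_stage_least_squares(y, x_endog, z):
    """2SLS with one endogenous regressor and one instrument:
    stage 1 projects x onto the instrument, stage 2 regresses y
    on the fitted values, removing bias from a shared confounder."""
    Z = np.column_stack([np.ones_like(z), z])
    x_hat = Z @ np.linalg.lstsq(Z, x_endog, rcond=None)[0]
    X = np.column_stack([np.ones_like(x_hat), x_hat])
    return np.linalg.lstsq(X, y, rcond=None)[0]   # [intercept, slope]

rng = np.random.default_rng(2)
n = 20000
z = rng.normal(size=n)                 # instrument (exogenous)
u = rng.normal(size=n)                 # unobserved confounder ("demand")
x = 1.0 + 2.0 * z + u + 0.5 * rng.normal(size=n)   # endogenous regressor
y = 0.5 + 1.5 * x + 2.0 * u + rng.normal(size=n)   # true slope is 1.5

beta_2sls = two_stage_least_squares(y, x, z)
# Naive OLS for comparison; biased because u drives both x and y.
beta_ols = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y,
                           rcond=None)[0]
```

Here OLS overstates the slope because the confounder raises both x and y, while 2SLS recovers the structural coefficient, which is the same logic as instrumenting for demand-driven well yield.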
Hansmann, Jan; Evers, Maximilian J; Bui, James T; Lokken, R Peter; Lipnik, Andrew J; Gaba, Ron C; Ray, Charles E
2017-09-01
To evaluate albumin-bilirubin (ALBI) and platelet-albumin-bilirubin (PALBI) grades in predicting overall survival in high-risk patients undergoing conventional transarterial chemoembolization for hepatocellular carcinoma (HCC). This single-center retrospective study included 180 high-risk patients (142 men; mean age 59 y ± 9) treated between April 2007 and January 2015. Patients were considered high-risk based on laboratory abnormalities before the procedure (bilirubin > 2.0 mg/dL, albumin < 3.5 mg/dL, platelet count < 60,000/mL, creatinine > 1.2 mg/dL); presence of ascites, encephalopathy, portal vein thrombus, or transjugular intrahepatic portosystemic shunt; or Model for End-Stage Liver Disease score > 15. Serum albumin, bilirubin, and platelet values were used to determine ALBI and PALBI grades. Overall survival was stratified by ALBI and PALBI grades, with substratification by Child-Pugh class (CPC) and Barcelona Clinic Liver Cancer (BCLC) stage, using Kaplan-Meier analysis. The C-index was used to determine discriminatory ability and survival prediction accuracy. Median survival for 79 ALBI grade 2 patients and 101 ALBI grade 3 patients was 20.3 and 10.7 months, respectively (P < .0001). Median survival for 30 PALBI grade 2 and 144 PALBI grade 3 patients was 20.3 and 12.9 months, respectively (P = .0667). Substratification yielded distinct ALBI grade survival curves for CPC B (P = .0022, C-index 0.892), BCLC A (P = .0308, C-index 0.887), and BCLC C (P = .0287, C-index 0.839). PALBI grade demonstrated distinct survival curves for BCLC A (P = .0229, C-index 0.869). CPC yielded distinct survival curves for the entire cohort (P = .0019) but not when substratified by BCLC stage (all P > .05). ALBI and PALBI grades are accurate survival metrics in high-risk patients undergoing conventional transarterial chemoembolization for HCC. Use of these scores allows for more refined survival stratification within CPC and BCLC stage. Copyright © 2017 SIR. Published by Elsevier Inc. 
All rights reserved.
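The C-index used above to quantify discriminatory ability is straightforward to compute. A minimal sketch of Harrell's concordance index follows (an illustration, not the authors' implementation):

```python
def c_index(times, events, risk_scores):
    """Harrell's concordance index: the fraction of comparable pairs
    in which the higher-risk patient fails earlier. A pair (i, j) is
    comparable if patient i had an event before time j; ties in risk
    score count as 0.5."""
    concordant, comparable = 0.0, 0
    m = len(times)
    for i in range(m):
        for j in range(m):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Perfect risk ranking (earliest failure has the highest score) -> 1.0
perfect = c_index([1, 2, 3, 4], [1, 1, 1, 1], [4, 3, 2, 1])
```

A value of 0.5 corresponds to chance-level discrimination, so C-indices near 0.89, as reported for ALBI within CPC B, indicate strong separation of survival curves.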
Identification of hydraulic conductivity structure in sand and gravel aquifers: Cape Cod data set
Eggleston, J.R.; Rojstaczer, S.A.; Peirce, J.J.
1996-01-01
This study evaluates commonly used geostatistical methods to assess reproduction of hydraulic conductivity (K) structure and sensitivity under limiting amounts of data. Extensive conductivity measurements from the Cape Cod sand and gravel aquifer are used to evaluate two geostatistical estimation methods, the conditional mean as an estimate and ordinary kriging, and two stochastic simulation methods, simulated annealing and sequential Gaussian simulation. Our results indicate that for relatively homogeneous sand and gravel aquifers such as the Cape Cod aquifer, neither estimation methods nor stochastic simulation methods give highly accurate point predictions of hydraulic conductivity despite the high density of collected data. Although the stochastic simulation methods yielded higher errors than the estimation methods, the stochastic simulation methods yielded better reproduction of the measured ln(K) distribution and better reproduction of local contrasts in ln(K). The inability of kriging to reproduce high ln(K) values, as reaffirmed by this study, provides a strong motivation for choosing stochastic simulation methods to generate conductivity fields when performing fine-scale contaminant transport modeling. Results also indicate that estimation error is relatively insensitive to the number of hydraulic conductivity measurements so long as more than a threshold number of data are used to condition the realizations. This threshold occurs for the Cape Cod site when there are approximately three conductivity measurements per integral volume. The lack of improvement with additional data suggests that although fine-scale hydraulic conductivity structure is evident in the variogram, it is not accurately reproduced by geostatistical estimation methods. 
If the Cape Cod aquifer spatial conductivity characteristics are indicative of other sand and gravel deposits, then the results on predictive error versus data collection obtained here have significant practical consequences for site characterization. Heavily sampled sand and gravel aquifers, such as Cape Cod and Borden, may have large amounts of redundant data, while in more common real world settings, our results suggest that denser data collection will likely improve understanding of permeability structure.
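Ordinary kriging, one of the two estimation methods evaluated above, reduces to solving a small linear system per prediction point. A minimal sketch follows, with an assumed exponential semivariogram standing in for whatever model was fitted to the Cape Cod data:

```python
import numpy as np

def ordinary_kriging(xy, vals, xy0, gamma):
    """Ordinary-kriging estimate at location xy0 from scattered data
    (xy, vals); gamma(h) is the semivariogram model. Solves the
    standard OK system with a Lagrange multiplier enforcing that the
    weights sum to one."""
    n = len(vals)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)          # semivariances between data points
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - xy0, axis=1))
    w = np.linalg.solve(A, b)[:n]
    return float(w @ vals)

gamma = lambda h: 1.0 - np.exp(-h / 5.0)   # assumed exponential model
xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
vals = np.array([1.0, 2.0, 3.0, 4.0])
center = ordinary_kriging(xy, vals, np.array([5.0, 5.0]), gamma)
exact = ordinary_kriging(xy, vals, xy[0], gamma)   # at a data point
```

The estimate honors the data exactly at measurement locations and averages toward neighboring values elsewhere, which is precisely the smoothing behavior that keeps kriging from reproducing the high ln(K) extremes.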
Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.
2002-01-01
Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown and conservatively accurate solutions may be computed. Computable error estimates can offer the possibility to minimize computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.
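The adjoint-weighted-residual idea behind such error estimates can be shown on a linear model problem, where the identity J(u) - J(u_h) = λᵀ(f - A·u_h) holds exactly. This is a toy linear-algebra illustration, not the CFD implementation:

```python
import numpy as np

# Toy linear "discretization": A u = f, with output functional J(u) = g @ u.
rng = np.random.default_rng(0)
n = 50
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))
f = rng.standard_normal(n)
g = rng.standard_normal(n)                 # defines the output of interest

u = np.linalg.solve(A, f)                  # exact discrete solution
u_h = u + 0.01 * rng.standard_normal(n)    # inexact/coarse approximation

lam = np.linalg.solve(A.T, g)              # adjoint solve for the functional
residual = f - A @ u_h                     # residual of the approximation
estimate = lam @ residual                  # adjoint-weighted residual
true_err = g @ (u - u_h)                   # actual error in the functional
```

For nonlinear flow equations the correspondence is only first-order accurate, which is why the mesh is adapted where the estimate itself is uncertain, but the mechanism of weighting local residuals by adjoint sensitivities is the same.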
Shen, Zhitao; Ma, Haitao; Zhang, Chunfang; Fu, Mingkai; Wu, Yanan; Bian, Wensheng; Cao, Jianwei
2017-01-01
Encouraged by recent advances in revealing significant effects of van der Waals wells on reaction dynamics, many people assume that van der Waals wells are inevitable in chemical reactions. Here we find that the weak long-range forces cause van der Waals saddles in the prototypical C(1D)+D2 complex-forming reaction that have very different dynamical effects from van der Waals wells at low collision energies. Accurate quantum dynamics calculations on our highly accurate ab initio potential energy surfaces with van der Waals saddles yield cross-sections in close agreement with crossed-beam experiments, whereas the same calculations on an earlier surface with van der Waals wells produce much smaller cross-sections at low energies. Further trajectory calculations reveal that the van der Waals saddle leads to a torsion then sideways insertion reaction mechanism, whereas the well suppresses reactivity. Quantum diffraction oscillations and sharp resonances are also predicted based on our ground- and excited-state potential energy surfaces. PMID:28094253
Modeling evaporation from spent nuclear fuel storage pools: A diffusion approach
NASA Astrophysics Data System (ADS)
Hugo, Bruce Robert
Accurate prediction of evaporative losses from light water reactor nuclear power plant (NPP) spent fuel storage pools (SFPs) is important for activities ranging from sizing of water makeup systems during NPP design to predicting the time available to supply emergency makeup water following severe accidents. Existing correlations for predicting evaporation from water surfaces are optimized only for conditions typical of swimming pools. This new approach, which models evaporation as a diffusion process, yielded an evaporation rate model that provided a better fit to published high-temperature evaporation data and measurements from two SFPs than other published evaporation correlations. Insights from treating evaporation as a diffusion process include correcting for the effects of air flow and solutes on the evaporation rate. An accurate modeling of the effects of air flow on evaporation rate is required to explain the observed temperature data from the Fukushima Daiichi Unit 4 SFP during the 2011 loss-of-cooling event; the diffusion model of evaporation provides a significantly better fit to these data than existing evaporation models.
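The diffusion view of evaporation amounts to Fick's law applied across a vapor boundary layer. A rough sketch follows; the film thickness `delta_m` and diffusivity `D` are assumed illustrative values, and the Magnus fit stands in for whatever vapor-pressure correlation the dissertation actually uses:

```python
import math

def p_sat_water(T_c):
    """Saturation vapor pressure of water [Pa] from the Magnus fit,
    valid roughly over 0-100 degrees C."""
    return 610.94 * math.exp(17.625 * T_c / (T_c + 243.04))

def evap_flux(T_surf_c, T_air_c, rel_humidity, delta_m=0.005, D=2.5e-5):
    """Diffusive evaporation mass flux [kg/(m^2*s)] through a stagnant
    vapor film of assumed thickness delta_m [m], with vapor diffusivity
    D [m^2/s]. Fick's law: J = D * (rho_surf - rho_air) / delta."""
    R_v = 461.5  # gas constant of water vapor, J/(kg*K)
    rho_surf = p_sat_water(T_surf_c) / (R_v * (T_surf_c + 273.15))
    rho_air = (rel_humidity * p_sat_water(T_air_c)
               / (R_v * (T_air_c + 273.15)))
    return D * (rho_surf - rho_air) / delta_m
```

Because the saturation density grows steeply with surface temperature, the flux rises sharply for hot pools, and air flow or solutes enter naturally through the effective film thickness and the far-field vapor density.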
3D multiscale crack propagation using the XFEM applied to a gas turbine blade
NASA Astrophysics Data System (ADS)
Holl, Matthias; Rogge, Timo; Loehnert, Stefan; Wriggers, Peter; Rolfes, Raimund
2014-01-01
This work presents a new multiscale technique to investigate advancing cracks in three-dimensional space. This fully adaptive multiscale technique is designed to take cracks of different length scales into account efficiently, by enabling fine-scale domains locally in regions of interest, i.e., where stress concentrations and high stress gradients occur. Due to crack propagation, these regions change during the simulation process. Cracks are modeled using the extended finite element method, such that an accurate and powerful numerical tool is achieved. Restricting ourselves to linear elastic fracture mechanics, the J-integral yields an accurate solution for the stress intensity factors, and the criterion of maximum hoop stress yields a precise direction of growth. If necessary, the crack surface computed on the finest scale is finally transferred to the corresponding coarser scale. In a final step, the model is applied to a quadrature point of a gas turbine blade to compute crack growth on the microscale of a real structure.
Refined Zigzag Theory for Laminated Composite and Sandwich Plates
NASA Technical Reports Server (NTRS)
Tessler, Alexander; DiSciuva, Marco; Gherlone, Marco
2009-01-01
A refined zigzag theory is presented for laminated-composite and sandwich plates that includes the kinematics of first-order shear deformation theory as its baseline. The theory is variationally consistent and is derived from the virtual work principle. Novel piecewise-linear zigzag functions that provide a more realistic representation of the deformation states of transverse-shear-flexible plates than other similar theories are used. The formulation does not enforce full continuity of the transverse shear stresses across the plate's thickness, yet is robust. Transverse-shear correction factors are not required to yield accurate results. The theory is devoid of the shortcomings inherent in the previous zigzag theories, including shear-force inconsistency and difficulties in simulating clamped boundary conditions, which have greatly limited the accuracy of these theories. This new theory requires only C(sup 0)-continuous kinematic approximations and is perfectly suited for developing computationally efficient finite elements. The theory should be useful for obtaining relatively efficient, accurate estimates of structural response needed to design high-performance load-bearing aerospace structures.
Spectral estimates of intercepted solar radiation by corn and soybean canopies
NASA Technical Reports Server (NTRS)
Gallo, K. P.; Brooks, C. C.; Daughtry, C. S. T.; Bauer, M. E.; Vanderbilt, V. C.
1982-01-01
Attention is given to the development of methods for combining spectral and meteorological data in crop yield models which are capable of providing accurate estimates of crop condition and yields throughout the growing season. The present investigation is concerned with initial tests of these concepts using spectral and agronomic data acquired in controlled experiments. The data were acquired at the Purdue University Agronomy Farm, 10 km northwest of West Lafayette, Indiana. Data were obtained throughout several growing seasons for corn and soybeans. Five methods or models for predicting yields were examined. On the basis of the obtained results, it is concluded that estimating intercepted solar radiation using spectral data is a viable approach for merging spectral and meteorological data in crop yield models.
Covariance Matrix Evaluations for Independent Mass Fission Yields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terranova, N., E-mail: nicholas.terranova@unibo.it; Serot, O.; Archier, P.
2015-01-15
Recent needs for more accurate fission product yields include covariance information to allow improved uncertainty estimations of the parameters used by design codes. The aim of this work is to investigate the possibility to generate more reliable and complete uncertainty information on independent mass fission yields. Mass yields covariances are estimated through a convolution between the multi-Gaussian empirical model based on Brosa's fission modes, which describes the pre-neutron mass yields, and the average prompt neutron multiplicity curve. The covariance generation task has been approached using the Bayesian generalized least squares method through the CONRAD code. Preliminary results on the mass yields variance-covariance matrix will be presented and discussed on physical grounds for the {sup 235}U(n{sub th}, f) and {sup 239}Pu(n{sub th}, f) reactions.
Calculation of K-shell fluorescence yields for low-Z elements
NASA Astrophysics Data System (ADS)
Nekkab, M.; Kahoul, A.; Deghfel, B.; Aylikci, N. Küp; Aylikçi, V.
2015-03-01
The analytical methods based on X-ray fluorescence are advantageous for practical applications in a variety of fields including atomic physics, X-ray fluorescence surface chemical analysis and medical research, so accurate fluorescence yields (ωK) are required for these applications. In this contribution we report new parameters for the calculation of K-shell fluorescence yields (ωK) of elements in the range 11≤Z≤30. The experimental data are interpolated using the well-known analytical function (ωK/(1−ωK))^(1/q) (where q = 3, 3.5 and 4) versus Z to deduce the empirical K-shell fluorescence yields. A comparison is made between the results of the procedures followed here and theoretical and other semi-empirical fluorescence yield values. Reasonable agreement was typically obtained between our results and other works.
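The transformed-variable fit described above — expressing (ωK/(1−ωK))^(1/q) as a polynomial in Z and inverting it — can be sketched as follows. The data below are synthetic and the linear form is an assumption for illustration; they are not the paper's measurements or fitted coefficients.

```python
import numpy as np

q = 3.0
Z = np.arange(11, 31)  # elements 11 <= Z <= 30

# Synthetic "experimental" fluorescence yields generated from an assumed
# linear trend in the transformed variable y = (w / (1 - w))**(1/q).
y_true = 0.03 * Z - 0.25
omega = y_true**q / (1.0 + y_true**q)

# Empirical fit: transform, fit a polynomial in Z, then invert back to omega.
y = (omega / (1.0 - omega))**(1.0 / q)
coeffs = np.polyfit(Z, y, deg=1)
y_fit = np.polyval(coeffs, Z)
omega_fit = y_fit**q / (1.0 + y_fit**q)

max_err = np.max(np.abs(omega_fit - omega))
```

The virtue of the transformation is that ωK, which saturates toward 1 at high Z, becomes a smooth, nearly polynomial function of Z that low-order fits handle well.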
Ren, Jianqiang; Chen, Zhongxin; Tang, Huajun
2006-12-01
Taking Jining City of Shandong Province, one of the most important winter wheat production regions in Huanghuaihai Plain, as an example, the winter wheat yield was estimated by using the 250 m MODIS-NDVI data smoothed by a Savitzky-Golay filter. The NDVI values between 0.20 and 0.80 were selected, and the sum of NDVI values for each county was calculated to relate it to winter wheat yield. By using the stepwise regression method, a linear regression model between NDVI and winter wheat yield was established, with the precision validated by the ground survey data. The results showed that the relative error of predicted yield was between -3.6% and 3.9%, suggesting that the method was relatively accurate and feasible.
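A minimal sketch of the county-level regression described above (all numbers synthetic; the single-predictor least-squares form stands in for the paper's stepwise regression against smoothed MODIS-NDVI sums):

```python
import numpy as np

# Synthetic per-county sums of NDVI (values screened to 0.20-0.80 before
# summing, as in the abstract) and corresponding wheat yields (illustrative
# units), generated from an assumed linear trend plus small noise.
ndvi_sum = np.array([52.1, 58.4, 61.0, 66.7, 70.3, 75.9])
yield_obs = 45.0 * ndvi_sum + 1200.0 + np.array([30.0, -25.0, 10.0, -40.0, 20.0, 5.0])

# Ordinary least squares: yield = a * ndvi_sum + b
A = np.vstack([ndvi_sum, np.ones_like(ndvi_sum)]).T
(a, b), *_ = np.linalg.lstsq(A, yield_obs, rcond=None)

yield_pred = a * ndvi_sum + b
rel_err = (yield_pred - yield_obs) / yield_obs  # the validation metric reported
```

The relative-error vector plays the role of the -3.6% to 3.9% validation range quoted in the abstract.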
Robert-Peillard, Fabien; Boudenne, Jean-Luc; Coulomb, Bruno
2014-05-01
This paper presents a simple, accurate and multi-sample method for the determination of proline in wines using a 96-well microplate technique. Proline is the most abundant amino acid in wine and is an important parameter related to wine characteristics or maturation processes of grape. In the current study, an improved application of the general method based on sodium hypochlorite oxidation and o-phthaldialdehyde (OPA)-thiol spectrofluorometric detection is described. The main interfering compounds for specific proline detection in wines are strongly reduced by selective reaction with OPA in a preliminary step under well-defined pH conditions. Application of the protocol after a 500-fold dilution of wine samples provides a working range between 0.02 and 2.90 g L(-1), with a limit of detection of 7.50 mg L(-1). Comparison and validation on real wine samples by ion-exchange chromatography prove that this procedure yields accurate results. Simplicity of the protocol used, with no need for centrifugation or filtration, organic solvents or high temperature, enables its full implementation in plastic microplates and efficient application for routine analysis of proline in wines. Copyright © 2013 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salumbides, E. J.; Khramov, A.; Wolf, A. L.
2008-11-28
Two distinct high-accuracy laboratory spectroscopic investigations of the H{sub 2} molecule are reported. Anchor lines in the EF{sup 1}{sigma}{sub g}{sup +}-X{sup 1}{sigma}{sub g}{sup +} system are calibrated by two-photon deep-UV Doppler-free spectroscopy, while independent Fourier-transform spectroscopic measurements are performed that yield accurate spacings in the B{sup 1}{sigma}{sub u}{sup +}-EF{sup 1}{sigma}{sub g}{sup +} and I{sup 1}{pi}{sub g}-C{sup 1}{pi}{sub u} systems. From combination differences, accurate transition wavelengths for the B-X Lyman and the C-X Werner lines can be determined with accuracies better than {approx}5x10{sup -9}, representing a major improvement over existing values. This metrology provides a practically exact database to extract a possible variation of the proton-to-electron mass ratio based on H{sub 2} lines in high-redshift objects. Moreover, it forms a rationale for equipping a future class of telescopes, carrying 30-40 m dishes, with novel spectrometers of higher resolving powers.
EBIC: an evolutionary-based parallel biclustering algorithm for pattern discovery.
Orzechowski, Patryk; Sipper, Moshe; Huang, Xiuzhen; Moore, Jason H
2018-05-22
Biclustering algorithms are commonly used for gene expression data analysis. However, accurate identification of meaningful structures is very challenging, and state-of-the-art methods struggle to discover patterns of high biological relevance with high accuracy. In this paper a novel biclustering algorithm based on evolutionary computation, a subfield of artificial intelligence (AI), is introduced. The method, called EBIC, aims to detect order-preserving patterns in complex data. EBIC is capable of discovering multiple complex patterns with unprecedented accuracy in real gene expression datasets. It is also one of the very few biclustering methods designed for parallel environments with multiple graphics processing units (GPUs). We demonstrate that EBIC greatly outperforms state-of-the-art biclustering methods, in terms of recovery and relevance, on both synthetic and genetic datasets. EBIC also yields results over 12 times faster than the most accurate reference algorithms. EBIC source code is available on GitHub at https://github.com/EpistasisLab/ebic. Correspondence and requests for materials should be addressed to P.O. (email: patryk.orzechowski@gmail.com) and J.H.M. (email: jhmoore@upenn.edu). Supplementary Data with results of analyses and additional information on the method is available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Palodiya, Vikram; Raghuwanshi, Sanjeev Kumar
2017-12-01
In this paper, domain inversion is used in a simple fashion to improve the performance of a Z-cut highly integrated LiNbO3 optical modulator (LNOM). The Z-cut modulator has a switching voltage ≤ 3 V and a bandwidth of 15 GHz. For an external modulator in which the modulating voltage is imposed along a traveling-wave electrode of length L_{m}, the product of V_π and L_{m} is fixed for a given electro-optic material (EOM). An investigation of achieving a low V_π through the magnitude of the electro-optic coefficient (EOC) for a wide variety of EOMs is reported. The Sellmeier equation (SE) for the extraordinary index of congruent LiNbO3 is derived. The predictions related to phase matching are accurate between room temperature and 250 °C and for wavelengths ranging from 0.4 to 5 μm. The SE predicts more accurate refractive indices (RI) at long wavelengths. The different overlaps between the waveguides for the Z-cut structure are shown to yield a chirp parameter that can be adjusted from 0 to 0.7. Theoretical results are verified by simulation.
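A one-term Sellmeier form illustrates the kind of dispersion relation the abstract refers to. The coefficients below are hypothetical placeholders chosen only to give n_e ≈ 2.14 near 1.06 μm; they are not the fitted values from the paper, which uses a temperature-dependent multi-term equation.

```python
import math

# Hypothetical one-term Sellmeier coefficients, for illustration only.
B, C = 3.43, 0.047  # C in um^2 (square of an effective UV resonance wavelength)

def n_extraordinary(wavelength_um):
    """Refractive index from the one-term Sellmeier form
    n^2 = 1 + B * lam^2 / (lam^2 - C)."""
    lam2 = wavelength_um**2
    return math.sqrt(1.0 + B * lam2 / (lam2 - C))

# Normal dispersion: the index falls as wavelength increases away from
# the UV resonance.
n_short = n_extraordinary(0.6)
n_long = n_extraordinary(1.5)
```

Phase-matching predictions of the kind mentioned in the abstract amount to finding wavelengths (and temperatures, in the full equation) where such index curves satisfy a momentum-conservation condition.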
Huang, Shouren; Bergström, Niklas; Yamakawa, Yuji; Senoo, Taku; Ishikawa, Masatoshi
2016-01-01
It is traditionally difficult to implement fast and accurate position regulation on an industrial robot in the presence of uncertainties. The uncertain factors can be attributed either to the industrial robot itself (e.g., a mismatch of dynamics, mechanical defects such as backlash, etc.) or to the external environment (e.g., calibration errors, misalignment or perturbations of a workpiece, etc.). This paper proposes a systematic approach to implement high-performance position regulation under uncertainties on a general industrial robot (referred to as the main robot) with minimal or no manual teaching. The method is based on a coarse-to-fine strategy that involves configuring an add-on module for the main robot’s end effector. The add-on module consists of a 1000 Hz vision sensor and a high-speed actuator to compensate for accumulated uncertainties. The main robot only focuses on fast and coarse motion, with its trajectories automatically planned by image information from a static low-cost camera. Fast and accurate peg-and-hole alignment in one dimension was implemented as an application scenario by using a commercial parallel-link robot and an add-on compensation module with one degree of freedom (DoF). Experimental results yielded an almost 100% success rate for fast peg-in-hole manipulation (with regulation accuracy at about 0.1 mm) when the workpiece was randomly placed. PMID:27483274
How Accurately Can We Map SEP Observations Using L*?
NASA Astrophysics Data System (ADS)
Young, S. L.; Kress, B. T.
2016-12-01
In a dipole the cutoff rigidities at a given location are inversely proportional to L^2. Smart and Shea (1967) showed that this is approximately true at low altitudes using the McIlwain L parameter (Lm) in realistic magnetospheric models, and provided heuristic evidence that it is also true at high altitudes. Later models developed by Smart and Shea and others (Ogliore et al., 2001; Neal et al., 2013; Selesnick et al., 2015) also use this relationship at low altitudes. Only the Smart and Shea model (Smart and Shea, 2006) uses this relationship to extrapolate to high altitudes, but they introduce a correction that yields a 1 MeV proton vertical cutoff at geosynchronous orbit. Recent work mapped POES observations to the Van Allen Probes locations as a function of L* (Young et al., 2015). The agreement between mapped and observed fluxes was reasonably good, but this mapping was along L* and only attempted to account for differences in shielding between high and low latitudes. No attempt was made to map across L*, so the inverse-square relationship was not tested. These previous results suggest that L* may be useful for mapping flux observations between satellites at high altitudes. In this study we calculate cutoffs and L* shells in a Tsyganenko 2005 + IGRF magnetic field model to examine how accurately L*-based mapping can be used in different regions of the magnetosphere.
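The dipolar inverse-square scaling discussed above can be written out directly. The proportionality constant below is the commonly quoted approximate vertical Störmer cutoff for Earth's dipole and is an assumption here; realistic magnetospheric models deviate from it, which is exactly what the study tests.

```python
def vertical_cutoff_gv(L, k_gv=14.5):
    """Vertical cutoff rigidity (GV), assuming R_c ~ k / L^2 in a dipole.

    k_gv ~ 14.5 GV is the approximate dipole Stormer constant; treat it
    as an assumption, not a fitted value.
    """
    return k_gv / L**2

def map_cutoff(rc_observed, L_obs, L_target):
    """Map a cutoff observed at one L shell to another under the
    inverse-square scaling."""
    return rc_observed * (L_obs / L_target)**2
```

Mapping "across L*" in this sense means applying `map_cutoff` between two satellites' shells and checking whether the predicted cutoff matches observation.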
de Souza Figueiredo, Fabiana; Celano, Rita; de Sousa Silva, Danila; das Neves Costa, Fernanda; Hewitson, Peter; Ignatova, Svetlana; Piccinelli, Anna Lisa; Rastrelli, Luca; Guimarães Leitão, Suzana; Guimarães Leitão, Gilda
2017-01-20
Ampelozizyphus amazonicus Ducke (Rhamnaceae), a medicinal plant used to prevent malaria, is a climbing shrub, native to the Amazonian region, with jujubogenin glycoside saponins as main compounds. The crude extract of this plant is too complex for any kind of structural identification, and HPLC separation was not sufficient to resolve this issue. Therefore, the aim of this work was to obtain saponin enriched fractions from the bark ethanol extract by countercurrent chromatography (CCC) for further isolation and identification/characterisation of the major saponins by HPLC and MS. The butanol extract was fractionated by CCC with hexane - ethyl acetate - butanol - ethanol - water (1:6:1:1:6; v/v) solvent system yielding 4 group fractions. The collected fractions were analysed by UHPLC-HRMS (ultra-high-performance liquid chromatography/high resolution accurate mass spectrometry) and MS^n. Group 1 presented mainly oleane type saponins, and group 3 showed mainly jujubogenin glycosides, keto-dammarane type triterpene saponins and saponins with a C31 skeleton. Thus, CCC separated saponins from the butanol-rich extract by skeleton type. A further purification of group 3 by CCC (ethyl acetate - ethanol - water (1:0.2:1; v/v)) and HPLC-RI was performed in order to obtain these unusual aglycones in pure form. Copyright © 2016 Elsevier B.V. All rights reserved.
Nilsson, Markus; Szczepankiewicz, Filip; van Westen, Danielle; Hansson, Oskar
2015-01-01
Conventional motion and eddy-current correction, where each diffusion-weighted volume is registered to a non-diffusion-weighted reference, suffers from poor accuracy for high b-value data. An alternative approach is to extrapolate reference volumes from low b-value data. We aim to compare the performance of conventional and extrapolation-based correction of diffusional kurtosis imaging (DKI) data, and to demonstrate the impact of the correction approach on group comparison studies. DKI was performed in patients with Parkinson's disease dementia (PDD), and healthy age-matched controls, using b-values of up to 2750 s/mm2. The accuracy of conventional and extrapolation-based correction methods was investigated. Parameters from DTI and DKI were compared between patients and controls in the cingulum and the anterior thalamic projection tract. Conventional correction resulted in systematic registration errors for high b-value data. The extrapolation-based methods did not exhibit such errors, yielding more accurate tractography and up to 50% lower standard deviation in DKI metrics. Statistically significant differences were found between patients and controls when using the extrapolation-based motion correction that were not detected when using the conventional method. We recommend that conventional motion and eddy-current correction should be abandoned for high b-value data in favour of more accurate methods using extrapolation-based references.
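The extrapolation idea can be sketched for a single voxel with a mono-exponential signal model. This is an illustrative simplification: DKI implies a non-mono-exponential decay, and the actual method extrapolates whole reference volumes, not single-voxel values.

```python
import numpy as np

# Simulated voxel signals at low b-values (s/mm2), with mono-exponential
# decay S(b) = S0 * exp(-b * D); S0 and D are illustrative numbers only.
b_low = np.array([0.0, 250.0, 500.0, 1000.0])
S0, D = 1000.0, 1.0e-3
signal_low = S0 * np.exp(-b_low * D)

# Fit the log-signal linearly in b, then extrapolate a synthetic
# reference at the high b-value instead of registering to a b=0 image.
slope, intercept = np.polyfit(b_low, np.log(signal_low), 1)
b_high = 2750.0
reference_high = np.exp(intercept + slope * b_high)
```

The point of the extrapolated reference is that it has contrast similar to the high b-value volume being registered, avoiding the systematic errors of registering to a b=0 image with very different contrast.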
NASA Astrophysics Data System (ADS)
Noyes, Ben F.; Mokaberi, Babak; Mandoy, Ram; Pate, Alex; Huijgen, Ralph; McBurney, Mike; Chen, Owen
2017-03-01
Reducing overlay error via an accurate APC feedback system is one of the main challenges in high volume production of the current and future nodes in the semiconductor industry. The overlay feedback system directly affects the number of dies meeting overlay specification and the number of layers requiring dedicated exposure tools through the fabrication flow. Increasing the former number and reducing the latter number is beneficial for the overall efficiency and yield of the fabrication process. An overlay feedback system requires accurate determination of the overlay error, or fingerprint, on exposed wafers in order to determine corrections to be automatically and dynamically applied to the exposure of future wafers. Since current and future nodes require correction per exposure (CPE), the resolution of the overlay fingerprint must be high enough to accommodate CPE in the overlay feedback system, or overlay control module (OCM). Determining a high resolution fingerprint from measured data requires extremely dense overlay sampling that takes a significant amount of measurement time. For static corrections this is acceptable, but in an automated dynamic correction system this method creates extreme bottlenecks for the throughput of said system, as new lots have to wait until the previous lot is measured. One solution is to use a less dense overlay sampling scheme and computationally up-sample the data to a dense fingerprint. That method uses a global fingerprint model over the entire wafer; measured localized overlay errors are therefore not always represented in its up-sampled output. This paper will discuss a hybrid system shown in Fig. 1 that combines a computationally up-sampled fingerprint with the measured data to more accurately capture the actual fingerprint, including local overlay errors. Such a hybrid system is shown to result in reduced modelled residuals while determining the fingerprint, and better on-product overlay performance.
NASA Astrophysics Data System (ADS)
Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Sacks, David B.; Yu, Yi-Kuo
2018-06-01
Rapid and accurate identification and classification of microorganisms is of paramount importance to public health and safety. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is complicating correct microbial identification even in a simple sample due to the large number of candidates present. To properly untwine candidate microbes in samples containing one or more microbes, one needs to go beyond apparent morphology or simple "fingerprinting"; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptide-centric representations of microbes to better separate them and by augmenting our earlier analysis method that yields accurate statistical significance. Here, we present an updated analysis workflow that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using 226 MS/MS publicly available data files (each containing from 2500 to nearly 100,000 MS/MS spectra) and 4000 additional MS/MS data files, that the updated workflow can correctly identify multiple microbes at the genus and often the species level for samples containing more than one microbe. We have also shown that the proposed workflow computes accurate statistical significances, i.e., E values for identified peptides and unified E values for identified microbes. Our updated analysis workflow MiCId, a freely available software for Microorganism Classification and Identification, is available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.
A Method for Generating Reduced Order Linear Models of Supersonic Inlets
NASA Technical Reports Server (NTRS)
Chicatelli, Amy; Hartley, Tom T.
1997-01-01
For the modeling of high speed propulsion systems, there are at least two major categories of models. One is based on computational fluid dynamics (CFD), and the other is based on design and analysis of control systems. CFD is accurate and gives a complete view of the internal flow field, but it typically has many states and runs much slower than real-time. Models based on control design typically run near real-time but do not always capture the fundamental dynamics. To provide improved control models, methods are needed that are based on CFD techniques but yield models that are small enough for control analysis and design.
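One generic way to shrink a large linear model to a control-sized one — a sketch of the general idea, not the paper's specific method — is projection onto the dominant left singular vectors of a state-snapshot matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

n, r = 50, 4  # full model order vs. reduced order
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))  # illustrative stable full model
B = rng.standard_normal((n, 1))

# Collect state snapshots from a simple forward-Euler simulation of the
# impulse response (stand-in for a CFD run).
x = B[:, 0].copy()
snapshots = []
for _ in range(100):
    x = x + 0.01 * (A @ x)
    snapshots.append(x.copy())
X = np.array(snapshots).T  # shape (n, 100)

# Projection basis: dominant left singular vectors of the snapshot matrix.
U, _, _ = np.linalg.svd(X, full_matrices=False)
Phi = U[:, :r]

# Reduced-order model: x_r' = (Phi^T A Phi) x_r + (Phi^T B) u
Ar = Phi.T @ A @ Phi
Br = Phi.T @ B
```

The reduced pair `(Ar, Br)` has only `r` states, small enough for standard control analysis, while the basis `Phi` ties it back to the high-order dynamics it was sampled from.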
NASA Technical Reports Server (NTRS)
Casper, Paul W.; Bent, Rodney B.
1991-01-01
The algorithm used in previous-generation time-of-arrival lightning mapping systems was based on the assumption that the earth is a perfect sphere. These systems yield highly accurate lightning locations, which is their major strength. However, extensive analysis of tower strike data has revealed occasionally significant (one to two kilometer) systematic offset errors which are not explained by the usual error sources. It was determined that these systematic errors reduce dramatically (in some cases) when the oblate shape of the earth is taken into account. The oblate spheroid correction algorithm and a case example are presented.
Recent Developments of an Opto-Electronic THz Spectrometer for High-Resolution Spectroscopy
Hindle, Francis; Yang, Chun; Mouret, Gael; Cuisset, Arnaud; Bocquet, Robin; Lampin, Jean-François; Blary, Karine; Peytavit, Emilien; Akalin, Tahsin; Ducournau, Guillaume
2009-01-01
A review is provided of sources and detectors that can be employed in the THz range before the description of an opto-electronic source of monochromatic THz radiation. The realized spectrometer has been applied to gas phase spectroscopy. Air-broadening coefficients of HCN are determined and the insensitivity of this technique to aerosols is demonstrated by the analysis of cigarette smoke. A multiple pass sample cell has been used to obtain a sensitivity improvement allowing transitions of the volatile organic compounds to be observed. A solution to the frequency metrology is presented and promises to yield accurate molecular line center measurements. PMID:22291552
Scanner focus metrology and control system for advanced 10nm logic node
NASA Astrophysics Data System (ADS)
Oh, Junghun; Maeng, Kwang-Seok; Shin, Jae-Hyung; Choi, Won-Woong; Won, Sung-Keun; Grouwstra, Cedric; El Kodadi, Mohamed; Heil, Stephan; van der Meijden, Vidar; Hong, Jong Kyun; Kim, Sang-Jin; Kwon, Oh-Sung
2018-03-01
Immersion lithography is being extended beyond the 10-nm node and the lithography performance requirement needs to be tightened further to ensure good yield. Amongst others, good on-product focus control with accurate and dense metrology measurements is essential to enable this. In this paper, we will present new solutions that enable on-product focus monitoring and control (mean and uniformity) suitable for a high-volume manufacturing environment. We will introduce the concept of pure focus and its role in focus control through the imaging optimizer scanner correction interface. The results will show that the focus uniformity can be improved by up to 25%.
Frequency domain laser velocimeter signal processor
NASA Technical Reports Server (NTRS)
Meyers, James F.; Murphy, R. Jay
1991-01-01
A new scheme for processing signals from laser velocimeter systems is described. The technique utilizes the capabilities of advanced digital electronics to yield a signal processor that operates in the frequency domain, maximizing the information obtainable from each signal burst. This allows a sophisticated approach to signal detection and processing, with a more accurate measurement of the chirp frequency resulting in an eight-fold increase in measurable signals over the present high-speed burst counter technology. Further, the required signal-to-noise ratio is reduced by a factor of 32, allowing measurements within boundary layers of wind tunnel models. Measurement accuracy is also increased by up to a factor of five.
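The frequency-domain measurement of a burst can be sketched as an FFT peak search refined by parabolic interpolation. The synthetic Gaussian-envelope burst and all parameters below are illustrative; the abstract does not give the processor's implementation details.

```python
import numpy as np

fs = 10.0e6            # sample rate, Hz (illustrative)
n = 1024
t = np.arange(n) / fs
f_true = 1.23e6        # Doppler frequency of the synthetic burst

# Gaussian-envelope burst, roughly how a laser-velocimeter burst appears
# after pedestal removal.
envelope = np.exp(-0.5 * ((t - t[n // 2]) / (n / (8 * fs)))**2)
burst = envelope * np.cos(2 * np.pi * f_true * t)

spectrum = np.abs(np.fft.rfft(burst))
k = int(np.argmax(spectrum[1:]) + 1)  # skip the DC bin

# Parabolic interpolation through the peak bin and its neighbors refines
# the estimate to a fraction of the FFT bin spacing.
a, b, c = spectrum[k - 1], spectrum[k], spectrum[k + 1]
delta = 0.5 * (a - c) / (a - 2 * b + c)
f_est = (k + delta) * fs / n
```

Because the whole spectrum is examined, a weak burst still produces a detectable peak where a counter-based processor would lose the zero crossings in noise — the source of the signal-to-noise advantage the abstract describes.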
Application of the superposition principle to solar-cell analysis
NASA Technical Reports Server (NTRS)
Lindholm, F. A.; Fossum, J. G.; Burgess, E. L.
1979-01-01
The superposition principle of differential-equation theory, which applies if and only if the relevant boundary-value problems are linear, is used to derive the widely used shifting approximation that the current-voltage characteristic of an illuminated solar cell is the dark current-voltage characteristic shifted by the short-circuit photocurrent. Analytical methods are presented to treat cases where shifting is not strictly valid. Well-defined conditions necessary for superposition to apply are established. For high injection in the base region, the method of analysis accurately yields the dependence of the open-circuit voltage on the short-circuit current (or the illumination level).
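The shifting approximation can be illustrated with an ideal single-diode model (all numbers illustrative; this is the low-injection regime where, per the abstract, superposition holds):

```python
import math

KT_Q = 0.02585  # thermal voltage at ~300 K, volts

def dark_current(v, i0=1e-12):
    """Ideal-diode dark I-V characteristic (amps)."""
    return i0 * (math.exp(v / KT_Q) - 1.0)

def illuminated_current(v, i_sc=0.030, i0=1e-12):
    """Shifting approximation: the illuminated curve is the dark curve
    shifted down by the short-circuit photocurrent I_sc."""
    return dark_current(v, i0) - i_sc

# Open-circuit voltage: where the shifted curve crosses zero current.
i_sc, i0 = 0.030, 1e-12
v_oc = KT_Q * math.log(i_sc / i0 + 1.0)
```

In high injection the boundary-value problem becomes nonlinear, the shift is no longer exact, and the V_oc versus I_sc relation must come from the fuller analysis the abstract describes.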
Leaf age and methodology impact assessments of thermotolerance of Coffea arabica
Danielle E. Marias; Frederick C. Meinzer; Christopher Still
2017-01-01
Key message: Mature Coffea arabica leaves were more heat tolerant than expanding leaves, longer recovery time yielded more accurate thermotolerance assessments, and photochemistry was more heat sensitive than cell membranes. Abstract: Given...
NASA Astrophysics Data System (ADS)
Balla, Vamsi Krishna; Coox, Laurens; Deckers, Elke; Plyumers, Bert; Desmet, Wim; Marudachalam, Kannan
2018-01-01
The vibration response of a component or system can be predicted using the finite element method after ensuring that the numerical models represent the realistic behaviour of the actual system under study. One of the methods to build high-fidelity finite element models is through a model updating procedure. In this work, a novel model updating method for deep-drawn components is demonstrated. Since the component is manufactured with a high draw ratio, significant deviations in both profile and thickness distributions occurred in the manufacturing process. A conventional model updating, involving Young's modulus, density and damping ratios, does not lead to a satisfactory match between simulated and experimental results. Hence a new model updating process is proposed, where geometry shape variables are incorporated by carrying out morphing of the finite element model. This morphing process imitates the changes that occurred during the deep drawing process. An optimization procedure that uses the Global Response Surface Method (GRSM) algorithm to maximize the diagonal terms of the Modal Assurance Criterion (MAC) matrix is presented. This optimization results in a more accurate finite element model. The advantage of the proposed methodology is that the CAD surface of the updated finite element model can be readily obtained after optimization. This CAD model can be used for carrying out analysis, as it represents the manufactured part more accurately. Hence, simulations performed using this updated model, with its accurate geometry, will yield more reliable results.
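The MAC objective maximized in the updating loop has a standard definition that can be sketched directly (the mode shapes below are synthetic stand-ins for measured/simulated mode pairs):

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape matrices.

    Columns are mode shapes; entry (i, j) is
    |phi_a_i . phi_b_j|^2 / ((phi_a_i . phi_a_i) * (phi_b_j . phi_b_j)),
    which is 1 for perfectly correlated shapes and 0 for orthogonal ones.
    """
    num = np.abs(phi_a.T @ phi_b)**2
    den = np.outer(np.sum(phi_a * phi_a, axis=0),
                   np.sum(phi_b * phi_b, axis=0))
    return num / den

rng = np.random.default_rng(1)
phi_test = rng.standard_normal((20, 3))                    # "measured" modes
phi_sim = phi_test + 0.01 * rng.standard_normal((20, 3))   # near-matching model

M = mac(phi_test, phi_sim)
```

Driving the diagonal of `M` toward 1, as the GRSM optimization does, means each simulated mode shape lines up with its measured counterpart.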
Jalaleddini, Kian; Tehrani, Ehsan Sobhani; Kearney, Robert E
2017-06-01
The purpose of this paper is to present a structural decomposition subspace (SDSS) method for decomposition of the joint torque to intrinsic, reflexive, and voluntary torques and identification of joint dynamic stiffness. First, it formulates a novel state-space representation for the joint dynamic stiffness modeled by a parallel-cascade structure with a concise parameter set that provides a direct link between the state-space representation matrices and the parallel-cascade parameters. Second, it presents a subspace method for the identification of the new state-space model that involves two steps: 1) the decomposition of the intrinsic and reflex pathways and 2) the identification of an impulse response model of the intrinsic pathway and a Hammerstein model of the reflex pathway. Extensive simulation studies demonstrate that SDSS has significant performance advantages over some other methods. Thus, SDSS was more robust under high noise conditions, converging where others failed; it was more accurate, giving estimates with lower bias and random errors. The method also worked well in practice and yielded high-quality estimates of intrinsic and reflex stiffnesses when applied to experimental data at three muscle activation levels. The simulation and experimental results demonstrate that SDSS accurately decomposes the intrinsic and reflex torques and provides accurate estimates of physiologically meaningful parameters. SDSS will be a valuable tool for studying joint stiffness under functionally important conditions. It has important clinical implications for the diagnosis, assessment, objective quantification, and monitoring of neuromuscular diseases that change the muscle tone.
Determination of Littlest Higgs Model Parameters at the ILC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conley, John A.; Hewett, JoAnne; Le, My Phuong
2005-07-27
We examine the effects of the extended gauge sector of the Littlest Higgs model in high energy e{sup +}e{sup -} collisions. We find that the search reach in e{sup +}e{sup -} {yields} f{bar f} at a {radical}s = 500 GeV International Linear Collider covers essentially the entire parameter region where the Littlest Higgs model is relevant to the gauge hierarchy problem. In addition, we show that this channel provides an accurate determination of the fundamental model parameters, to a precision of a few percent, provided that the LHC measures the mass of the heavy neutral gauge field. Additionally, we show that the couplings of the extra gauge bosons to the light Higgs can be observed from the process e{sup +}e{sup -} {yields} Zh for a significant region of the parameter space. This allows for confirmation of the structure of the cancellation of the Higgs mass quadratic divergence and would verify the little Higgs mechanism.
NASA Technical Reports Server (NTRS)
Tralli, David M.; Dixon, Timothy H.; Stephens, Scott A.
1988-01-01
Surface Meteorological (SM) and Water Vapor Radiometer (WVR) measurements are used to provide an independent means of calibrating the GPS signal for the wet tropospheric path delay in a study of geodetic baseline measurements in the Gulf of California using GPS in which high tropospheric water vapor content yielded wet path delays in excess of 20 cm at zenith. Residual wet delays at zenith are estimated as constants and as first-order exponentially correlated stochastic processes. Calibration with WVR data is found to yield the best repeatabilities, with improved results possible if combined carrier phase and pseudorange data are used. Although SM measurements can introduce significant errors in baseline solutions if used with a simple atmospheric model and estimation of residual zenith delays as constants, SM calibration and stochastic estimation for residual zenith wet delays may be adequate for precise estimation of GPS baselines. For dry locations, WVRs may not be required to accurately model tropospheric effects on GPS baselines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chalise, Roshan, E-mail: plasma.roshan@gmail.com; Khanal, Raju
2015-11-15
We have developed a self-consistent 1d3v (one dimension in space and three dimensions in velocity) Kinetic Trajectory Simulation (KTS) model, which can be used for modeling various situations of interest and yields results of high accuracy. Exact ion trajectories are followed in order to calculate along them the ion distribution function, assuming an arbitrary injection ion distribution. The electrons, on the other hand, are assumed to have a cut-off Maxwellian velocity distribution at injection, and their density distribution is obtained analytically. Starting from an initial guess, the potential profile is iterated towards the final time-independent self-consistent state. We have used it to study the plasma sheath region formed in the presence of an oblique magnetic field. Our results agree well with previous works from other models, and hence we expect our 1d3v KTS model to provide a basis for studying all types of magnetized plasmas, yielding more accurate results.
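As a hedged illustration of the analytic electron density mentioned above: for a full (non-cut-off) Maxwellian, the density follows the simple Boltzmann relation n_e = n0·exp(eφ/kTe). The cut-off correction used in the KTS model is omitted here, and the reference density and temperature are placeholder values.

```python
import math

def electron_density(phi, n0=1e18, Te_eV=2.0):
    """Boltzmann electron density n_e = n0 * exp(phi / Te) for potential phi (V)
    and electron temperature Te in eV (the factor e/k is folded into the eV units).
    Simplified full-Maxwellian form; the cut-off-Maxwellian correction of the
    KTS model is not included."""
    return n0 * math.exp(phi / Te_eV)
```

At the sheath edge (φ = 0) this returns the reference density n0, and the negative wall potential exponentially depletes the electrons, which is the qualitative behaviour the self-consistent iteration relies on.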
Chan, Chung-Hung; Yusoff, Rozita; Ngoh, Gek-Cheng
2013-09-01
A modeling technique based on absorbed microwave energy was proposed to model microwave-assisted extraction (MAE) of antioxidant compounds from cocoa (Theobroma cacao L.) leaves. By adapting a suitable extraction model on the basis of the microwave energy absorbed during extraction, the model can be developed to predict the extraction profile of MAE at various microwave irradiation powers (100-600 W) and solvent loadings (100-300 ml). Verification with experimental data confirmed that the prediction was accurate in capturing the extraction profile of MAE (R-squared value greater than 0.87). In addition, the predicted yields from the model showed good agreement with the experimental results, with less than 10% deviation observed. Furthermore, suitable extraction times to ensure high extraction yield at various MAE conditions can be estimated based on absorbed microwave energy. The estimation is feasible as more than 85% of active compounds can be extracted when compared with the conventional extraction technique. Copyright © 2013 Elsevier Ltd. All rights reserved.
Modeling residence-time distribution in horizontal screw hydrolysis reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sievers, David A.; Stickel, Jonathan J.
The dilute-acid thermochemical hydrolysis step used in the production of liquid fuels from lignocellulosic biomass requires precise residence-time control to achieve high monomeric sugar yields. Difficulty has been encountered reproducing residence times and yields when small batch reaction conditions are scaled up to larger pilot-scale horizontal auger-tube type continuous reactors. A commonly used naive model estimated residence times of 6.2-16.7 min, but measured mean times were actually 1.4-2.2 times the estimates. Here, this study investigated how reactor residence-time distribution (RTD) is affected by reactor characteristics and operational conditions, and developed a method to accurately predict the RTD based on key parameters. Screw speed, reactor physical dimensions, throughput rate, and process material density were identified as major factors affecting both the mean and standard deviation of RTDs. The general shape of RTDs was consistent with a constant value determined for skewness. The Peclet number quantified reactor plug-flow performance, which ranged between 20 and 357.
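The RTD statistics named above (mean, standard deviation, Peclet number) can be sketched from a discrete tracer-response curve. This is a generic textbook computation, not the authors' method; it uses the small-dispersion closure Pe ≈ 2τ²/σ², which is only an approximation for large Pe.

```python
def rtd_moments(times, conc):
    """Mean residence time, variance, and an approximate Peclet number from a
    discrete tracer-response curve (uniform time sampling assumed).
    Small-dispersion closure: Pe ~ 2 * tau**2 / sigma**2 (valid for large Pe)."""
    total = sum(conc)
    tau = sum(t * c for t, c in zip(times, conc)) / total       # mean residence time
    var = sum(c * (t - tau) ** 2 for t, c in zip(times, conc)) / total
    peclet = 2.0 * tau ** 2 / var
    return tau, var, peclet
```

For example, a symmetric tracer pulse centred at t = 2 with variance 0.5 yields Pe = 16, i.e. moderately plug-flow-like on the 20-357 scale reported above.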
A trans-phase granular continuum relation and its use in simulation
NASA Astrophysics Data System (ADS)
Kamrin, Ken; Dunatunga, Sachith; Askari, Hesam
The ability to model a large granular system as a continuum would offer tremendous benefits in computation time compared to discrete particle methods. However, two infamous problems arise in the pursuit of this vision: (i) the constitutive relation for granular materials is still unclear and hotly debated, and (ii) a model and corresponding numerical method must wear "many hats" as, in general circumstances, it must be able to capture and accurately represent the material as it crosses through its collisional, dense-flowing, and solid-like states. Here we present a minimal trans-phase model, merging an elastic response beneath a frictional yield criterion, a mu(I) rheology for liquid-like flow above the static yield criterion, and a disconnection rule to model separation of the grains into a low-temperature gas. We simulate our model with a meshless method (in high strain/mixing cases) and the finite-element method. It is able to match experimental data in many geometries, including collapsing columns, impact on granular beds, draining silos, and granular drag problems.
Bagdonaite, J; Salumbides, E J; Preval, S P; Barstow, M A; Barrow, J D; Murphy, M T; Ubachs, W
2014-09-19
Spectra of molecular hydrogen (H2) are employed to search for a possible proton-to-electron mass ratio (μ) dependence on gravity. The Lyman transitions of H2, observed with the Hubble Space Telescope towards white dwarf stars that underwent a gravitational collapse, are compared to accurate laboratory spectra taking into account the high temperature conditions (T ∼ 13 000 K) of their photospheres. We derive sensitivity coefficients Ki which define how the individual H2 transitions shift due to μ dependence. The spectrum of white dwarf star GD133 yields a Δμ/μ constraint of (-2.7±4.7stat±0.2syst)×10^-5 for a local environment of a gravitational potential ϕ ∼ 10^4 ϕEarth, while that of G29-38 yields Δμ/μ = (-5.8±3.8stat±0.3syst)×10^-5 for a potential of 2×10^4 ϕEarth.
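The sensitivity-coefficient method above reduces, in a simplified form, to a linear fit: ln(λ_obs/λ_lab) ≈ ln(1+z) + K_i·(Δμ/μ), where the common intercept carries the redshift and the slope against K_i carries Δμ/μ. The sketch below is this simplified two-parameter least-squares fit on synthetic data; the K values and wavelengths in the test are hypothetical, not the measured GD133/G29-38 line list.

```python
import math

def fit_dmu_over_mu(lam_obs, lam_lab, K):
    """Least-squares fit of y_i = ln(lam_obs_i / lam_lab_i) = a + K_i * (dmu/mu).
    Returns (redshift estimated from the intercept, dmu/mu from the slope)."""
    y = [math.log(o / l) for o, l in zip(lam_obs, lam_lab)]
    n = len(K)
    Kbar, ybar = sum(K) / n, sum(y) / n
    slope = sum((k - Kbar) * (yi - ybar) for k, yi in zip(K, y)) / \
            sum((k - Kbar) ** 2 for k in K)
    a = ybar - slope * Kbar
    return math.exp(a) - 1.0, slope
```

A real analysis would also weight each line by its measurement uncertainty and propagate the statistical and systematic error terms quoted in the abstract.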
Model-independent determination of the astrophysical S factor in laser-induced fusion plasmas
NASA Astrophysics Data System (ADS)
Lattuada, D.; Barbarino, M.; Bonasera, A.; Bang, W.; Quevedo, H. J.; Warren, M.; Consoli, F.; De Angelis, R.; Andreoli, P.; Kimura, S.; Dyer, G.; Bernstein, A. C.; Hagel, K.; Barbui, M.; Schmidt, K.; Gaul, E.; Donovan, M. E.; Natowitz, J. B.; Ditmire, T.
2016-04-01
In this work, we present a new and general method for measuring the astrophysical S factor of nuclear reactions in laser-induced plasmas and we apply it to 2H(d,n)3He. The experiment was performed with the Texas Petawatt Laser, which delivered 150-270 fs pulses of energy ranging from 90 to 180 J to D2 or CD4 molecular clusters (where D denotes 2H). After removing the background noise, we used the measured time-of-flight data of energetic deuterium ions to obtain their energy distribution. We derive the S factor using the measured energy distribution of the ions, the measured volume of the fusion plasma, and the measured fusion yields. This method is model independent in the sense that no assumption on the state of the system is required, but it requires an accurate measurement of the ion energy distribution, especially at high energies, and of the relevant fusion yields. In the 2H(d,n)3He and 3He(d,p)4He cases discussed here, it is very important to apply the background subtraction for the energetic ions and to measure the fusion yields with high precision. While the available data on both ion distribution and fusion yields allow us to determine with good precision the S factor in the d+d case (lower Gamow energies), for the d+3He case the data are not precise enough to obtain the S factor using this method. Our results agree with other experiments within the experimental error, even though smaller values of the S factor were obtained. This might be due to the plasma environment differing from the beam target conditions in a conventional accelerator experiment.
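For context, the S factor factors the Coulomb-barrier penetrability out of the cross section: σ(E) = S(E)/E · exp(−2πη). A minimal sketch of that conversion, using the standard approximation 2πη = 31.29·Z1·Z2·√(μ/E) with E the center-of-mass energy in keV and μ the reduced mass in amu, is below; it is a textbook relation, not the full yield-and-volume analysis of the paper.

```python
import math

def s_factor(sigma, E_keV, Z1=1, Z2=1, mu_amu=1.0):
    """Astrophysical S factor S(E) = sigma(E) * E * exp(2*pi*eta), with the
    standard approximation 2*pi*eta = 31.29 * Z1*Z2 * sqrt(mu/E), E the
    center-of-mass energy in keV and mu the reduced mass in amu.
    Defaults correspond roughly to d+d (mu ~ 1 amu). sigma and S share units
    of (cross section) and (cross section x keV) respectively."""
    two_pi_eta = 31.29 * Z1 * Z2 * math.sqrt(mu_amu / E_keV)
    return sigma * E_keV * math.exp(two_pi_eta)
```

The exponential Gamow factor grows rapidly at low energy, which is why the d+d S factor at the lower Gamow energies is the sensitive quantity extracted from the plasma yields.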
Comparing Paper and Tablet Modes of Retrospective Activity Space Data Collection.
Yabiku, Scott T; Glick, Jennifer E; Wentz, Elizabeth A; Ghimire, Dirgha; Zhao, Qunshan
2017-01-01
Individual actions are both constrained and facilitated by the social context in which individuals are embedded. But research to test specific hypotheses about the role of space on human behaviors and well-being is limited by the difficulty of collecting accurate and personally relevant social context data. We report on a project in Chitwan, Nepal, that directly addresses challenges to collect accurate activity space data. We test if a computer assisted interviewing (CAI) tablet-based approach to collecting activity space data was more accurate than a paper map-based approach; we also examine which subgroups of respondents provided more accurate data with the tablet mode compared to paper. Results show that the tablet approach yielded more accurate data when comparing respondent-indicated locations to the known locations as verified by on-the-ground staff. In addition, the accuracy of the data provided by older and less healthy respondents benefited more from the tablet mode.
The effect of shear strength on isentropic compression experiments
NASA Astrophysics Data System (ADS)
Thomson, Stuart; Howell, Peter; Ockendon, John; Ockendon, Hilary
2015-06-01
Isentropic compression experiments (ICE) are a novel way of obtaining equation of state information for metals undergoing violent plastic deformation. In a typical experiment, millimetre thick metal samples are subjected to pressures on the order of 10-10^2 GPa, while the yield strength of the material can be as low as 10^-1 GPa. The analysis of such experiments has so far neglected the effect of shear strength, instead treating the highly plasticised metal as an inviscid compressible fluid. However, making this approximation neglects the basic elastic nature of a solid object. A more accurate method should strive to incorporate the small but measurable effects of shear strength. Here we present a one-dimensional mathematical model for elastoplasticity at high stress which allows for both compressibility and the shear strength of the material. In the limit of zero yield stress this model reproduces the hydrodynamic models currently used to analyse ICEs. We will also show, using a systematic asymptotic analysis, that entropy changes are universally negligible in the absence of shocks. Numerical solutions of the governing equations will then be presented for problems relevant to ICEs in order to investigate the effects of shear strength over a model based purely on hydrodynamics.
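To illustrate how a small yield stress perturbs an otherwise hydrodynamic response, here is the simplest possible 1D elastic-perfectly-plastic stress update (an elastic predictor capped at the yield surface). It is a generic return-mapping sketch, not the compressible model of the paper; the modulus and yield stress are illustrative values chosen so Y is small against the driving stresses, as in ICE conditions.

```python
def stress_update(strain_increments, E=200e9, Y=0.1e9):
    """1D small-strain elastic-perfectly-plastic return mapping.
    E: Young's modulus (Pa); Y: yield stress (Pa), illustrative values only.
    Setting Y -> infinity recovers pure elasticity; Y -> 0 kills deviatoric
    stress, the hydrodynamic limit mentioned in the abstract."""
    sigma, history = 0.0, []
    for de in strain_increments:
        trial = sigma + E * de           # elastic predictor
        if abs(trial) > Y:               # plastic corrector: project onto yield surface
            trial = Y if trial > 0 else -Y
        sigma = trial
        history.append(sigma)
    return history
```

Under a monotonic strain ramp the stress climbs elastically and then saturates at ±Y, which is exactly the small but measurable shear-strength offset the full model carries on top of the hydrodynamic pressure.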
Maximizing output power of a low-gain laser system.
Carroll, D L; Sentman, L H
1993-07-20
Rigrod theory was used to model outcoupled power from a low-gain laser with good accuracy. For a low-gain overtone cw HF chemical laser, Rigrod theory shows that a higher medium saturation yields a higher overall overtone efficiency, but does not necessarily yield a higher measurable power (power in the bucket). For low-absorption-scattering-loss overtone mirrors and a 5% penalty in outcoupled power, the intracavity flux, and hence the mirror loading, may be reduced by more than a factor of 2 when the gain length is long enough to saturate the medium well. For the University of Illinois at Urbana-Champaign overtone laser, which has an extensive database with well-characterized mirrors for which the Rigrod parameters g(0) and I(sat) were firmly established, the accuracy to which the reflectivities of high-reflectivity overtone mirrors can be deduced by using measured mirror transmissivities, measured outcoupled power, and Rigrod theory is approximately ±0.07%. This method of accurately deducing mirror reflectivities may be applicable to other low-gain laser systems that use high-reflectivity mirrors at different wavelengths. The maximum overtone efficiency is estimated to be approximately 80%-100%.
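A hedged sketch of the trade-off the abstract describes, using a space-independent cavity approximation rather than the full spatially resolved Rigrod solution: at steady state the saturated round-trip gain balances the total loss T + L, so the intracavity intensity is I = I_sat·(2·g0l/(T+L) − 1) and the outcoupled power is ∝ T·I, with an analytic optimum at T_opt = √(2·g0l·L) − L. The g0l and L values are illustrative low-gain numbers, not the HF laser's parameters.

```python
import math

def outcoupled_power(T, g0l, L, I_sat=1.0):
    """Space-independent cavity approximation: saturated round-trip gain
    balances total loss T + L, giving I = I_sat * (2*g0l/(T+L) - 1).
    Returns T * I, clipped at zero below threshold."""
    I = I_sat * (2.0 * g0l / (T + L) - 1.0)
    return T * max(I, 0.0)

def optimal_coupling(g0l, L):
    """Analytic maximiser of T*(2*g0l/(T+L) - 1): T_opt = sqrt(2*g0l*L) - L."""
    return math.sqrt(2.0 * g0l * L) - L

# Numeric check: scan T and compare against the analytic optimum.
g0l, L = 0.05, 0.01   # illustrative low-gain values
Ts = [i * 1e-4 for i in range(1, 1000)]
T_best = max(Ts, key=lambda T: outcoupled_power(T, g0l, L))
```

Note how the optimum scales with the loss L: with better (lower-loss) mirrors a smaller T suffices, which lowers the intracavity flux and mirror loading for a modest power penalty, the effect quantified in the abstract.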
Molecular processes in a high temperature shock layer
NASA Technical Reports Server (NTRS)
Guberman, S. L.
1984-01-01
Models of the shock layer encountered by an Aeroassisted Orbital Transfer Vehicle require as input accurate cross sections and rate constants for the atomic and molecular processes that characterize the shock radiation. From the estimated atomic and molecular densities in the shock layer and the expected residence time of about 1 ms, it can be expected that electron-ion collision processes will be important in the shock model. Electron capture by molecular ions followed by dissociation, e.g., O2(+) + e(-) → O + O, can be expected to be of major importance since these processes are known to have high rates (e.g., 10^-7 cm^3/s) at room temperature. However, there have been no experimental measurements of dissociative recombination (DR) at the temperatures (∼12 000 K) that are expected to characterize the shock layer. Indeed, even at room temperature, it is often difficult to perform experiments that determine the dependence of the translational energy and quantum yields of the product atoms on the electronic and vibrational state of the reactant molecular ions. Presented are ab initio quantum chemical studies of DR for molecular ions that are likely to be important in the atmospheric shock layer.
Jones, Phill B.; Shin, Hwa Kyoung; Boas, David A.; Hyman, Bradley T.; Moskowitz, Michael A.; Ayata, Cenk; Dunn, Andrew K.
2009-01-01
Real-time investigation of cerebral blood flow (CBF), and oxy- and deoxyhemoglobin concentration (HbO, HbR) dynamics has been difficult until recently due to limited spatial and temporal resolution of techniques like laser Doppler flowmetry and magnetic resonance imaging (MRI). The combination of laser speckle flowmetry (LSF) and multispectral reflectance imaging (MSRI) yields high-resolution spatiotemporal maps of hemodynamic and metabolic changes in response to functional cortical activation. During acute focal cerebral ischemia, changes in HbO and HbR are much larger than in functional activation, resulting in the failure of the Beer-Lambert approximation to yield accurate results. We describe the use of simultaneous LSF and MSRI, using a nonlinear Monte Carlo fitting technique, to record rapid changes in CBF, HbO, HbR, and cerebral metabolic rate of oxygen (CMRO2) during acute focal cerebral ischemia induced by distal middle cerebral artery occlusion (dMCAO) and reperfusion. This technique captures CBF and CMRO2 changes during hemodynamic and metabolic events with high temporal and spatial resolution through the intact skull and demonstrates the utility of simultaneous LSF and MSRI in mouse models of cerebrovascular disease. PMID:19021335
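The MSRI inversion the abstract discusses starts from the (modified) Beer-Lambert relation ΔOD(λ) = (ε_HbO(λ)·ΔC_HbO + ε_HbR(λ)·ΔC_HbR)·L(λ); with two wavelengths this is a 2×2 linear solve. The sketch below is that linear step only, which is precisely the approximation the authors replace with a nonlinear Monte Carlo fit for the large ischemic changes; the extinction coefficients and pathlengths in the test are placeholders, not tabulated hemoglobin spectra.

```python
def solve_hb_changes(dOD, eps_HbO, eps_HbR, L):
    """Solve the two-wavelength modified Beer-Lambert system
       dOD[i] = (eps_HbO[i] * dC_HbO + eps_HbR[i] * dC_HbR) * L[i]
    for (dC_HbO, dC_HbR) by direct 2x2 inversion. Coefficients must come from
    tabulated spectra and Monte Carlo pathlength estimates in a real analysis."""
    a11, a12 = eps_HbO[0] * L[0], eps_HbR[0] * L[0]
    a21, a22 = eps_HbO[1] * L[1], eps_HbR[1] * L[1]
    det = a11 * a22 - a12 * a21
    dC_HbO = (dOD[0] * a22 - dOD[1] * a12) / det
    dC_HbR = (a11 * dOD[1] - a21 * dOD[0]) / det
    return dC_HbO, dC_HbR
```

The key limitation, per the abstract, is that the effective pathlength L(λ) itself depends on the absorption changes during ischemia, so this fixed-coefficient inversion loses accuracy and a Monte Carlo forward model must be fit instead.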
An Investigation into the Relationship Between Distillate Yield and Stable Isotope Fractionation
NASA Astrophysics Data System (ADS)
Sowers, T.; Wagner, A. J.
2016-12-01
Recent breakthroughs in laser spectrometry have allowed for faster, more efficient analyses of stable isotopic ratios in water samples. Commercially available instruments from Los Gatos Research and Picarro allow users to quickly analyze a wide range of samples, from seawater to groundwater, with accurate isotope ratios of D/H to within ±0.2 ‰ and 18O/16O to within ±0.03 ‰. While these instruments have increased the efficiency of stable isotope laboratories, they come with some major limitations, such as not being able to analyze hypersaline waters. The Los Gatos Research Liquid Water Isotope Analyzer (LWIA) can accurately and consistently measure the stable isotope ratios in waters with salinities ranging from 0 to 4 grams per liter (0 to 40 parts per thousand). In order to analyze water samples with salinities greater than 4 grams per liter, however, it was necessary to develop a consistent method through which to reduce salinity while causing as little fractionation as possible. Using a consistent distillation method, predictable fractionation of δ18O and δ2H values was found to occur. This fractionation occurs according to a linear relationship with respect to the percent yield of the water in the sample. Using this method, samples with high salinity can be analyzed using laser spectrometry instruments, thereby enabling laboratories with Los Gatos or Picarro instruments to analyze those samples in house without having to dilute them using labor-intensive in-house standards or expensive premade standards.
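Since the abstract states the fractionation is linear in percent distillate yield, the correction amounts to fitting a calibration line and extrapolating each measurement to 100% yield. The sketch below does exactly that; the calibration numbers in the test are invented for illustration, not the paper's measured slopes.

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    m = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) / \
        sum((xi - xb) ** 2 for xi in x)
    return m, yb - m * xb

def correct_delta(measured_delta, percent_yield, slope):
    """Extrapolate a measured isotope value (in per mil) to 100% distillate
    yield using the calibration slope (per mil per percent yield)."""
    return measured_delta - slope * (percent_yield - 100.0)
```

Separate calibration slopes would be fitted for δ18O and δ2H, since the two ratios fractionate by different amounts during distillation.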
NASA Astrophysics Data System (ADS)
Yeom, J. M.; Kim, H. O.
2014-12-01
In this study, we estimated rice paddy yield with moderate-resolution geostationary satellite vegetation products and the GRAMI model over South Korea. Rice is the most popular staple food for Asian people. In addition, the effects of climate change are getting stronger, especially in the Asian region, where most rice is cultivated. Therefore, accurate and timely prediction of rice yield is among the most important tasks for accomplishing food security and preparing for natural disasters such as crop defoliation, drought, and pest infestation. In the present study, GOCI, the world's first Geostationary Ocean Color Imager, was used for estimating temporal vegetation indices of the rice paddy by adopting atmospheric correction and BRDF modeling. For the atmospheric correction with a LUT method based on the Second Simulation of the Satellite Signal in the Solar Spectrum (6S), MODIS atmospheric products such as MOD04, MOD05, and MOD07 from NASA's Earth Observing System Data and Information System (EOSDIS) were used. In order to correct the surface anisotropy effect, the Ross-Thick Li-Sparse Reciprocal (RTLSR) BRDF model was run on a daily basis with a 16-day composite period. The estimated multi-temporal vegetation images were used for crop classification, with high resolution satellite images such as RapidEye, KOMPSAT-2 and KOMPSAT-3 used to extract the proportional rice paddy area in the corresponding GOCI pixel. In the case of the GRAMI crop model, initial conditions were determined by performing field work every 2 weeks at Chonnam National University, Gwangju, Korea. The corrected GOCI vegetation products were incorporated with the GRAMI model to predict rice yield. The predicted rice yield was compared with field measurements of rice yield.
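The RTLSR BRDF correction above rests on a linear kernel model, R = f_iso + f_vol·K_vol + f_geo·K_geo, whose coefficients are obtained by least squares from multi-angle observations. The sketch below shows only that linear inversion; the kernel values K_vol and K_geo are assumed precomputed from the sun/view geometry (their closed forms are not reproduced here), and all numbers in the test are synthetic.

```python
def fit_rtlsr(R, K_vol, K_geo):
    """Fit f_iso + f_vol*Kv + f_geo*Kg ~= R by normal equations
    (3x3 Gaussian elimination with partial pivoting).
    Kernel values are assumed precomputed from the observation geometry."""
    rows = [[1.0, kv, kg] for kv, kg in zip(K_vol, K_geo)]
    # Normal equations: (A^T A) x = A^T R
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    atb = [sum(r[i] * y for r, y in zip(rows, R)) for i in range(3)]
    for col in range(3):                       # forward elimination
        piv = max(range(col, 3), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atb[col], atb[piv] = atb[piv], atb[col]
        for r in range(col + 1, 3):
            f = ata[r][col] / ata[col][col]
            for c in range(col, 3):
                ata[r][c] -= f * ata[col][c]
            atb[r] -= f * atb[col]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                        # back substitution
        x[r] = (atb[r] - sum(ata[r][c] * x[c] for c in range(r + 1, 3))) / ata[r][r]
    return x  # [f_iso, f_vol, f_geo]
```

Once fitted per pixel over the 16-day composite window, the coefficients let the reflectance be normalized to a common sun/view geometry before computing the daily vegetation indices.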
NASA Technical Reports Server (NTRS)
George, K.; Wu, H.; Willingham, V.; Furusawa, Y.; Kawata, T.; Cucinotta, F. A.; Dicello, J. F. (Principal Investigator)
2001-01-01
PURPOSE: To investigate how cell-cycle delays in human peripheral lymphocytes affect the expression of complex chromosome damage in metaphase following high- and low-LET radiation exposure. MATERIALS AND METHODS: Whole blood was irradiated in vitro with a low and a high dose of 1 GeV u(-1) iron particles, 400 MeV u(-1) neon particles or gamma-rays. Lymphocytes were cultured and metaphase cells were collected at different time points after 48-84 h in culture. Interphase chromosomes were prematurely condensed using calyculin-A, either 48 or 72 h after exposure to iron particles or gamma-rays. Cells in first division were analysed using a combination of FISH whole-chromosome painting and DAPI/Hoechst 33258 harlequin staining. RESULTS: There was a delay in expression of chromosome damage in metaphase that was LET- and dose-dependent. This delay was mostly related to the late emergence of complex-type damage into metaphase. Yields of damage in PCC collected 48 h after irradiation with iron particles were similar to values obtained from cells undergoing mitosis after prolonged incubation. CONCLUSION: The yield of high-LET radiation-induced complex chromosome damage could be underestimated when analysing metaphase cells collected at one time point after irradiation. Chemically induced PCC is a more accurate technique since problems with complicated cell-cycle delays are avoided.
A real-time hybrid neuron network for highly parallel cognitive systems.
Christiaanse, Gerrit Jan; Zjajo, Amir; Galuzzi, Carlo; van Leuken, Rene
2016-08-01
For a comprehensive understanding of how neurons communicate with each other, new tools need to be developed that can accurately mimic the behaviour of such neurons and neuron networks under 'real-time' constraints. In this paper, we propose an easily customisable, highly pipelined, neuron network design, which executes optimally scheduled floating-point operations for the maximal number of biophysically plausible neurons per FPGA family type. To reduce the required amount of resources without adverse effect on the calculation latency, a single exponent instance is used for multiple neuron calculation operations. Experimental results indicate that the proposed network design allows the simulation of up to 1188 neurons on a Virtex7 (XC7VX550T) device in brain real-time, yielding a speed-up of 12.4x compared to the state of the art.
Refining metabolic models and accounting for regulatory effects.
Kim, Joonhoon; Reed, Jennifer L
2014-10-01
Advances in genome-scale metabolic modeling allow us to investigate and engineer metabolism at a systems level. Metabolic network reconstructions have been made for many organisms and computational approaches have been developed to convert these reconstructions into predictive models. However, due to incomplete knowledge, these reconstructions often have missing or extraneous components and interactions, which can be identified by reconciling model predictions with experimental data. Recent studies have provided methods to further improve metabolic model predictions by incorporating transcriptional regulatory interactions and high-throughput omics data to yield context-specific metabolic models. Here we discuss recent approaches for resolving model-data discrepancies and building context-specific metabolic models. Once developed, highly accurate metabolic models can be used in a variety of biotechnology applications. Copyright © 2014 Elsevier Ltd. All rights reserved.
Swartz, Jordan; Koziatek, Christian; Theobald, Jason; Smith, Silas; Iturrate, Eduardo
2017-05-01
Testing for venous thromboembolism (VTE) is associated with cost and risk to patients (e.g. radiation). To assess the appropriateness of imaging utilization at the provider level, it is important to know that provider's diagnostic yield (percentage of tests positive for the diagnostic entity of interest). However, determining diagnostic yield typically requires either time-consuming, manual review of radiology reports or the use of complex and/or proprietary natural language processing software. The objectives of this study were twofold: 1) to develop and implement a simple, user-configurable, and open-source natural language processing tool to classify radiology reports with high accuracy and 2) to use the results of the tool to design a provider-specific VTE imaging dashboard, consisting of both utilization rate and diagnostic yield. Two physicians reviewed a training set of 400 lower extremity ultrasound (UTZ) and computed tomography pulmonary angiogram (CTPA) reports to understand the language used in VTE-positive and VTE-negative reports. The insights from this review informed the arguments to the five modifiable parameters of the NLP tool. A validation set of 2,000 studies was then independently classified by the reviewers and by the tool; the classifications were compared and the performance of the tool was calculated. The tool was highly accurate in classifying the presence and absence of VTE for both the UTZ (sensitivity 95.7%; 95% CI 91.5-99.8, specificity 100%; 95% CI 100-100) and CTPA reports (sensitivity 97.1%; 95% CI 94.3-99.9, specificity 98.6%; 95% CI 97.8-99.4). The diagnostic yield was then calculated at the individual provider level and the imaging dashboard was created. We have created a novel NLP tool designed for users without a background in computer programming, which has been used to classify venous thromboembolism reports with a high degree of accuracy. The tool is open-source and available for download at http://iturrate.com/simpleNLP. 
Results obtained using this tool can be applied to enhance quality by presenting information about utilization and yield to providers via an imaging dashboard. Copyright © 2017 Elsevier B.V. All rights reserved.
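The validation arithmetic above (sensitivity and specificity against physician labels) is simple to reproduce. Below is a minimal, hypothetical sketch of a keyword-plus-negation report classifier together with the metric calculation; the term lists, five-word look-back window, and sample reports are illustrative assumptions, not the actual tool's five configurable parameters (documented at http://iturrate.com/simpleNLP).

```python
# Hypothetical sketch of keyword-based radiology-report classification
# and the sensitivity/specificity computation used for validation.

def classify_report(text, positive_terms, negation_terms):
    """Label a report VTE-positive if a positive term appears without
    a negation term in the preceding five words (a crude negation window)."""
    words = text.lower().split()
    for i, w in enumerate(words):
        if any(t in w for t in positive_terms):
            window = words[max(0, i - 5):i]  # look back up to 5 words
            if not any(n == x for n in negation_terms for x in window):
                return True
    return False

def sensitivity_specificity(predicted, actual):
    tp = sum(p and a for p, a in zip(predicted, actual))
    tn = sum(not p and not a for p, a in zip(predicted, actual))
    fp = sum(p and not a for p, a in zip(predicted, actual))
    fn = sum(not p and a for p, a in zip(predicted, actual))
    return tp / (tp + fn), tn / (tn + fp)

# Invented example reports with reviewer "gold standard" labels:
reports = [
    "acute deep venous thrombosis in the right femoral vein",
    "no evidence of deep venous thrombosis",
    "pulmonary embolism within the right lower lobe artery",
    "negative for pulmonary embolism",
]
labels = [True, False, True, False]
preds = [classify_report(r, ["thrombosis", "embolism"],
                         ["no", "negative", "without"]) for r in reports]
sens, spec = sensitivity_specificity(preds, labels)
```

Diagnostic yield per provider then follows directly as the fraction of that provider's studies classified positive.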
28nm node process optimization: a lithography centric view
NASA Astrophysics Data System (ADS)
Seltmann, Rolf
2014-10-01
Many experts claim that the 28nm technology node will remain the most cost-effective technology node. This results primarily from manufacturing cost, since 28nm is the last true Single Patterning (SP) node; it is also driven by the dramatic increase in design costs and the limited shrink factor of the following nodes. Thus, it is assumed that this technology will remain in production for many years. To be cost competitive, high yields are mandatory. Meanwhile, leading-edge foundries have optimized the yield of the 28nm node to such a level that it is almost exclusively defined by random defectivity. However, reaching that level was a long journey. In my talk I will concentrate on the contribution of lithography to this yield learning curve, using a critical metal patterning application as the example. I will show what was needed to optimize the process window beyond the usual OPC model work that was common on previous nodes. Reducing process variability (in particular focus variability) is a complementary need. It will be shown which improvements were needed in tooling, process control, and design-mask-wafer interaction to remove all systematic yield detractors. Over the last couple of years, new scanner platforms were introduced that targeted both better productivity and better parametric performance. But this was not a straightforward path: it took extra efforts by the tool suppliers, together with the fab, to bring tool variability down to the necessary level. Another important topic in reducing variability is the interaction of wafer non-planarity and lithography optimization; accurate knowledge of within-die topography is essential for optimum patterning. By completing both the variability-reduction work and the process-window enhancement work, we were able to turn the original marginal process budget into a robust positive budget, thus ensuring high yield and low cost.
NASA Astrophysics Data System (ADS)
Knochenmuss, Richard
2015-08-01
The Coupled Physical and Chemical Dynamics (CPCD) model of matrix-assisted laser desorption ionization has been restricted to relative rather than absolute yield comparisons because the rate constant for one step in the model was not accurately known. Recent measurements are used to constrain this constant, leading to good agreement with experimental yield-versus-fluence data for 2,5-dihydroxybenzoic acid. Parameters for alpha-cyano-4-hydroxycinnamic acid are also estimated, including contributions from a possible triplet state. The results are compared with the polar fluid model; the CPCD is found to give better agreement with the data.
Ramstein, Guillaume P.; Evans, Joseph; Kaeppler, Shawn M.; Mitchell, Robert B.; Vogel, Kenneth P.; Buell, C. Robin; Casler, Michael D.
2016-01-01
Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass, and meet the goals of a substantial displacement of petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families’ parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, the account of linkage disequilibrium among markers, offer valuable opportunities for improving prediction procedures in GS. Some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs. PMID:26869619
Self-Calibrating Respiratory-Flowmeter Combination
NASA Technical Reports Server (NTRS)
Westenskow, Dwayne R.; Orr, Joseph A.
1990-01-01
Dual flowmeters ensure accuracy over full range of human respiratory flow rates. System for measurement of respiratory flow employs two flowmeters; one compensates for deficiencies of other. Combination yields easily calibrated system accurate over wide range of gas flow.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Vleet, Mary J.; Misquitta, Alston J.; Stone, Anthony J.
Short-range repulsion within inter-molecular force fields is conventionally described by either Lennard-Jones or Born-Mayer forms. Despite their widespread use, these simple functional forms are often unable to describe the interaction energy accurately over a broad range of inter-molecular distances, thus creating challenges in the development of ab initio force fields and potentially leading to decreased accuracy and transferability. Herein, we derive a novel short-range functional form based on a simple Slater-like model of overlapping atomic densities and an iterated stockholder atom (ISA) partitioning of the molecular electron density. We demonstrate that this Slater-ISA methodology yields a more accurate, transferable, and robust description of the short-range interactions at minimal additional computational cost compared to standard Lennard-Jones or Born-Mayer approaches. Lastly, we show how this methodology can be adapted to yield the standard Born-Mayer functional form while still retaining many of the advantages of the Slater-ISA approach.
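For illustration, the conventional Born-Mayer form and a Slater-like alternative can be written down directly. This is a hedged sketch: the coefficients A and b are placeholders, and the polynomial prefactor shown is the textbook overlap of two identical exponential (Slater) densities, not the paper's full ISA-derived parameterization.

```python
import math

def born_mayer(r, A, b):
    # Conventional Born-Mayer exponential repulsion: A * exp(-b r).
    return A * math.exp(-b * r)

def slater_overlap(r, A, b):
    # Slater-like form: the overlap of two identical exponential
    # (Slater) densities separated by r carries a polynomial prefactor,
    # proportional to (1 + br + (br)^2 / 3) * exp(-br).
    x = b * r
    return A * (1.0 + x + x * x / 3.0) * math.exp(-x)
```

The polynomial prefactor makes the Slater-like form decay more slowly than the bare exponential at the same b, which is one route to better behavior over a broader range of separations.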
Satellite techniques yield insight into devastating rainfall from Hurricane Mitch
NASA Astrophysics Data System (ADS)
Ferraro, R.; Vicente, G.; Ba, M.; Gruber, A.; Scofield, R.; Li, Q.; Weldon, R.
Hurricane Mitch may prove to be one of the most devastating tropical cyclones to affect the western hemisphere. Heavy rains over Central America from October 28, 1998, to November 1, 1998, caused widespread flooding and mud slides in Nicaragua and Honduras, resulting in thousands of deaths and missing persons. News reports indicated entire towns being swept away, destruction of national economies and infrastructure, and widespread disease in the aftermath of the storm, which some estimates suggested dropped as much as 1300 mm of rain. However, in view of the widespread damage, it is difficult to determine the actual amounts and distribution of rainfall. More accurate means of determining the rainfall associated with Mitch are vital for diagnosing and understanding the evolution of this disaster and for developing new mitigation strategies for future tropical cyclones. Satellite data may prove to be a reliable resource for accurate rainfall analysis and have yielded apparently reliable figures for Hurricane Mitch.
Analytical Wave Functions for Ultracold Collisions.
NASA Astrophysics Data System (ADS)
Cavagnero, M. J.
1998-05-01
Secular perturbation theory of long-range interactions (M. J. Cavagnero, PRA 50, 2841 (1994)) has been generalized to yield accurate wave functions for near-threshold processes, including low-energy scattering processes of interest at ultracold temperatures. In particular, solutions of Schrödinger's equation have been obtained for motion in the combined r^-6, r^-8, and r^-10 potentials appropriate for describing an ultracold collision of two neutral ground state atoms. Scattering lengths and effective ranges appropriate to such potentials are readily calculated at distances comparable to the LeRoy radius, where exchange forces can be neglected, thereby eliminating the need to integrate Schrödinger's equation to large internuclear distances. Our method yields accurate base pair solutions well beyond the energy range of effective range theories, making possible the application of multichannel quantum defect theory (MQDT) and R-matrix methods to the study of ultracold collisions.
Accuracy of least-squares methods for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Bochev, Pavel B.; Gunzburger, Max D.
1993-01-01
Recently there has been substantial interest in least-squares finite element methods for velocity-vorticity-pressure formulations of the incompressible Navier-Stokes equations. The main cause for this interest is the fact that algorithms for the resulting discrete equations can be devised which require the solution of only symmetric, positive definite systems of algebraic equations. On the other hand, it is well documented that methods using the vorticity as a primary variable often yield very poor approximations. Thus, here we study the accuracy of these methods through a series of computational experiments, and also comment on theoretical error estimates. Although standard techniques for deriving error estimates fail here, the computational evidence suggests that these methods are, at the least, nearly optimally accurate. Thus, in addition to the desirable matrix properties yielded by least-squares methods, one also obtains accurate approximations.
A Streaming Language Implementation of the Discontinuous Galerkin Method
NASA Technical Reports Server (NTRS)
Barth, Timothy; Knight, Timothy
2005-01-01
We present a Brook streaming language implementation of the 3-D discontinuous Galerkin method for compressible fluid flow on tetrahedral meshes. Efficient implementation of the discontinuous Galerkin method using the streaming model of computation introduces several algorithmic design challenges. Using a cycle-accurate simulator, performance characteristics have been obtained for the Stanford Merrimac stream processor. The current Merrimac design achieves 128 Gflops per chip and the desktop board is populated with 16 chips yielding a peak performance of 2 Teraflops. Total parts cost for the desktop board is less than $20K. Current cycle-accurate simulations for discretizations of the 3-D compressible flow equations yield approximately 40-50% of the peak performance of the Merrimac streaming processor chip. Ongoing work includes the assessment of the performance of the same algorithm on the 2 Teraflop desktop board with a target goal of achieving 1 Teraflop performance.
Yield estimation of corn with multispectral data and the potential of using imaging spectrometers
NASA Astrophysics Data System (ADS)
Bach, Heike
1997-05-01
In the frame of the special yield estimation, a regular procedure conducted for the European Union to estimate agricultural yield more accurately, a project was conducted for the state minister for Rural Environment, Food and Forestry of Baden-Wuerttemberg, Germany, to test remote sensing data combined with advanced yield formation models for accuracy and timeliness of yield estimation of corn. The methodology employed uses field-based plant parameter estimation from atmospherically corrected multitemporal/multispectral LANDSAT-TM data. An agrometeorological plant-production model is used for yield prediction. Based solely on four LANDSAT-derived estimates and daily meteorological data, the grain yield of corn stands was determined for 1995. The modeled yield was compared with results independently gathered within the special yield estimation for 23 test fields in the Upper Rhine Valley. The agreement between the LANDSAT-based estimates and the special yield estimation shows a relative error of 2.3 percent. The comparison of results for single fields shows that, six weeks before harvest, the grain yield of single corn fields was estimated with a mean relative accuracy of 13 percent using satellite information. The presented methodology can be transferred to other crops and geographical regions. For future applications, hyperspectral sensors show great potential to further enhance the results of yield prediction with remote sensing.
Murrell, Ebony G.; Juliano, Steven A.
2012-01-01
Resource competition theory predicts that R*, the equilibrium resource amount yielding zero growth of a consumer population, should predict species' competitive abilities for that resource. This concept has been supported for unicellular organisms, but has not been well-tested for metazoans, probably due to the difficulty of raising experimental populations to equilibrium and measuring population growth rates for species with long or complex life cycles. We developed an index (Rindex) of R* based on demography of one insect cohort, growing from egg to adult in a non-equilibrium setting, and tested whether Rindex yielded accurate predictions of competitive abilities using mosquitoes as a model system. We estimated finite rate of increase (λ′) from demographic data for cohorts of three mosquito species raised with different detritus amounts, and estimated each species' Rindex using nonlinear regressions of λ′ vs. initial detritus amount. All three species' Rindex differed significantly, and accurately predicted competitive hierarchy of the species determined in simultaneous pairwise competition experiments. Our Rindex could provide estimates and rigorous statistical comparisons of competitive ability for organisms for which typical chemostat methods and equilibrium population conditions are impractical. PMID:22970128
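The index construction described above can be sketched in a few lines: fit λ′ as a function of initial detritus amount, then solve for the amount at which λ′ = 1 (zero population growth). The saturating Monod-type curve below is an assumed functional form for illustration only; the study's actual nonlinear regressions may use a different shape.

```python
def lambda_prime(R, lam_max, k):
    # Hypothetical saturating (Monod-type) response of a cohort's
    # finite rate of increase to initial detritus amount R.
    return lam_max * R / (k + R)

def r_index(lam_max, k, lo=1e-9, hi=1e6):
    # Rindex: the detritus amount at which lambda_prime = 1
    # (zero population growth), found by bisection on the fitted,
    # monotonically increasing curve.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if lambda_prime(mid, lam_max, k) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For this curve the root is available analytically (R* = k / (lam_max - 1)), which makes the bisection easy to check; for arbitrary fitted regressions a numerical root find like this is the general route.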
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamez-Mendoza, Liliana; Terban, Maxwell W.; Billinge, Simon J. L.
The particle size of supported catalysts is a key characteristic for determining structure–property relationships. It is a challenge to obtain this information accurately and in situ using crystallographic methods owing to the small size of such particles (<5 nm) and the fact that they are supported. In this work, the pair distribution function (PDF) technique was used to obtain the particle size distribution of supported Pt catalysts as they grow under typical synthesis conditions. The PDF of Pt nanoparticles grown on zeolite X was isolated and refined using two models: a monodisperse spherical model (single particle size) and a lognormal size distribution. The results were compared and validated using scanning transmission electron microscopy (STEM) results. Both models describe the same trends in average particle size with temperature, but the results of the number-weighted lognormal size distributions can also accurately describe the mean size and the width of the size distributions obtained from STEM. Since the PDF yields crystallite sizes, these results suggest that the grown Pt nanoparticles are monocrystalline. This work shows that refinement of the PDF of small supported monocrystalline nanoparticles can yield accurate mean particle sizes and distributions.
Iacovino, Kayla; Ju-Song, Kim; Sisson, Thomas W.; Lowenstern, Jacob B.; Ku-Hun, Ri; Jong-Nam, Jang; Kun-Ho, Song; Song-Hwan, Ham; Oppenheimer, Clive; Hammond, James O. S.; Donovan, Amy; Weber-Liu, Kosima; Ryu, Kum-Ran
2016-01-01
Paektu volcano (Changbaishan) is a rhyolitic caldera that straddles the border between the Democratic People's Republic of Korea (DPRK) and China. Its most recent large eruption was the Millennium Eruption (ME; 23 km3 DRE) circa 946 CE, which resulted in the release of copious magmatic volatiles (H2O, CO2, sulfur, and halogens). Accurate quantification of volatile yield and composition is critical in assessing volcanogenic climate impacts but is elusive, particularly for pre-historic or unmonitored eruptions. Here we employ a geochemical technique to quantify volatile composition and yield from the ME by examining trends in incompatible trace and volatile element concentrations in crystal-hosted melt inclusions. We estimate a maximum of 45 Tg S was injected into the stratosphere during the ME. If true yields are close to this maximum, this equates to more than 1.5 times the S released during the 1815 eruption of Tambora, which contributed to the "Year Without a Summer". Our maximum gas yield estimates place the ME among the strongest emitters of climate forcing gases in recorded human history in stark contrast to ice core records that indicate minimal atmospheric sulfate loading after the eruption. We conclude that the potential lack of strong climate forcing occurred in spite of the substantial S yield and suggest that other factors predominated in minimizing climatic effects. This paradoxical case in which high S emissions do not result in substantial climate forcing may present a way forward in building more generalized models for predicting which volcanic eruptions will produce large climate impacts.
Wray, Tyler B; Kahler, Christopher W; Monti, Peter M
2016-10-01
Men who have sex with men (MSM) continue to represent the largest share of new HIV infections in the United States each year due to high infectivity associated with unprotected anal sex. Ecological momentary assessment (EMA) has the potential to provide a unique view of how high-risk sexual events occur in the real world and can impart detailed information about aspects of decision-making, antecedents, and consequences that accompany these events. EMA may also produce more accurate data on sexual behavior by assessing it soon after its occurrence. We conducted a study involving 12 high-risk MSM to explore the acceptability and feasibility of a 30-day, intensive EMA procedure. Results suggest this intensive assessment strategy was both acceptable and feasible to participants. All participants provided response rates to various assessments that approached or were in excess of their targets: 81.0% of experience sampling assessments and 93.1% of daily diary assessments were completed. However, comparing EMA reports with a Timeline Followback (TLFB) of the same 30-day period suggested that participants reported fewer sexual risk events on the TLFB compared to EMA, and reported a number of discrepancies about specific behaviors and partner characteristics across the two methods. Overall, results support the acceptability, feasibility, and utility of using EMA to understand sexual risk events among high-risk MSM. Findings also suggest that EMA and other intensive longitudinal assessment approaches could yield more accurate data about sex events.
Mixing problems in using indicators for measuring regional blood flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ushioda, E.; Nuwayhid, B.; Tabsh, K.
A basic requirement for using indicators for measuring blood flow is adequate mixing of the indicator with blood prior to sampling the site. This requirement has been met by depositing the indicator in the heart and sampling from an artery. Recently, authors have injected microspheres into veins and sampled from venous sites. The present studies were designed to investigate the mixing problems in sheep and rabbits by means of Cardio-Green and labeled microspheres. The indicators were injected at different points in the circulatory system, and blood was sampled at different levels of the venous and arterial systems. Results show the following: (a) When an indicator of small molecular size (Cardio-Green) is allowed to pass through the heart chambers, adequate mixing is achieved, yielding accurate and reproducible results. (b) When any indicator (Cardio-Green or microspheres) is injected into veins, and sampling is done at any point in the venous system, mixing is inadequate, yielding flow results which are inconsistent and erratic. (c) For an indicator of large molecular size (microspheres), injecting into the left side of the heart and sampling from arterial sites yields accurate and reproducible results regardless of whether blood is sampled continuously or intermittently.
Effects of sampling close relatives on some elementary population genetics analyses.
Wang, Jinliang
2018-01-01
Many molecular ecology analyses assume the genotyped individuals are sampled at random from a population and thus are representative of the population. Realistically, however, a sample may contain excessive close relatives (ECR) because, for example, localized juveniles are drawn from fecund species. Our knowledge is limited about how ECR affect the routinely conducted elementary genetics analyses, and how ECR are best dealt with to yield unbiased and accurate parameter estimates. This study quantifies the effects of ECR on some popular population genetics analyses of marker data, including the estimation of allele frequencies, F-statistics, expected heterozygosity (H_e), effective and observed numbers of alleles, and the tests of Hardy-Weinberg equilibrium (HWE) and linkage equilibrium (LE). It also investigates several strategies for handling ECR to mitigate their impact and to yield accurate parameter estimates. My analytical work, assisted by simulations, shows that ECR have large and global effects on all of the above marker analyses. The naïve approach of simply ignoring ECR could yield low-precision and often biased parameter estimates, and could cause too many false rejections of HWE and LE. The bold approach, which simply identifies and removes ECR, and the cautious approach, which estimates target parameters (e.g., H_e) by accounting for ECR and using naïve allele frequency estimates, eliminate the bias and the false HWE and LE rejections, but could reduce estimation precision substantially. The likelihood approach, which accounts for ECR in estimating allele frequencies and thus target parameters relying on allele frequencies, usually yields unbiased and the most accurate parameter estimates. Which of the four approaches is the most effective and efficient may depend on the particular marker analysis to be conducted. The results are discussed in the context of using marker data for understanding population properties and marker properties.
© 2017 John Wiley & Sons Ltd.
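For reference, the naive H_e estimator that the analysis above shows to be biased under excessive close relatives takes only a few lines. The n/(n-1) small-sample correction used here is a standard (Nei-style) choice assumed for illustration, not necessarily the exact estimator studied in the paper.

```python
def expected_heterozygosity(allele_counts):
    # Naive H_e estimator at one locus: 1 - sum(p_i^2) over observed
    # allele frequencies, with the standard small-sample correction
    # n/(n-1), where n is the number of gene copies sampled.
    n = sum(allele_counts)
    homozygosity = sum((c / n) ** 2 for c in allele_counts)
    return (n / (n - 1)) * (1.0 - homozygosity)
```

Under ECR the allele frequencies feeding this formula are themselves biased, which is why the paper's likelihood approach re-estimates frequencies while accounting for relatedness before computing H_e.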
NASA Astrophysics Data System (ADS)
Muñoz-Esparza, Domingo; Kosović, Branko; Jiménez, Pedro A.; Coen, Janice L.
2018-04-01
The level-set method is typically used to track and propagate the fire perimeter in wildland fire models. Herein, a high-order level-set method using a fifth-order WENO scheme for the discretization of spatial derivatives and third-order explicit Runge-Kutta temporal integration is implemented within the Weather Research and Forecasting model wildland fire physics package, WRF-Fire. The algorithm includes solution of an additional partial differential equation for level-set reinitialization. The accuracy of the fire-front shape and rate of spread in uncoupled simulations is systematically analyzed. It is demonstrated that the common implementation used by level-set-based wildfire models yields rate-of-spread errors in the range 10-35% for typical grid sizes (Δ = 12.5-100 m) and considerably underestimates fire area. Moreover, the amplitude of fire-front gradients in the presence of explicitly resolved turbulence features is systematically underestimated. In contrast, the new WRF-Fire algorithm results in rate-of-spread errors that are lower than 1% and that become nearly grid independent. Also, the underestimation of fire area at the sharp transition between the fire front and the lateral flanks is found to be reduced by a factor of ≈7. A hybrid-order level-set method with locally reduced artificial viscosity is proposed, which substantially alleviates the computational cost associated with high-order discretizations while preserving accuracy. Simulations of the Last Chance wildfire demonstrate additional benefits of high-order accurate level-set algorithms when dealing with complex fuel heterogeneities, enabling propagation across narrow fuel gaps and more accurate fire backing over the lee side of no-fuel clusters.
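The third-order explicit Runge-Kutta integration used in such level-set solvers is typically the strong-stability-preserving (Shu-Osher) scheme. A minimal sketch of one time step for a generic semi-discrete right-hand-side operator L follows; the WENO5 spatial discretization itself is omitted, and the decay-equation demo is just an assumed test problem.

```python
def ssp_rk3_step(u, dt, L):
    # One third-order strong-stability-preserving Runge-Kutta
    # (Shu-Osher) step: three convex-combination Euler stages.
    u1 = [ui + dt * li for ui, li in zip(u, L(u))]
    u2 = [0.75 * ui + 0.25 * (v + dt * lv)
          for ui, v, lv in zip(u, u1, L(u1))]
    return [ui / 3.0 + (2.0 / 3.0) * (v + dt * lv)
            for ui, v, lv in zip(u, u2, L(u2))]

# Demo on du/dt = -u from u = 1.0 with dt = 0.1;
# the result is close to exp(-0.1) ≈ 0.90484.
res = ssp_rk3_step([1.0], 0.1, lambda u: [-x for x in u])
```

In a level-set code, L(u) would be the (negated) WENO-reconstructed Hamiltonian of the signed-distance field rather than this scalar decay term.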
A translatable predictor of human radiation exposure.
Lucas, Joseph; Dressman, Holly K; Suchindran, Sunil; Nakamura, Mai; Chao, Nelson J; Himburg, Heather; Minor, Kerry; Phillips, Gary; Ross, Joel; Abedi, Majid; Terbrueggen, Robert; Chute, John P
2014-01-01
Terrorism using radiological dirty bombs or improvised nuclear devices is recognized as a major threat to both public health and national security. In the event of a radiological or nuclear disaster, rapid and accurate biodosimetry of thousands of potentially affected individuals will be essential for effective medical management to occur. Currently, health care providers lack an accurate, high-throughput biodosimetric assay which is suitable for the triage of large numbers of radiation injury victims. Here, we describe the development of a biodosimetric assay based on the analysis of irradiated mice, ex vivo-irradiated human peripheral blood (PB) and humans treated with total body irradiation (TBI). Interestingly, a gene expression profile developed via analysis of murine PB radiation response alone was inaccurate in predicting human radiation injury. In contrast, generation of a gene expression profile which incorporated data from ex vivo irradiated human PB and human TBI patients yielded an 18-gene radiation classifier which was highly accurate at predicting human radiation status and discriminating medically relevant radiation dose levels in human samples. Although the patient population was relatively small, the accuracy of this classifier in discriminating radiation dose levels in human TBI patients was not substantially confounded by gender, diagnosis or prior exposure to chemotherapy. We have further incorporated genes from this human radiation signature into a rapid and high-throughput chemical ligation-dependent probe amplification assay (CLPA) which was able to discriminate radiation dose levels in a pilot study of ex vivo irradiated human blood and samples from human TBI patients. Our results illustrate the potential for translation of a human genetic signature for the diagnosis of human radiation exposure and suggest the basis for further testing of CLPA as a candidate biodosimetric assay.
NASA Astrophysics Data System (ADS)
Karton, Amir; Martin, Jan M. L.
2012-10-01
Accurate isomerization energies are obtained for a set of 45 C8H8 isomers by means of the high-level, ab initio W1-F12 thermochemical protocol. The 45 isomers involve a range of hydrocarbon functional groups, including (linear and cyclic) polyacetylene, polyyne, and cumulene moieties, as well as aromatic, anti-aromatic, and highly-strained rings. Performance of a variety of DFT functionals for the isomerization energies is evaluated. This proves to be a challenging test: only six of the 56 tested functionals attain root mean square deviations (RMSDs) below 3 kcal mol-1 (the performance of MP2), namely: 2.9 (B972-D), 2.8 (PW6B95), 2.7 (B3PW91-D), 2.2 (PWPB95-D3), 2.1 (ωB97X-D), and 1.2 (DSD-PBEP86) kcal mol-1. Isomers involving highly-strained fused rings or long cumulenic chains provide a 'torture test' for most functionals. Finally, we evaluate the performance of composite procedures (e.g. G4, G4(MP2), CBS-QB3, and CBS-APNO), as well as that of standard ab initio procedures (e.g. MP2, SCS-MP2, MP4, CCSD, and SCS-CCSD). Both connected triples and post-MP4 singles and doubles are important for accurate results. SCS-MP2 actually outperforms MP4(SDQ) for this problem, while SCS-MP3 yields similar performance as CCSD and slightly bests MP4. All the tested empirical composite procedures show excellent performance with RMSDs below 1 kcal mol-1.
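The headline statistic in this benchmark is simply the root mean square deviation over the 45 isomerization energies. As a trivial sketch (with invented per-isomer error values, not data from the paper):

```python
import math

def rmsd(errors):
    # Root mean square deviation of functional-minus-reference
    # isomerization energies (kcal/mol).
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical signed errors for one functional across five isomers:
errors = [1.2, -0.8, 2.5, -3.1, 0.4]
score = rmsd(errors)
```

A functional "attains an RMSD below 3 kcal/mol" in the abstract's sense when this score, computed over all 45 isomers against the W1-F12 reference, falls under that threshold.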
Charton, C; Guinard-Flament, J; Lefebvre, R; Barbey, S; Gallard, Y; Boichard, D; Larroque, H
2018-03-01
Despite its potential utility for predicting cows' milk yield responses to once-daily milking (ODM), the genetic basis of cow milk trait responses to ODM has been scarcely if ever described in the literature, especially for short ODM periods. This study set out to (1) estimate the genetic determinism of milk yield and composition during a 3-wk ODM period, (2) estimate the genetic determinism of milk yield responses (i.e., milk yield loss upon switching cows to ODM and milk yield recovery upon switching them back to twice-daily milking; TDM), and (3) seek predictors of milk yield responses to ODM, in particular using the first day of ODM. Our trial used 430 crossbred Holstein × Normande cows and comprised 3 successive periods: 1 wk of TDM (control), 3 wk of ODM, and 2 wk of TDM. Implementing ODM for 3 wk reduced milk yield by 27.5% on average, and after resuming TDM cows recovered on average 57% of the milk lost. Heritability estimates in the TDM control period and 3-wk ODM period were, respectively, 0.41 and 0.35 for milk yield, 0.66 and 0.61 for milk fat content, 0.60 and 0.80 for milk protein content, 0.66 and 0.36 for milk lactose content, and 0.20 and 0.15 for milk somatic cell score content. Milk yield and composition during 3-wk ODM and TDM periods were genetically close (within-trait genetic correlations between experimental periods all exceeding 0.80) but were genetically closer within the same milking frequency. Heritabilities of milk yield loss observed upon switching cows to ODM (0.39 and 0.34 for milk yield loss in kg/d and %, respectively) were moderate and similar to milk yield heritabilities. Milk yield recovery (kg/d) upon resuming TDM was a trait of high heritability (0.63). Because they are easy to measure, TDM milk yield and composition and milk yield responses on the first day of ODM were investigated as predictors of milk yield responses to a 3-wk ODM to easily detect animals that are well adapted to ODM. 
Twice-daily milking milk yield and composition were found to be partly genetically correlated with milk yield responses but not closely enough for practical application. With genetic correlations of 0.98 and 0.96 with 3-wk ODM milk yield losses (in kg/d and %, respectively), milk yield losses on the first day of ODM proved to be more accurate in predicting milk yield responses on longer term ODM than TDM milk yield. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Rapid induction of false memory for pictures.
Weinstein, Yana; Shanks, David R
2010-07-01
Recognition of pictures is typically extremely accurate, and it is thus unclear whether the reconstructive nature of memory can yield substantial false recognition of highly individuated stimuli. A procedure for the rapid induction of false memories for distinctive colour photographs is proposed. Participants studied a set of object pictures followed by a list of words naming those objects, but embedded in the list were names of unseen objects. When subsequently shown full colour pictures of these unseen objects, participants consistently claimed that they had seen them, while discriminating with high accuracy between studied pictures and new pictures whose names did not appear in the misleading word list. These false memories can be reported with high confidence as well as the feeling of recollection. This new procedure allows the investigation of factors that influence false memory reports with ecologically valid stimuli and of the similarities and differences between true and false memories.
NASA Astrophysics Data System (ADS)
Amari, H.; Lari, L.; Zhang, H. Y.; Geelhaar, L.; Chèze, C.; Kappers, M. J.; McAleese, C.; Humphreys, C. J.; Walther, T.
2011-11-01
Since the band structure of group III-nitrides presents a direct electronic transition with a band-gap energy covering the range from 3.4 eV (for GaN) to 6.2 eV (for AlN) at room temperature, as well as a high thermal conductivity, aluminium gallium nitride (AlGaN) is a strong candidate for high-power and high-temperature electronic devices and short-wavelength (visible and ultraviolet) optoelectronic devices. We report here a study by energy-filtered transmission electron microscopy (EFTEM) and energy-dispersive X-ray spectroscopy (EDXS) of the microstructure and elemental distribution in different aluminium gallium nitride epitaxial layers grown by different research groups. A calibration procedure is outlined that yields the Al content from EDXS to within ~1 at.% precision.
(abstract) Line Mixing Behavior of Hydrogen-Broadened Ammonia Under Jovian Atmospheric Conditions
NASA Technical Reports Server (NTRS)
Spilker, Thomas R.
1994-01-01
Laboratory spectral data reported last year have been used to investigate the line mixing behavior of hydrogen-broadened ammonia inversion lines. The data show that broadening parameters appearing in the modified Ben-Reuven opacity formalism of Berge and Gulkis (1976) cannot maintain constant values over pressure ranges that include low to moderate pressures and high pressures. Also, they cannot change drastically in value, as in the Spilker (1990) revision of the Berge and Gulkis formalism. It has long been recognized that at low pressures, less than about 1 bar of a Jovian atmospheric mixture, a VVW formalism yields more accurate predictions of ammonia opacity than Ben-Reuven formalisms. At higher pressures the Ben-Reuven formalisms are more accurate. Since the Ben-Reuven lineshape collapses to a VVW lineshape in the low pressure limit, this low pressure inaccuracy of the Ben-Reuven formalisms is surprising. By incorporating pressure-dependent behavior of the broadening parameters, a new formalism is produced that is more accurate than previous formalisms, particularly in the critical 'transition region' from 0.5 to 2 bars, and that can be used without discontinuity from pressures of zero to hundreds of bars. The new formalism will be useful in such applications as interpretation of radio astronomical and radio occultation data on giant planet atmospheres, and radiative transfer modeling of those atmospheres.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lei, Huan; Yang, Xiu; Zheng, Bin
Biomolecules exhibit conformational fluctuations near equilibrium states, inducing uncertainty in various biological properties in a dynamic way. We have developed a general method to quantify the uncertainty of target properties induced by conformational fluctuations. Using a generalized polynomial chaos (gPC) expansion, we construct a surrogate model of the target property with respect to varying conformational states. We also propose a method to increase the sparsity of the gPC expansion by defining a set of conformational “active space” random variables. With the increased sparsity, we employ the compressive sensing method to accurately construct the surrogate model. We demonstrate the performance of the surrogate model by evaluating fluctuation-induced uncertainty in solvent-accessible surface area for the bovine trypsin inhibitor protein system and show that the new approach offers more accurate statistical information than standard Monte Carlo approaches. Furthermore, the constructed surrogate model also enables us to directly evaluate the target property under various conformational states, yielding a more accurate response surface than standard sparse grid collocation methods. In particular, the new method provides higher accuracy in high-dimensional systems, such as biomolecules, where sparse grid performance is limited by the accuracy of the computed quantity of interest. Finally, our new framework is generalizable and can be used to investigate the uncertainty of a wide variety of target properties in biomolecular systems.
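As a generic illustration of the sparse-recovery step described above (not the authors' code; the basis, problem sizes, and regularization weight are invented for this sketch), an L1-regularized least-squares fit can recover a sparse surrogate-model coefficient vector from fewer samples than basis functions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: 40 samples of a scalar property, 120 candidate basis
# functions, only 5 of which carry signal (the "increased sparsity").
n_samples, n_basis, n_active = 40, 120, 5
Psi = rng.standard_normal((n_samples, n_basis))   # basis evaluated at samples
c_true = np.zeros(n_basis)
c_true[rng.choice(n_basis, n_active, replace=False)] = rng.standard_normal(n_active)
y = Psi @ c_true                                  # property values at samples

def ista_lasso(A, b, lam=0.05, n_iter=5000):
    """Minimize 0.5*||A c - b||^2 + lam*||c||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of gradient
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ c - b)
        z = c - grad / L
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return c

c_hat = ista_lasso(Psi, y)
rel_err = np.linalg.norm(Psi @ c_hat - y) / np.linalg.norm(y)
print(f"relative residual: {rel_err:.2e}")
```

In the paper's setting the columns of `Psi` would be gPC basis polynomials evaluated at sampled conformational states; here they are random, which preserves the incoherence that compressive-sensing recovery relies on.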
Implicit Space-Time Conservation Element and Solution Element Schemes
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; Himansu, Ananda; Wang, Xiao-Yen
1999-01-01
Artificial numerical dissipation is an important issue in large Reynolds number computations. In such computations, the artificial dissipation inherent in traditional numerical schemes can overwhelm the physical dissipation and yield inaccurate results on meshes of practical size. In the present work, the space-time conservation element and solution element method is used to construct new and accurate implicit numerical schemes such that artificial numerical dissipation will not overwhelm physical dissipation. Specifically, these schemes have the property that numerical dissipation vanishes when the physical viscosity goes to zero. These new schemes therefore accurately model the physical dissipation even when it is extremely small. The new schemes presented are two highly accurate implicit solvers for a convection-diffusion equation. The two schemes become identical in the pure convection case and in the pure diffusion case. The implicit schemes are applicable over the whole Reynolds number range, from purely diffusive equations to convection-dominated equations with very small viscosity. The stability and consistency of the schemes are analysed, and some numerical results are presented. It is shown that, in the inviscid case, the new schemes become explicit and their amplification factors are identical to those of the Leapfrog scheme. On the other hand, in the pure diffusion case, their principal amplification factor becomes the amplification factor of the Crank-Nicolson scheme.
Terahertz spectroscopy of the 15NH2 amidogen radical
NASA Astrophysics Data System (ADS)
Margulès, L.; Martin-Drumel, M. A.; Pirali, O.; Bailleux, S.; Wlodarczak, G.; Roy, P.; Roueff, E.; Gerin, M.
2016-06-01
Context. The determination of isotopic ratios in interstellar molecules is a powerful probe of chemical routes leading to their formation. In particular, the 14N/15N abundance ratio of nitrogen-bearing species provides information on possible fractionation mechanisms. Up to now there is no accurate determination of this ratio in the interstellar medium (ISM) for the amidogen radical, NH2. Aims: This work is aimed at determining rotational frequencies of 15NH2 to enable its astronomical detection, which will help to understand the formation mechanisms of nitrogen hydrides in the ISM. Methods: We performed complementary measurements using both synchrotron-based, broadband far-infrared and high-resolution, submillimeter-wave frequencies to investigate the pure rotational spectrum of the 15NH2 species. Results: The first spectroscopic study of the 15N-isotopologue of the amidogen radical yielded an accurate set of molecular parameters. Conclusions: Accurate frequencies are now available for 15NH2 up to 7 THz (with N'' ≤ 13) allowing dedicated astronomical searches to be undertaken. Full Table 2 (S1) and fitting files (S2-S4) are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/591/A110
Jiang, Yang; Zhang, Haiyang; Feng, Wei; Tan, Tianwei
2015-12-28
Metal ions play an important role in the catalysis of metalloenzymes. To investigate metalloenzymes via molecular modeling, a set of accurate force field parameters for metal ions is essential. To extend its application range and improve its performance, the dummy atom model of metal ions was refined through a simple parameter screening strategy, using the Mg(2+) ion as an example. Using the AMBER ff03 force field with the TIP3P water model, the refined model accurately reproduced the experimental geometric and thermodynamic properties of Mg(2+). Compared with point charge models and previous dummy atom models, the refined dummy atom model yields enhanced performance in producing reliable ATP/GTP-Mg(2+)-protein conformations in three metalloenzyme systems with single or double metal centers. Like other unbonded models, the refined model failed to reproduce the Mg-Mg distance and favored a monodentate binding of carboxylate groups; these drawbacks need to be considered with care. The outperformance of the refined model is mainly attributed to the use of a revised (more accurate) experimental solvation free energy and a suitable free energy correction protocol. This work provides a parameter screening strategy that can be readily applied to refine dummy atom models for other metal ions.
NASA Astrophysics Data System (ADS)
Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Suffredini, Anthony F.; Sacks, David B.; Yu, Yi-Kuo
2016-02-01
Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple `fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.
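As a minimal, generic sketch of the statistical bookkeeping behind an E-value (MiCId's actual computation is more involved; `unified_e_value` in particular is a hypothetical combination rule for illustration, not the paper's method):

```python
def e_value(p_value: float, n_candidates: int) -> float:
    """Expected number of random matches scoring at least this well
    when n_candidates candidate peptides are searched."""
    return p_value * n_candidates

def unified_e_value(peptide_e_values):
    """Hypothetical organism-level score: treat peptide-level E-values
    as independent expected counts and sum them."""
    return sum(peptide_e_values)

# A match with p = 1e-6 searched against 50,000 candidate peptides:
print(e_value(1e-6, 50_000))
```

The point of reporting an E-value rather than a raw score is exactly the prioritization problem described above: it makes matches comparable across searches of different sizes.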
Haines, Brian Michael; Grim, Gary P.; Fincke, James R.; ...
2016-07-29
Here, we present results from the comparison of high-resolution three-dimensional (3D) simulations with data from the implosions of inertial confinement fusion capsules with separated reactants performed on the OMEGA laser facility. Each capsule, referred to as a “CD Mixcap,” is filled with tritium and has a polystyrene (CH) shell with a deuterated polystyrene (CD) layer whose burial depth is varied. In these implosions, fusion reactions between deuterium and tritium ions can occur only in the presence of atomic mix between the gas fill and shell material. The simulations feature accurate models for all known experimental asymmetries and do not employ any adjustable parameters to improve agreement with experimental data. Simulations are performed with the RAGE radiation-hydrodynamics code using an Implicit Large Eddy Simulation (ILES) strategy for the hydrodynamics. We obtain good agreement with the experimental data, including the DT/TT neutron yield ratios used to diagnose mix, for all burial depths of the deuterated shell layer. Additionally, simulations demonstrate good agreement with converged simulations employing explicit models for plasma diffusion and viscosity, suggesting that the implicit sub-grid model used in ILES is sufficient to model these processes in these experiments. In our simulations, mixing is driven by short-wavelength asymmetries and longer-wavelength features are responsible for developing flows that transport mixed material towards the center of the hot spot. Mix material transported by this process is responsible for most of the mix (DT) yield even for the capsule with a CD layer adjacent to the tritium fuel. Consistent with our previous results, mix does not play a significant role in TT neutron yield degradation; instead, this is dominated by the displacement of fuel from the center of the implosion due to the development of turbulent instabilities seeded by long-wavelength asymmetries.
Through these processes, the long-wavelength asymmetries degrade TT yield more than the DT yield and thus bring DT/TT neutron yield ratios into agreement with experiment. Finally, we present a detailed comparison of the flows in 2D and 3D simulations.
NASA Astrophysics Data System (ADS)
Haines, Brian M.; Grim, Gary P.; Fincke, James R.; Shah, Rahul C.; Forrest, Chad J.; Silverstein, Kevin; Marshall, Frederic J.; Boswell, Melissa; Fowler, Malcolm M.; Gore, Robert A.; Hayes-Sterbenz, Anna C.; Jungman, Gerard; Klein, Andreas; Rundberg, Robert S.; Steinkamp, Michael J.; Wilhelmy, Jerry B.
2016-07-01
We present results from the comparison of high-resolution three-dimensional (3D) simulations with data from the implosions of inertial confinement fusion capsules with separated reactants performed on the OMEGA laser facility. Each capsule, referred to as a "CD Mixcap," is filled with tritium and has a polystyrene (CH) shell with a deuterated polystyrene (CD) layer whose burial depth is varied. In these implosions, fusion reactions between deuterium and tritium ions can occur only in the presence of atomic mix between the gas fill and shell material. The simulations feature accurate models for all known experimental asymmetries and do not employ any adjustable parameters to improve agreement with experimental data. Simulations are performed with the RAGE radiation-hydrodynamics code using an Implicit Large Eddy Simulation (ILES) strategy for the hydrodynamics. We obtain good agreement with the experimental data, including the DT/TT neutron yield ratios used to diagnose mix, for all burial depths of the deuterated shell layer. Additionally, simulations demonstrate good agreement with converged simulations employing explicit models for plasma diffusion and viscosity, suggesting that the implicit sub-grid model used in ILES is sufficient to model these processes in these experiments. In our simulations, mixing is driven by short-wavelength asymmetries and longer-wavelength features are responsible for developing flows that transport mixed material towards the center of the hot spot. Mix material transported by this process is responsible for most of the mix (DT) yield even for the capsule with a CD layer adjacent to the tritium fuel. Consistent with our previous results, mix does not play a significant role in TT neutron yield degradation; instead, this is dominated by the displacement of fuel from the center of the implosion due to the development of turbulent instabilities seeded by long-wavelength asymmetries. 
Through these processes, the long-wavelength asymmetries degrade TT yield more than the DT yield and thus bring DT/TT neutron yield ratios into agreement with experiment. Finally, we present a detailed comparison of the flows in 2D and 3D simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen Dianyong; He Jun; Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000
In this work, we discuss the contribution of the mesonic loops to the decay rates of χc1 → φφ, ωω, which are suppressed by the helicity selection rules, and χc1 → ωφ, which is a doubly Okubo-Zweig-Iizuka-forbidden process. We find that the mesonic loop effects naturally explain the clear signals of the χc1 → φφ, ωω decay modes observed by the BES Collaboration. Moreover, we investigate the effects of the ω-φ mixing, which may bring the branching ratio BR(χc1 → ωφ) to the order of 10⁻⁷. Accurate measurements of BR(χc1 → ωω), BR(χc1 → φφ), and BR(χc1 → ωφ) would therefore be very helpful for testing the long-distance contribution and the ω-φ mixing in χc1 → φφ, ωω, ωφ decays.
Sediment Transport in Streams in the Umpqua River Basin, Oregon
Onions, C. A.
1969-01-01
This report presents tables of suspended-sediment data collected from 1956 to 1967 at 10 sites in the Umpqua River basin. Computations based on these data indicate that average annual suspended-sediment yields at these sites range from 137 to 822 tons per square mile. Because available data for the Umpqua River basin are generally inadequate for accurate determinations of sediment yield and for the definition of characteristics of fluvial sediments, recommendations are made for the collection and analysis of additional sediment data.
NASA Astrophysics Data System (ADS)
Poškus, A.
2016-09-01
This paper evaluates the accuracy of the single-event (SE) and condensed-history (CH) models of electron transport in MCNP6.1 when simulating characteristic Kα, total K (=Kα + Kβ) and Lα X-ray emission from thick targets bombarded by electrons with energies from 5 keV to 30 keV. It is shown that the MCNP6.1 implementation of the CH model for the K-shell impact ionization leads to underestimation of the K yield by 40% or more for the elements with atomic numbers Z < 15 and overestimation of the Kα yield by more than 40% for the elements with Z > 25. The Lα yields are underestimated by more than an order of magnitude in CH mode, because MCNP6.1 neglects X-ray emission caused by electron-impact ionization of L, M and higher shells in CH mode (the Lα yields calculated in CH mode reflect only X-ray fluorescence, which is mainly caused by photoelectric absorption of bremsstrahlung photons). The X-ray yields calculated by MCNP6.1 in SE mode (using ENDF/B-VII.1 library data) are more accurate: the differences of the calculated and experimental K yields are within the experimental uncertainties for the elements C, Al and Si, and the calculated Kα yields are typically underestimated by (20-30)% for the elements with Z > 25, whereas the Lα yields are underestimated by (60-70)% for the elements with Z > 49. It is also shown that agreement of the experimental X-ray yields with those calculated in SE mode is additionally improved by replacing the ENDF/B inner-shell electron-impact ionization cross sections with the set of cross sections obtained from the distorted-wave Born approximation (DWBA), which are also used in the PENELOPE code system. The latter replacement decreases the average relative difference of the experimental X-ray yields and the simulation results obtained in SE mode to approximately 10%, which is similar to the accuracy achieved with PENELOPE.
This confirms that the DWBA inner-shell impact ionization cross sections are significantly more accurate than the corresponding ENDF/B cross sections when the energy of the incident electrons is of the order of the binding energy.
Apportioning riverine DIN load to export coefficients of land uses in an urbanized watershed.
Shih, Yu-Ting; Lee, Tsung-Yu; Huang, Jr-Chuan; Kao, Shuh-Ji; Chang
2016-08-01
The apportionment of riverine dissolved inorganic nitrogen (DIN) load to individual land uses on a watershed scale demands accurate DIN load estimation and differentiation of point and non-point sources, but both are rarely quantitatively determined in small montane watersheds. We used the Danshui River watershed of Taiwan, a mountainous urbanized watershed, to determine the export coefficients from riverine DIN load via a reverse Monte Carlo approach. The results showed that the dynamics of N fluctuation determine the load estimation method and sampling frequency. On a monthly sampling frequency basis, the average load estimate of the methods (GM, FW, and LI) outperformed that of any individual method. Export coefficient analysis showed that the forest DIN yield of 521.5 kg-N km⁻² yr⁻¹ was ~2.7-fold higher than the global riverine DIN yield (derived mainly from temperate large rivers with various land use compositions). Such a high yield was attributable to high rainfall and atmospheric N deposition. The export coefficient of agriculture was disproportionately larger than that of forest, suggesting that a small replacement of forest by agriculture could lead to a considerable change in DIN load. The differentiation between point and non-point sources showed that the untreated wastewater (a non-point source), accounting for ~93% of the total human-associated wastewater, resulted in a high export coefficient for urban land. The inclusion of the treated and untreated wastewater completes the N budget of wastewater. The export coefficient approach serves well to assess the riverine DIN load and to improve the understanding of the N cascade. Copyright © 2016 Elsevier B.V. All rights reserved.
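A reverse (accept/reject) Monte Carlo apportionment of a known riverine load into per-land-use export coefficients can be sketched as follows; the areas, prior ranges, and tolerance are invented for illustration and are not the Danshui figures:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical land-use areas (km^2) and a synthetic observed DIN load (kg-N/yr).
areas = {"forest": 800.0, "agriculture": 120.0, "urban": 80.0}
observed_load = 800.0 * 520.0 + 120.0 * 3000.0 + 80.0 * 9000.0

# Prior ranges for export coefficients (kg-N km^-2 yr^-1), purely illustrative.
priors = {"forest": (100.0, 1000.0),
          "agriculture": (500.0, 6000.0),
          "urban": (1000.0, 20000.0)}

accepted = []
for _ in range(100_000):
    coeffs = {k: rng.uniform(*priors[k]) for k in areas}
    load = sum(areas[k] * coeffs[k] for k in areas)
    if abs(load - observed_load) <= 0.01 * observed_load:  # 1% tolerance
        accepted.append(coeffs)

# Posterior summary: median export coefficient per land use among accepted sets.
for k in areas:
    vals = sorted(c[k] for c in accepted)
    print(k, round(vals[len(vals) // 2]))
```

The accepted ensembles, rather than a single best fit, are what make the approach useful here: they expose how tightly (or loosely) the riverine load constrains each land use's coefficient.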
The Bolivian "Altiplano" and "Valle" sheep are two different peripatric breeds.
Parés-Casanova, Pere M; Pérezgrovas Garza, Raúl
2014-06-01
Forty-nine sheep from the Andean Altiplano region ("Altiplano") and 30 from the lowland regions of Bolivia ("Valle"), aged 1 to 4 years, were wool-sampled to determine the extent of difference between these local breeds. Fibre length, the percentage of each type of fibre (long-thick, short-thin and kemp), yield and fibre diameter were measured. There was a highly significant difference between the two sheep populations, which were not clearly separated in the first two principal components of a principal components analysis (PCA); the first PC explained 67.1 % and the second PC explained 26.6 % of the total variation. The variables that contributed most to the separation of the sheep populations were the percentage of long-thick and short-thin fibres in the first PC and yield in the second PC. A discriminant analysis, used to classify individuals with respect to their breeding, achieved an accurate classification rate of 84.2 %. Thus, the Altiplano and Valle sheep must be viewed as two closely related peripatric breeds rather than different "ecotypes", as more than 80 % could be correctly assigned to one of the breeds; however, the differences are based on the composition of long-thick and short-thin fibres and on yield after alcohol scouring.
On the Yield Strength of Oceanic Lithosphere
NASA Astrophysics Data System (ADS)
Jain, C.; Korenaga, J.; Karato, S. I.
2017-12-01
The origin of plate tectonic convection on Earth is intrinsically linked to the reduction in the strength of oceanic lithosphere at plate boundaries. A few mechanisms, such as deep thermal cracking [Korenaga, 2007] and strain localization due to grain-size reduction [e.g., Ricard and Bercovici, 2009], have been proposed to explain this reduction in lithospheric strength, but the significance of these mechanisms can be assessed only if we have accurate estimates on the strength of the undamaged oceanic lithosphere. The Peierls mechanism is likely to govern the rheology of old oceanic lithosphere [Kohlstedt et al., 1995], but the flow-law parameters for the Peierls mechanism suggested by previous studies do not agree with each other. We thus reanalyze the relevant experimental deformation data of olivine aggregates using Markov chain Monte Carlo inversion, which can handle the highly nonlinear constitutive equation of the Peierls mechanism [Korenaga and Karato, 2008; Mullet et al., 2015]. Our inversion results indicate nontrivial nonuniqueness in every flow-law parameter for the Peierls mechanism. Moreover, the resultant flow laws, all of which are consistent with the same experimental data, predict substantially different yield stresses under lithospheric conditions and could therefore have different implications for the origin of plate tectonics. We discuss some future directions to improve our constraints on lithospheric yield strength.
Application of activated barrier hopping theory to viscoplastic modeling of glassy polymers
NASA Astrophysics Data System (ADS)
Sweeney, J.; Spencer, P. E.; Vgenopoulos, D.; Babenko, M.; Boutenel, F.; Caton-Rose, P.; Coates, P. D.
2018-05-01
An established statistical mechanical theory of amorphous polymer deformation has been incorporated as a plastic mechanism into a constitutive model and applied to a range of polymer mechanical deformations. The temperature and rate dependence of the tensile yield of PVC, as reported in early studies, has been modeled to high levels of accuracy. Tensile experiments on PET reported here are analyzed similarly and good accuracy is also achieved. The frequently observed increase in the gradient of the plot of yield stress against logarithm of strain rate is an inherent feature of the constitutive model. The form of temperature dependence of the yield that is predicted by the model is found to give an accurate representation. The constitutive model is developed in two-dimensional form and implemented as a user-defined subroutine in the finite element package ABAQUS. This analysis is applied to the tensile experiments on PET, in some of which strain is localized in the form of shear bands and necks. These deformations are modeled with partial success, though adiabatic heating of the instability causes inaccuracies for this isothermal implementation of the model. The plastic mechanism has advantages over the Eyring process, is equally tractable, and presents no particular difficulties in implementation with finite elements.
Effects of alternative cropping systems on globe artichoke qualitative traits.
Spanu, Emanuela; Deligios, Paola A; Azara, Emanuela; Delogu, Giovanna; Ledda, Luigi
2018-02-01
Traditionally, globe artichoke cultivation in the Mediterranean basin is based on monoculture and on the use of high amounts of nitrogen fertiliser. This raises issues regarding its compatibility with sustainable agriculture. We studied the effect of one typical conventional (CONV) and two alternative cropping systems [globe artichoke in sequence with French bean (NCV1), or in biannual rotation (NCV2) with cauliflower and with a leguminous cover crop in inter-row spaces] on the yield, polyphenol and mineral content of globe artichoke heads over two consecutive growing seasons. NCV2 showed statistical differences in fresh product yield with respect to the monoculture systems. In addition, the dihydroxycinnamic acid and dicaffeoylquinic acid contents of the non-conventional samples were significantly higher (by about one-fold) than those of the conventional one. All the samples showed good mineral content, although NCV2 achieved a higher Fe content than CONV throughout the two seasons. At the second and third sampling dates, the CONV samples showed the highest levels of K content. In our study, an acceptable commercial yield and quality of 'Spinoso sardo' were achieved by shifting the common conventional agronomic management to more sustainable practices, by means of an accurate choice of the cover crop species and rotations introduced in the systems. © 2017 Society of Chemical Industry.
Application of activated barrier hopping theory to viscoplastic modeling of glassy polymers
NASA Astrophysics Data System (ADS)
Sweeney, J.; Spencer, P. E.; Vgenopoulos, D.; Babenko, M.; Boutenel, F.; Caton-Rose, P.; Coates, P. D.
2017-10-01
An established statistical mechanical theory of amorphous polymer deformation has been incorporated as a plastic mechanism into a constitutive model and applied to a range of polymer mechanical deformations. The temperature and rate dependence of the tensile yield of PVC, as reported in early studies, has been modeled to high levels of accuracy. Tensile experiments on PET reported here are analyzed similarly and good accuracy is also achieved. The frequently observed increase in the gradient of the plot of yield stress against logarithm of strain rate is an inherent feature of the constitutive model. The form of temperature dependence of the yield that is predicted by the model is found to give an accurate representation. The constitutive model is developed in two-dimensional form and implemented as a user-defined subroutine in the finite element package ABAQUS. This analysis is applied to the tensile experiments on PET, in some of which strain is localized in the form of shear bands and necks. These deformations are modeled with partial success, though adiabatic heating of the instability causes inaccuracies for this isothermal implementation of the model. The plastic mechanism has advantages over the Eyring process, is equally tractable, and presents no particular difficulties in implementation with finite elements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aznauryan, I.G.; Burkert, V.D.; Egiyan, H.
2005-01-01
Using two approaches - dispersion relations and the isobar model - we have analyzed recent high-precision CLAS data on cross sections of π⁰, π⁺, and η electroproduction on protons, and the longitudinally polarized electron beam asymmetry for p(e→,e′p)π⁰ and p(e→,e′n)π⁺. The contributions of the resonances P33(1232), P11(1440), D13(1520), and S11(1535) to π electroproduction and of S11(1535) to η electroproduction are found. The results obtained using the two approaches are in good agreement. There is also good agreement between amplitudes of the γ*N→S11(1535) transition found in π and η electroproduction. For the first time, accurate results are obtained for the longitudinal amplitudes of the P11(1440), D13(1520), and S11(1535) electroexcitations on protons. A strong longitudinal response is found for the Roper resonance, which rules out presentation of this resonance as a hybrid state.
Pacheco-Ruiz, Santiago; Heaven, Sonia; Banks, Charles J
2017-05-01
Kinetic control of mean cell residence time (MCRT) was shown to have a significant impact on membrane flux under steady-state conditions. Two laboratory-scale flat-plate submerged anaerobic membrane bioreactors were operated for 245 days on a low-to-intermediate-strength substrate with high suspended solids. Transmembrane pressure was maintained at 2.2 kPa throughout four experimental phases, while the MCRT in one reactor was progressively reduced. This allowed very accurate measurement of sustainable membrane flux rates at different MCRTs, and hence of the degree of membrane fouling. Performance data were gathered on chemical oxygen demand (COD) removal efficiency, and a COD mass balance was constructed accounting for carbon converted into new biomass and carbon lost in the effluent as dissolved methane. Growth yield was measured at each MCRT, with physical characterisation of each mixed liquor based on capillary suction time. The results showed that membrane flux and MLSS filterability were highest at short MCRT, although specific methane production (SMP) was lower, since a proportion of COD removal was accounted for by higher biomass yield. There was no advantage in operating at an MCRT <25 days. When choosing the most suitable MCRT there is thus a trade-off between membrane performance, SMP and waste sludge yield.
2013-01-01
Background Accurate prediction of Helicobacter pylori infection status on endoscopic images can contribute to early detection of gastric cancer, especially in Asia. We identified the diagnostic yield of endoscopy for H. pylori infection at various endoscopist career levels and the effect of two years of training on diagnostic yield. Methods A total of 77 consecutive patients who underwent endoscopy were analyzed. H. pylori infection status was determined by histology, serology, and the urea breath test and categorized as H. pylori-uninfected, -infected, or -eradicated. Distinctive endoscopic findings were judged by six physicians at different career levels: beginner (<500 endoscopies), intermediate (1500–5000), and advanced (>5000). Diagnostic yield and inter- and intra-observer agreement on H. pylori infection status were evaluated. Values were compared for the two beginners after two years of training. The kappa (K) statistic was used to calculate agreement. Results For all physicians, the diagnostic yield was 88.9% for H. pylori-uninfected, 62.1% for H. pylori-infected, and 55.8% for H. pylori-eradicated status. Intra-observer agreement on H. pylori infection status was good (K > 0.6) for all physicians, while inter-observer agreement was lower for beginners (K = 0.46) than for intermediate and advanced physicians (K > 0.6). For all physicians, good inter-observer agreement in endoscopic findings was seen for atrophic change (K = 0.69), regular arrangement of collecting venules (K = 0.63), and hemorrhage (K = 0.62). For beginners, the diagnostic yield for H. pylori-infected/eradicated status and the inter-observer agreement in endoscopic findings improved after two years of training. Conclusions The diagnostic yield of endoscopic diagnosis was high for H. pylori-uninfected cases but low for H. pylori-eradicated cases. In beginners, daily training on endoscopic findings improved the low diagnostic yield. PMID:23947684
Water resources of Bannock Creek basin, southeastern Idaho
Spinazola, Joseph M.; Higgs, B.D.
1997-01-01
The potential for development of water resources in the Bannock Creek Basin is limited by water supply. Bannock Creek Basin covers 475 square miles in southeastern Idaho. Shoshone-Bannock tribal lands on the Fort Hall Indian Reservation occupy the northern part of the basin; the remainder of the basin is privately owned. Only a small amount of information on the hydrologic and water-quality characteristics of Bannock Creek Basin is available, and two previous estimates of water yield from the basin ranged widely from 45,000 to 132,500 acre-feet per year. The Shoshone-Bannock Tribes need an accurate determination of water yield and baseline water-quality characteristics to plan and implement a sustainable level of water use in the basin. Geologic setting, quantities of precipitation, evapotranspiration, surface-water runoff, recharge, and ground-water underflow were used to determine water yield in the basin. Water yield is the annual amount of surface and ground water available in excess of evapotranspiration by crops and native vegetation. Water yield from Bannock Creek Basin was affected by completion of irrigation projects in 1964. Average 1965-89 water yield from five subbasins in Bannock Creek Basin determined from water budgets was 60,600 acre-feet per year. Water yield from the Fort Hall Indian Reservation part of Bannock Creek Basin was estimated to be 37,700 acre-feet per year. Water from wells, springs, and streams is a calcium bicarbonate type. Concentrations of dissolved nitrite plus nitrate as nitrogen and fluoride were less than Maximum Contaminant Levels for public drinking-water supplies established by the U.S. Environmental Protection Agency. Large concentrations of chloride and nitrogen in water from several wells, springs, and streams likely are due to waste from septic tanks or stock animals. Estimated suspended-sediment load near the mouth of Bannock Creek was 13,300 tons from December 1988 through July 1989. Suspended-sediment discharge was greatest during periods of high streamflow.
Supernovae Discovery Efficiency
NASA Astrophysics Data System (ADS)
John, Colin
2018-01-01
We present supernova (SN) search efficiency measurements for recent Hubble Space Telescope (HST) surveys. Efficiency is a key component of any search and an important correction factor for SN rates. To measure efficiency accurately, many supernovae need to be discoverable in surveys. This cannot be achieved with real SN alone, due to their scarcity, so fake SN are planted. These fake supernovae, constructed with realism in mind, yield an understanding of efficiency as a function of brightness and of position relative to other celestial objects. To improve realism, we built a more accurate model of supernovae using a point-spread function. A further improvement is to plant these objects close to galaxies while varying brightness, magnitude, local galactic brightness and redshift. Once planted, a realistic SN is visible and discoverable by the searcher. It is important to identify the factors that affect discovery efficiency: exploring them yields a more accurate correction factor. Further inquiries into efficiency give us a better understanding of image processing, searching techniques and survey strategies, and result in an overall higher likelihood of finding these events in future surveys with the Hubble, James Webb, and WFIRST telescopes. Once measured and refined over many surveys, efficiency factors into measurements of SN rates versus redshift. By comparing SN rates versus redshift against the star formation rate, we can test models of how long star systems take from inception to explosion (the delay time distribution). This delay time distribution is compared to SN progenitor models to get an accurate idea of what these stars were like before their deaths.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madar, Inamul Hasan; Ko, Seung-Ik; Kim, Hokeun
Mass spectrometry (MS)-based proteomics, which uses high-resolution hybrid mass spectrometers such as the quadrupole-orbitrap mass spectrometer, can yield tens of thousands of tandem mass (MS/MS) spectra of high resolution during a routine bottom-up experiment. Despite being a fundamental and key step in MS-based proteomics, the accurate determination and assignment of precursor monoisotopic masses to the MS/MS spectra remains difficult. The difficulties stem from imperfect isotopic envelopes of precursor ions, inaccurate charge states for precursor ions, and cofragmentation. We describe a composite method of utilizing MS data to assign accurate monoisotopic masses to MS/MS spectra, including those subject to cofragmentation. The method, "multiplexed post-experiment monoisotopic mass refinement" (mPE-MMR), consists of the following: multiplexing of precursor masses to assign multiple monoisotopic masses of cofragmented peptides to the corresponding multiplexed MS/MS spectra, multiplexing of charge states to assign correct charges to the precursor ions of MS/MS spectra with no charge information, and mass correction for inaccurate monoisotopic peak picking. When combined with MS-GF+, a database search algorithm based on fragment mass difference, mPE-MMR effectively increases both sensitivity and accuracy in peptide identification from complex high-throughput proteomics data compared to conventional methods.
Analysis on the application of background parameters on remote sensing classification
NASA Astrophysics Data System (ADS)
Qiao, Y.
Mapping crop cultivation acreage accurately, monitoring crop growth dynamically and forecasting yield are important applications of remote sensing to agriculture. During the 8th 5-Year Plan period, yield estimation using remote sensing technology for the main crops in major production regions in China was a subtopic of the national research task titled "Study on Application of Remote Sensing Technology". In the 21st century, in a movement launched by the Chinese Ministry of Agriculture to bring high technology into farming production, remote sensing has been applied fully to crop growth monitoring and yield forecasting. In 2001 the Chinese Ministry of Agriculture entrusted the Northern China Center of Agricultural Remote Sensing with forecasting the yield of main crops such as wheat, maize and rice on short timescales, to supply information to government decision makers. The present paper is a report on this task. It describes the application of background parameters in image recognition, classification and mapping, with focus on geo-scientific theory, ecological features and cartographic objects and scale; the study of phenology to determine the optimal image time for classification of ground objects; the analysis of optimal waveband composition; and the application of a background database to spatial information recognition. Research based on knowledge of background parameters is indispensable for improving the accuracy of image classification and the quality of mapping, and this work won a second-class science and technology achievement award from the Chinese Ministry of Agriculture. Keywords: Spatial image; Classification; Background parameter
An Alternative to the Ionic Model
ERIC Educational Resources Information Center
Sanderson, R. T.
1975-01-01
Describes the "coordinated polymeric model," which yields more accurate energy calculations than the "ionic model" for compounds which exhibit considerable covalency. The dichotomy between ionic and covalent bonding is thus largely broken down for solids which are nonmolecular in the crystalline state. (MLH)
Precision Agriculture. Reaping the Benefits of Technological Growth. Resources in Technology.
ERIC Educational Resources Information Center
Hadley, Joel F.
1998-01-01
Technological innovations have revolutionized farming. Using precision farming techniques, farmers get an accurate picture of a field's attributes, such as soil properties, yield rates, and crop characteristics through the use of Differential Global Positioning Satellite hardware. (JOW)
A cross-correlation-based estimate of the galaxy luminosity function
NASA Astrophysics Data System (ADS)
van Daalen, Marcel P.; White, Martin
2018-06-01
We extend existing methods for using cross-correlations to derive redshift distributions for photometric galaxies, without using photometric redshifts. The model presented in this paper simultaneously yields highly accurate and unbiased redshift distributions and, for the first time, redshift-dependent luminosity functions, using only clustering information and the apparent magnitudes of the galaxies as input. In contrast to many existing techniques for recovering unbiased redshift distributions, the output of our method is not degenerate with the galaxy bias b(z), which is achieved by modelling the shape of the luminosity bias. We successfully apply our method to a mock galaxy survey and discuss improvements to be made before applying our model to real data.
Ionic transport in high-energy-density matter
Stanton, Liam G.; Murillo, Michael S.
2016-04-08
Ionic transport coefficients for dense plasmas have been numerically computed using an effective Boltzmann approach. Here, we developed a simplified effective potential approach that yields accurate fits for all of the relevant cross sections and collision integrals. These results have been validated with molecular-dynamics simulations for self-diffusion, interdiffusion, viscosity, and thermal conductivity. Molecular dynamics has also been used to examine the underlying assumptions of the Boltzmann approach through a categorization of behaviors of the velocity autocorrelation function in the Yukawa phase diagram. By using a velocity-dependent screening model, we examine the role of dynamical screening in transport. Implications of these results for Coulomb logarithm approaches are discussed.
Complex Chemical Reaction Networks from Heuristics-Aided Quantum Chemistry.
Rappoport, Dmitrij; Galvin, Cooper J; Zubarev, Dmitry Yu; Aspuru-Guzik, Alán
2014-03-11
While structures and reactivities of many small molecules can be computed efficiently and accurately using quantum chemical methods, heuristic approaches remain essential for modeling complex structures and large-scale chemical systems. Here, we present a heuristics-aided quantum chemical methodology applicable to complex chemical reaction networks such as those arising in cell metabolism and prebiotic chemistry. Chemical heuristics offer an expedient way of traversing high-dimensional reactive potential energy surfaces and are combined here with quantum chemical structure optimizations, which yield the structures and energies of the reaction intermediates and products. Application of heuristics-aided quantum chemical methodology to the formose reaction reproduces the experimentally observed reaction products, major reaction pathways, and autocatalytic cycles.
A well-scaling natural orbital theory
Gebauer, Ralph; Cohen, Morrel H.; Car, Roberto
2016-11-01
Here, we introduce an energy functional for ground-state electronic structure calculations. Its variables are the natural spin-orbitals of singlet many-body wave functions and their joint occupation probabilities deriving from controlled approximations to the two-particle density matrix that yield algebraic scaling in general, and Hartree–Fock scaling in its seniority-zero version. Results from the latter version for small molecular systems are compared with those of highly accurate quantum-chemical computations. The energies lie above full configuration interaction calculations, close to doubly occupied configuration interaction calculations. Their accuracy is considerably greater than that obtained from current density-functional theory approximations and from current functionals of the one-particle density matrix.
Sanchez, Richard D.; Hothem, Larry D.
2002-01-01
High-resolution airborne and satellite image sensor systems integrated with onboard data collection based on the Global Positioning System (GPS) and inertial navigation systems (INS) may offer a quick and cost-effective way to gather accurate topographic map information without ground control or aerial triangulation. The Applanix Corporation's Position and Orientation Solutions for Direct Georeferencing of aerial photography was used in this project to examine the positional accuracy of integrated GPS/INS for terrain mapping in Glen Canyon, Arizona. The research application in this study yielded important information on the usefulness and limits of airborne integrated GPS/INS data-capture systems for mapping.
Loomis, E N; Grim, G P; Wilde, C; Wilson, D C; Morgan, G; Wilke, M; Tregillis, I; Merrill, F; Clark, D; Finch, J; Fittinghoff, D; Bower, D
2010-10-01
Development of analysis techniques for neutron imaging at the National Ignition Facility is an important and difficult task for the detailed understanding of high-neutron-yield inertial confinement fusion implosions. Once developed, these methods must provide accurate images of the hot and cold fuels so that information about the implosion, such as symmetry and areal density, can be extracted. One method under development involves the numerical inversion of the pinhole image using knowledge of neutron transport through the pinhole aperture from Monte Carlo simulations. In this article we present results of source reconstructions based on simulated images that test the method's effectiveness with regard to pinhole misalignment.
NASA Astrophysics Data System (ADS)
Tsalamengas, John L.
2018-07-01
We study plane-wave electromagnetic scattering by radially and strongly inhomogeneous dielectric cylinders at oblique incidence. The method of analysis relies on an exact reformulation of the underlying field equations as a first-order 4 × 4 system of differential equations and on the ability to restate the associated initial-value problem in the form of a system of coupled linear Volterra integral equations of the second kind. The integral equations so derived are discretized via a sophisticated variant of the Nyström method. The proposed method yields results accurate up to machine precision without relying on approximations. Numerical results and case studies ably demonstrate the efficiency and high accuracy of the algorithms.
Johnston, Stephen S; Salkever, David S; Ialongo, Nicholas S; Slade, Eric P; Stuart, Elizabeth A
2017-11-01
When candidates for school-based preventive interventions are heterogeneous in their risk of poor outcomes, an intervention's expected economic net benefits may be maximized by targeting candidates for whom the intervention is most likely to yield benefits, such as those at high risk of poor outcomes. Although increasing amounts of information about candidates may facilitate more accurate targeting, collecting information can be costly. We present an illustrative example to show how cost-benefit analysis results from effective intervention demonstrations can help us to assess whether improved targeting accuracy justifies the cost of collecting additional information needed to make this improvement.
Constraint on a varying proton-electron mass ratio 1.5 billion years after the big bang.
Bagdonaite, J; Ubachs, W; Murphy, M T; Whitmore, J B
2015-02-20
A molecular hydrogen absorber at a lookback time of 12.4 billion years, corresponding to 10% of the age of the Universe today, is analyzed to put a constraint on a varying proton-electron mass ratio, μ. A high-resolution spectrum of the J1443+2724 quasar, observed with the Very Large Telescope, is used to create an accurate model of 89 Lyman and Werner band transitions whose relative frequencies are sensitive to μ, yielding a limit on the relative deviation from the current laboratory value of Δμ/μ = (−9.5 ± 5.4(stat) ± 5.3(syst)) × 10⁻⁶.
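The abstract quotes separate statistical and systematic uncertainties. As an illustration only (combining them in quadrature is a standard convention for independent error sources, not a step reported in the abstract), a single error estimate can be sketched as:

```python
import math

# Values quoted in the abstract, in units of 1e-6
delta_mu_over_mu = -9.5
stat, syst = 5.4, 5.3

# Quadrature combination of independent uncertainties
# (illustrative convention; the abstract quotes stat and syst separately)
total = math.sqrt(stat**2 + syst**2)

print(f"Δμ/μ = ({delta_mu_over_mu} ± {total:.1f}) × 10⁻⁶")  # Δμ/μ = (-9.5 ± 7.6) × 10⁻⁶
```

On this combined estimate the deviation is consistent with zero at roughly the 1.3σ level, in line with the result being reported as a constraint rather than a detection.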
A well-scaling natural orbital theory
Gebauer, Ralph; Cohen, Morrel H.; Car, Roberto
2016-01-01
We introduce an energy functional for ground-state electronic structure calculations. Its variables are the natural spin-orbitals of singlet many-body wave functions and their joint occupation probabilities deriving from controlled approximations to the two-particle density matrix that yield algebraic scaling in general, and Hartree–Fock scaling in its seniority-zero version. Results from the latter version for small molecular systems are compared with those of highly accurate quantum-chemical computations. The energies lie above full configuration interaction calculations, close to doubly occupied configuration interaction calculations. Their accuracy is considerably greater than that obtained from current density-functional theory approximations and from current functionals of the one-particle density matrix. PMID:27803328
Estimating tar and nicotine exposure: human smoking versus machine generated smoke yields.
St Charles, F K; Kabbani, A A; Borgerding, M F
2010-02-01
Determine human smoked (HS) cigarette yields of tar and nicotine for smokers using their own brand in their everyday environment. A robust filter-analysis method was used to estimate the tar and nicotine yields for 784 subjects. Seventeen brands were chosen to represent a wide range of styles: 85 and 100 mm lengths; menthol and non-menthol; 17, 23, and 25 mm circumference; with tar yields [Federal Trade Commission (FTC) method] ranging from 1 to 18 mg. Tar bands chosen corresponded to yields of 1-3 mg, 4-6 mg, 7-12 mg, and 13+ mg. A significant difference (p<0.0001) in HS yields of tar and nicotine between tar bands was found. Machine-smoked yields were reasonable predictors of the HS yields for groups of subjects, but the relationship was neither exact nor linear. Neither the FTC, the Massachusetts (MA), nor the Canadian Intensive (CI) machine-smoking method accurately reflects the HS yields across all brands. The FTC method was closest for the 7-12 mg and 13+ mg products and the MA method was closest for the 1-3 mg products. The HS yields for the 4-6 mg products were approximately midway between the FTC and the MA yields. HS nicotine yields corresponded well with published urinary and plasma nicotine biomarker studies. © 2009 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Bach, Heike
1998-07-01
In order to test remote sensing data with advanced yield formation models for accuracy and timeliness of yield estimation of corn, a project was conducted for the State Ministry for Rural Environment, Food, and Forestry of Baden-Württemberg (Germany). This project was carried out during the course of the 'Special Yield Estimation', a regular procedure conducted for the European Union, to more accurately estimate agricultural yield. The methodology employed uses field-based plant parameter estimation from atmospherically corrected multitemporal/multispectral LANDSAT-TM data. An agrometeorological plant-production model is used for yield prediction. Based solely on four LANDSAT-derived estimates (between May and August) and daily meteorological data, the grain yield of corn fields was determined for 1995. The modelled yields were compared with results gathered independently within the Special Yield Estimation for 23 test fields in the upper Rhine valley. The agreement between LANDSAT-based estimates (six weeks before harvest) and Special Yield Estimation (at harvest) shows a relative error of 2.3%. The comparison of the results for single fields shows that six weeks before harvest, the grain yield of corn was estimated with a mean relative accuracy of 13% using satellite information. The presented methodology can be transferred to other crops and geographical regions. For future applications, hyperspectral sensors show great potential to further enhance the results for yield prediction with remote sensing.
Serial Femtosecond Crystallography of G Protein-Coupled Receptors
Liu, Wei; Wacker, Daniel; Gati, Cornelius; Han, Gye Won; James, Daniel; Wang, Dingjie; Nelson, Garrett; Weierstall, Uwe; Katritch, Vsevolod; Barty, Anton; Zatsepin, Nadia A.; Li, Dianfan; Messerschmidt, Marc; Boutet, Sébastien; Williams, Garth J.; Koglin, Jason E.; Seibert, M. Marvin; Wang, Chong; Shah, Syed T.A.; Basu, Shibom; Fromme, Raimund; Kupitz, Christopher; Rendek, Kimberley N.; Grotjohann, Ingo; Fromme, Petra; Kirian, Richard A.; Beyerlein, Kenneth R.; White, Thomas A.; Chapman, Henry N.; Caffrey, Martin; Spence, John C.H.; Stevens, Raymond C.; Cherezov, Vadim
2014-01-01
X-ray crystallography of G protein-coupled receptors and other membrane proteins is hampered by difficulties associated with growing sufficiently large crystals that withstand radiation damage and yield high-resolution data at synchrotron sources. Here we used an x-ray free-electron laser (XFEL) with individual 50-fs duration x-ray pulses to minimize radiation damage and obtained a high-resolution room temperature structure of a human serotonin receptor using sub-10 µm microcrystals grown in a membrane mimetic matrix known as lipidic cubic phase. Compared to the structure solved by traditional microcrystallography from cryo-cooled crystals of about two orders of magnitude larger volume, the room temperature XFEL structure displays a distinct distribution of thermal motions and conformations of residues that likely more accurately represent the receptor structure and dynamics in a cellular environment. PMID:24357322
Impact resistance of fiber composites
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Sinclair, J. H.
1982-01-01
Stress-strain curves are obtained for a variety of glass fiber and carbon fiber reinforced plastics in dynamic tension, over the strain-rate range of 0.00087-2070/sec. The test method is of the one-bar block-to-bar type, using a rotating disk or a pendulum as the loading apparatus and yielding accurate stress-strain curves up to the breaking strain. In the case of glass fiber reinforced plastic, the tensile strength, strain to peak impact stress, total strain and total absorbed energy all increase significantly as the strain rate increases. By contrast, carbon fiber reinforced plastics show lower rates of increase with strain rate. It is recommended that hybrid composites incorporating the high strength and rigidity of carbon fiber reinforced plastic with the high impact absorption of glass fiber reinforced plastics be developed for use in structures subjected to impact loading.
Variable-range-hopping magnetoresistance
NASA Astrophysics Data System (ADS)
Azbel, Mark Ya
1991-03-01
The hopping magnetoresistance R of a two-dimensional insulator with metallic impurities is considered. In sufficiently weak magnetic fields it increases or decreases depending on the impurity density n: it decreases if n is low and increases if n is high. In high magnetic fields B, it always increases exponentially with √B. Such fields yield a one-dimensional temperature dependence: ln R ~ 1/√T. The calculation provides an accurate leading approximation for small impurities with one eigenstate in their potential well. In the limit of infinitesimally small impurities, an impurity potential is described by a generalized function. This function, similar to a δ function, is localized at a point, but, contrary to a δ function in dimensionality above 1, it has finite eigenenergies. Such functions may be helpful in the study of scattering and localization of any waves.
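The two limiting behaviours stated in the abstract can be restated compactly (this is only a restatement of the abstract's own relations, not an additional result):

```latex
% High-field growth and the one-dimensional temperature dependence
\ln R \;\propto\; \sqrt{B}
\qquad\text{(high field)},
\qquad
\ln R \;\sim\; \frac{1}{\sqrt{T}}
\qquad\text{(1D temperature dependence)}.
```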
Chun, R; Glabe, C G; Fan, H
1990-01-01
Full-length (86-residue) polypeptide corresponding to the human immunodeficiency virus type 1 tat trans-activating protein was chemically synthesized on a semiautomated apparatus, using an Fmoc amino acid continuous-flow strategy. The bulk material was relatively homogeneous, as judged by sodium dodecyl sulfate-polyacrylamide gel electrophoresis and isoelectric focusing, and it showed trans-activating activity when scrape loaded into cells containing a human immunodeficiency virus long terminal repeat-chloramphenicol acetyltransferase reporter plasmid. Reverse-phase high-pressure liquid chromatography yielded a rather broad elution profile, and assays across the column for biological activity indicated a sharper peak. Thus, high-pressure liquid chromatography provided for enrichment of biological activity. Fast atom bombardment-mass spectrometry of tryptic digests of synthetic tat identified several of the predicted tryptic peptides, consistent with accurate chemical synthesis. PMID:2186178
Iverson, R.M.; ,
2003-01-01
Models that employ a fixed rheology cannot yield accurate interpretations or predictions of debris-flow motion, because the evolving behavior of debris flows is too complex to be represented by any rheological equation that uniquely relates stress and strain rate. Field observations and experimental data indicate that debris behavior can vary from nearly rigid to highly fluid as a consequence of temporal and spatial variations in pore-fluid pressure and mixture agitation. Moreover, behavior can vary if debris composition changes as a result of grain-size segregation and gain or loss of solid and fluid constituents in transit. An alternative to fixed-rheology models is provided by a Coulomb mixture theory model, which can represent variable interactions of solid and fluid constituents in heterogeneous debris-flow surges with high-friction, coarse-grained heads and low-friction, liquefied tails. © 2003 Millpress.
NASA Astrophysics Data System (ADS)
Elias-Miró, Joan; Rychkov, Slava; Vitale, Lorenzo G.
2017-10-01
Hamiltonian Truncation (a.k.a. Truncated Spectrum Approach) is an efficient numerical technique to solve strongly coupled QFTs in d = 2 spacetime dimensions. Further theoretical developments are needed to increase its accuracy and the range of applicability. With this goal in mind, here we present a new variant of Hamiltonian Truncation which exhibits smaller dependence on the UV cutoff than other existing implementations, and yields more accurate spectra. The key idea for achieving this consists in integrating out exactly a certain class of high energy states, which corresponds to performing renormalization at the cubic order in the interaction strength. We test the new method on the strongly coupled two-dimensional quartic scalar theory. Our work will also be useful for the future goal of extending Hamiltonian Truncation to higher dimensions d ≥ 3.
NASA Astrophysics Data System (ADS)
Ehinola, O. A.; Opoola, A. O.
2005-05-01
The Slingram electromagnetic (EM) survey, using coil separations of 60 and 100 meters, was carried out in 10 villages in the Akinyele area of Ibadan, southwestern Nigeria to aid in the development of groundwater. Five main rock types, including an undifferentiated gneiss complex (Su), biotite-garnet schist/gneiss (Bs), quartzite and quartz schist (Q), migmatised undifferentiated biotite/hornblende gneiss (M) and pegmatite/quartz vein (P), underlie the study area. A total of 31 EM profiles were made to accurately locate prospective borehole sites in the field. Four main groups with different behavioural patterns were categorized from the EM profiles. Group 1 is characterized by high density of positive (HDP) or high density of negative (HDN) real and imaginary curves, Group 2 by parallel real and imaginary curves intersecting with negligible amplitude (PNA), Group 3 by frequent intersection of high density of negative minima (FHN) real and imaginary curves, and Group 4 by separate and approximately parallel (SAP) real and imaginary curves. Qualitative pictures of the overburden thickness and the extent of fracturing have been proposed from these behavioural patterns. A comparison of the borehole yield with the overburden thickness and the level of fracturing shows that borehole yield depends more on the fracture density than on the overburden thickness. Asymmetry of the anomaly was also found useful in the determination of the inclination of the conductor/fracture.
Raymond, S B; Kumar, A T N; Boas, D A; Bacskai, B J
2012-01-01
Amyloid-β plaques are an Alzheimer’s disease biomarker that presents unique challenges for near-infrared fluorescence tomography because of plaque size (<50 μm diameter) and distribution. We used high-resolution simulations of fluorescence in a digital Alzheimer’s disease mouse model to investigate the optimal fluorophore and imaging parameters for near-infrared fluorescence tomography of amyloid plaques. Fluorescence was simulated for amyloid-targeted probes with emission at 630 and 800 nm, plaque-to-background ratios from 1–1000, amyloid burden from 0–10%, and for transmission and reflection measurement geometries. Fluorophores with high plaque-to-background contrast ratios and 800 nm emission performed significantly better than current amyloid imaging probes. We tested idealized fluorophores in transmission and full-angle tomographic measurement schemes (900 source–detector pairs), with and without anatomical priors. Transmission reconstructions demonstrated strong linear correlation with increasing amyloid burden, but underestimated fluorescence yield and suffered from localization artifacts. Full-angle measurements did not improve upon the transmission reconstruction qualitatively or in semi-quantitative measures of accuracy; anatomical and initial-value priors did improve reconstruction localization and accuracy for both transmission and full-angle schemes. Region-based reconstructions, in which the unknowns were reduced to a few distinct anatomical regions, produced highly accurate yield estimates for cortex, hippocampus and brain regions, even with a reduced number of measurements (144 source–detector pairs). PMID:19794239
High-yield, ultrafast, surface plasmon-enhanced, Au nanorod optical field electron emitter arrays.
Hobbs, Richard G; Yang, Yujia; Fallahi, Arya; Keathley, Philip D; De Leo, Eva; Kärtner, Franz X; Graves, William S; Berggren, Karl K
2014-11-25
Here we demonstrate the design, fabrication, and characterization of ultrafast, surface-plasmon enhanced Au nanorod optical field emitter arrays. We present a quantitative study of electron emission from Au nanorod arrays fabricated by high-resolution electron-beam lithography and excited by 35 fs pulses of 800 nm light. We present accurate models for both the optical field enhancement of Au nanorods within high-density arrays, and electron emission from those nanorods. We have also studied the effects of surface plasmon damping induced by metallic interface layers at the substrate/nanorod interface on near-field enhancement and electron emission. We have identified the peak optical field at which the electron emission mechanism transitions from a 3-photon absorption mechanism to strong-field tunneling emission. Moreover, we have investigated the effects of nanorod array density on nanorod charge yield, including measurement of space-charge effects. The Au nanorod photocathodes presented in this work display 100-1000 times higher conversion efficiency relative to previously reported UV triggered emission from planar Au photocathodes. Consequently, the Au nanorod arrays triggered by ultrafast pulses of 800 nm light in this work may outperform equivalent UV-triggered Au photocathodes, while also offering nanostructuring of the electron pulse produced from such a cathode, which is of interest for X-ray free-electron laser (XFEL) development where nanostructured electron pulses may facilitate more efficient and brighter XFEL radiation.
Patil, Gunvant; Do, Tuyen; Vuong, Tri D.; Valliyodan, Babu; Lee, Jeong-Dong; Chaudhary, Juhi; Shannon, J. Grover; Nguyen, Henry T.
2016-01-01
Soil salinity is a limiting factor of crop yield. Soybean is sensitive to soil salinity, and a dominant gene, Glyma03g32900, is primarily responsible for salt tolerance. The identification of high-throughput and robust markers as well as the deployment of salt-tolerant cultivars are effective approaches to minimize yield loss under saline conditions. We utilized high-quality (15x) whole-genome resequencing (WGRS) on 106 diverse soybean lines and identified three major structural variants and allelic variation in the promoter and genic regions of the GmCHX1 gene. The discovery of single nucleotide polymorphisms (SNPs) associated with structural variants facilitated the design of six KASPar assays. Additionally, haplotype analysis and pedigree tracking of 93 U.S. ancestral lines were performed using publicly available WGRS datasets. Identified SNP markers were validated, and a strong correlation was observed between the genotype and salt-treatment phenotype (leaf scorch, chlorophyll content and Na+ accumulation) using a panel of 104 soybean lines and an interspecific bi-parental population (F8) from PI483463 x Hutcheson. These markers precisely identified salt-tolerant/sensitive genotypes (>91%) and different structural variants (>98%). These SNP assays, supported by accurate phenotyping, haplotype analyses and pedigree tracking information, will accelerate marker-assisted selection programs to enhance the development of salt-tolerant soybean cultivars. PMID:26781337
Hydrostatic Stress Effect On the Yield Behavior of Inconel 100
NASA Technical Reports Server (NTRS)
Allen, Phillip A.; Wilson, Christopher D.
2002-01-01
Classical metal plasticity theory assumes that hydrostatic stress has no effect on the yield and postyield behavior of metals. Recent reexaminations of classical theory have revealed a significant effect of hydrostatic stress on the yield behavior of notched geometries. New experiments and nonlinear finite element analyses (FEA) of Inconel 100 (IN 100) equal-arm bend and double-edge notch tension (DENT) test specimens have revealed the effect of internal hydrostatic tensile stresses on yielding. Nonlinear FEA using the von Mises (yielding is independent of hydrostatic stress) and the Drucker-Prager (yielding is linearly dependent on hydrostatic stress) yield functions was performed. In all test cases, the von Mises constitutive model, which is independent of hydrostatic pressure, overestimated the load for a given displacement or strain. Considering the failure displacements or strains, the Drucker-Prager FEMs predicted loads that were 3% to 5% lower than the von Mises values. For the failure loads, the Drucker-Prager FEMs predicted strains that were 20% to 35% greater than the von Mises values. The Drucker-Prager yield function seems to more accurately predict the overall specimen response of geometries with significant internal hydrostatic stress influence.
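The two yield criteria compared in the study can be sketched numerically. A minimal example follows, using illustrative material constants rather than the IN 100 values from the paper; it shows why only the Drucker-Prager surface responds to a purely hydrostatic stress state:

```python
import numpy as np

def invariants(sig):
    """First stress invariant I1 and second deviatoric invariant J2
    from a 3x3 Cauchy stress tensor."""
    i1 = np.trace(sig)
    dev = sig - (i1 / 3.0) * np.eye(3)
    j2 = 0.5 * np.tensordot(dev, dev)  # 0.5 * dev_ij * dev_ij
    return i1, j2

def von_mises(sig, sigma_y):
    """f <= 0 inside the yield surface; independent of hydrostatic stress."""
    _, j2 = invariants(sig)
    return np.sqrt(3.0 * j2) - sigma_y

def drucker_prager(sig, alpha, k):
    """Yield depends linearly on hydrostatic stress through alpha * I1."""
    i1, j2 = invariants(sig)
    return np.sqrt(j2) + alpha * i1 - k

# Pure hydrostatic tension: von Mises is blind to it (J2 = 0),
# while Drucker-Prager moves toward yield as the pressure grows.
hydro = 400.0 * np.eye(3)                 # MPa, illustrative
print(von_mises(hydro, sigma_y=1000.0))   # stays at -sigma_y regardless of pressure
print(drucker_prager(hydro, alpha=0.1, k=600.0))
```

For uniaxial tension the von Mises function reduces to the familiar sigma_xx - sigma_y, which provides a quick sanity check on the invariant computation.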
NASA Technical Reports Server (NTRS)
Jaffe, Richard L.; Pattengill, Merle D.; Schwenke, David W.
1989-01-01
Strategies for constructing global potential energy surfaces from a limited number of accurate ab initio electronic energy calculations are discussed. Generally, these data are concentrated in small regions of configuration space (e.g., in the vicinity of saddle points and energy minima) and difficulties arise in generating a potential function that is globally well-behaved. Efficient computer codes for carrying out classical trajectory calculations on vector and parallel processors are also described. Illustrations are given from recent work on the following chemical systems: Ca + HF yields CaF + H, H + H + H2 yields H2 + H2, N + O2 yields NO + O and O + N2 yields NO + N. The dynamics and kinetics of metathesis, dissociation, recombination, energy transfer and complex formation processes will be discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nekkab, M., E-mail: mohammed-nekkab@yahoo.com; LESIMS laboratory, Physics Department, Faculty of Sciences, University of Setif 1, 19000 Setif; Kahoul, A.
The analytical methods based on X-ray fluorescence are advantageous for practical applications in a variety of fields including atomic physics, X-ray fluorescence surface chemical analysis and medical research, so accurate fluorescence yields (ω_K) are required for these applications. In this contribution we report new parameters for the calculation of K-shell fluorescence yields (ω_K) of elements in the range 11≤Z≤30. The experimental data are interpolated using the analytical function (ω_K/(1−ω_K))^(1/q) (where q = 3, 3.5 and 4) versus Z to deduce the empirical K-shell fluorescence yields. A comparison is made between the results of the procedures followed here and theoretical and other semi-empirical fluorescence yield values. Reasonable agreement was typically obtained between our results and other works.
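Inverting the fitted quantity back to a fluorescence yield can be sketched as follows; the polynomial coefficients below are illustrative placeholders, not the fitted values from this work:

```python
import numpy as np

def omega_k(Z, coeffs, q=3.5):
    """Empirical K-shell fluorescence yield.

    The quantity s = (omega_K / (1 - omega_K))**(1/q) is modeled as a
    polynomial in atomic number Z; inverting gives
    omega_K = s**q / (1 + s**q), which is automatically bounded in (0, 1).
    Valid where the fitted s is positive (Z >~ 9 for these placeholders).
    """
    s = np.polyval(coeffs, Z)
    return s**q / (1.0 + s**q)

# hypothetical linear fit s = b*Z + a, in numpy's highest-power-first order
coeffs = [0.037, -0.30]
for Z in (13, 20, 29):   # Al, Ca, Cu
    print(Z, omega_k(Z, coeffs))
```

The functional form guarantees monotonic growth of ω_K with Z for any increasing fit of s, matching the qualitative trend of K-shell yields across this element range.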
Matter under extreme conditions experiments at the Linac Coherent Light Source
Glenzer, S. H.; Fletcher, L. B.; Galtier, E.; ...
2015-12-10
The Matter in Extreme Conditions end station at the Linac Coherent Light Source (LCLS) is a new tool enabling accurate pump-probe measurements for studying the physical properties of matter in the high-energy density physics regime. This instrument combines the world’s brightest x-ray source, the LCLS x-ray beam, with high-power lasers consisting of two nanosecond Nd:glass laser beams and one short-pulse Ti:sapphire laser. These lasers produce short-lived states of matter with high pressures, high temperatures or high densities with properties that are important for applications in nuclear fusion research, laboratory astrophysics and the development of intense radiation sources. In the first experiments, we have performed highly accurate x-ray diffraction and x-ray Thomson scattering techniques on shock-compressed matter resolving the transition from compressed solid matter to a co-existence regime and into the warm dense matter state. Furthermore, these complex charged-particle systems are dominated by strong correlations and quantum effects. They exist in planetary interiors and laboratory experiments, e.g., during high-power laser interactions with solids or the compression phase of inertial confinement fusion implosions. Applying record peak-brightness X-rays resolves the ionic interactions at atomic (Ångstrom) scale lengths and measures the static structure factor, which is a key quantity for determining equation of state data and important transport coefficients. Simultaneously, spectrally resolved measurements of plasmon features provide dynamic structure factor information that yields temperature and density with unprecedented precision at micron-scale resolution in dynamic compression experiments. This set of studies demonstrates our ability to measure fundamental thermodynamic properties that determine the state of matter in the high-energy density physics regime.
Characterization of integrated optical CD for process control
NASA Astrophysics Data System (ADS)
Yu, Jackie; Uchida, Junichi; van Dommelen, Youri; Carpaij, Rene; Cheng, Shaunee; Pollentier, Ivan; Viswanathan, Anita; Lane, Lawrence; Barry, Kelly A.; Jakatdar, Nickhil
2004-05-01
The accurate measurement of CD (critical dimension) and its application to inline process control are key challenges for high yield and OEE (overall equipment efficiency) in semiconductor production. CD-SEM metrology, although providing the resolution necessary for CD evaluation, suffers from the well-known effect of resist shrinkage, making accuracy and stability of the measurements an issue. For sub-100 nm in-line process control, where accuracy and stability as well as speed are required, CD-SEM metrology faces serious limitations. In contrast, scatterometry, using broadband optical spectra taken from grating structures, does not suffer from such limitations. This technology is non-destructive and, in addition to CD, provides profile information and film thickness in a single measurement. Using Timbre's Optical Digital Profilometry (ODP) technology, we characterized the process window using an iODP101 optical CD metrology system integrated into a TEL Clean Track at IMEC. We demonstrate the Optical CD's high sensitivity to process change and its insensitivity to measurement noise. We demonstrate the validity of ODP modeling by showing its accurate response to known process changes built into the evaluation and its excellent correlation to CD-SEM. We will further discuss the intrinsic Optical CD metrology factors that affect the tool precision, accuracy and its correlation to CD-SEM.
NASA Astrophysics Data System (ADS)
Khalili, Ashkan; Jha, Ratneshwar; Samaratunga, Dulip
2016-11-01
Wave propagation analysis in 2-D composite structures is performed efficiently and accurately through the formulation of a User-Defined Element (UEL) based on the wavelet spectral finite element (WSFE) method. The WSFE method is based on the first-order shear deformation theory which yields accurate results for wave motion at high frequencies. The 2-D WSFE model is highly efficient computationally and provides a direct relationship between system input and output in the frequency domain. The UEL is formulated and implemented in Abaqus (commercial finite element software) for wave propagation analysis in 2-D composite structures with complexities. Frequency domain formulation of WSFE leads to complex valued parameters, which are decoupled into real and imaginary parts and presented to Abaqus as real values. The final solution is obtained by forming a complex value using the real number solutions given by Abaqus. Five numerical examples are presented in this article, namely undamaged plate, impacted plate, plate with ply drop, folded plate and plate with stiffener. Wave motions predicted by the developed UEL correlate very well with Abaqus simulations. The results also show that the UEL largely retains computational efficiency of the WSFE method and extends its ability to model complex features.
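The real/imaginary decoupling described above is the standard trick of recasting a complex linear system as a doubled real one, so a real-valued solver (Abaqus in the paper, NumPy here as a stand-in) can be used. A minimal sketch:

```python
import numpy as np

def solve_complex_via_real(A, B, f, g):
    """Solve (A + iB)(x + iy) = f + ig using only real arithmetic.

    The complex system is rewritten as the real block system
        [ A  -B ] [x]   [f]
        [ B   A ] [y] = [g]
    mirroring the decoupling the UEL performs before handing the
    problem to a real-valued solver, then the complex solution is
    recomposed from the two real halves."""
    n = A.shape[0]
    K = np.block([[A, -B], [B, A]])
    rhs = np.concatenate([f, g])
    sol = np.linalg.solve(K, rhs)
    return sol[:n] + 1j * sol[n:]

rng = np.random.default_rng(0)
n = 4
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
f, g = rng.standard_normal(n), rng.standard_normal(n)
z = solve_complex_via_real(A, B, f, g)
print(np.allclose((A + 1j * B) @ z, f + 1j * g))  # agrees with a direct complex solve
```

The real block matrix is nonsingular exactly when A + iB is, so the doubled system is solvable whenever the original frequency-domain system is.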
High-order conservative finite difference GLM-MHD schemes for cell-centered MHD
NASA Astrophysics Data System (ADS)
Mignone, Andrea; Tzeferacos, Petros; Bodo, Gianluigi
2010-08-01
We present and compare third- as well as fifth-order accurate finite difference schemes for the numerical solution of the compressible ideal MHD equations in multiple spatial dimensions. The selected methods lean on four different reconstruction techniques based on recently improved versions of the weighted essentially non-oscillatory (WENO) schemes, monotonicity preserving (MP) schemes as well as slope-limited polynomial reconstruction. The proposed numerical methods are highly accurate in smooth regions of the flow, avoid loss of accuracy in proximity of smooth extrema and provide sharp non-oscillatory transitions at discontinuities. We suggest a numerical formulation based on a cell-centered approach where all of the primary flow variables are discretized at the zone center. The divergence-free condition is enforced by augmenting the MHD equations with a generalized Lagrange multiplier yielding a mixed hyperbolic/parabolic correction, as in Dedner et al. [J. Comput. Phys. 175 (2002) 645-673]. The resulting family of schemes is robust, cost-effective and straightforward to implement. Compared to previous approaches, it completely avoids the CPU-intensive workload associated with an elliptic divergence cleaning step and the additional complexities required by staggered mesh algorithms. Extensive numerical testing demonstrates the robustness and reliability of the proposed framework for computations involving both smooth and discontinuous features.
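Of the reconstruction techniques mentioned, slope-limited polynomial reconstruction is the simplest to illustrate. A minimal second-order minmod-limited sketch (not the higher-order WENO/MP schemes of the paper):

```python
import numpy as np

def minmod(a, b):
    """Zero at extrema or sign changes, else the smaller-magnitude slope."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def reconstruct(u):
    """Left/right interface states for the interior cells of a 1-D array
    of cell averages, using minmod-limited slopes. This is the basic
    second-order TVD idea: limited slopes keep the reconstruction
    non-oscillatory at discontinuities."""
    du = minmod(u[1:-1] - u[:-2], u[2:] - u[1:-1])
    uL = u[1:-1] + 0.5 * du   # state at the right face of each interior cell
    uR = u[1:-1] - 0.5 * du   # state at the left face
    return uL, uR

u = np.array([0.0, 0.0, 1.0, 1.0, 1.0])  # step: the limiter avoids overshoot
uL, uR = reconstruct(u)
print(uL, uR)
```

On smooth monotone data the limiter returns the centered slope and the scheme is second-order; at the step it reverts to piecewise-constant states, which is the sharp non-oscillatory behavior the abstract refers to.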
Switchgrass leaf area index and light extinction coefficients
USDA-ARS?s Scientific Manuscript database
Biomass production simulation modeling for plant species is often dependent upon accurate simulation or measurement of canopy light interception and radiation use efficiency. With the recent interest in converting large tracts of land to biofuel species cropping, modeling vegetative yield with grea...
NASA Astrophysics Data System (ADS)
Zwitter, T.; Kos, J.; Žerjal, M.; Traven, G.
2016-10-01
Current ongoing stellar spectroscopic surveys (RAVE, GALAH, Gaia-ESO, LAMOST, APOGEE, Gaia) are mostly devoted to studying Galactic archaeology and the structure of the Galaxy. But they also allow for important auxiliary science: (i) the Galactic interstellar medium can be studied in four dimensions (position in space plus radial velocity) through weak but numerous diffuse interstellar bands and atomic absorptions seen in spectra of background stars, (ii) emission spectra which are quite frequent even in field stars can serve as a good indicator of their youth, pointing e.g. to stars recently ejected from young stellar environments, (iii) an astrometric solution of the photocenter of a binary to be obtained by Gaia can yield accurate masses when joined by spectroscopic information obtained serendipitously during a survey. These points are illustrated by first results from the first three surveys mentioned above. These hint at the near future: spectroscopic studies of the dynamics of the interstellar medium can identify and quantify Galactic fountains which may sustain star formation in the disk by entraining fresh gas from the halo; RAVE already provided a list of ~14,000 field stars with chromospheric emission in Ca II lines, to be supplemented by many more observations by Gaia in the same band, and by GALAH and Gaia-ESO observations of Balmer lines; several million astrometric binaries with periods up to a few years, which are being observed by Gaia, can yield accurate masses when supplemented with measurements from only a few high-quality ground-based spectra.
NASA Astrophysics Data System (ADS)
Yoon, Ilsang; Weinberg, Martin D.; Katz, Neal
2011-06-01
We introduce a new galaxy image decomposition tool, GALPHAT (GALaxy PHotometric ATtributes), which is a front-end application of the Bayesian Inference Engine (BIE), a parallel Markov chain Monte Carlo package, to provide full posterior probability distributions and reliable confidence intervals for all model parameters. The BIE relies on GALPHAT to compute the likelihood function. GALPHAT generates scale-free cumulative image tables for the desired model family with precise error control. Interpolation of this table yields accurate pixellated images with any centre, scale and inclination angle. GALPHAT then rotates the image by position angle using a Fourier shift theorem, yielding high-speed, accurate likelihood computation. We benchmark this approach using an ensemble of simulated Sérsic model galaxies over a wide range of observational conditions: the signal-to-noise ratio S/N, the ratio of galaxy size to the point spread function (PSF) and the image size, and errors in the assumed PSF; and a range of structural parameters: the half-light radius re and the Sérsic index n. We characterize the strength of parameter covariance in the Sérsic model, which increases with S/N and n, and the results strongly motivate the need for the full posterior probability distribution in galaxy morphology analyses and later inferences. The test results for simulated galaxies successfully demonstrate that, with a careful choice of Markov chain Monte Carlo algorithms and fast model image generation, GALPHAT is a powerful analysis tool for reliably inferring morphological parameters from a large ensemble of galaxies over a wide range of different observational conditions.
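The Sérsic model underlying GALPHAT's image tables has a simple radial form. A minimal sketch, using the standard series approximation for the normalization constant b_n (parameter values illustrative):

```python
import numpy as np

def sersic(r, I_e, r_e, n):
    """Sersic surface-brightness profile
        I(r) = I_e * exp(-b_n * ((r / r_e)**(1/n) - 1)),
    where b_n is chosen so r_e is the half-light radius. The leading
    terms of the Ciotti & Bertin expansion for b_n are used here, which
    is accurate for n >~ 0.5."""
    b_n = 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n)
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

r = np.linspace(0.1, 10.0, 50)
profile = sersic(r, I_e=1.0, r_e=2.0, n=4.0)   # n = 4: de Vaucouleurs-like
print(profile[0], profile[-1])
```

By construction I(r_e) = I_e for any n, and larger Sérsic indices concentrate more light toward the center, which is one driver of the parameter covariance the study characterizes.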
Refinement procedure for the image alignment in high-resolution electron tomography.
Houben, L; Bar Sadan, M
2011-01-01
High-resolution electron tomography from a tilt series of transmission electron microscopy images requires an accurate image alignment procedure in order to maximise the resolution of the tomogram. This is the case in particular for ultra-high resolution where even very small misalignments between individual images can dramatically reduce the fidelity of the resultant reconstruction. A tomographic-reconstruction based and marker-free method is proposed, which uses an iterative optimisation of the tomogram resolution. The method utilises a search algorithm that maximises the contrast in tomogram sub-volumes. Unlike conventional cross-correlation analysis it provides the required correlation over a large tilt angle separation and guarantees a consistent alignment of images for the full range of object tilt angles. An assessment based on experimental reconstructions shows that the marker-free procedure is competitive to the reference of marker-based procedures at lower resolution and yields sub-pixel accuracy even for simulated high-resolution data. Copyright © 2011 Elsevier B.V. All rights reserved.
Chemical Visualization of Sweat Pores in Fingerprints Using GO-Enhanced TOF-SIMS.
Cai, Lesi; Xia, Meng-Chan; Wang, Zhaoying; Zhao, Ya-Bin; Li, Zhanping; Zhang, Sichun; Zhang, Xinrong
2017-08-15
Time-of-flight secondary ion mass spectrometry (TOF-SIMS) has been used in imaging of small molecules (<500 Da) in fingerprints, such as gunshot residues and illicit drugs. However, identifying and mapping relatively high mass molecules are quite difficult owing to insufficient ion yield of their molecular ions. In this report, graphene oxide (GO)-enhanced TOF-SIMS was used to detect and image relatively high mass molecules such as poisonous alkaloids (>600 Da) and controlled drugs and antibiotics (>700 Da) in fingerprints. Detailed features of fingerprints, such as the number and distribution of sweat pores in a ridge and even the delicate morphology of one pore, were clearly revealed in SIMS images of relatively high mass molecules. These detailed features, combined with the identified chemical composition, were sufficient to establish a human identity and link a suspect to a crime scene. The wide detectable mass range and high spatial resolution make GO-enhanced TOF-SIMS a promising tool for accurate and fast analysis of fingerprints, especially in fragmental fingerprint analysis.
Thomas, Michael; Corry, Ben
2016-01-01
Membranes made from nanomaterials such as nanotubes and graphene have been suggested to have a range of applications in water filtration and desalination, but determining their suitability for these purposes requires an accurate assessment of the properties of these novel materials. In this study, we use molecular dynamics simulations to determine the permeability and salt rejection capabilities for membranes incorporating carbon nanotubes (CNTs) at a range of pore sizes, pressures and concentrations. We include the influence of osmotic gradients and concentration build up and simulate at realistic pressures to improve the reliability of estimated membrane transport properties. We find that salt rejection is highly dependent on the applied hydrostatic pressure, meaning high rejection can be achieved with wider tubes than previously thought; while membrane permeability depends on salt concentration. The ideal size of the CNTs for desalination applications yielding high permeability and high salt rejection is found to be around 1.1 nm diameter. While there are limited energy gains to be achieved in using ultra-permeable CNT membranes in desalination by reverse osmosis, such membranes may allow for smaller plants to be built as is required when size or weight must be minimized. There are diminishing returns in further increasing membrane permeability, so efforts should focus on the fabrication of membranes containing narrow or functionalized CNTs that yield the desired rejection or selection properties rather than trying to optimize pore densities. PMID:26712639
Prediction-Oriented Marker Selection (PROMISE): With Application to High-Dimensional Regression.
Kim, Soyeon; Baladandayuthapani, Veerabhadran; Lee, J Jack
2017-06-01
In personalized medicine, biomarkers are used to select therapies with the highest likelihood of success based on an individual patient's biomarker/genomic profile. Two goals are to choose important biomarkers that accurately predict treatment outcomes and to cull unimportant biomarkers to reduce the cost of biological and clinical verifications. These goals are challenging due to the high dimensionality of genomic data. Variable selection methods based on penalized regression (e.g., the lasso and elastic net) have yielded promising results. However, selecting the right amount of penalization is critical to simultaneously achieving these two goals. Standard approaches based on cross-validation (CV) typically provide high prediction accuracy with high true positive rates but at the cost of too many false positives. Alternatively, stability selection (SS) controls the number of false positives, but at the cost of yielding too few true positives. To circumvent these issues, we propose prediction-oriented marker selection (PROMISE), which combines SS with CV to conflate the advantages of both methods. Our application of PROMISE with the lasso and elastic net in data analysis shows that, compared to CV, PROMISE produces sparse solutions, few false positives, and small type I + type II error, and maintains good prediction accuracy, with a marginal decrease in the true positive rates. Compared to SS, PROMISE offers better prediction accuracy and true positive rates. In summary, PROMISE can be applied in many fields to select regularization parameters when the goals are to minimize false positives and maximize prediction accuracy.
Simulating large-scale crop yield by using perturbed-parameter ensemble method
NASA Astrophysics Data System (ADS)
Iizumi, T.; Yokozawa, M.; Sakurai, G.; Nishimori, M.
2010-12-01
One of the pressing issues of food security under changing climate is predicting the inter-annual variation of crop production induced by climate extremes and modulated climate. To secure the food supply for a growing world population, a methodology that can accurately predict crop yield on a large scale is needed. However, in developing a process-based large-scale crop model at the scale of general circulation models (GCMs), 100 km in latitude and longitude, researchers encounter difficulties in the spatial heterogeneity of available information on crop production, such as cultivated cultivars and management. This study proposed an ensemble-based simulation method that uses a process-based crop model and a systematic parameter perturbation procedure, taking maize in the U.S., China, and Brazil as examples. The crop model was developed by modifying the fundamental structure of the Soil and Water Assessment Tool (SWAT) to incorporate the effect of heat stress on yield. We called the new model PRYSBI: the Process-based Regional-scale Yield Simulator with Bayesian Inference. The posterior probability density function (PDF) of 17 parameters, which represents the crop- and grid-specific features of the crop and its uncertainty under the given data, was estimated by Bayesian inversion analysis. We then took 1500 ensemble members of simulated yield values, based on parameter sets sampled from the posterior PDF, to describe yearly changes of the yield, i.e., the perturbed-parameter ensemble method. The ensemble median for 27 years (1980-2006) was compared with data aggregated from county yields. On a country scale, the ensemble median of the simulated yield showed good correspondence with the reported yield: the Pearson correlation coefficient is over 0.6 for all countries.
On a grid scale, the correspondence remains high in most grids regardless of the country. However, the model showed comparatively low reproducibility in sloped areas, such as around the Rocky Mountains in South Dakota, around the Great Xing'anling Mountains in Heilongjiang, and around the Brazilian Plateau. As local climate conditions range widely in complex terrain, such as on mountain slopes, the GCM grid-scale weather inputs are likely one of the major sources of error. The results of this study highlight the benefits of the perturbed-parameter ensemble method in simulating crop yield on a GCM grid scale: (1) the posterior PDF of the parameters quantifies the uncertainty of the crop model's parameter values associated with local crop production; (2) the method can explicitly account for parameter uncertainty in the crop model simulations; (3) the method achieves a Monte Carlo approximation of the probability of sub-grid-scale yield, accounting for the nonlinear response of crop yield to weather and management; (4) the method is therefore appropriate for aggregating the simulated sub-grid-scale yields to a grid-scale yield, which may explain the model's high performance in capturing inter-annual variation of yield.
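The perturbed-parameter ensemble idea can be sketched with a toy stand-in for the crop model; the two-parameter Gaussian "posterior" below is an assumption for illustration, not the 17-parameter PRYSBI posterior:

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_yield(t_mean, params):
    """Stand-in crop model: yield responds quadratically to growing-season
    temperature, clipped at zero. The real PRYSBI model has 17 parameters
    constrained by Bayesian inversion; here only (t_opt, y_max) are used."""
    t_opt, y_max = params
    return np.maximum(y_max - 0.05 * (t_mean - t_opt) ** 2, 0.0)

years = np.arange(1980, 2007)                            # 27 seasons
t_mean = 24.0 + rng.normal(0.0, 1.5, size=years.size)    # synthetic weather

# 1500 parameter sets sampled from the (assumed) posterior PDF
posterior = rng.normal(loc=[25.0, 9.0], scale=[0.5, 0.4], size=(1500, 2))
ensemble = np.array([toy_yield(t_mean, p) for p in posterior])

median_yield = np.median(ensemble, axis=0)   # the trajectory compared to data
print(median_yield.shape)
```

Each ensemble member carries one plausible parameter set through all years, so the spread of the 1500 trajectories directly expresses the parameter uncertainty, and the per-year median is the summary compared against reported yields.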
An equation of state for high pressure-temperature liquids (RTpress) with application to MgSiO3 melt
NASA Astrophysics Data System (ADS)
Wolf, Aaron S.; Bower, Dan J.
2018-05-01
The thermophysical properties of molten silicates at extreme conditions are crucial for understanding the early evolution of Earth and other massive rocky planets, which is marked by giant impacts capable of producing deep magma oceans. Cooling and crystallization of molten mantles are sensitive to the densities and adiabatic profiles of high-pressure molten silicates, demanding accurate Equation of State (EOS) models to predict the early evolution of planetary interiors. Unfortunately, EOS modeling for liquids at high P-T conditions is difficult due to constantly evolving liquid structure. The Rosenfeld-Tarazona (RT) model provides a physically sensible and accurate description of liquids but is limited to constant volume heating paths (Rosenfeld and Tarazona, 1998). We develop a high P-T EOS for liquids, called RTpress, which uses a generalized Rosenfeld-Tarazona model as a thermal perturbation to isothermal and adiabatic reference compression curves. This approach provides a thermodynamically consistent EOS which remains accurate over a large P-T range and depends on a limited number of physically meaningful parameters that can be determined empirically from either simulated or experimental datasets. As a first application, we model MgSiO3 melt representing a simplified rocky mantle chemistry. The model parameters are fitted to the MD simulations of both Spera et al. (2011) and de Koker and Stixrude (2009), recovering pressures, volumes, and internal energies to within 0.6 GPa, 0.1 Å³, and 6 meV per atom on average (for the higher resolution data set), as well as accurately predicting liquid densities and temperatures from shock-wave experiments on MgSiO3 glass. The fitted EOS is used to determine adiabatic thermal profiles, revealing the approximate thermal structure of a fully molten magma ocean like that of the early Earth.
These adiabats, which are in strong agreement for both fitted models, are shown to be sufficiently steep to produce either a center-outwards or bottom-up style of crystallization, depending on the curvature of the mantle melting curve (liquidus): a high-curvature model yields crystallization beginning at pressures of roughly 80 GPa (Stixrude et al., 2009), whereas a nearly flat, experimentally determined liquidus implies bottom-up crystallization (Andrault et al., 2011).
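The reference compression curves that RTpress perturbs can take several standard analytic forms. As a hedged illustration, here is the widely used Vinet isothermal EOS in Python; the Vinet form is a common choice for silicate melts, but the parameter values below are placeholders, not the paper's fitted MgSiO3-melt values:

```python
import numpy as np

def vinet_pressure(V, V0, K0, K0prime):
    """Vinet isothermal EOS: pressure at volume V for a material with
    zero-pressure volume V0, bulk modulus K0, and K0' = dK/dP."""
    x = (V / V0) ** (1.0 / 3.0)
    eta = 1.5 * (K0prime - 1.0)
    return 3.0 * K0 * (1.0 - x) / x**2 * np.exp(eta * (1.0 - x))

# Placeholder parameters (order-of-magnitude only, not fitted values):
V0, K0, K0p = 38.0, 20.0, 7.0   # Å³/atom, GPa, dimensionless
print(vinet_pressure(V0, V0, K0, K0p))        # 0.0 at the reference volume
print(vinet_pressure(0.8 * V0, V0, K0, K0p))  # positive on compression
```

A thermal term (in RTpress, a generalized Rosenfeld-Tarazona perturbation) would then be added on top of an isothermal reference of this kind.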
NASA Technical Reports Server (NTRS)
Durand, Jean-Louis; Delusca, Kenel; Boote, Ken; Lizaso, Jon; Manderscheid, Remy; Weigel, Hans Johachim; Ruane, Alexander Clark; Rosenzweig, Cynthia E.; Jones, Jim; Ahuja, Laj;
2017-01-01
This study assesses the ability of 21 crop models to capture the impact of elevated CO2 concentration [CO2] on maize yield and water use as measured in a 2-year Free Air Carbon dioxide Enrichment experiment conducted at the Thunen Institute in Braunschweig, Germany (Manderscheid et al. 2014). Data for the ambient [CO2] and irrigated treatments were provided to the 21 models for calibrating plant traits: weather, soil and management data, as well as yield, grain number, above-ground biomass, leaf area index, nitrogen concentration in biomass and grain, water use and soil water content. Models differed in their representation of carbon assimilation and evapotranspiration processes. The models reproduced the absence of yield response to elevated [CO2] under well-watered conditions, as well as the impact of water deficit at ambient [CO2], with 50 percent of models within a range of ±1 Mg ha⁻¹ around the mean. The bias of the median of the 21 models was less than 1 Mg ha⁻¹. However, under water deficit in one of the two years, the models captured only 30 percent of the exceptionally high [CO2] yield enhancement observed. Furthermore, the ensemble of models was unable to simulate the very low soil water content at anthesis and the increase of soil water and grain number brought about by the elevated [CO2] under dry conditions. Overall, we found that models with explicit stomatal control on transpiration tended to perform better. Our results highlight the need for model improvement with respect to simulating transpirational water use and its impact on water status during the kernel-set phase.
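Ensemble summaries like those quoted (the fraction of models within ±1 Mg ha⁻¹ of the mean, and the bias of the ensemble median) are simple to compute. A sketch with made-up predictions; the values below are illustrative, not the study's:

```python
import numpy as np

def ensemble_summary(predicted, observed, band=1.0):
    """Median bias of a multi-model ensemble and the fraction of
    models falling within +/- band (Mg/ha) of the ensemble mean."""
    predicted = np.asarray(predicted, dtype=float)
    median_bias = np.median(predicted) - observed
    frac_in_band = np.mean(np.abs(predicted - predicted.mean()) <= band)
    return median_bias, frac_in_band

# Hypothetical yields (Mg/ha) from a 21-model ensemble, illustrative only:
preds = [9.1, 10.3, 9.8, 10.0, 11.2, 9.5, 10.6, 8.9, 10.1, 9.9, 10.4,
         9.7, 10.2, 11.0, 9.3, 10.8, 9.6, 10.5, 9.4, 10.7, 10.0]
bias, frac = ensemble_summary(preds, observed=10.0)
```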
Dyhdalo, Kathryn; Macnamara, Stephen; Brainard, Jennifer; Underwood, Dawn; Tubbs, Raymond; Yang, Bin
2014-02-01
BRAF mutation V600E (substitution Val600Glu) is a molecular signature for papillary thyroid carcinoma (PTC). Testing for BRAF mutation is clinically useful in providing prognostic prediction and facilitating accurate diagnosis of PTC in thyroid fine-needle aspirate (FNA) samples. This study assessed the correlation of cellularity with DNA yield and compared 2 technical platforms with different sensitivities in detection of BRAF mutation in cytologic specimens. Cellularity was evaluated based on groups of 10+ cells on a ThinPrep slide: 1+ (1-5 groups), 2+ (6-10 groups), 3+ (11-20 groups), and 4+ (>20 groups). Genomic DNA was extracted from residual materials of thyroid FNAs after cytologic diagnosis. Approximately 49% of thyroid FNA samples had low cellularity (1-2+). DNA yield was proportional to cellularity, increasing nearly 4-fold from 1+ to 4+ cellularity in cytologic samples. When applied to the BRAF mutational assay, using a cutoff of 6 groups of follicular cells with 10+ cells per group, 96.7% of cases yielded enough DNA for at least one BRAF mutation test. Five specimens (11.6%) with lower cellularity did not yield sufficient DNA for duplicate testing. Comparison of Sanger sequencing to allele-specific polymerase chain reaction methods shows that the latter confers better sensitivity in detection of BRAF mutation, especially in limited cytologic specimens with a lower percentage of malignant cells. This study demonstrates that by using 6 groups of 10+ follicular cells as a cutoff, nearly 97% of thyroid FNA samples contain enough DNA for the BRAF mutational assay. Careful selection of a molecular testing system with high sensitivity facilitates successful molecular testing in limited cytologic specimens. Cancer (Cancer Cytopathol) 2014;122:114-22 © 2013 American Cancer Society.
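The study's cellularity grading and adequacy cutoff reduce to a couple of threshold rules, sketched here; the grading bins are taken from the abstract, while the function names are ours:

```python
def cellularity_grade(groups):
    """Grade ThinPrep cellularity from the number of groups of 10+
    follicular cells, using the bins in the study: 1+ (1-5 groups),
    2+ (6-10), 3+ (11-20), 4+ (>20)."""
    if groups <= 5:
        return "1+"
    if groups <= 10:
        return "2+"
    if groups <= 20:
        return "3+"
    return "4+"

def adequate_for_braf(groups):
    """Adequacy cutoff from the study: at least 6 groups of 10+
    follicular cells yields enough DNA for one BRAF test."""
    return groups >= 6

print(cellularity_grade(4), adequate_for_braf(4))    # 1+ False
print(cellularity_grade(12), adequate_for_braf(12))  # 3+ True
```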
NASA Astrophysics Data System (ADS)
Fourtakas, G.; Rogers, B. D.
2016-06-01
A two-phase numerical model using Smoothed Particle Hydrodynamics (SPH) is applied to two-phase liquid-sediment flows. The absence of a mesh in SPH is ideal for interfacial and highly non-linear flows with changing fragmentation of the interface, mixing and resuspension. The rheology of sediment mobilized under rapid flows passes through several states which are only partially described by previous SPH research. This paper attempts to bridge the gap between geotechnics, non-Newtonian and Newtonian flows by proposing a model that combines the yielding, shear and suspension layers which are needed to predict the global erosion phenomena accurately, from a hydrodynamics perspective. The numerical SPH scheme is based on the explicit treatment of both phases using a Newtonian and a non-Newtonian Bingham-type Herschel-Bulkley-Papanastasiou constitutive model. This is supplemented by the Drucker-Prager yield criterion to predict the onset of yielding of the sediment surface and a concentration-based suspension model. The multi-phase model has been compared with experimental and 2-D reference numerical models for scour following a dry-bed dam break, yielding satisfactory results and improvements over well-known SPH multi-phase models. With 3-D simulations requiring a large number of particles, the code is accelerated with a graphics processing unit (GPU) in the open-source DualSPHysics code. The implementation and optimisation of the code achieved a speed-up of 58× over an optimised single-thread serial code. A 3-D dam break over a non-cohesive erodible bed simulation with over 4 million particles yields close agreement with experimental scour and water surface profiles.
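The Herschel-Bulkley-Papanastasiou constitutive model mentioned above is commonly written as an effective viscosity with an exponentially regularized yield-stress term. A minimal sketch in the generic textbook form, with illustrative parameters rather than DualSPHysics' implementation:

```python
import math

def hbp_effective_viscosity(gamma_dot, k=1.0, n=0.8, tau_y=10.0, m=100.0):
    """Effective viscosity (Pa.s) of the Herschel-Bulkley-Papanastasiou
    model for shear rate gamma_dot > 0:
        mu_eff = k*gamma_dot**(n-1) + tau_y*(1 - exp(-m*gamma_dot))/gamma_dot
    The Papanastasiou exponential regularizes the yield-stress term so
    the apparent viscosity stays finite at vanishing shear rates.
    Parameters are illustrative, not values from the paper."""
    assert gamma_dot > 0.0
    power_law = k * gamma_dot ** (n - 1.0)
    yield_term = tau_y * (1.0 - math.exp(-m * gamma_dot)) / gamma_dot
    return power_law + yield_term

# Near-yield sediment behaves almost rigidly; strongly sheared sediment flows:
print(hbp_effective_viscosity(0.01) > hbp_effective_viscosity(100.0))  # True
```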
Constitutive Modeling of Piezoelectric Polymer Composites
NASA Technical Reports Server (NTRS)
Odegard, Gregory M.; Gates, Tom (Technical Monitor)
2003-01-01
A new modeling approach is proposed for predicting the bulk electromechanical properties of piezoelectric composites. The proposed model offers the same level of convenience as the well-known Mori-Tanaka method. In addition, it is shown to yield predicted properties that are, in most cases, as accurate as or more accurate than those of the Mori-Tanaka scheme. In particular, the proposed method is used to determine the electromechanical properties of four piezoelectric polymer composite materials as a function of inclusion volume fraction. The predicted properties are compared to those calculated using the Mori-Tanaka and finite element methods.
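For reference, the Mori-Tanaka scheme used as the baseline has a closed form in the purely elastic, spherical-inclusion case; the paper's electromechanical version additionally couples piezoelectric and dielectric constants, which this sketch omits:

```python
def mori_tanaka_bulk(K_m, G_m, K_i, f):
    """Mori-Tanaka effective bulk modulus for spherical elastic
    inclusions (volume fraction f) in a matrix with bulk modulus K_m
    and shear modulus G_m (elastic special case only)."""
    denom = 1.0 + (1.0 - f) * (K_i - K_m) / (K_m + 4.0 * G_m / 3.0)
    return K_m + f * (K_i - K_m) / denom

print(mori_tanaka_bulk(3.0, 1.0, 40.0, 0.0))  # 3.0: matrix only
print(mori_tanaka_bulk(3.0, 1.0, 40.0, 1.0))  # 40.0: inclusion only
```

At intermediate volume fractions the result lies between the matrix and inclusion moduli, which is the sanity check usually applied to any such homogenization estimate.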
Dhiman, Neelam; Hall, Leslie; Wohlfiel, Sherri L; Buckwalter, Seanne P; Wengenack, Nancy L
2011-04-01
Matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectrometry was compared to phenotypic testing for yeast identification. MALDI-TOF mass spectrometry yielded 96.3% and 84.5% accurate species-level identifications (spectral scores ≥ 1.8) for 138 common and 103 archived strains of yeast, respectively. MALDI-TOF mass spectrometry is accurate, rapid (5.1 min of hands-on time/identification), and cost-effective ($0.50/sample) for yeast identification in the clinical laboratory.
Forecasting volcanic air pollution in Hawaii: Tests of time series models
NASA Astrophysics Data System (ADS)
Reikard, Gordon
2012-12-01
Volcanic air pollution, known as vog (volcanic smog), has recently become a major issue in the Hawaiian islands. Vog is caused when volcanic gases react with oxygen and water vapor. It consists of a mixture of gases and aerosols, which include sulfur dioxide and other sulfates. The source of the volcanic gases is the continuing eruption of Kilauea. This paper studies the prediction of vog using statistical methods. The data sets include time series for SO2 and SO4, over locations spanning the west, south and southeast coasts of Hawaii, and the city of Hilo. The forecasting models include regressions, neural networks, and a frequency domain algorithm. The most typical pattern for the SO2 data is for the frequency domain method to yield the most accurate forecasts over the first few hours, and at the 24 h horizon. The neural net places second. For the SO4 data, the results are less consistent. At two sites, the neural net generally yields the most accurate forecasts, except at the 1 and 24 h horizons, where the frequency domain technique wins narrowly. At one site, the neural net and the frequency domain algorithm yield comparable errors over the first 5 h, after which the neural net dominates. At the remaining site, the frequency domain method is more accurate over the first 4 h, after which the neural net achieves smaller errors. For all the series, the average errors are well within one standard deviation of the actual data at all the horizons. However, the errors also show irregular outliers. In essence, the models capture the central tendency of the data, but are less effective in predicting the extreme events.
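A frequency-domain forecaster in the spirit described (fit the dominant periodic components, then extrapolate) can be sketched in a few lines. This toy version keeps the strongest FFT harmonics of a de-meaned hourly series; it is a simplified stand-in, not the paper's algorithm:

```python
import numpy as np

def harmonic_forecast(series, horizon, n_harmonics=3):
    """Keep the n_harmonics strongest FFT components of a de-meaned
    series and extrapolate them forward by `horizon` steps."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    mean = series.mean()
    spec = np.fft.rfft(series - mean)
    keep = np.argsort(np.abs(spec))[-n_harmonics:]   # dominant bins
    freqs = np.fft.rfftfreq(n)
    t = np.arange(n, n + horizon)
    out = np.full(horizon, mean)
    for k in keep:
        amp = 2.0 * np.abs(spec[k]) / n
        out += amp * np.cos(2.0 * np.pi * freqs[k] * t + np.angle(spec[k]))
    return out

# Four days of a synthetic hourly 24 h cycle, extrapolated one day ahead:
hours = np.arange(96)
so2 = np.cos(2.0 * np.pi * hours / 24.0)
fcst = harmonic_forecast(so2, horizon=24)
```

For a pure periodic signal whose period divides the record length, the extrapolation continues the cycle almost exactly; real SO2 series would of course add noise and trend terms.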
Gamez-Mendoza, Liliana; Terban, Maxwell W.; Billinge, Simon J. L.; ...
2017-04-13
The particle size of supported catalysts is a key characteristic for determining structure–property relationships. It is a challenge to obtain this information accurately and in situ using crystallographic methods owing to the small size of such particles (<5 nm) and the fact that they are supported. In this work, the pair distribution function (PDF) technique was used to obtain the particle size distribution of supported Pt catalysts as they grow under typical synthesis conditions. The PDF of Pt nanoparticles grown on zeolite X was isolated and refined using two models: a monodisperse spherical model (single particle size) and a lognormal size distribution. The results were compared and validated using scanning transmission electron microscopy (STEM) results. Both models describe the same trends in average particle size with temperature, but the results of the number-weighted lognormal size distributions can also accurately describe the mean size and the width of the size distributions obtained from STEM. Since the PDF yields crystallite sizes, these results suggest that the grown Pt nanoparticles are monocrystalline. As a result, this work shows that refinement of the PDF of small supported monocrystalline nanoparticles can yield accurate mean particle sizes and distributions.
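The link between PDF attenuation and crystallite size comes from the shape (characteristic) function of the particle; for a sphere of diameter d it has a standard closed form, which is why a finite particle's PDF decays to zero by r = d:

```python
def sphere_envelope(r, d):
    """Characteristic function of a sphere of diameter d: the factor by
    which the PDF of an infinite crystal is attenuated for a particle
    of that size. It vanishes for pair distances r >= d."""
    if r >= d:
        return 0.0
    x = r / d
    return 1.0 - 1.5 * x + 0.5 * x ** 3

print(sphere_envelope(0.0, 5.0))  # 1.0: no attenuation at r = 0
print(sphere_envelope(5.0, 5.0))  # 0.0: no pairs beyond the diameter
```

For a polydisperse (e.g. lognormal) model like the one refined in the paper, the envelope becomes the number-weighted average of this function over the particle-diameter distribution.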
A vector scanning processing technique for pulsed laser velocimetry
NASA Technical Reports Server (NTRS)
Wernet, Mark P.; Edwards, Robert V.
1989-01-01
Pulsed laser sheet velocimetry yields nonintrusive measurements of two-dimensional velocity vectors across an extended planar region of a flow. Current processing techniques offer high-precision (1%) velocity estimates, but can require several hours of processing time on specialized array processors. Under some circumstances, a simple, fast, less accurate (approx. 5%) data reduction technique which also gives unambiguous velocity vector information is acceptable. A direct space domain processing technique was examined and found to be far superior to the other techniques known in achieving the objectives listed above. It employs a new data coding and reduction technique, where the particle time history information is used directly. Further, it has no 180 deg directional ambiguity. A complex convection vortex flow was recorded and completely processed in under 2 minutes on an 80386-based PC, producing a 2-D velocity vector map of the flow field. Hence, using this new space domain vector scanning (VS) technique, pulsed laser velocimetry data can be reduced quickly and reasonably accurately, without specialized array processing hardware.
Empirical trials of plant field guides.
Hawthorne, W D; Cable, S; Marshall, C A M
2014-06-01
We designed 3 image-based field guides to tropical forest plant species in Ghana, Grenada, and Cameroon and tested them with 1095 local residents and 20 botanists in the United Kingdom. We compared users' identification accuracy with different image formats, including drawings, specimen photos, living plant photos, and paintings. We compared users' accuracy with the guides to their accuracy with only their prior knowledge of the flora. We asked respondents to score each format for usability, beauty, and how much they would pay for it. Prior knowledge of plant names was generally low (<22%). With a few exceptions, identification accuracy did not differ significantly among image formats. In Cameroon, users identifying sterile Cola species achieved 46-56% accuracy across formats; identification was most accurate with living plant photos. Botanists in the United Kingdom accurately identified 82-93% of the same Cameroonian species; identification was most accurate with specimens. In Grenada, users accurately identified 74-82% of plants; drawings yielded significantly less accurate identifications than paintings and photos of living plants. In Ghana, users accurately identified 85% of plants. Digital color photos of living plants ranked high for beauty, usability, and what users would pay. Black and white drawings ranked low. Our results show the potential and limitations of the use of field guides and nonspecialists to identify plants, for example, in conservation applications. 
We recommend authors of plant field guides use the cheapest or easiest illustration format because image type had limited bearing on accuracy; match the type of illustration to the most likely use of the guide for slight improvements in accuracy; avoid black and white formats unless the audience is experienced at interpreting illustrations or keeping costs low is imperative; discourage false-positive identifications, which were common; and encourage users to ask an expert or use a herbarium for groups that are difficult to identify. © 2014 Society for Conservation Biology.
Ramstein, Guillaume P.; Evans, Joseph; Kaeppler, Shawn M.; ...
2016-02-11
Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass, and meet the goals of a substantial displacement of petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families’ parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, the account of linkage disequilibrium among markers, offer valuable opportunities for improving prediction procedures in GS. Furthermore, some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs.
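A minimal sketch of the two ingredients discussed: a marker-correlation transformation of the genotype matrix (our simplified reading of the paper's preprocessing, not its exact procedure) and a ridge-regression predictor standing in for GBLUP-type genomic prediction:

```python
import numpy as np

def correlation_transform(X):
    """Transform a (families x markers) matrix through the marker
    correlation matrix, X_t = X_c @ C, pooling information across
    markers in linkage disequilibrium (simplified reading of the
    paper's marker-data transformation)."""
    Xc = X - X.mean(axis=0)
    C = np.corrcoef(Xc, rowvar=False)
    return Xc @ C

def ridge_predict(X_train, y_train, X_test, lam=1.0):
    """Ridge-regression genomic prediction (a GBLUP-like stand-in):
    beta = (X'X + lam*I)^-1 X'y."""
    p = X_train.shape[1]
    beta = np.linalg.solve(X_train.T @ X_train + lam * np.eye(p),
                           X_train.T @ y_train)
    return X_test @ beta

# Synthetic toy data (not switchgrass genotypes):
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))
y = X @ rng.normal(size=8) + 0.1 * rng.standard_normal(60)
pred = ridge_predict(X[:50], y[:50], X[50:])
```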
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dyar, M. Darby; McCanta, Molly; Breves, Elly
2016-03-01
Pre-edge features in the K absorption edge of X-ray absorption spectra are commonly used to predict Fe3+ valence state in silicate glasses. However, this study shows that using the entire spectral region from the pre-edge into the extended X-ray absorption fine-structure region provides more accurate results when combined with multivariate analysis techniques. The least absolute shrinkage and selection operator (lasso) regression technique yields %Fe3+ values that are accurate to ±3.6% absolute when the full spectral region is employed. This method can be used across a broad range of glass compositions, is easily automated, and is demonstrated to yield accurate results from different synchrotrons. It will enable future studies involving X-ray mapping of redox gradients on standard thin sections at 1 × 1 μm pixel sizes.
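The lasso's appeal for full-spectrum regression is that its L1 penalty zeroes the weights of uninformative energy channels. A self-contained sketch with a tiny ISTA solver and synthetic spectra, illustrative only and not the study's data or calibration:

```python
import numpy as np

def lasso_fit(X, y, alpha=0.1, n_iter=2000):
    """Tiny ISTA solver for the lasso:
        minimize ||y - X w||^2 / (2 n) + alpha * ||w||_1
    The soft-threshold step drives weights of uninformative spectral
    channels to exactly zero."""
    n, p = X.shape
    w = np.zeros(p)
    L = np.linalg.norm(X, 2) ** 2 / n   # Lipschitz constant of the gradient
    for _ in range(n_iter):
        w = w - (X.T @ (X @ w - y) / n) / L       # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - alpha / L, 0.0)  # shrink
    return w

# Synthetic "spectra": only 3 of 200 channels carry the Fe3+ signal.
rng = np.random.default_rng(42)
spectra = rng.normal(size=(40, 200))
true_w = np.zeros(200)
true_w[[10, 50, 120]] = [8.0, -6.0, 4.0]
fe3 = spectra @ true_w + 0.3 * rng.standard_normal(40)
w = lasso_fit(spectra, fe3)
```

After fitting, most of the 200 coefficients are exactly zero and the informative channels dominate, which is the behavior exploited when regressing %Fe3+ on the full spectral region.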
Application of JAERI quantum molecular dynamics model for collisions of heavy nuclei
NASA Astrophysics Data System (ADS)
Ogawa, Tatsuhiko; Hashimoto, Shintaro; Sato, Tatsuhiko; Niita, Koji
2016-06-01
The quantum molecular dynamics (QMD) model incorporated into the general-purpose radiation transport code PHITS was revised for accurate prediction of fragment yields in peripheral collisions. For more accurate simulation of peripheral collisions, the stability of nuclei in their ground state was improved and the algorithm to reject invalid events was modified. In-medium correction of nucleon-nucleon cross sections was also considered. To clarify the effect of these improvements on fragmentation of heavy nuclei, the new QMD model coupled with a statistical decay model was used to calculate fragment production cross sections of Ag and Au targets, which were compared with data from earlier measurements. It is shown that the revised version can predict cross sections more accurately.
NASA Astrophysics Data System (ADS)
Zhu, Hao; Fan, Jiangli; Mu, Huiying; Zhu, Tao; Zhang, Zhen; Du, Jianjun; Peng, Xiaojun
2016-10-01
Polarity-sensitive fluorescent probes are powerful chemical tools for studying biomolecular structures and activities both in vitro and in vivo. However, the lack of “off-on” polarity-sensing probes has limited the accurate monitoring of biological processes that involve an increase in local hydrophilicity. Here, we design and synthesize a series of “off-on” polarity-sensitive fluorescent probes (the BP series) consisting of the difluoroboron dipyrromethene (BODIPY) fluorophore connected to a quaternary ammonium moiety via different carbon linkers. All these probes showed low fluorescence quantum yields in nonpolar solution but became highly fluorescent in polar media. BP-2, which contains a two-carbon linker and a trimethyl quaternary ammonium, displayed a fluorescence intensity and quantum yield that were both linearly correlated with solvent polarity. In addition, BP-2 exhibited high sensitivity and selectivity for polarity over other environmental factors and a variety of biologically relevant species. BP-2 can be synthesized readily via an unusual Mannich reaction followed by methylation. Using electrochemistry combined with theoretical calculations, we demonstrated that the “off-on” sensing behavior of BP-2 is primarily due to the polarity-dependent donor-excited photoinduced electron transfer (d-PET) effect. Live-cell imaging established that BP-2 enables the detection of local hydrophilicity within lysosomes under conditions of lysosomal dysfunction.
Lee, Shang-Hsuan; Sato, Yusuke; Hyodo, Mamoru; Harashima, Hideyoshi
2016-01-01
The surface topology of ligands on liposomes is an important factor in active targeting in drug delivery systems. Accurately evaluating the density of anchors and bioactive functional ligands on a liposomal surface is critical for ensuring the efficient delivery of liposomes. For evaluating surface ligand density, it is necessary to account for the fact that on ligand-modified liposomal surfaces some anchors are attached to ligands while others are not. To distinguish between these situations, a key parameter, surface anchor density, was introduced to specify the total amount of anchors on the liposomal surface. Second, the parameter reaction yield was introduced to identify the amount of ligand-attached anchors among the total anchors, since the conjugation efficiency is neither constant nor 100%. Combining these independent parameters, we derived: incorporation ratio = surface anchor density × reaction yield. The term incorporation ratio defines the surface ligand density. Since the surface anchor density represents the density of polyethylene glycol (PEG) on the surface in most cases, it also determines liposomal function. It is possible to accurately characterize various PEG and ligand densities and to define the surface topologies. In conclusion, this quantitative methodology can standardize the liposome preparation process and qualify the modified liposomal surfaces.
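The paper's central relation is a product of two fractions and is trivial to encode; the example numbers below are ours, not measured values:

```python
def incorporation_ratio(surface_anchor_density, reaction_yield):
    """incorporation ratio = surface anchor density x reaction yield.
    surface_anchor_density: fraction of lipids carrying a (PEG) anchor;
    reaction_yield: fraction of anchors actually conjugated to ligand."""
    assert 0.0 <= surface_anchor_density <= 1.0
    assert 0.0 <= reaction_yield <= 1.0
    return surface_anchor_density * reaction_yield

# e.g. 5 mol% PEG anchors conjugated at 60% efficiency:
print(round(incorporation_ratio(0.05, 0.60), 4))   # 0.03, i.e. 3 mol% ligand
```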
New NIR Calibration Models Speed Biomass Composition and Reactivity Characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-09-01
Obtaining accurate chemical composition and reactivity (measures of carbohydrate release and yield) information for biomass feedstocks in a timely manner is necessary for the commercialization of biofuels. This highlight describes NREL's work to use near-infrared (NIR) spectroscopy and partial least squares multivariate analysis to develop calibration models to predict the feedstock composition and the release and yield of soluble carbohydrates generated by a bench-scale dilute acid pretreatment and enzymatic hydrolysis assay. This highlight is being developed for the September 2015 Alliance S&T Board meeting.
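Partial least squares regression of the kind used for NIR calibration can be sketched with a bare-bones NIPALS PLS1 implementation; no spectral preprocessing or cross-validated component selection is included, so this is a sketch rather than NREL's actual model:

```python
import numpy as np

def pls1_fit(X, y, n_components=3):
    """Minimal NIPALS PLS1: returns coefficients B such that
    (X - X.mean(0)) @ B approximates y - y.mean()."""
    Xk = X - X.mean(axis=0)
    yk = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        w = w / np.linalg.norm(w)      # weight vector
        t = Xk @ w                     # scores
        tt = t @ t
        p = Xk.T @ t / tt              # X loadings
        q = (yk @ t) / tt              # y loading
        Xk = Xk - np.outer(t, p)       # deflate
        yk = yk - q * t
        W.append(w); P.append(p); Q.append(q)
    W = np.column_stack(W); P = np.column_stack(P); Q = np.array(Q)
    return W @ np.linalg.solve(P.T @ W, Q)

# Toy calibration: 30 "spectra" with 5 "bands" and a noiseless property.
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0])
B = pls1_fit(X, y, n_components=5)
pred = (X - X.mean(axis=0)) @ B + y.mean()
```

With as many components as predictors, PLS1 reproduces the least-squares fit exactly; in a real calibration the component count would be chosen by cross-validation.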
Multi-scale Modeling of Plasticity in Tantalum.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Hojun; Battaile, Corbett Chandler.; Carroll, Jay
In this report, we present a multi-scale computational model to simulate plastic deformation of tantalum, together with validating experiments. At the atomistic/dislocation level, dislocation kink-pair theory is used to formulate temperature- and strain-rate-dependent constitutive equations. The kink-pair theory is calibrated to available data from single crystal experiments to produce accurate and convenient constitutive laws. The model is then implemented into a BCC crystal plasticity finite element method (CP-FEM) model to predict temperature- and strain-rate-dependent yield stresses of single and polycrystalline tantalum, which are compared with existing experimental data from the literature. Furthermore, classical continuum constitutive models describing temperature- and strain-rate-dependent flow behaviors are fit to the yield stresses obtained from the CP-FEM polycrystal predictions. The model is then used to conduct hydrodynamic simulations of the Taylor cylinder impact test and compared with experiments. In order to validate the proposed tantalum CP-FEM model with experiments, we introduce a method for quantitative comparison of CP-FEM models with various experimental techniques. To mitigate the effects of unknown subsurface microstructure, tantalum tensile specimens with a pseudo-two-dimensional grain structure and grain sizes on the order of millimeters are used. A technique combining electron backscatter diffraction (EBSD) and high-resolution digital image correlation (HR-DIC) is used to measure the texture and sub-grain strain fields upon uniaxial tensile loading at various applied strains. Deformed specimens are also analyzed with optical profilometry measurements to obtain out-of-plane strain fields. These high-resolution measurements are directly compared with large-scale CP-FEM predictions. This computational method directly links fundamental dislocation physics to plastic deformation at the grain scale and to engineering-scale applications. Furthermore, direct and quantitative comparisons between experimental measurements and simulations show that the proposed model accurately captures plasticity in deformation of polycrystalline tantalum.
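Kink-pair theory leads to flow-stress laws of a standard phenomenological form, in which thermal activation erodes the Peierls barrier with increasing temperature and decreasing strain rate. A hedged sketch using that generic form with illustrative constants, not the calibrated tantalum model:

```python
import math

def kinkpair_yield_stress(T, strain_rate, sigma0=1000.0, dH0=1.0,
                          eps0=1.0e7, p=0.5, q=1.5):
    """Thermally activated flow stress in the common kink-pair form:
        sigma = sigma0 * (1 - x**(1/q))**(1/p),
        x = (k_B*T/dH0) * ln(eps0/strain_rate)
    sigma0 (MPa) is the zero-temperature stress, dH0 (eV) the kink-pair
    activation enthalpy, eps0 a reference strain rate. All constants
    here are illustrative placeholders."""
    k_B = 8.617e-5                       # Boltzmann constant, eV/K
    x = (k_B * T / dH0) * math.log(eps0 / strain_rate)
    if x >= 1.0:
        return 0.0                       # thermal activation exhausts the barrier
    return sigma0 * (1.0 - x ** (1.0 / q)) ** (1.0 / p)

# Yield stress falls with temperature and rises with strain rate:
print(kinkpair_yield_stress(77.0, 1e-3) > kinkpair_yield_stress(300.0, 1e-3))   # True
print(kinkpair_yield_stress(300.0, 1.0) > kinkpair_yield_stress(300.0, 1e-3))   # True
```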
NASA Technical Reports Server (NTRS)
Price, Kevin P.; Nellis, M. Duane
1996-01-01
The purpose of this project was to develop a practical protocol that employs multitemporal remotely sensed imagery, integrated with environmental parameters, to model and monitor agricultural and natural resources in the High Plains Region of the United States. The value of this project would be extended throughout the region via workshops targeted at carefully selected audiences and designed to transfer remote sensing technology and the methods and applications developed. Implementation of such a protocol using remotely sensed satellite imagery is critical for addressing many issues of regional importance, including: (1) Prediction of rural land use/land cover (LULC) categories within a region; (2) Use of rural LULC maps for successive years to monitor change; (3) Crop types derived from LULC maps as important inputs to water consumption models; (4) Early prediction of crop yields; (5) Multi-date maps of crop types to monitor patterns related to crop change; (6) Knowledge of crop types to monitor condition and improve prediction of crop yield; (7) More precise models of crop types and conditions to improve agricultural economic forecasts; (8) Prediction of biomass for estimating vegetation production, soil protection from erosion forces, nonpoint source pollution, wildlife habitat quality and other related factors; (9) Crop type and condition information to more accurately predict production of biogeochemicals such as CO2, CH4, and other greenhouse gases that are inputs to global climate models; (10) Provide information regarding limiting factors (i.e., economic constraints of pumping, fertilizing, etc.) used in conjunction with other factors, such as changes in climate, for predicting changes in rural LULC; (11) Accurate prediction of rural LULC used to assess the effectiveness of government programs such as the U.S.
Soil Conservation Service (SCS) Conservation Reserve Program; and (12) Prediction of water demand based on rural LULC that can be related to rates of draw-down of underground water supplies.
Purified enzymes improve isolation and characterization of the adult thymic epithelium.
Seach, Natalie; Wong, Kahlia; Hammett, Maree; Boyd, Richard L; Chidgey, Ann P
2012-11-30
The reproducible isolation and accurate characterization of thymic epithelial cell (TEC) subsets is of critical importance to the ongoing study of thymopoiesis and its functional decline with age. The study of adult TEC, however, is significantly hampered due to the severely low stromal to hematopoietic cell ratio. Non-biased digestion and enrichment protocols are thus essential to ensure optimal cell yield and accurate representation of stromal subsets, as close as possible to their in vivo representation. Current digestion protocols predominantly involve diverse, relatively impure enzymatic variants of crude collagenase and collagenase/dispase (col/disp) preparations, which have variable efficacy and are often suboptimal in their ability to mediate complete digestion of thymus tissue. To address these issues we compared traditional col/disp preparations with the latest panel of Liberase products that contain a blend of highly purified collagenase and neutral protease enzymes. Liberase enzymes revealed a more rapid, complete dissociation of thymus tissue; minimizing loss of viability and increasing recovery of thymic stromal cell (TSC) elements. In particular, the recovery and viability of TEC, notably the rare cortical subsets, were significantly enhanced with Liberase products containing medium to high levels of thermolysin. The improved stromal dissociation led to numerically increased TEC yield and total TEC RNA isolated from pooled digests of adult thymus. Furthermore, the increased recovery of TEC enhanced resolution and quantification of TEC subsets in both adult and aged mice, facilitating flow cytometric analysis on a per thymus basis. We further refined the adult TEC phenotype by correlating surface expression of known TEC markers, with expression of intracellular epithelial lineage markers, Keratin 5 and Keratin 8. 
The data reveal more extensive expression of K8 than previously recognized and indicates considerable heterogeneity still exists within currently defined adult TEC subsets. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Improta, L.; Operto, S.; Piromallo, C.; Valoroso, L.
2008-12-01
The Agri Valley is a Quaternary extensional basin located in the Southern Apennines range. This basin was struck by an M7 earthquake in 1857. In spite of extensive morphotectonic surveys and hydrocarbon exploration, major unsolved questions remain about the upper crustal structure, the recent tectonic evolution and the seismotectonics of the area. Most authors consider a SW-dipping normal-fault system bordering the basin to the East as the major seismogenic source. Alternatively, some authors ascribe the high seismogenic potential of the region to NE-dipping normal faults identified by morphotectonic surveys along the ridge bounding the basin to the West. These uncertainties mainly derive from the poor performance of commercial reflection profiling, which suffers from extreme structural complexity and unfavorable near-surface conditions. To overcome these drawbacks, ENI and Shell Italia carried out a non-conventional wide-aperture survey with densely spaced sources (60 m) and receivers (90 m). The 18-km-long wide-aperture profile crosses the basin, yielding a unique opportunity to get new insights into the crustal structure by using advanced imaging techniques. Here, we apply a two-step imaging procedure. We begin by determining multi-scale Vp images down to 2.5 km depth by using a non-linear traveltime tomographic technique able to cope with strongly heterogeneous media. Assessment of an accurate reference Vp model is indeed crucial for the subsequent application of a frequency-domain full-waveform inversion aimed at improving the spatial resolution of the velocity images. Frequency components of the data are then iteratively inverted from low to high frequency values in order to progressively incorporate smaller wavelength components into the model. Inversion results accurately image the shallow crust, yielding valuable constraints for a better understanding of the recent basin evolution and of the surrounding normal-fault systems.
UAV-Based Hyperspectral Remote Sensing for Precision Agriculture: Challenges and Opportunities
NASA Astrophysics Data System (ADS)
Angel, Y.; Parkes, S. D.; Turner, D.; Houborg, R.; Lucieer, A.; McCabe, M.
2017-12-01
Modern agricultural production relies on monitoring crop status by observing and measuring variables such as soil condition, plant health, fertilizer and pesticide effect, irrigation and crop yield. Managing all of these factors is a considerable challenge for crop producers. As such, providing integrated technological solutions that enable improved diagnostics of field condition to maximize profits, while minimizing environmental impacts, would be of much interest. Such challenges can be addressed by implementing remote sensing systems such as hyperspectral imaging to produce precise biophysical indicator maps across the various cycles of crop development. Recent progress in unmanned aerial vehicles (UAVs) has advanced traditional satellite-based capabilities, providing a capacity for high spatial, spectral and temporal resolution. However, while some hyperspectral sensors have been developed for use onboard UAVs, significant investment is required to develop a system and data processing workflow that retrieves accurately georeferenced mosaics. Here we explore the use of a pushbroom hyperspectral camera that is integrated on-board a multi-rotor UAV system to measure the surface reflectance in 272 distinct spectral bands across a wavelength range spanning 400-1000 nm, and outline the requirements for sensor calibration, integration onto a stable UAV platform enabling accurate positional data, flight planning, and development of data post-processing workflows for georeferenced mosaics. The provision of high-quality and geo-corrected imagery facilitates the development of metrics of vegetation health that can be used to identify potential problems such as production inefficiencies, diseases and nutrient deficiencies and other data-streams to enable improved crop management.
Immense opportunities remain to be exploited in the implementation of UAV-based hyperspectral sensing (and its combination with other imaging systems) to provide a transferable and scalable integrated framework for crop growth monitoring and yield prediction. Here we explore some of the challenges and issues in translating the available technological capacity into a useful and useable image collection and processing flow-path that enables these potential applications to be better realized.
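One simple example of the vegetation-health metrics such georeferenced imagery enables is a narrowband NDVI computed per pixel from red and near-infrared reflectance bands. The sketch below is illustrative only, not from the record above: the band centers (670 and 800 nm), the cube layout, and the helper names are all assumptions.

```python
# Hypothetical sketch: per-pixel narrowband NDVI from a calibrated
# hyperspectral reflectance cube laid out as cube[row][col][band].

def nearest_band(wavelengths, target_nm):
    """Index of the band whose center wavelength is closest to target_nm."""
    return min(range(len(wavelengths)),
               key=lambda i: abs(wavelengths[i] - target_nm))

def ndvi_map(cube, wavelengths, red_nm=670.0, nir_nm=800.0):
    """NDVI = (NIR - red) / (NIR + red), computed for every pixel."""
    r = nearest_band(wavelengths, red_nm)
    n = nearest_band(wavelengths, nir_nm)
    return [[(px[n] - px[r]) / (px[n] + px[r]) for px in row]
            for row in cube]
```

For healthy vegetation (low red, high NIR reflectance) this yields values near 0.8; bare soil sits near zero. A real pipeline would of course operate on a full 272-band mosaic rather than this toy cube.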
Improved sample management in the cylindrical-tube microelectrophoresis method
NASA Technical Reports Server (NTRS)
Smolka, A. J. K.
1980-01-01
A modification to an analytical microelectrophoresis system is described that improves the manipulation of the sample particles and fluid. The apparatus modification and improved operational procedure should yield more accurate measurements of particle mobilities and permit less skilled operators to use the apparatus.
Automated image analysis of the severity of foliar citrus canker symptoms
USDA-ARS?s Scientific Manuscript database
Citrus canker (caused by Xanthomonas citri subsp. citri) is a destructive disease, reducing yield and rendering fruit unfit for fresh sale. Accurate assessment of the severity of citrus canker and other diseases is needed for several purposes, including monitoring epidemics and evaluation of germplasm. ...
Water and wastewater infrastructure systems represent a major capital investment; utilities must ensure they are getting the highest yield possible on their investment, both in terms of dollars and water quality. Accurate information related to equipment, pipe characteristics, l...
Young's moduli of carbon materials investigated by various classical molecular dynamics schemes
NASA Astrophysics Data System (ADS)
Gayk, Florian; Ehrens, Julian; Heitmann, Tjark; Vorndamme, Patrick; Mrugalla, Andreas; Schnack, Jürgen
2018-05-01
For many applications, classical carbon potentials together with classical molecular dynamics are employed to calculate structures and physical properties of carbon-based materials where quantum mechanical methods fail, either due to excessive size, irregular structure or long-time dynamics. Although such potentials, as for instance implemented in LAMMPS, yield reasonably accurate bond lengths and angles for several carbon materials such as graphene, it is not clear how accurate they are in terms of mechanical properties such as Young's moduli. We performed large-scale classical molecular dynamics investigations of three carbon-based materials using the various potentials implemented in LAMMPS as well as the EDIP potential of Marks. We show how the Young's moduli vary with the classical potentials and compare to experimental results. Since classical descriptions of carbon are bound to be approximations, it is not surprising that different realizations yield differing results. One should therefore carefully check for which observables a certain potential is suited. Our aim is to contribute to such a clarification.
A model-updating procedure to simulate piezoelectric transducers accurately.
Piranda, B; Ballandras, S; Steichen, W; Hecart, B
2001-09-01
The use of numerical calculations based on finite element methods (FEM) has yielded significant improvements in the simulation and design of piezoelectric transducers utilized in acoustic imaging. However, the ultimate precision of such models is directly controlled by the accuracy of material characterization. The present work is dedicated to the development of a model-updating technique adapted to the problem of piezoelectric transducers. The updating process is applied using the experimental admittance of a given structure for which a finite element analysis is performed. The mathematical developments are reported and then applied to update the entries of a FEM of a two-layer structure (a PbZrTi-PZT-ridge glued on a backing) for which measurements were available. The efficiency of the proposed approach is demonstrated, yielding the definition of a new set of constants well adapted to predicting the structure response accurately. An improvement of the proposed approach, consisting of updating the material coefficients not only on the admittance but also on the impedance data, is finally discussed.
Beyond Born-Mayer: Improved models for short-range repulsion in ab initio force fields
Van Vleet, Mary J.; Misquitta, Alston J.; Stone, Anthony J.; ...
2016-06-23
Short-range repulsion within inter-molecular force fields is conventionally described by either Lennard-Jones or Born-Mayer forms. Despite their widespread use, these simple functional forms are often unable to describe the interaction energy accurately over a broad range of inter-molecular distances, thus creating challenges in the development of ab initio force fields and potentially leading to decreased accuracy and transferability. Herein, we derive a novel short-range functional form based on a simple Slater-like model of overlapping atomic densities and an iterated stockholder atom (ISA) partitioning of the molecular electron density. We demonstrate that this Slater-ISA methodology yields a more accurate, transferable, and robust description of the short-range interactions at minimal additional computational cost compared to standard Lennard-Jones or Born-Mayer approaches. Lastly, we show how this methodology can be adapted to yield the standard Born-Mayer functional form while still retaining many of the advantages of the Slater-ISA approach.
Calculations of separated 3-D flows with a pressure-staggered Navier-Stokes equations solver
NASA Technical Reports Server (NTRS)
Kim, S.-W.
1991-01-01
A Navier-Stokes equations solver based on a pressure correction method with a pressure-staggered mesh and calculations of separated three-dimensional flows are presented. It is shown that the velocity pressure decoupling, which occurs when various pressure correction algorithms are used for pressure-staggered meshes, is caused by the ill-conditioned discrete pressure correction equation. The use of a partial differential equation for the incremental pressure eliminates the velocity pressure decoupling mechanism by itself and yields accurate numerical results. Example flows considered are a three-dimensional lid driven cavity flow and a laminar flow through a 90 degree bend square duct. For the lid driven cavity flow, the present numerical results compare more favorably with the measured data than those obtained using a formally third order accurate quadratic upwind interpolation scheme. For the curved duct flow, the present numerical method yields a grid independent solution with a very small number of grid points. The calculated velocity profiles are in good agreement with the measured data.
NASA Technical Reports Server (NTRS)
Kapania, Rakesh K.; Liu, Youhua
2000-01-01
At the preliminary design stage of a wing structure, an efficient simulation, one needing little computation but yielding adequately accurate results for various response quantities, is essential in the search for an optimal design in a vast design space. In the present paper, methods using sensitivities up to 2nd order, and the direct application of neural networks, are explored. The example problem is to determine the natural frequencies of a wing given the shape variables of the structure. It is shown that when sensitivities cannot be obtained analytically, the finite difference approach is usually more reliable than a semi-analytical approach, provided an appropriate step size is used. The use of second order sensitivities is shown to yield much better results than the case where only the first order sensitivities are used. When neural networks are trained to relate the wing natural frequencies to the shape variables, a negligible computation effort is needed to accurately determine the natural frequencies of a new design.
Evaluation of the pulse-contour method of determining stroke volume in man.
NASA Technical Reports Server (NTRS)
Alderman, E. L.; Branzi, A.; Sanders, W.; Brown, B. W.; Harrison, D. C.
1972-01-01
The pulse-contour method for determining stroke volume has been employed as a continuous rapid method of monitoring the cardiovascular status of patients. Twenty-one patients with ischemic heart disease and 21 patients with mitral valve disease were subjected to a variety of hemodynamic interventions. The pulse-contour estimations, using three different formulas derived by Warner, Kouchoukos, and Herd, were compared with indicator-dilution outputs. A comparison of the results of the two methods for determining stroke volume yielded correlation coefficients ranging from 0.59 to 0.84. The better performing Warner formula yielded a coefficient of variation of about 20%. The type of hemodynamic interventions employed did not significantly affect the results using the pulse-contour method. Although the correlation of the pulse-contour and indicator-dilution stroke volumes is high, the coefficient of variation is such that small changes in stroke volume cannot be accurately assessed by the pulse-contour method. However, the simplicity and rapidity of this method compared to determination of cardiac output by Fick or indicator-dilution methods make it a potentially useful adjunct for monitoring critically ill patients.
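The two summary statistics quoted in this record, the Pearson correlation between paired stroke-volume estimates and the coefficient of variation, are standard and easy to compute. A minimal sketch (the data in the test are invented, not the study's measurements):

```python
# Generic implementations of the two comparison statistics reported
# above; nothing here is specific to the pulse-contour study.
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def coeff_of_variation(values):
    """Sample standard deviation expressed as a fraction of the mean."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return sd / mean
```

A correlation near 0.8 with a coefficient of variation near 20%, as reported, is exactly the combination that makes trends trackable but small individual changes unresolvable.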
Spatially averaged flow over a wavy boundary revisited
McLean, S.R.; Wolfe, S.R.; Nelson, J.M.
1999-01-01
Vertical profiles of streamwise velocity measured over bed forms are commonly used to deduce boundary shear stress for the purpose of estimating sediment transport. These profiles may be derived locally or from some sort of spatial average. Arguments for using the latter procedure are based on the assumption that spatial averaging of the momentum equation effectively removes local accelerations from the problem. Using analogies based on steady, uniform flows, it has been argued that the spatially averaged velocity profiles are approximately logarithmic and can be used to infer values of boundary shear stress. This technique of using logarithmic profiles is investigated using detailed laboratory measurements of flow structure and boundary shear stress over fixed two-dimensional bed forms. Spatial averages over the length of the bed form of mean velocity measurements at constant distances from the mean bed elevation yield vertical profiles that are highly logarithmic even though the effect of the bottom topography is observed throughout the water column. However, logarithmic fits of these averaged profiles do not yield accurate estimates of the measured total boundary shear stress. Copyright 1999 by the American Geophysical Union.
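The inference step examined above, fitting a logarithmic profile to spatially averaged velocities and backing out a boundary shear stress, can be sketched generically. This is a textbook law-of-the-wall fit under the usual assumptions (von Kármán constant κ ≈ 0.41, τ = ρu*²), not the authors' exact procedure, and the variable names are illustrative.

```python
# Least-squares fit of u(z) = (u*/kappa) * ln(z / z0) to a velocity
# profile, then tau = rho * u*^2. A standard log-law fit, offered as
# a sketch of the technique the record discusses.
import math

KAPPA = 0.41  # von Karman constant

def fit_log_law(z, u):
    """Fit u = a + b*ln(z) by least squares; return (u_star, z0)."""
    x = [math.log(zi) for zi in z]
    n = len(x)
    mx, mu = sum(x) / n, sum(u) / n
    b = sum((xi - mx) * (ui - mu) for xi, ui in zip(x, u)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = mu - b * mx
    u_star = KAPPA * b        # slope of u vs ln(z) is u*/kappa
    z0 = math.exp(-a / b)     # roughness height where u -> 0
    return u_star, z0

def boundary_shear_stress(u_star, rho=1000.0):
    """tau = rho * u*^2, with rho in kg/m^3 (water by default)."""
    return rho * u_star ** 2
```

The paper's point is precisely that this fit can look excellent (highly logarithmic averaged profiles) while the resulting τ still misestimates the measured total boundary shear stress over bed forms.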
Experimental Determination of DT Yield in High Current DD Dense Plasma Focii
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lowe, D. R.; Hagen, E. C.; Meehan, B. T.
2013-06-18
Dense Plasma Focii (DPF), which utilize deuterium gas to produce 2.45 MeV neutrons, may in fact also produce DT fusion neutrons at 14.1 MeV due to the triton production in the DD reaction. If beam-target fusion is the primary producer of fusion neutrons in DPFs, it is possible that ejected tritons from the first pinch will interact with the second pinch, and so forth. The 2 MJ DPF at National Security Technologies’ Losee Road Facility is able to, and has produced, over 1E12 DD neutrons per pulse, allowing an accurate measurement of the DT/DD ratio. The DT/DD ratio was experimentally verified by using the (n,2n) reaction in a large piece of praseodymium metal, which has a threshold reaction of 8 MeV, and is widely used as a DT yield measurement system. The DT/DD ratio was experimentally determined for over 100 shots, and then compared to independent variables such as tube pressure, number of pinches per shot, total current, pinch current and charge voltage.
NASA Technical Reports Server (NTRS)
Freedman, M. I.; Sipcic, S.; Tseng, K.
1985-01-01
A frequency domain Green's Function Method for unsteady supersonic potential flow around complex aircraft configurations is presented. The focus is on the supersonic range wherein the linear potential flow assumption is valid. In this range the effects of the nonlinear terms in the unsteady supersonic compressible velocity potential equation are negligible and therefore these terms are omitted. The Green's function method is employed in order to convert the potential flow differential equation into an integral one. This integral equation is then discretized, through a standard finite element technique, to yield a linear algebraic system of equations relating the unknown potential to its prescribed co-normalwash (boundary condition) on the surface of the aircraft. The arbitrary complex aircraft configuration (e.g., finite-thickness wing, wing-body-tail) is discretized into hyperboloidal (twisted quadrilateral) panels. The potential and co-normalwash are assumed to vary linearly within each panel. The long range goal is to develop a comprehensive theory for unsteady supersonic potential aerodynamics which is capable of yielding accurate results even in the low supersonic (i.e., high transonic) range.
NASA Astrophysics Data System (ADS)
Sjouwerman, Loránt O.; Pihlström, Ylva M.; Rich, R. Michael; Morris, Mark R.; Claussen, Mark J.
2017-01-01
A radio survey of red giant SiO sources in the inner Galaxy and bulge is not hindered by extinction. Accurate stellar velocities (<1 km/s) are obtained with minimal observing time (<1 min) per source. Detecting over 20,000 SiO maser sources yields data comparable to optical surveys with the additional strength of a much more thorough coverage of the highly obscured inner Galaxy. Modeling of such a large sample would reveal dynamical structures and minority populations; the velocity structure can be compared to kinematic structures seen in molecular gas, complex orbit structure in the bar, or stellar streams resulting from recently infallen systems. Our Bulge Asymmetries and Dynamic Evolution (BAaDE) survey yields bright SiO masers suitable for follow-up Galactic orbit and parallax determination using VLBI. Here we outline our early VLA observations at 43 GHz in the northern bulge and Galactic plane (0
Measurement of tropospheric OH and HO2 by laser-induced fluorescence at low pressure
NASA Technical Reports Server (NTRS)
Stevens, P. S.; Mather, J. H.; Brune, W. H.
1994-01-01
The hydroxyl radical (OH) is the primary oxidant in the atmosphere, responsible for many photochemical reactions that affect both regional air quality and global climate change. Because of its high reactivity, abundances of OH in the troposphere are less than 1 part per trillion by volume (pptv) and thus difficult to measure accurately. This paper describes an instrument for the sensitive detection of OH in the troposphere using low-pressure laser-induced fluorescence. Ambient air is expanded into a low pressure detection chamber, and OH is both excited and detected using the A²Σ⁺(v′ = 0) → X²Π(v″ = 0) transition near 308 nm. An injector upstream of the detection axis allows for the addition of reagent NO to convert ambient HO2 to OH via the fast reaction HO2 + NO → OH + NO2. Using recent advances in laser and detector technologies, this prototype instrument is able to detect less than 1 × 10⁵ molecules/cm³ (0.004 pptv) of OH with an integration time of 30 s with negligible interferences.
A high-resolution integrated model of the National Ignition Campaign cryogenic layered experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, O. S.; Cerjan, C. J.; Marinak, M. M.
A detailed simulation-based model of the June 2011 National Ignition Campaign cryogenic DT experiments is presented. The model is based on integrated hohlraum-capsule simulations that utilize the best available models for the hohlraum wall, ablator, and DT equations of state and opacities. The calculated radiation drive was adjusted by changing the input laser power to match the experimentally measured shock speeds, shock merger times, peak implosion velocity, and bangtime. The crossbeam energy transfer model was tuned to match the measured time-dependent symmetry. Mid-mode mix was included by directly modeling the ablator and ice surface perturbations up to mode 60. Simulated experimental values were extracted from the simulation and compared against the experiment. Although by design the model is able to reproduce the 1D in-flight implosion parameters and low-mode asymmetries, it is not able to accurately predict the measured and inferred stagnation properties and levels of mix. In particular, the measured yields were 15%-40% of the calculated yields, and the inferred stagnation pressure is about 3 times lower than simulated.
Cunha, B C N; Belk, K E; Scanga, J A; LeValley, S B; Tatum, J D; Smith, G C
2004-07-01
This study was performed to validate previous equations and to develop and evaluate new regression equations for predicting lamb carcass fabrication yields using outputs from a lamb vision system-hot carcass component (LVS-HCC) and the lamb vision system-chilled carcass LM imaging component (LVS-CCC). Lamb carcasses (n = 149) were selected after slaughter, imaged hot using the LVS-HCC, and chilled for 24 to 48 h at -3 to 1 degrees C. Chilled carcass yield grades (YG) were assigned on-line by USDA graders and by expert USDA grading supervisors with unlimited time and access to the carcasses. Before fabrication, carcasses were ribbed between the 12th and 13th ribs and imaged using the LVS-CCC. Carcasses were fabricated into bone-in subprimal/primal cuts. Yields calculated included 1) saleable meat yield (SMY); 2) subprimal yield (SPY); and 3) fat yield (FY). On-line (whole-number) USDA YG accounted for 59, 58, and 64%; expert (whole-number) USDA YG explained 59, 59, and 65%; and expert (nearest-tenth) USDA YG accounted for 60, 60, and 67% of the observed variation in SMY, SPY, and FY, respectively. The best prediction equation developed in this trial using LVS-HCC output and hot carcass weight as independent variables explained 68, 62, and 74% of the variation in SMY, SPY, and FY, respectively. Addition of output from LVS-CCC improved the predictive accuracy of the equations; the combined output equations explained 72 and 66% of the variability in SMY and SPY, respectively. The accuracy and repeatability of LM area measurements made with the LVS-CCC also were assessed, and results suggested that use of the LVS-CCC provided reasonably accurate (R2 = 0.59) and highly repeatable (repeatability = 0.98) measurements of LM area.
Compared with USDA YG, use of the dual-component lamb vision system to predict cut yields of lamb carcasses improved accuracy and precision, suggesting that this system could have an application as an objective means for pricing carcasses in a value-based marketing system.
Alemu, A W; Vyas, D; Manafiazar, G; Basarab, J A; Beauchemin, K A
2017-08-01
The objectives of this study were to evaluate the relationship between residual feed intake (RFI; g/d) and enteric methane (CH4) production (g/kg DM) and to compare CH4 and carbon dioxide (CO2) emissions measured using respiration chambers (RC) and the GreenFeed emission monitoring (GEM) system (C-Lock Inc., Rapid City, SD). A total of 98 crossbred replacement heifers were group housed in 2 pens and fed barley silage ad libitum, and their individual feed intakes were recorded by 16 automated feeding bunks (GrowSafe, Airdrie, AB, Canada) for a period of 72 d to determine their phenotypic RFI. Heifers were ranked on the basis of phenotypic RFI, and 16 heifers (8 with low RFI and 8 with high RFI) were randomly selected for enteric CH4 and CO2 emissions measurement. Enteric CH4 and CO2 emissions of individual animals were measured over two 25-d periods using RC (2 d/period) and GEM systems (all days when not in chambers). During gas measurements metabolic BW tended to be greater (P ≤ 0.09) for high-RFI heifers but ADG tended (P = 0.09) to be greater for low-RFI heifers. As expected, high-RFI heifers consumed 6.9% more feed (P = 0.03) compared to their more efficient counterparts (7.1 vs. 6.6 kg DM/d). Average CH4 emissions were 202 and 222 g/d (P = 0.02) with the GEM system and 156 and 164 g/d (P = 0.40) with RC for the low- and high-RFI heifers, respectively. When adjusted for feed intake, CH4 yield (g/kg DMI) was similar for high- and low-RFI heifers (GEM: 27.7 and 28.5, P = 0.25; RC: 26.5 and 26.5, P = 0.99). However, CH4 yield differed between the 2 measurement techniques only for the high-RFI group (P = 0.01). Estimates of CO2 yield (g/kg DMI) also differed between the 2 techniques (P ≤ 0.03). Our study found that high- and low-efficiency cattle produce similar CH4 yield but different daily CH4 emissions. 
The 2 measurement techniques differ in estimating CH4 and CO2 emissions, partially because of differences in conditions (lower feed intakes of cattle while in chambers, fewer days measured in chambers) during measurement. We conclude that when intake of animals is known, the GEM system offers a robust and accurate means of estimating CH4 emissions from animals under field conditions.
Image analysis software as a strategy to improve the radiographic determination of fracture healing.
Duryea, Jeffrey; Evans, Christopher; Glatt, Vaida
2018-05-28
To develop and validate an unbiased, accurate, convenient and inexpensive means of determining when an osseous defect has healed and recovered sufficient strength to allow weight-bearing. A novel image processing software algorithm was created to analyze the radiographic images and produce a metric designed to reflect the bone strength. We used a rat femoral segmental defect model that provides a range of healing responses from complete union to non-union. Femora were examined by X-ray, micro-computed tomography (µCT) and mechanical testing. Accurate simulated radiographic images at different incident X-ray beam angles were produced from the µCT data files. The software-generated metric (SC) showed high levels of correlation with both the mechanical strength (τMech) and the polar moment of inertia (pMOI), with the mechanical testing data having the highest association. The optimization analysis yielded optimal oblique angles θB of 125° for τMech and 50° for pMOI. The Pearson's R values for the optimized model were 0.71 and 0.64 for τMech and pMOI, respectively. Further validation using true radiographs also demonstrated that the metric was accurate, and that the simulations were realistic. The preliminary findings suggest a very promising methodology to assess bone fracture healing using conventional radiography. With radiographs acquired at appropriate incident angles, it proved possible to calculate accurately the degree of healing and the mechanical strength of the bone. Further research is necessary to refine this approach and determine whether it translates to the human clinical setting.
Noise Power Spectrum Measurements in Digital Imaging With Gain Nonuniformity Correction.
Kim, Dong Sik
2016-08-01
The noise power spectrum (NPS) of an image sensor provides the spectral noise properties needed to evaluate sensor performance. Hence, measuring an accurate NPS is important. However, the fixed pattern noise from the sensor's nonuniform gain inflates the NPS, which is measured from images acquired by the sensor. Detrending the low-frequency fixed pattern is traditionally used to accurately measure the NPS. However, detrending methods cannot remove high-frequency fixed patterns. In order to efficiently correct the fixed pattern noise, a gain-correction technique based on the gain map can be used. The gain map is generated using the average of uniformly illuminated images without any objects. Increasing the number of images n for averaging can reduce the remaining photon noise in the gain map and yield accurate NPS values. However, for practical finite n, the photon noise also significantly inflates the NPS. In this paper, a nonuniform-gain image formation model is proposed and the performance of the gain correction is theoretically analyzed in terms of the signal-to-noise ratio (SNR). It is shown that the SNR is O(√n). An NPS measurement algorithm based on the gain map is then proposed for any given n. Under a weak nonuniform gain assumption, another measurement algorithm based on the image difference is also proposed. For real radiography image detectors, the proposed algorithms are compared with traditional detrending and subtraction methods, and it is shown that as few as two images (n = 1) can provide an accurate NPS because of the compensation constant (1 + 1/n).
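The gain-map correction and the (1 + 1/n) compensation constant described above can be sketched in a few lines. This is a toy illustration, not the paper's algorithm: it uses the plain sample variance (the integral of the NPS over frequency) rather than the full Fourier-domain spectrum, and all function names are invented.

```python
# Hedged sketch: estimate a gain map from n flat-field images, divide
# it out of a measurement, and compensate the residual photon noise
# the finite-n gain map leaks in by the (1 + 1/n) constant.

def gain_map(flat_images):
    """Average n uniformly illuminated images to estimate pixel gains."""
    n = len(flat_images)
    rows, cols = len(flat_images[0]), len(flat_images[0][0])
    return [[sum(img[r][c] for img in flat_images) / n
             for c in range(cols)] for r in range(rows)]

def gain_correct(image, gmap):
    """Divide out the fixed-pattern gain, pixel by pixel."""
    return [[image[r][c] / gmap[r][c] for c in range(len(image[0]))]
            for r in range(len(image))]

def compensated_variance(corrected, n):
    """Sample variance of the corrected image, divided by (1 + 1/n)
    to remove the photon-noise inflation from the finite-n gain map."""
    flat = [v for row in corrected for v in row]
    mean = sum(flat) / len(flat)
    var = sum((v - mean) ** 2 for v in flat) / (len(flat) - 1)
    return var / (1.0 + 1.0 / n)
```

The record's claim that two images suffice corresponds to n = 1 here: the compensation constant absorbs the doubled photon noise instead of requiring a large flat-field stack.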
SU-F-I-41: Calibration-Free Material Decomposition for Dual-Energy CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, W; Xing, L; Zhang, Q
2016-06-15
Purpose: To eliminate the tedious phantom calibration or manual region-of-interest (ROI) selection required in dual-energy CT material decomposition, we establish a new projection-domain material decomposition framework with incorporation of the energy spectrum. Methods: Similar to the case of dual-energy CT, the integral of the basis material image in our model is expressed as a linear combination of basis functions, which are polynomials of the high- and low-energy raw projection data. To yield the unknown coefficients of the linear combination, the proposed algorithm minimizes the quadratic error between the high- and low-energy raw projection data and the projection calculated using material images. We evaluate the algorithm with an iodine concentration numerical phantom at different dose and iodine concentration levels. The high- and low-energy x-ray spectra are estimated using an indirect transmission method. The derived monochromatic images are compared with the high- and low-energy CT images to demonstrate beam hardening artifact reduction. Quantitative results were measured and compared to the true values. Results: The differences between the true density values used for simulation and those obtained from the monochromatic images are 1.8%, 1.3%, 2.3%, and 2.9% for the dose levels from standard dose to 1/8 dose, and are 0.4%, 0.7%, 1.5%, and 1.8% for the four iodine concentration levels from 6 mg/mL to 24 mg/mL. For all of the cases, beam hardening artifacts, especially streaks shown between dense inserts, are almost completely removed in the monochromatic images. Conclusion: The proposed algorithm provides an effective way to yield material images and artifact-free monochromatic images at different dose levels without the need for phantom calibration or ROI selection. 
Furthermore, the approach also yields accurate results when the concentration of the iodine insert is very low, suggesting that the algorithm is robust with respect to the low-contrast scenario.
Ferragina, A; Cipolat-Gotet, C; Cecchinato, A; Bittante, G
2013-01-01
Cheese yield is an important technological trait in the dairy industry in many countries. The aim of this study was to evaluate the effectiveness of Fourier-transform infrared (FTIR) spectral analysis of fresh unprocessed milk samples for predicting cheese yield and nutrient recovery traits. A total of 1,264 model cheeses were obtained from 1,500-mL milk samples collected from individual Brown Swiss cows. Individual measurements of 7 new cheese yield-related traits were obtained from the laboratory cheese-making procedure, including the fresh cheese yield, total solid cheese yield, and the water retained in curd, all as a percentage of the processed milk, and nutrient recovery (fat, protein, total solids, and energy) in the curd as a percentage of the same nutrient contained in the milk. All individual milk samples were analyzed using a MilkoScan FT6000 over the spectral range from 5,000 to 900 wavenumbers (cm⁻¹). Two spectral acquisitions were carried out for each sample and the results were averaged before data analysis. Different chemometric models were fitted and compared with the aim of improving the accuracy of the calibration equations for predicting these traits. The most accurate predictions were obtained for total solid cheese yield and fresh cheese yield, which exhibited coefficients of determination between the predicted and measured values in cross-validation (1-VR) of 0.95 and 0.83, respectively. A less favorable result was obtained for water retained in curd (1-VR = 0.65). Promising results were obtained for recovered protein (1-VR = 0.81), total solids (1-VR = 0.86), and energy (1-VR = 0.76), whereas recovered fat exhibited a low accuracy (1-VR = 0.41). 
As FTIR spectroscopy is a rapid, cheap, high-throughput technique that is already used to collect standard milk recording data, these FTIR calibrations for cheese yield and nutrient recovery highlight additional potential applications of the technique in the dairy industry, especially for monitoring cheese-making processes and milk payment systems. In addition, the prediction models can be used to provide breeding organizations with information on new phenotypes for cheese yield and milk nutrient recovery, potentially allowing these traits to be enhanced through selection. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
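The 1-VR statistic quoted throughout this record is a cross-validated coefficient of determination between predicted and measured values. As a minimal sketch, substituting a single-predictor linear calibration for the multivariate chemometric models actually fitted to the spectra, leave-one-out 1-VR can be computed as follows (all names illustrative):

```python
# Leave-one-out cross-validated R^2, i.e. the "1-VR" statistic:
# 1 - (residual sum of squares of held-out predictions) / (total SS).

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((u - mx) * (v - my) for u, v in zip(x, y)) / \
        sum((u - mx) ** 2 for u in x)
    return my - b * mx, b

def one_minus_vr(x, y):
    """1-VR via leave-one-out: refit without point i, predict it."""
    preds = []
    for i in range(len(x)):
        xs, ys = x[:i] + x[i + 1:], y[:i] + y[i + 1:]
        a, b = fit_line(xs, ys)
        preds.append(a + b * x[i])
    my = sum(y) / len(y)
    ss_res = sum((p - t) ** 2 for p, t in zip(preds, y))
    ss_tot = sum((t - my) ** 2 for t in y)
    return 1.0 - ss_res / ss_tot
```

A 1-VR of 0.95, as reported for total solid cheese yield, means the held-out predictions explain 95% of the trait's variance; the 0.41 for recovered fat means most of its variance is unexplained.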
Wang, Yan-Bin; Hu, Yu-Zhong; Li, Wen-Le; Zhang, Wei-Song; Zhou, Feng; Luo, Zhi
2014-10-01
In the present paper, based on a fast near-infrared evaluation technique combined with H/CAMS software, a method to predict the yields of an atmospheric and vacuum distillation unit was developed. Firstly, a near-infrared (NIR) spectroscopy method for rapidly determining the true boiling point of crude oil was developed. With a commercially available crude oil spectroscopy database and experimental tests from Guangxi Petrochemical Company, a calibration model was established, using a topological method as the calibration approach. The model can be employed to predict the true boiling point of crude oil. Secondly, the true boiling point based on the NIR rapid assay was converted to the side-cut product yields of the atmospheric/vacuum distillation unit by the H/CAMS software. The predicted and actual yields of the distillation products naphtha, diesel, wax and residual oil were compared over a 7-month period. The results showed that the NIR rapid crude assay can predict the side-cut product yields accurately. The near-infrared analytic method for predicting yield has the advantages of fast analysis, reliable results, and easy online operation, and it can provide elementary data for refinery planning optimization and crude oil blending.
NASA Astrophysics Data System (ADS)
Rahayu, A. P.; Hartatik, T.; Purnomoadi, A.; Kurnianto, E.
2018-02-01
The aims of this study were to estimate the 305-day first-lactation milk yield of Indonesian Holstein cattle from cumulative monthly and bimonthly test-day records and to analyze the accuracy of the estimates. First-lactation records of 258 dairy cows from 2006 to 2014, consisting of 2571 monthly (MTDY) and 1281 bimonthly test-day yield (BTDY) records, were used. Milk yields were estimated by a regression method. Correlation coefficients between actual and estimated milk yield from cumulative MTDY were 0.70, 0.78, 0.83, 0.86, 0.89, 0.92, 0.94 and 0.96 for 2-9 months, respectively, whereas those from cumulative BTDY were 0.69, 0.81, 0.87 and 0.92 for 2, 4, 6 and 8 months, respectively. The goodness of fit of the regression models (R2) increased with the number of cumulative test days used. The use of 5 cumulative MTDY was considered sufficient for estimating 305-day first-lactation milk yield, with 80.6% accuracy and a 7% error percentage of estimation. Estimates from MTDY were more accurate than those from BTDY, with an error percentage 1.1 to 2% lower over the same period.
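The regression approach above can be sketched in miniature: 305-day yield is regressed on the cumulative yield of the first k test days, and the correlation between actual and fitted values measures accuracy. The data below are simulated stand-ins, not the study's records:

```python
import numpy as np

# Simulated sketch of estimating full-lactation yield from cumulative
# test-day yield via simple linear regression. All values are invented.
rng = np.random.default_rng(1)
n_cows = 200

cum_k = rng.normal(loc=2500, scale=300, size=n_cows)     # cumulative 5-month yield (kg)
full = 2.0 * cum_k + rng.normal(scale=250, size=n_cows)  # simulated 305-day yield (kg)

# Ordinary least squares: full = a + b * cum_k
b, a = np.polyfit(cum_k, full, 1)
est = a + b * cum_k

# Correlation between actual and estimated yield, as reported in the study.
r = float(np.corrcoef(full, est)[0, 1])
print(round(r, 2))
```

Adding more cumulative test days to the predictor shrinks the residual noise, which is why the reported correlations climb from 0.70 toward 0.96.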
Prediction of beef carcass salable yield and trimmable fat using bioelectrical impedance analysis.
Zollinger, B L; Farrow, R L; Lawrence, T E; Latman, N S
2010-03-01
Bioelectrical impedance analysis (BIA) is capable of providing an objective method of beef carcass yield estimation with the rapidity of yield grading. Electrical resistance (Rs), reactance (Xc), impedance (I), hot carcass weight (HCW), fat thickness between the 12th and 13th ribs (FT), estimated percentage kidney, pelvic, and heart fat (KPH%), longissimus muscle area (LMA), and length between electrodes (LGE), as well as three derived carcass values, electrical volume (EVOL), reactive density (XcD), and resistive density (RsD), were determined for the carcasses of 41 commercially fed cattle. Carcasses were subsequently fabricated into salable beef products reflective of industry standards. Equations were developed to predict percentage salable carcass yield (SY%) and percentage trimmable fat (FT%). The resulting equations accounted for 81% and 84% of the variation in SY% and FT%, respectively. These results indicate that BIA technology is an accurate predictor of beef carcass composition.
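The prediction equations above are multiple linear regressions of yield on the measured carcass variables. A minimal sketch, using the abstract's variable names but entirely invented data and coefficients (not the published equation, which accounted for 81% of variation in SY%):

```python
import numpy as np

# Invented data for 41 carcasses; variable names follow the abstract.
rng = np.random.default_rng(2)
n = 41

Rs  = rng.normal(300, 30, n)     # resistance (ohms)
Xc  = rng.normal(35, 5, n)       # reactance (ohms)
HCW = rng.normal(360, 25, n)     # hot carcass weight (kg)
FT  = rng.normal(1.2, 0.3, n)    # 12th-rib fat thickness (cm)

# Simulated "true" salable yield percentage with noise (coefficients invented).
SY = 75 - 6.0 * FT - 0.01 * HCW + 0.005 * Rs + 0.05 * Xc + rng.normal(0, 0.5, n)

# Fit SY% = b0 + b1*Rs + b2*Xc + b3*HCW + b4*FT by least squares.
X = np.column_stack([np.ones(n), Rs, Xc, HCW, FT])
beta, *_ = np.linalg.lstsq(X, SY, rcond=None)
pred = X @ beta

# Coefficient of determination (R^2), the "variation accounted for".
ss_res = float(np.sum((SY - pred) ** 2))
ss_tot = float(np.sum((SY - SY.mean()) ** 2))
r2 = 1 - ss_res / ss_tot
print(round(r2, 2))
```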
Fuller, L.M.; Morgan, T.R.; Aichele, Stephen S.
2006-01-01
The Michigan Army National Guard’s Fort Custer Training Center (FCTC) in Battle Creek, Mich., is responsible for protecting wetland resources on the training grounds while providing training opportunities and planning future development at the facility. The National Wetlands Inventory (NWI) data have been the primary wetland-boundary resource, but a check on the scale and accuracy of the wetland-boundary information for the Fort Custer Training Center was needed. In cooperation with the FCTC, the U.S. Geological Survey (USGS) used an early spring IKONOS pan-sharpened satellite image to delineate the wetlands and create a more accurate wetland map for the FCTC. The USGS tested automated approaches (supervised and unsupervised classifications) to identify the wetland areas from the IKONOS satellite image, but the automated approaches alone did not yield accurate results. To ensure accurate wetland boundaries, the final wetland map was manually digitized on the basis of the automated supervised and unsupervised classifications, in combination with NWI data, field verifications, and visual interpretation of the IKONOS satellite image. The final wetland areas digitized from the IKONOS satellite imagery were similar to those in the NWI; however, the wetland boundaries differed in some areas, a few wetlands mapped in the NWI were determined not to be wetlands from the IKONOS image and field verification, and additional previously unmapped wetlands not recognized by the NWI were identified from the IKONOS image.
Lei, Huan; Yang, Xiu; Zheng, Bin; ...
2015-11-05
Biomolecules exhibit conformational fluctuations near equilibrium states, inducing uncertainty in various biological properties in a dynamic way. We have developed a general method to quantify the uncertainty of target properties induced by conformational fluctuations. Using a generalized polynomial chaos (gPC) expansion, we construct a surrogate model of the target property with respect to varying conformational states. We also propose a method to increase the sparsity of the gPC expansion by defining a set of conformational “active space” random variables. With the increased sparsity, we employ the compressive sensing method to accurately construct the surrogate model. We demonstrate the performance of the surrogate model by evaluating fluctuation-induced uncertainty in solvent-accessible surface area for the bovine trypsin inhibitor protein system and show that the new approach offers more accurate statistical information than standard Monte Carlo approaches. Furthermore, the constructed surrogate model also enables us to directly evaluate the target property under various conformational states, yielding a more accurate response surface than standard sparse grid collocation methods. In particular, the new method provides higher accuracy in high-dimensional systems, such as biomolecules, where sparse grid performance is limited by the accuracy of the computed quantity of interest. Finally, our new framework is generalizable and can be used to investigate the uncertainty of a wide variety of target properties in biomolecular systems.
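The core idea of a gPC surrogate can be shown in one dimension: a target property f(ξ) of a standard-normal random variable is expanded in probabilists' Hermite polynomials, and statistics are read off the coefficients. This toy uses ordinary least squares in place of the paper's compressive sensing (the 1-D problem is not underdetermined), and the target function is invented:

```python
import numpy as np

rng = np.random.default_rng(3)

def hermite_basis(xi, order):
    """Probabilists' Hermite polynomials He_0..He_order via the recurrence
    He_{n+1}(x) = x He_n(x) - n He_{n-1}(x)."""
    H = [np.ones_like(xi), xi]
    for n in range(1, order):
        H.append(xi * H[n] - n * H[n - 1])
    return np.column_stack(H[: order + 1])

# Stand-in "target property" of a conformational variable xi ~ N(0, 1).
f = lambda xi: np.sin(xi) + 0.1 * xi**2

xi_train = rng.standard_normal(400)
A = hermite_basis(xi_train, order=7)
coefs, *_ = np.linalg.lstsq(A, f(xi_train), rcond=None)

# The He_0 coefficient is the surrogate's estimate of E[f(xi)];
# compare with a brute-force Monte Carlo mean.
mc_mean = float(f(rng.standard_normal(200_000)).mean())
print(round(float(coefs[0]), 3), round(mc_mean, 3))
```

In the paper's setting the expansion is over many "active space" variables, which is where sparsity and compressive sensing become essential.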
ICE-COLA: fast simulations for weak lensing observables
NASA Astrophysics Data System (ADS)
Izard, Albert; Fosalba, Pablo; Crocce, Martin
2018-01-01
Approximate methods to full N-body simulations provide a fast and accurate solution to the development of mock catalogues for the modelling of galaxy clustering observables. In this paper we extend ICE-COLA, based on an optimized implementation of the approximate COLA method, to produce weak lensing maps and halo catalogues in the light-cone using an integrated and self-consistent approach. We show that despite the approximate dynamics, the catalogues thus produced enable an accurate modelling of weak lensing observables one decade beyond the characteristic scale where the growth becomes non-linear. In particular, we compare ICE-COLA to the MICE Grand Challenge N-body simulation for some fiducial cases representative of upcoming surveys and find that, for sources at redshift z = 1, their convergence power spectra agree to within 1 per cent up to high multipoles (i.e. of order 1000). The corresponding shear two point functions, ξ+ and ξ-, yield similar accuracy down to 2 and 20 arcmin respectively, while tangential shear around a z = 0.5 lens sample is accurate down to 4 arcmin. We show that such accuracy is stable against an increased angular resolution of the weak lensing maps. Hence, this opens the possibility of using approximate methods for the joint modelling of galaxy clustering and weak lensing observables and their covariance in ongoing and future galaxy surveys.
NASA Astrophysics Data System (ADS)
Mashayekhi, Somayeh; Miles, Paul; Hussaini, M. Yousuff; Oates, William S.
2018-02-01
In this paper, fractional and non-fractional viscoelastic models for elastomeric materials are derived and analyzed in comparison to experimental results. The viscoelastic models are derived by expanding thermodynamic balance equations for both fractal and non-fractal media. The order of the fractional time derivative is shown to strongly affect the accuracy of the viscoelastic constitutive predictions. Model validation uses experimental data describing viscoelasticity of the dielectric elastomer Very High Bond (VHB) 4910. Since these materials are known for their broad applications in smart structures, it is important to characterize and accurately predict their behavior across a large range of time scales. Whereas integer order viscoelastic models can yield reasonable agreement with data, the model parameters often lack robustness in prediction at different deformation rates. Fractional order models of viscoelasticity, by contrast, provide a framework to more accurately quantify complex rate-dependent behavior. Prior research that has considered fractional order viscoelasticity lacks experimental validation and contains limited links between viscoelastic theory and fractional order derivatives. To address these issues, we use fractional order operators to experimentally validate fractional and non-fractional viscoelastic models in elastomeric solids using Bayesian uncertainty quantification. The fractional order model is found to be advantageous as predictions are significantly more accurate than integer order viscoelastic models for deformation rates spanning four orders of magnitude.
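The fractional time derivative underlying such models can be computed numerically with the Grünwald-Letnikov finite difference. The sketch below checks it on f(t) = t^2, whose Riemann-Liouville derivative has the closed form 2 t^(2-α) / Γ(3-α); it does not reproduce the paper's VHB 4910 constitutive model:

```python
import math
import numpy as np

# Grünwald-Letnikov approximation of the order-alpha fractional derivative
# of f(t) = t^2 at t = T, checked against the closed-form answer.
alpha, h, T = 0.5, 1e-3, 1.0
n = int(T / h)

# GL weights w_k = (-1)^k * binom(alpha, k), built by recurrence.
w = np.empty(n + 1)
w[0] = 1.0
for k in range(1, n + 1):
    w[k] = w[k - 1] * (k - 1 - alpha) / k

t = np.arange(n + 1) * h
f = t**2

# D^alpha f(T) ~ h^(-alpha) * sum_k w_k * f(T - k*h)
gl = float(h**(-alpha) * np.dot(w, f[::-1]))
exact = 2 * T**(2 - alpha) / math.gamma(3 - alpha)
print(round(gl, 3), round(exact, 3))
```

The slowly decaying weights w_k make the derivative depend on the full deformation history, which is precisely what lets a single fractional order capture rate dependence across decades of time scales.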
Finite difference elastic wave modeling with an irregular free surface using ADER scheme
NASA Astrophysics Data System (ADS)
Almuhaidib, Abdulaziz M.; Nafi Toksöz, M.
2015-06-01
In numerical modeling of seismic wave propagation in the earth, we encounter two important issues: the free surface and the topography of the surface (i.e. irregularities). In this study, we develop a 2D finite difference solver for the elastic wave equation that combines a 4th-order ADER scheme (Arbitrary high-order accuracy using DERivatives), which is widely used in aeroacoustics, with the characteristic variable method at the free surface boundary. The idea is to treat the free surface boundary explicitly by using ghost values of the solution for points beyond the free surface to impose the physical boundary condition. The method is based on the velocity-stress formulation. The ultimate goal is to develop a numerical solver for the elastic wave equation that is stable, accurate and computationally efficient. The solver treats smooth arbitrary-shaped boundaries as simple plane boundaries. The computational cost added by treating the topography is negligible compared to flat free surface because only a small number of grid points near the boundary need to be computed. In the presence of topography, using 10 grid points per shortest shear-wavelength, the solver yields accurate results. Benchmark numerical tests using several complex models that are solved by our method and other independent accurate methods show an excellent agreement, confirming the validity of the method for modeling elastic waves with an irregular free surface.
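The ghost-value idea can be illustrated in one dimension: a stress-free (Neumann) boundary for the wave equation is imposed by mirroring the solution into a ghost cell, so the interior stencil never changes. This is only a toy analogue of the paper's 2-D velocity-stress ADER scheme, with invented parameters:

```python
import numpy as np

# 1-D wave equation u_tt = c^2 u_xx on [0, 2] with free (Neumann) ends,
# second-order leapfrog in time and space; Courant number c*dt/dx = 0.5.
c, dx, dt, nx, nt = 1.0, 0.01, 0.005, 201, 200
x = np.arange(nx) * dx

u_prev = np.exp(-((x - 0.5) / 0.05) ** 2)   # Gaussian pulse, zero initial velocity
u = u_prev.copy()

r2 = (c * dt / dx) ** 2
for _ in range(nt):
    # Ghost values mirror the interior neighbour, enforcing u_x = 0 at both ends.
    u_ext = np.concatenate(([u[1]], u, [u[-2]]))
    lap = u_ext[2:] - 2 * u_ext[1:-1] + u_ext[:-2]
    u_next = 2 * u - u_prev + r2 * lap
    u_prev, u = u, u_next

# After t = 1.0 the left-going half-pulse has reflected off the free end
# without a sign flip (a rigid/Dirichlet end would invert it).
print(round(float(u.max()), 2))
```

In the 2-D solver the same trick is applied along the (possibly irregular) free surface, which is why the topography adds almost no computational cost.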
The Missing Middle in Validation Research
ERIC Educational Resources Information Center
Taylor, Erwin K.; Griess, Thomas
1976-01-01
In most selection validation research, only the upper and lower tails of the criterion distribution are used, often yielding misleading or incorrect results. Provides formulas and tables that enable the researcher to account more accurately for the distribution of the criterion within the middle range of the population. (Author/RW)
Irrigation scheduling: When, where, and how much?
USDA-ARS?s Scientific Manuscript database
Irrigation scheduling, a key element of proper water management, is the accurate forecasting of water application (amount and timing) for optimal crop production (yield and fruit quality). The goal is to apply the correct amount of water at the right time to minimize irrigation costs and maximize cr...